We find complementary functions of the constant-coefficient equation ay'' + by' + cy = 0 by trying y = e^{λx}. Substituting into the differential equation gives aλ²e^{λx} + bλe^{λx} + ce^{λx} = 0, and since e^{λx} is never zero, λ must satisfy aλ² + bλ + c = 0.

Definition (Characteristic equation). The characteristic equation of a (second-order) differential equation ay'' + by' + cy = 0 is aλ² + bλ + c = 0.

In this case there are two solutions to the characteristic equation, giving (in principle) two complementary functions y₁ = e^{λ₁x} and y₂ = e^{λ₂x}. If λ₁ and λ₂ are distinct, then y₁ and y₂ are linearly independent and complete: they form a basis of the solution space. The (most) general complementary function is y_c = Ae^{λ₁x} + Be^{λ₂x}.

Example. y'' − 5y' + 6y = 0. Try y = e^{λx}. The characteristic equation is λ² − 5λ + 6 = 0, so λ = 2 or 3, and the general solution is y = Ae^{2x} + Be^{3x}. Note that A and B can be complex constants.

Example (Simple harmonic motion). y'' + 4y = 0. Try y = e^{λx}. The characteristic equation is λ² + 4 = 0, with solutions λ = ±2i. Then our general solution is y = Ae^{2ix} + Be^{−2ix}. However, if this describes simple harmonic motion in physics, we want the function to be real (or at least look real). We can write

y = A(cos 2x + i sin 2x) + B(cos 2x − i sin 2x) = (A + B) cos 2x + i(A − B) sin 2x = α cos 2x + β sin 2x,

where α = A + B and β = i(A − B) are independent constants. In effect, we have changed the basis from {e^{2ix}, e^{−2ix}} to {cos 2x, sin 2x}.

Example (Degeneracy). y'' − 4y' + 4y = 0. Try y = e^{λx}. We have λ² − 4λ + 4 = (λ − 2)² = 0, so λ = 2 or 2. But e^{2x} and e^{2x} are clearly not linearly independent. We have only managed to find one basis function of the solution space, but a second-order equation has a 2-dimensional solution space. We need to find a second solution.

We can perform detuning. We can separate the two functions found above from each other by considering y'' − 4y' + (4 − ε²)y = 0, which turns into the equation we want to solve as ε → 0. Trying y = e^{λx}, we obtain λ² − 4λ + 4 − ε² = 0, with roots λ = 2 ± ε. Then

y = Ae^{(2+ε)x} + Be^{(2−ε)x} = e^{2x}[Ae^{εx} + Be^{−εx}].

Using the Taylor expansion of e^{±εx}, we obtain

y = e^{2x}[(A + B) + εx(A − B) + O(Aε², Bε²)].

We let α = A + B and β = ε(A − B), so A = (1/2)(α + β/ε) and B = (1/2)(α − β/ε). This is perfectly valid for any non-zero ε, and we have

y = e^{2x}[α + βx + O(Aε², Bε²)].

We know that for any ε, we have a solution of this form. Now we turn the procedure around: we fix some α and β; then given any ε, we can find constants A, B (depending on ε) such that the above holds. As we decrease the size of ε, we have A, B = O(1/ε), so O(Aε²) = O(Bε²) = O(ε). So our solution becomes

y = e^{2x}[α + βx + O(ε)] → e^{2x}(α + βx).

In this way, we have derived two separate basis functions. In general, if y₁(x) is a degenerate complementary function of a linear differential equation with constant coefficients, then y₂(x) = x y₁(x) is an independent complementary function.

5.1.2 Second complementary function

In general (i.e. if we don't have constant coefficients), we can find a second complementary function associated with a degenerate solution of the homogeneous equation by looking for a solution of the form y₂(x) = v(x) y₁(x), where y₁(x) is the degenerate solution we found.

Example. Consider y'' − 4y' + 4y = 0. We have y₁ = e^{2x}. We try y₂ = ve^{2x}. Then

y₂' = (v' + 2v)e^{2x}, y₂'' = (v'' + 4v' + 4v)e^{2x}.

Substituting into the original equation gives (v'' + 4v' + 4v) − 4(v' + 2v) + 4v = 0. Simplifying, this tells us v'' = 0, which forces v to be a linear function of x. So y₂ = (Ax + B)e^{2x} for some A, B ∈ ℝ.
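A minimal computational sketch, assuming the Python library sympy is available, that checks both cases above symbolically; the expected answers are paraphrased in the comments.

```python
# Minimal sketch (assumes sympy). dsolve should reproduce the complementary
# functions found above: A*e^{2x} + B*e^{3x} for distinct roots, and
# (A + B*x)*e^{2x} for the repeated root lambda = 2.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

print(sp.dsolve(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), y(x)))
print(sp.dsolve(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), y(x)))
```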
5.1.3 Phase space

If we are given a general nth-order differential equation of the form

aₙ(x)y⁽ⁿ⁾ + a_{n−1}(x)y⁽ⁿ⁻¹⁾ + ··· + a₁(x)y' + a₀(x)y = f(x),

and we have a solution y, then we can plot a graph of y versus x, and see how y evolves with x. However, one problem is that for such an equation, the solution is not just determined by the initial condition y(x₀), but also by y'(x₀), y''(x₀) etc. So if we just have a snapshot of the value of y at a particular point x₀, we have completely no idea how it would evolve in the future. So how much information do we actually need?

At any point x₀, if we are given the first n − 1 derivatives, i.e. y(x₀), y'(x₀), ..., y⁽ⁿ⁻¹⁾(x₀), we can then get the nth derivative and also any higher derivatives from the differential equation. This means that we know the Taylor series of y about x₀, and it follows that the solution is uniquely determined by these conditions. (Note that it takes considerably more work to prove rigorously that the solution is uniquely determined by these initial conditions, but it can be done for sufficiently sensible f, as will be done in IB Analysis II.)

Thus we are led to consider the solution vector Y(x) = (y(x), y'(x), ..., y⁽ⁿ⁻¹⁾(x)). We say such a vector lies in the phase space, which is an n-dimensional space. So for each x, we get a point Y(x) lying in the n-dimensional space. Moreover, given any point in the phase space, if we view it as the initial conditions for our differential equation, we get a unique trajectory in the phase space.

Example. Consider y'' + 4y = 0 with an initial condition of y₁(0) = 1, y₁'(0) = 0. Then we can solve the equation to obtain y₁(x) = cos 2x. Thus the initial solution vector is Y₁(0) = (1, 0), and the trajectory as x varies is given by Y₁(x) = (cos 2x, −2 sin 2x). Thus as x changes, we trace out an ellipse in the (y, y') plane in the clockwise direction. Another possible initial condition is y₂(0) = 0, y₂'(0) = 2. In this case, we obtain the solution y₂(x) = sin 2x, with a solution vector Y₂(x) = (sin 2x, 2 cos 2x).

Note that as vectors, our two initial conditions (1, 0) and (0, 2) are independent. Moreover, as x changes, the two solution vectors Y₁(x), Y₂(x) remain independent. This is an important observation that allows the method of variation of parameters later on.

In general, for a 2nd-order equation, the phase space is a 2-dimensional space, and we can take the two complementary functions Y₁ and Y₂ as basis vectors for the phase space at each particular value of x. Of course, we need the two solutions to be linearly independent.

Definition (Wronskian). Given a differential equation with solutions y₁, y₂, the Wronskian is the determinant

W(x) = | y₁  y₂  |
       | y₁' y₂' |  = y₁y₂' − y₂y₁'.

Definition (Independent solutions). Two solutions y₁(x) and y₂(x) are independent solutions of the differential equation if and only if Y₁ and Y₂ are linearly independent as vectors in the phase space for some x, i.e. iff the Wronskian is non-zero for some x.

In our example, we have W(x) = 2 cos² 2x + 2 sin² 2x = 2 ≠ 0 for all x.

Example. In our earlier example, y₁ = e^{2x} and y₂ = xe^{2x}. We have

W = e^{2x}(e^{2x} + 2xe^{2x}) − xe^{2x}·2e^{2x} = e^{4x}(1 + 2x − 2x) = e^{4x} ≠ 0.

In both cases, the Wronskian is never zero. Is it possible that it is zero for some x while non-zero for others? The answer is no.

Theorem (Abel's Theorem). Given an equation y'' + p(x)y' + q(x)y = 0, either W = 0 for all x, or W ≠ 0 for all x; i.e. if two solutions are independent for some particular x, then they are independent for all x.

Proof. If y₁ and y₂ are both solutions, then

y₂(y₁'' + py₁' + qy₁) = 0
y₁(y₂'' + py₂' + qy₂) = 0.

Subtracting the two equations, we have

y₁y₂'' − y₂y₁'' + p(y₁y₂' − y₂y₁') = 0.

Note that W = y₁y₂' − y₂y₁' and W' = y₁y₂'' + y₁'y₂' − (y₂y₁'' + y₂'y₁') = y₁y₂'' − y₂y₁''. So

W' + p(x)W = 0, and hence W(x) = W₀e^{−∫p dx},

where W₀ is a constant. Since the exponential function is never zero, either W₀ = 0, in which case W = 0 everywhere, or W₀ ≠ 0 and W ≠ 0 for any value of x.

In general, any linear nth-order homogeneous differential equation can be written in the form Y' + AY = 0, a system of first-order equations. It can then be shown that W' + tr(A)W = 0, and W = W₀e^{−∫tr A dx}. So Abel's theorem holds in general.
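A minimal sketch, assuming sympy is available, that computes the two Wronskians above and checks Abel's relation W' + pW = 0; the helper `wronskian` is our own, not a course-supplied function.

```python
# Minimal sketch (assumes sympy): Wronskians of the two examples above,
# plus a check of Abel's relation W' + p*W = 0.
import sympy as sp

x = sp.symbols('x')

def wronskian(y1, y2):
    return sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))

# y'' + 4y = 0 has p = 0, so W should be a non-zero constant:
print(wronskian(sp.cos(2*x), sp.sin(2*x)))       # 2
# y'' - 4y' + 4y = 0 has p = -4, so W' = 4*W, i.e. W = W0*exp(4x):
W = wronskian(sp.exp(2*x), x*sp.exp(2*x))
print(W, sp.simplify(W.diff(x) - 4*W))           # exp(4*x), 0
```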
5.2 Particular integrals

We now consider equations of the form ay'' + by' + cy = f(x). We will come up with several ways to find a particular integral.

5.2.1 Guessing

If the forcing terms are simple, we can easily "guess" the form of the particular integral, as we've previously done for first-order equations.

f(x)               y_p(x)
e^{mx}             Ae^{mx}
sin kx, cos kx     A sin kx + B cos kx
polynomial pₙ(x)   qₙ(x) = aₙxⁿ + ··· + a₁x + a₀

It is important to remember that the equation is linear, so we can superpose solutions and consider each forcing term separately.

Example. Consider y'' − 5y' + 6y = 2x + e^{4x}. To obtain the forcing term 2x, we need a first-order polynomial ax + b, and to get e^{4x} we need ce^{4x}. Thus we can guess

y_p = ax + b + ce^{4x}, y_p' = a + 4ce^{4x}, y_p'' = 16ce^{4x}.

Substituting in, we get

16ce^{4x} − 5(a + 4ce^{4x}) + 6(ax + b + ce^{4x}) = 2x + e^{4x}.

Comparing coefficients of similar functions, we have

16c − 20c + 6c = 1 ⇒ c = 1/2,
6a = 2 ⇒ a = 1/3,
−5a + 6b = 0 ⇒ b = 5/18.

Since the complementary function is y_c = Ae^{3x} + Be^{2x}, the general solution is

y = Ae^{3x} + Be^{2x} + (1/2)e^{4x} + (1/3)x + 5/18.

Note that any boundary condition used to determine A and B must be applied to the full solution, not the complementary function alone.
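A minimal sketch, assuming sympy is available, that lets the solver find the same particular integral; the expected output is paraphrased in the comment.

```python
# Minimal sketch (assumes sympy): the particular integral of
# y'' - 5y' + 6y = 2x + exp(4x) should contain exp(4*x)/2 + x/3 + 5/18,
# matching the guessed coefficients above.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), 2*x + sp.exp(4*x))
print(sp.dsolve(ode, y(x)))
```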
5.2.2 Resonance

Consider y'' + ω₀²y = sin ω₀t, with t the independent variable. The complementary solution is y_c = A sin ω₀t + B cos ω₀t. We notice that the forcing is itself a complementary function. So if we guess a particular integral y_p = C sin ω₀t + D cos ω₀t, we'll simply find y_p'' + ω₀²y_p = 0, and we can't balance the forcing. This is an example of a simple harmonic oscillator being forced at its natural frequency.

We can detune our forcing away from the natural frequency, and consider y'' + ω₀²y = sin ωt with ω ≠ ω₀. Try y_p = C(sin ωt − sin ω₀t). We have

y_p'' = C(−ω² sin ωt + ω₀² sin ω₀t).

Substituting into the differential equation, we have C(ω₀² − ω²) = 1. Then

y_p = (sin ωt − sin ω₀t)/(ω₀² − ω²).

We can simplify this to

y_p = (2/(ω₀² − ω²)) cos(((ω₀ + ω)/2)t) sin(((ω − ω₀)/2)t).

We let ω₀ − ω = ∆ω. Then

y_p = (−2/((2ω + ∆ω)∆ω)) cos((ω + ∆ω/2)t) sin((∆ω/2)t).

(A sketch of y_p against t shows a fast cos oscillation modulated by the slow envelope ±sin((∆ω/2)t).) This oscillation in the amplitude of the cos wave is known as beating. This happens when the forcing frequency is close to the natural frequency. The sin envelope has wavelength of order O(1/∆ω), while the cos factor has wavelength of order O(1/ω₀). As ∆ω → 0, the wavelength of the beating envelope → ∞ and we just see the initial linear growth.

Mathematically, since sin θ ≈ θ as θ → 0, as ∆ω → 0 we have

y_p → −(t/(2ω₀)) cos ω₀t.

In general, if the forcing is a linear combination of complementary functions, then the particular integral is proportional to t (the independent variable) times the non-resonant guess.
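A minimal sketch, assuming sympy is available, that verifies the resonant limit above as the detuning vanishes.

```python
# Minimal sketch (assumes sympy): the detuned particular integral tends to
# the resonant response -t*cos(w0*t)/(2*w0) as w -> w0.
import sympy as sp

t, w, w0 = sp.symbols('t w w0', positive=True)
yp = (sp.sin(w*t) - sp.sin(w0*t))/(w0**2 - w**2)
print(sp.limit(yp, w, w0))   # expected: -t*cos(t*w0)/(2*w0)
```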
5.2.3 Variation of parameters

So far, we have been finding particular integrals by guessing. Here we are going to come up with a method that can systematically find the particular integral. Of course, this is substantially more complicated than guessing, so if the form of the particular integral is obvious, we should go for guessing.

Suppose we have a second-order differential equation y'' + p(x)y' + q(x)y = f(x). We then know any particular solution vector can be written in the form Y(x) = (y(x), y'(x)), and our job is to find one solution vector that satisfies the equation. We presuppose such a solution actually exists, and we will try to find out what it is. The trick is to pick a convenient basis for this space. Let y₁(x) and y₂(x) be linearly independent complementary functions of the ODE. Then for each x, the solution vectors Y₁(x) = (y₁(x), y₁'(x)) and Y₂(x) = (y₂(x), y₂'(x)) form a basis of the solution space. So for each particular x, we can find some constants u, v (depending on x) such that the following equality holds:

Y(x) = u(x)Y₁(x) + v(x)Y₂(x),

since Y₁ and Y₂ are a basis. Component-wise, we have

y_p = uy₁ + vy₂      (a)
y_p' = uy₁' + vy₂'   (b)

Differentiating the second equation, we obtain

y_p'' = (u'y₁' + uy₁'') + (v'y₂' + vy₂'')   (c)

If we consider (c) + p(b) + q(a), and use the fact that y₁ and y₂ solve the homogeneous equation, we have

u'y₁' + v'y₂' = f.

Now note that we derived the equation for y_p' from the vector equation; this must be equal to what we get if we differentiate (a). Differentiating (a) and subtracting (b), we obtain

u'y₁ + v'y₂ = 0.

Now we have two simultaneous equations for u' and v', which we should be able to solve. We can, for example, write them in matrix form as

( y₁  y₂  ) (u')   (0)
( y₁' y₂' ) (v') = (f).

Inverting the left matrix, we have

(u')   1 ( y₂'  −y₂ ) (0)
(v') = W (−y₁'   y₁ ) (f).

So u' = −(y₂/W)f and v' = (y₁/W)f.

Example. y'' + 4y = sin 2x. We know that y₁ = sin 2x and y₂ = cos 2x, with W = −2. We write y_p = u sin 2x + v cos 2x. Using the formulae above, we obtain

u' = (cos 2x sin 2x)/2 = (sin 4x)/4, v' = −(sin² 2x)/2 = (cos 4x − 1)/4.

So u = −(cos 4x)/16 and v = (sin 4x)/16 − x/4. Therefore

y_p = (1/16)(−cos 4x sin 2x + sin 4x cos 2x) − (x/4) cos 2x = (1/16) sin 2x − (x/4) cos 2x.

Note that −(x/4) cos 2x is what we found previously by detuning, and (1/16) sin 2x is a complementary function, so the results agree.

It is generally not a good idea to remember the exact formulae we've obtained above. Instead, whenever faced with such questions, you should be able to re-derive the results.
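A minimal sketch, assuming sympy is available, that carries out variation of parameters for the example above using the formulae u' = −y₂f/W and v' = y₁f/W.

```python
# Minimal sketch (assumes sympy): variation of parameters for y'' + 4y = sin 2x.
import sympy as sp

x = sp.symbols('x')
y1, y2, f = sp.sin(2*x), sp.cos(2*x), sp.sin(2*x)
W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))   # Wronskian, here -2
u = sp.integrate(-y2*f/W, x)
v = sp.integrate(y1*f/W, x)
yp = u*y1 + v*y2
# The result should satisfy the ODE exactly:
print(sp.simplify(yp.diff(x, 2) + 4*yp - sp.sin(2*x)))   # 0
print(sp.simplify(yp))   # equivalent to sin(2x)/16 - x*cos(2x)/4
```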
5.3 Linear equidimensional equations

Equidimensional equations are often called homogeneous equations, but this is confusing as it has the same name as equations with no forcing term. So we prefer this name instead.

Definition (Equidimensional equation). An equation is equidimensional if it has the form

ax²y'' + bxy' + cy = f(x),

where a, b, c are constants.

To understand the name "equidimensional", suppose we are doing physics and variables have dimensions. Say y has dimensions L and x has dimensions T. Then y' has dimensions LT⁻¹ and y'' has dimensions LT⁻². So all the terms x²y'', xy' and y have the same dimensions.

Solving by eigenfunctions

Note that y = x^k is an eigenfunction of x d/dx. We can try an eigenfunction y = x^k. We have y' = kx^{k−1}, and thus xy' = kx^k = ky; and y'' = k(k − 1)x^{k−2}, so x²y'' = k(k − 1)x^k. Substituting in, we have

ak(k − 1) + bk + c = 0,

which we can solve, in general, to give two roots k₁ and k₂, and y_c = Ax^{k₁} + Bx^{k₂}.

Solving by substitution

Alternatively, we can make a substitution z = ln x. Then we can show that

a d²y/dz² + (b − a) dy/dz + cy = f(e^z).

This turns an equidimensional equation into an equation with constant coefficients. The characteristic equation for solutions of the form y = e^{λz} is aλ² + (b − a)λ + c = 0, which we can rearrange to become aλ(λ − 1) + bλ + c = 0. So λ = k₁, k₂ as above. Then the complementary function is y_c = Ae^{k₁z} + Be^{k₂z} = Ax^{k₁} + Bx^{k₂}.

Degenerate solutions

If the roots of the characteristic equation are equal, then y_c = {e^{kz}, ze^{kz}} = {x^k, x^k ln x}. Similarly, if there is a resonant forcing proportional to x^{k₁} (or x^{k₂}), then there is a particular integral of the form x^{k₁} ln x. These results can be easily obtained by considering the substitution method of solving, and then applying our results from homogeneous linear equations with constant coefficients.
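A minimal sketch, assuming sympy is available and using a hypothetical example of our own (not one from the notes): x²y'' + xy' − y = 0 has ak(k − 1) + bk + c = k² − 1 = 0, so k = ±1 and we expect y = Ax + B/x.

```python
# Minimal sketch (assumes sympy); the equation x^2 y'' + x y' - y = 0 is a
# made-up illustration of the eigenfunction recipe above.
import sympy as sp

x, k = sp.symbols('x k', positive=True)
y = sp.Function('y')
print(sp.solve(k*(k - 1) + k - 1, k))   # [-1, 1]
print(sp.dsolve(x**2*y(x).diff(x, 2) + x*y(x).diff(x) - y(x), y(x)))
# expected: y = C1*x + C2/x
```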
5.4 Difference equations

Consider an equation of the form

ay_{n+2} + by_{n+1} + cy_n = f_n.

We can solve this in a similar way to differential equations, by exploiting linearity and eigenfunctions. We can think of the shift operator D[y_n] = y_{n+1}. This has eigenfunctions y_n = kⁿ: we have D[y_n] = D[kⁿ] = k^{n+1} = k·kⁿ = ky_n.

To solve the difference equation, we first look for complementary functions satisfying

ay_{n+2} + by_{n+1} + cy_n = 0.

We try y_n = kⁿ to obtain

ak^{n+2} + bk^{n+1} + ckⁿ = 0, i.e. ak² + bk + c = 0,

from which we can determine k. So the general complementary function is y_nᶜ = Ak₁ⁿ + Bk₂ⁿ if k₁ ≠ k₂. If they are equal, then y_nᶜ = (A + Bn)kⁿ.

To find the particular integral, we guess:

f_n    y_nᵖ
kⁿ     Akⁿ if k ≠ k₁, k₂
k₁ⁿ    nk₁ⁿ (times a constant)
nᵖ     Anᵖ + Bn^{p−1} + ··· + Cn + D

Example (Fibonacci sequence). The Fibonacci sequence is defined by y_n = y_{n−1} + y_{n−2} with y₀ = y₁ = 1. We can write this as

y_{n+2} − y_{n+1} − y_n = 0.

We try y_n = kⁿ. Then k² − k − 1 = 0, so k = (1 ± √5)/2. We write k = ϕ₁, ϕ₂ (note that ϕ₂ = −1/ϕ₁). Then y_n = Aϕ₁ⁿ + Bϕ₂ⁿ. Our initial conditions give

A + B = 1, Aϕ₁ + Bϕ₂ = 1.

We get A = ϕ₁/√5 and B = −ϕ₂/√5. So

y_n = (ϕ₁^{n+1} − ϕ₂^{n+1})/√5 = (ϕ₁^{n+1} − (−1/ϕ₁)^{n+1})/√5.
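A minimal sketch, in plain Python, checking that the closed form just derived reproduces the Fibonacci recursion.

```python
# Minimal sketch: y_n = (phi1**(n+1) - phi2**(n+1))/sqrt(5) vs direct iteration.
import math

phi1 = (1 + math.sqrt(5))/2
phi2 = (1 - math.sqrt(5))/2
closed = [round((phi1**(n+1) - phi2**(n+1))/math.sqrt(5)) for n in range(10)]

fib = [1, 1]
for _ in range(8):
    fib.append(fib[-1] + fib[-2])
print(closed)   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(fib)      # identical
```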
5.5 Transients and damping

In many physical systems, there is some sort of restoring force and some damping, e.g. a car suspension system. Consider a car of mass M with a vertical force F(t) acting on it (e.g. a mouse jumping on the car). We can consider the wheels to be springs (F = kx) with a "shock absorber" (F = l ẋ). Then the equation of motion can be given by

M ẍ = F(t) − kx − l ẋ, i.e. ẍ + (l/M) ẋ + (k/M) x = F(t)/M.

Note that if we don't have the damping and the forcing, we end up with simple harmonic motion of angular frequency √(k/M).

Write t = τ√(M/k), where τ is dimensionless. The timescale √(M/k) is proportional to the period of the undamped, unforced system (or 1 over its natural frequency). Then we obtain

ẍ + 2κẋ + x = f(τ),

where ˙ now means d/dτ, f = F/k, and κ = l/(2√(kM)). By this substitution, we are left with only one parameter κ instead of the original three (M, l, k). We will consider the different possible cases.

Free (natural) response: f = 0

We try x = e^{λτ}. Then ẍ + 2κẋ + x = 0 gives λ² + 2κλ + 1 = 0, so

λ = −κ ± √(κ² − 1) = −λ₁, −λ₂,

where λ₁ and λ₂ have positive real parts.

Underdamping. If κ < 1, we have

x = e^{−κτ}(A sin √(1 − κ²) τ + B cos √(1 − κ²) τ).

The period is 2π/√(1 − κ²) and the amplitude decays on a characteristic timescale of O(1/κ). Note that the damping increases the period; as κ → 1, the oscillation period → ∞.

Critical damping. If κ = 1, then x = (A + Bτ)e^{−κτ}. The rise time and decay time are both O(1/κ) = O(1), so the dimensional rise and decay times are O(√(M/k)).

Overdamping. If κ > 1, then x = Ae^{−λ₁τ} + Be^{−λ₂τ} with λ₁ < λ₂. Then the decay time is O(1/λ₁) and the rise time is O(1/λ₂).
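A minimal sketch, assuming numpy is available, showing the characteristic roots in the three regimes just described.

```python
# Minimal sketch (assumes numpy): roots of lambda^2 + 2*kappa*lambda + 1 = 0.
import numpy as np

for kappa in (0.5, 1.0, 2.0):
    print(kappa, np.roots([1, 2*kappa, 1]))
# kappa = 0.5: -0.5 +/- 0.866j   -> decaying oscillation (underdamped)
# kappa = 1.0: repeated root -1  -> critical damping
# kappa = 2.0: about -3.73, -0.27 -> two decaying exponentials (overdamped)
```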
Note that in all cases, it is possible to get a large initial increase in amplitude.

Forcing

In a forced system, the complementary functions typically determine the short-time transient response, while the particular integral determines the long-time (asymptotic) response. For example, if f(τ) = sin τ, then we can guess x_p = C sin τ + D cos τ. In this case, it turns out that x_p = −(1/(2κ)) cos τ. The general solution is thus

x = Ae^{−λ₁τ} + Be^{−λ₂τ} − (1/(2κ)) cos τ ∼ −(1/(2κ)) cos τ as τ → ∞,

since Re(λ₁,₂) > 0. It is important to note that the forcing response is out of phase with the forcing.

5.6 Impulses and point forces

5.6.1 Dirac delta function

Consider a ball bouncing on the ground. When the ball hits the ground at some time T, it experiences a force from the ground for some short period of time. The force on the ball exerted by the ground, F(t), is 0 for most of the time, except during the short period (T − ε, T + ε). Often we don't know (or we don't wish to know) the details of F(t), but we can note that it only acts for a short time of O(ε) that is much shorter than the overall time O(t₂ − t₁) of the system. It is convenient mathematically to imagine the force acting instantaneously at time t = T, i.e. to consider the limit ε → 0.

Newton's second law gives m ẍ = F(t) − mg. While we cannot solve it, we can integrate the equation from T − ε to T + ε:

∫_{T−ε}^{T+ε} m (d²x/dt²) dt = ∫_{T−ε}^{T+ε} F(t) dt − ∫_{T−ε}^{T+ε} mg dt
m [dx/dt]_{T−ε}^{T+ε} = I − 2εmg
∆p = I − O(ε),

where ∆p is the change in momentum and the impulse I = ∫_{T−ε}^{T+ε} F(t) dt is the area under the force curve. Note that the impulse I is the only property of F that influences the macroscopic behaviour of the system.
If the contact time 2ε is small, we'll neglect it and write ∆p = I.

Assuming that F only acts over a negligible time ε, all that matters to us is its integral I, i.e. the area under the force curve. Wlog, assume T = 0 for easier mathematical treatment. We can consider a family of functions D(t; ε) such that

lim_{ε→0} D(t; ε) = 0 for all t ≠ 0; lim_{ε→0} ∫_{−∞}^{∞} D(t; ε) dt = 1.

So we can replace the force in our example by I D(t; ε), and then take the limit as ε → 0. For example, we can choose

D(t; ε) = (1/(ε√π)) e^{−t²/ε²}.

(Sketching D(t; ε) for ε = 1 and ε = 0.5 shows a spike of height O(1/ε) and width O(ε) that grows taller and narrower as ε decreases.) It can be checked that this satisfies the properties listed above. Note that as ε → 0, D(0; ε) → ∞; therefore lim_{ε→0} D(0; ε) does not exist.

Definition (Dirac delta function). The Dirac delta function is defined by

δ(x) = lim_{ε→0} D(x; ε),

on the understanding that we can only use its integral properties. For example, when we write ∫_{−∞}^{∞} g(x)δ(x) dx, we actually mean

lim_{ε→0} ∫_{−∞}^{∞} g(x)D(x; ε) dx.

In fact, this is equal to g(0). More generally,

∫_a^b g(x)δ(x − c) dx = g(c) if c ∈ (a, b), and 0 otherwise,

provided g is continuous at x = c. This gives a convenient way of representing and making calculations involving impulsive or point forces. For example, in the bouncing-ball example, we can write

m ẍ = −mg + Iδ(t − T).
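A minimal sketch, assuming sympy is available, of the sampling property just stated.

```python
# Minimal sketch (assumes sympy): integral of g(x)*delta(x - c) over (a, b)
# is g(c) when c lies inside the interval, and 0 otherwise.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.cos(x)*sp.DiracDelta(x - sp.pi/3), (x, 0, sp.pi)))
# cos(pi/3) = 1/2, since pi/3 is inside (0, pi)
print(sp.integrate(sp.cos(x)*sp.DiracDelta(x - 2*sp.pi), (x, 0, sp.pi)))
# 0, since 2*pi is outside the interval
```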
2. 2 ) with y = 0 at x = 0, π. Note that our function y First consider the region 0 ≤ x < π 2. Here the delta function is 0, and we have y − y = 0 and y = 0 at x = 0. Then y = Cex + De−x = A sinh x + B cosh x and obtain B = 0 from the boundary condition. In the region π 2 < x ≤ π, we again obtain y = C sinh(π − x) + D cosh(π − x) and (from the boundary condition), D = 0. 42 5 Second-order differential equations IA Differential Equations When x = π 2, first insist that y is continuous at x = π 2. So A = C. Then note that we have to solve y − y = 3δ x − π 2 But remember that the delta function makes sense only in an integral. So we integrate both sides from π 2 +. Then we obtain − to π 2 [y dx = 3 Since we assume that y is well behaved, the second integral is 0. So we are left with So we have [y] + − = 3 π 2 π 2 Then we have −C cosh π 2 − A cosh π 2 = 3 A = C = −3 2 cosh π 2 y = −3 sinh x 2 cosh π 2 −3 sinh(π−x) 2 cosh Note that at x = π 2, our final function has continuous y, discontinuous y and infinite y. In general, differentiating a function makes it less continuous. This is why we insisted at first that y has to be continuous. Otherwise, y would look like a delta function, and y would be something completely unrecognizable. Hence the discontinuity is always addressed by the highest order derivative since differentiation increases the discontinuity. 5.7 Heaviside step function Definition (Heaviside step function). Define the Heaviside step function as: H(x) = x −∞ δ(t) dt 43 5 Second-order differential equations IA Differential Equations We have H(x) |
5.7 Heaviside step function

Definition (Heaviside step function). Define the Heaviside step function as

H(x) = ∫_{−∞}^x δ(t) dt.

We have

H(x) = 0 for x < 0, H(x) = 1 for x > 0, and H(x) undefined at x = 0.

By the fundamental theorem of calculus,

dH/dx = δ(x).

But remember that these functions and relationships can only be used inside integrals.

6 Series solutions

Often, it is difficult to solve a differential equation directly. However, we can attempt to find a Taylor series for the solution. We will consider equations of the form

p(x)y'' + q(x)y' + r(x)y = 0.

Definition (Ordinary and singular points). The point x = x₀ is an ordinary point of the differential equation if q/p and r/p have Taylor series about x₀ (i.e. are "analytic", cf. Complex Analysis). Otherwise, x₀ is a singular point. If x₀ is a singular point but the equation can be written as

P(x)(x − x₀)²y'' + Q(x)(x − x₀)y' + R(x)y = 0,

where Q/P and R/P have Taylor series about x₀, then x₀ is a regular singular point.

Example.
(i) (1 − x²)y'' − 2xy' + 2y = 0: x = 0 is an ordinary point. However, x = ±1 are (regular) singular points, since p(±1) = 0.
(ii) (sin x)y'' + (cos x)y' + 2y = 0: the points x = nπ are regular singular points, while all others are ordinary.
(iii) (1 + √x)y'' − 2xy' + 2y = 0: x = 0 is an irregular singular point, because √x is not differentiable at x = 0.

It is possible to show that if x₀ is an ordinary point, then the equation is guaranteed to have two linearly independent solutions of the form

y = Σ_{n=0}^{∞} aₙ(x − x₀)ⁿ,

i.e. Taylor series about x₀, convergent in some neighbourhood of x₀.
If x₀ is a regular singular point, then there is at least one solution of the form

y = Σ_{n=0}^{∞} aₙ(x − x₀)^{n+σ}, with a₀ ≠ 0 (to ensure σ is unique).

The index σ can be any complex number. This is called a Frobenius series. Alternatively, it can be nice to think of the Frobenius series as

y = (x − x₀)^σ Σ_{n=0}^{∞} aₙ(x − x₀)ⁿ = (x − x₀)^σ f(x),

where f(x) is analytic and has a Taylor series. We will not prove these results, but merely apply them.

Ordinary points

Example. Consider (1 − x²)y'' − 2xy' + 2y = 0. Find a series solution about x = 0 (which is an ordinary point). We try y = Σ aₙxⁿ. First, we write the equation in the form of an equidimensional equation with polynomial coefficients by multiplying both sides by x²; this little trick will make the subsequent calculations slightly nicer. We obtain

(1 − x²)(x²y'') − 2x²(xy') + 2x²y = 0
Σ aₙ[(1 − x²)n(n − 1) − 2x²n + 2x²]xⁿ = 0.

We look at the coefficient of xⁿ and obtain the following general recurrence relation:

n(n − 1)aₙ + [−(n − 2)² − (n − 2) + 2]a_{n−2} = 0, i.e. n(n − 1)aₙ = (n² − 3n)a_{n−2}.

Here we do not divide by anything, since it might be zero. First consider the case n = 0. The left hand side gives 0·a₀ = 0 (the right hand side is 0 since a_{−2} = 0). So any value of a₀ satisfies the recurrence relationship, and it can take any arbitrary value; this corresponds to a constant of integration. Similarly, by considering n = 1, a₁ is arbitrary. For n > 1, n and n − 1 are non-zero, so

aₙ = ((n − 3)/(n − 1)) a_{n−2}.

In this case (but generally not), we can iterate this down to obtain

a_{2k} = −(1/(2k − 1)) a₀, a_{2k+1} = 0 for k ≥ 1.

So we obtain

y = a₀[1 − x² − x⁴/3 − x⁶/5 − ···] + a₁x = a₀[1 − (x/2) ln((1 + x)/(1 − x))] + a₁x.

Notice the logarithmic behaviour near x = ±1, which are regular singular points.
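A minimal sketch, in plain Python, that iterates the recurrence above and compares with the quoted closed form for the even coefficients.

```python
# Minimal sketch: a_n = (n-3)/(n-1) * a_{n-2} with a_0 = 1, versus the
# closed form a_{2k} = -1/(2k-1).
from fractions import Fraction

a = {0: Fraction(1)}
for n in range(2, 12, 2):
    a[n] = Fraction(n - 3, n - 1)*a[n - 2]
print([str(a[n]) for n in sorted(a)])                    # 1, -1, -1/3, -1/5, -1/7, -1/9
print([str(Fraction(-1, 2*k - 1)) for k in range(1, 6)]) # matches from a_2 onwards
```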
Regular singular points

Example. Consider 4xy'' + 2(1 − x²)y' − xy = 0. Note that x = 0 is a singular point. However, if we multiply throughout by x to obtain an equidimensional equation, we obtain

4(x²y'') + 2(1 − x²)(xy') − x²y = 0.

Since Q/P = (1 − x²)/2 and R/P = −x²/4 both have Taylor series, x = 0 is a regular singular point. Try

y = Σ_{n=0}^{∞} aₙx^{n+σ} with a₀ ≠ 0.

Substituting in, we have

Σ aₙx^{n+σ}[4(n + σ)(n + σ − 1) + 2(1 − x²)(n + σ) − x²] = 0.

By considering the coefficient of x^{n+σ}, we obtain the general recurrence relation

[4(n + σ)(n + σ − 1) + 2(n + σ)]aₙ − [2(n − 2 + σ) + 1]a_{n−2} = 0.

Simplifying the equation gives

2(n + σ)(2n + 2σ − 1)aₙ = (2n + 2σ − 3)a_{n−2}.

The n = 0 case gives the indicial equation for the index σ:

2σ(2σ − 1)a₀ = 0.

Since a₀ ≠ 0, we must have σ = 0 or 1/2. The σ = 0 solution corresponds to an analytic ("Taylor series") solution, while σ = 1/2 corresponds to a non-analytic one.

When σ = 0, the recurrence relation becomes 2n(2n − 1)aₙ = (2n − 3)a_{n−2}. When n = 0, this gives 0·a₀ = 0, so a₀ is arbitrary. For n > 0, we can divide and obtain

aₙ = ((2n − 3)/(2n(2n − 1))) a_{n−2}.
We can see that a₁ = 0, and so are all subsequent odd terms. If n = 2k, i.e. n is even, then a_{2k} = ((4k − 3)/(4k(4k − 1))) a_{2k−2}, so

y = a₀(1 + x²/(4·3) + 5x⁴/(8·7·4·3) + ···).

Note that we have only found one solution in this case.

Now when σ = 1/2, we obtain (2n + 1)(2n)aₙ = (2n − 2)a_{n−2}. When n = 0, we obtain 0·a₀ = 0, so a₀ is arbitrary; to avoid confusion with the a₀ above, call it b₀ instead. When n = 1, we obtain 6a₁ = 0, so a₁ = 0 and so are all subsequent odd terms. For even n,

aₙ = ((n − 1)/(n(2n + 1))) a_{n−2}.

So

y = b₀x^{1/2}(1 + x²/(2·5) + 3x⁴/(2·5·4·9) + ···).

Resonance of solutions

Note that the indicial equation has two roots σ₁, σ₂. Consider the two different cases:

(i) If σ₂ − σ₁ is not an integer, then there are two linearly independent Frobenius solutions

y = (x − x₀)^{σ₁} Σ_{n=0}^{∞} aₙ(x − x₀)ⁿ + (x − x₀)^{σ₂} Σ_{n=0}^{∞} bₙ(x − x₀)ⁿ.

As x → x₀, y ∼ (x − x₀)^{σ₁}, where Re(σ₁) ≤ Re(σ₂).

(ii) If σ₂ − σ₁ is an integer (including when they are equal), there is one solution of the form

y₁ = (x − x₀)^{σ₂} Σ_{n=0}^{∞} aₙ(x − x₀)ⁿ

with σ₂ ≥ σ₁. In this case, σ = σ₁ will not give a valid solution, as we will later see. Instead, the other solution is (usually) of the form

y₂ = ln(x − x₀) y₁ + Σ_{n=0}^{∞} bₙ(x − x₀)^{n+σ₁}.
This form arises from resonance between the two solutions. But if the resonance somehow avoids itself, we can possibly end up with two regular Frobenius series solutions. We can substitute this form of solution into the differential equation to determine bₙ.

Example. Consider x²y'' − xy = 0. x = 0 is a regular singular point, and the equation is already in equidimensional form: (x²y'') − x(y) = 0. Try y = Σ aₙx^{n+σ} with a₀ ≠ 0. We obtain

Σ aₙx^{n+σ}[(n + σ)(n + σ − 1) − x] = 0.

The general recurrence relation is

(n + σ)(n + σ − 1)aₙ = a_{n−1}.

n = 0 gives the indicial equation σ(σ − 1) = 0, so σ = 0, 1. We are guaranteed to have a solution of the Frobenius form with σ = 1.

When σ = 1, the recurrence relation becomes (n + 1)n aₙ = a_{n−1}. When n = 0, 0·a₀ = 0, so a₀ is arbitrary. When n > 0, we obtain

aₙ = (1/(n(n + 1))) a_{n−1} = (1/((n + 1)(n!)²)) a₀.

So

y₁ = a₀x(1 + x/2 + x²/12 + x³/144 + ···).

When σ = 0, we obtain n(n − 1)aₙ = a_{n−1}. When n = 0, 0·a₀ = 0 and a₀ is arbitrary. When n = 1, however, 0·a₁ = a₀, while a₀ ≠ 0 by our initial constraint: contradiction. So there is no solution in this form. (If we ignore the constraint that a₀ ≠ 0, then a₁ becomes arbitrary and this gives exactly the same solution we found previously with σ = 1.) The other solution is thus of the form

y₂ = y₁ ln x + Σ_{n=0}^{∞} bₙxⁿ.
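A minimal sketch, assuming sympy is available, that builds the σ = 1 Frobenius solution from the recurrence and substitutes it back into the equation; up to truncation, everything should cancel.

```python
# Minimal sketch (assumes sympy): the sigma = 1 series of x^2 y'' - x y = 0.
# Substituting the truncated series back in, only the truncation term
# -a[N-1]*x**(N+1) should survive.
import sympy as sp

x = sp.symbols('x')
N = 8
a = [sp.Integer(1)]
for n in range(1, N):
    a.append(a[-1]/(n*(n + 1)))          # a_n = a_{n-1}/(n*(n+1))
y1 = sum(a[n]*x**(n + 1) for n in range(N))
print(sp.expand(x**2*y1.diff(x, 2) - x*y1))   # -a[N-1]*x**(N+1) only
```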
7 Directional derivative

7.1 Gradient vector

Consider a function f(x, y) and a displacement ds = (dx, dy). The change in f(x, y) during that displacement is

df = (∂f/∂x) dx + (∂f/∂y) dy.

We can also write this as

df = (dx, dy) · (∂f/∂x, ∂f/∂y) = ds · ∇f,

where ∇f = grad f = (∂f/∂x, ∂f/∂y); these are the Cartesian components of the gradient of f. We write ds = ŝ ds, where |ŝ| = 1. Then:

Definition (Directional derivative). The directional derivative of f in the direction of ŝ is

df/ds = ŝ · ∇f.

Definition (Gradient vector). The gradient vector ∇f is defined as the vector that satisfies df/ds = ŝ · ∇f for every direction ŝ. Officially, we take this to be the definition of ∇f; then ∇f = (∂f/∂x, ∂f/∂y) is a theorem that can be proved from this definition.

We know that the directional derivative is given by

df/ds = ŝ · ∇f = |∇f| cos θ,

where θ is the angle between the displacement and ∇f. Then when cos θ is maximized, df/ds = |∇f|. So we know that:

(i) ∇f has magnitude equal to the maximum rate of change of f(x, y) in the xy-plane.
(ii) It has the direction in which f increases most rapidly.
(iii) If ds is a displacement along a contour of f (i.e. along a line on which f is constant), then df/ds = 0, i.e. ŝ · ∇f = 0: ∇f is orthogonal to the contour.
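A minimal sketch, assuming numpy is available, using a made-up test function of our own: a central finite difference along a unit direction should match ŝ · ∇f.

```python
# Minimal sketch (assumes numpy): directional derivative of f = x^2 * y
# at (1, 2) along the unit direction (3, 4)/5.
import numpy as np

f = lambda p: p[0]**2 * p[1]
p0 = np.array([1.0, 2.0])
grad = np.array([2*p0[0]*p0[1], p0[0]**2])   # analytic gradient (4, 1)
s = np.array([3.0, 4.0])/5.0                 # unit vector
h = 1e-6
numeric = (f(p0 + h*s) - f(p0 - h*s))/(2*h)
print(numeric, s @ grad)                     # both approximately 3.2
```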
7.2 Stationary points

There is always (at least) one direction in which df/ds = 0, namely the direction parallel to the contour of f. However, local maxima and minima have df/ds = 0 for all directions, i.e. ŝ · ∇f = 0 for all ŝ, i.e. ∇f = 0. Then we know that ∂f/∂x = ∂f/∂y = 0.

However, apart from maxima and minima, in 3 dimensions we also have saddle points. In general, we define a saddle point to be a point at which ∇f = 0 but which is not a maximum or minimum. When we plot out contours of functions, the contours are locally elliptical near maxima and minima, and locally hyperbolic near saddle points. Also, contour lines cross at, and only at, saddle points.

7.3 Taylor series for multi-variable functions

Suppose we have a function f(x, y) and a point x₀. Now consider a finite displacement δs along a straight line in the x, y plane. Then

δs d/ds = δs · ∇.

The Taylor series along the line is

f(s) = f(s₀ + δs) = f(s₀) + δs (df/ds) + (1/2)(δs)²(d²f/ds²) + ···
     = f(s₀) + δs · ∇f + (1/2) δs²(ŝ · ∇)(ŝ · ∇)f + ···.

We get

δs · ∇f = δx (∂f/∂x) + δy (∂f/∂y) = (x − x₀)(∂f/∂x) + (y − y₀)(∂f/∂y),

and
xy fyy In conclusion, we have f (x, y) = f (x0, y0) + (x − x0)fx + (y − y0)fy + 1 2 [(x − x0)2fxx + 2(x − x0)(y − y0)fxy + (y − y0)2fyy] In general, the coordinate-free form is f (x) = f (x0) + δx · ∇f (x0) + 1 2 δx · ∇∇f · δx where the dot in the second term represents a matrix product. Alternatively, in terms of the gradient operator (and real dot products), we have f (x) = f (x0) + δx · ∇f (x0) + 1 2 [∇(∇f · δx)] · δx 7.4 Classification of stationary points At a stationary point x0, we know that ∇f (x0) = 0. So at a point near the stationary point, 1 2 where H = ∇∇f (x0) is the Hessian matrix. f (x) ≈ f (x0) + δx · H · δx, At a minimum, Every point near x0 has f (x) > f (x0), i.e. δx · H · δx > 0 for all δx. We say δx · H · δx is positive definite. Similarly, at a maximum, δx · H · δx < 0 for all δx. We say δx · H · δx is negative definite. At a saddle, δx · H · δx is indefinite, i.e. it can be positive, negative or zero depending on the direction. This, so far, is not helpful, since we do not have an easy way to know what sign δx · H · δx could be. Now note that H = ∇∇f is symmetric (because fxy = fyx). So H can be diagonalized (cf. Vectors and Matrices). With respect 52 7 Directional derivative IA Differential Equations to these axes in which H is |
diagonal (principal axes), we have δx · H · δx = (δx, δy, · · ·, δz) λ1 λ2... λn = λ1(δx)2 + λ2(δy)2 + · · · + λn(δz)2 δx δy... δz where λ1, λ2, · · · λn are the eigenvalues of H. So for δx · H · δx to be positive-definite, we need λi > 0 for all i. Similarly, it is negative-definite iff λi < 0 for all i. If eigenvalues have mixed sign, then it is a saddle point. Finally, if there is at least one zero eigenvalue, then we need further analysis to determine the nature of the stationary point. Apart from finding eigenvalues, another way to determine the definiteness is using the signature. Definition (Signature of Hessian matrix). The signature of H is the pattern of the signs of the subdeterminants: fxx fyx fxx, |H1| fxy fyy |H2|, · · ·, fxx fyx... fzx fxy fyy... fzy · · · · · ·... · · · |Hn|=|H| fxz fyz... fzz Proposition. H is positive definite if and only if the signature is +, +, · · ·, +. H is negative definite if and only if the signature is −, +, · · ·, (−1)n. Otherwise, H is indefinite. 7.5 Contours of f (x, y) Consider H in 2 dimensions, and axes in which H is diagonal. So H = λ1 0. 0 λ2 Write x − x0 |
= (X, Y ). Then near x0, f = constant ⇒ xHx = constant, i.e. λ1X 2 +λ2Y 2 = constant. At a maximum or minimum, λ1 and λ2 have the same sign. So these contours are locally ellipses. At a saddle point, they have different signs and the contours are locally hyperbolae. Example. Find and classify the stationary points of f (x, y) = 4x3 − 12xy + y2 + 10y + 6. We have fx = 12x2 − 12y fy = −12x + 2y + 10 fxx = 24x fxy = −12 fyy = 2 53 7 Directional derivative IA Differential Equations At stationary points, fx = fy = 0. So we have 12x2 − 12y = 0, −12x + 2y + 10 = 0. The first equation gives y = x2. Substituting into the second equation, we obtain x = 1, 5 and y = 1, 25 respectively. So the stationary points are (1, 1) and (5, 25) To classify them, first consider (1, 1). Our Hessian matrix H = 24 −12 2 −12. Our signature is |H1| = 24 and |H2| = −96. Since we have a +, − signature, this an indefinite case and it is a saddle point. At (5, 25), H = So |H1| = 120 and |H2| = 240 − 144 = 96. 120 −12 2 −12 Since the signature is +, +, it is a minimum. To draw the contours, we draw what the contours look like near the stationary points, and then try to join them together, noting that contours cross only at saddles. 54 (5,25)(1,1)-20246-100102030 8 Systems of differential equations IA Differential Equations 8 Systems of differential equations 8.1 Linear equations Consider two dependent variables y1(t), y2(t) related by ˙y1 = ay1 + by2 + f1(t) ˙y2 = cy1 + dy2 + f2(t) We |
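The example is easy to verify by machine. A short sympy sketch (mine, not part of the notes) recovers the stationary points and the leading minors used in the signature test:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 4*x**3 - 12*x*y + y**2 + 10*y + 6

# Solve grad f = 0 for the stationary points.
points = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(points)  # [(1, 1), (5, 25)]

H = sp.hessian(f, (x, y))
for p in points:
    Hp = H.subs({x: p[0], y: p[1]})
    minors = [Hp[0, 0], Hp.det()]  # signature |H_1|, |H_2|
    print(p, minors)               # (1, 1): [24, -96] -> saddle
                                   # (5, 25): [120, 96] -> minimum
```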
8 Systems of differential equations

8.1 Linear equations

Consider two dependent variables $y_1(t), y_2(t)$ related by
$$\dot{y}_1 = ay_1 + by_2 + f_1(t), \qquad \dot{y}_2 = cy_1 + dy_2 + f_2(t).$$
We can write this in vector notation as
$$\begin{pmatrix}\dot{y}_1\\ \dot{y}_2\end{pmatrix} = \begin{pmatrix}a & b\\ c & d\end{pmatrix}\begin{pmatrix}y_1\\ y_2\end{pmatrix} + \begin{pmatrix}f_1\\ f_2\end{pmatrix},$$
or $\dot{Y} = MY + F$.

We can convert this to a higher-order equation:
$$\ddot{y}_1 = a\dot{y}_1 + b\dot{y}_2 + \dot{f}_1 = a\dot{y}_1 + b(cy_1 + dy_2 + f_2) + \dot{f}_1 = a\dot{y}_1 + bcy_1 + d(\dot{y}_1 - ay_1 - f_1) + bf_2 + \dot{f}_1,$$
so
$$\ddot{y}_1 - (a + d)\dot{y}_1 + (ad - bc)y_1 = bf_2 - df_1 + \dot{f}_1,$$
and we know how to solve this. However, this actually complicates the problem. So what we usually do is the other way round: if we have a high-order equation, we can change it to a system of first-order equations. If $\ddot{y} + a\dot{y} + by = f$, write $y_1 = y$ and $y_2 = \dot{y}$, and let $Y = \begin{pmatrix}y\\ \dot{y}\end{pmatrix}$. Our system of equations becomes
$$\dot{y}_1 = y_2, \qquad \dot{y}_2 = f - ay_2 - by_1,$$
or
$$\dot{Y} = \begin{pmatrix}0 & 1\\ -b & -a\end{pmatrix}Y + \begin{pmatrix}0\\ f\end{pmatrix}.$$
Now consider the general equation
$$\dot{Y} - MY = F.$$
We first look for a complementary solution $Y_c = \mathbf{v}e^{\lambda t}$, where $\mathbf{v}$ is a constant vector. So we get
$$\lambda\mathbf{v} - M\mathbf{v} = \mathbf{0}, \quad\text{i.e.}\quad M\mathbf{v} = \lambda\mathbf{v}.$$
So $\lambda$ is an eigenvalue of $M$ and $\mathbf{v}$ is a corresponding eigenvector. We can solve this by solving the characteristic equation $\det(M - \lambda I) = 0$, and then finding the corresponding $\mathbf{v}$ for each $\lambda$.

Example. Consider
$$\dot{Y} = \begin{pmatrix}-4 & 24\\ 1 & -2\end{pmatrix}Y + \begin{pmatrix}4\\ 1\end{pmatrix}e^t.$$
The characteristic equation of $M$ is
$$\det\begin{pmatrix}-4 - \lambda & 24\\ 1 & -2 - \lambda\end{pmatrix} = (\lambda + 8)(\lambda - 2) = 0,$$
so $\lambda = 2, -8$. When $\lambda = 2$, $\mathbf{v}$ satisfies
$$\begin{pmatrix}-6 & 24\\ 1 & -4\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = \mathbf{0},$$
and we obtain
$$\mathbf{v}_1 = \begin{pmatrix}4\\ 1\end{pmatrix}.$$
When $\lambda = -8$, we have
$$\begin{pmatrix}4 & 24\\ 1 & 6\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = \mathbf{0},$$
and
$$\mathbf{v}_2 = \begin{pmatrix}-6\\ 1\end{pmatrix}.$$
So the complementary solution is
$$Y = A\begin{pmatrix}4\\ 1\end{pmatrix}e^{2t} + B\begin{pmatrix}-6\\ 1\end{pmatrix}e^{-8t}.$$
To plot the phase-space trajectories, we first consider the cases where $Y$ is an eigenvector. If $Y$ is a scalar multiple of $(4, 1)$, it keeps moving outwards in the same direction. Similarly, if it is a scalar multiple of $(-6, 1)$, it moves towards the origin.

[Figure: the $y_1y_2$ phase plane, with the outgoing direction $\mathbf{v}_1 = (4, 1)$, the incoming direction $\mathbf{v}_2 = (-6, 1)$, and further curved trajectories interpolating between the two.]

To find a particular integral, we try $Y_p = \mathbf{u}e^t$. Substituting gives
$$\mathbf{u} - M\mathbf{u} = \begin{pmatrix}4\\ 1\end{pmatrix}, \quad\text{i.e.}\quad \begin{pmatrix}5 & -24\\ -1 & 3\end{pmatrix}\begin{pmatrix}u_1\\ u_2\end{pmatrix} = \begin{pmatrix}4\\ 1\end{pmatrix},$$
so
$$\mathbf{u} = \begin{pmatrix}-4\\ -1\end{pmatrix}.$$
So the general solution is
$$Y = A\begin{pmatrix}4\\ 1\end{pmatrix}e^{2t} + B\begin{pmatrix}-6\\ 1\end{pmatrix}e^{-8t} - \begin{pmatrix}4\\ 1\end{pmatrix}e^t.$$
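A quick numerical cross-check of this example (my own sketch): numpy recovers the eigenvalues and eigenvector directions of $M$, and solves $(I - M)\mathbf{u} = (4, 1)$ for the particular integral.

```python
import numpy as np

M = np.array([[-4.0, 24.0],
              [ 1.0, -2.0]])
F = np.array([4.0, 1.0])

lam, V = np.linalg.eig(M)
print(lam)       # eigenvalues 2 and -8 (ordering may vary)
print(V / V[1])  # columns scale to (4, 1) and (-6, 1)

# Particular integral Y_p = u e^t requires (I - M) u = (4, 1).
u = np.linalg.solve(np.eye(2) - M, F)
print(u)         # [-4. -1.]
```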
In general, there are three possible cases of $\dot{Y} = MY$, corresponding to three different possibilities for the eigenvalues of $M$:

(i) If both $\lambda_1, \lambda_2$ are real with opposite signs ($\lambda_1\lambda_2 < 0$), wlog $\lambda_1 > 0$, then there is a saddle, as in the example above.

(ii) If $\lambda_1, \lambda_2$ are real with $\lambda_1\lambda_2 > 0$, wlog $|\lambda_1| \geq |\lambda_2|$, then the phase portrait is a node. If both $\lambda_1, \lambda_2 < 0$, the arrows point towards the intersection and we say there is a stable node. If both are positive, they point outwards and there is an unstable node.

(iii) If $\lambda_1, \lambda_2$ are complex conjugates, then we obtain a spiral. If $\operatorname{Re}(\lambda_{1,2}) < 0$, it spirals inwards; if $\operatorname{Re}(\lambda_{1,2}) > 0$, it spirals outwards; if $\operatorname{Re}(\lambda_{1,2}) = 0$, we have ellipses with a common centre instead of spirals. We can determine whether the spiral is positive (counterclockwise) or negative (clockwise) by considering the eigenvectors; an example is given below.

8.2 Nonlinear dynamical systems

Consider the second-order autonomous system (i.e. $t$ does not appear explicitly in the forcing terms on the right)
$$\dot{x} = f(x, y), \qquad \dot{y} = g(x, y).$$
It can be difficult to solve the equations, but we can learn a lot about the phase-space trajectories of the solutions by studying the equilibria and their stability.

Definition (Equilibrium point). An equilibrium point is a point $\mathbf{x}_0 = (x_0, y_0)$ at which $\dot{x} = \dot{y} = 0$.

Clearly this occurs when $f(x_0, y_0) = g(x_0, y_0) = 0$. We solve these simultaneously for $x_0, y_0$. To determine the stability, write $x = x_0 + \xi$ and $y = y_0 + \eta$. Then
$$\dot{\xi} = f(x_0 + \xi, y_0 + \eta) = f(x_0, y_0) + \xi\,\frac{\partial f}{\partial x}(\mathbf{x}_0) + \eta\,\frac{\partial f}{\partial y}(\mathbf{x}_0) + O(\xi^2, \eta^2).$$
So if $\xi, \eta \ll 1$,
$$\begin{pmatrix}\dot{\xi}\\ \dot{\eta}\end{pmatrix} = \begin{pmatrix}f_x & f_y\\ g_x & g_y\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
This is a linear system, and we can determine its character from the eigensolutions.

Example (Population dynamics: predator-prey system). Suppose that there are $x$ prey and $y$ predators. Then the prey satisfy
$$\dot{x} = \underbrace{\alpha x}_{\text{births $-$ deaths}} - \underbrace{\beta x^2}_{\text{natural competition}} - \underbrace{\gamma xy}_{\text{killed by predators}},$$
and the predators satisfy
$$\dot{y} = \underbrace{\varepsilon xy}_{\text{birth/survival rate}} - \underbrace{\delta y}_{\text{natural death rate}}.$$
For example, let
$$\dot{x} = 8x - 2x^2 - 2xy, \qquad \dot{y} = xy - y.$$
We find the fixed points: $x(8 - 2x - 2y) = 0$ gives $x = 0$ or $y = 4 - x$; and $y(x - 1) = 0$ gives $y = 0$ or $x = 1$. So the fixed points are $(0, 0)$, $(4, 0)$ and $(1, 3)$.

Near $(0, 0)$, we have
$$\begin{pmatrix}\dot{\xi}\\ \dot{\eta}\end{pmatrix} = \begin{pmatrix}8 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
We clearly have eigenvalues $8, -1$, with the standard basis vectors as eigenvectors.

Near $(4, 0)$, we have $x = 4 + \xi$, $y = \eta$. Instead of expanding partial derivatives, we can obtain the linearization directly from the equations:
$$\dot{\xi} = (4 + \xi)(8 - 8 - 2\xi - 2\eta) = -8\xi - 8\eta - 2\xi^2 - 2\xi\eta, \qquad \dot{\eta} = \eta(4 + \xi - 1) = 3\eta + \xi\eta.$$
Ignoring the second-order terms, we have
$$\begin{pmatrix}\dot{\xi}\\ \dot{\eta}\end{pmatrix} = \begin{pmatrix}-8 & -8\\ 0 & 3\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
The eigenvalues are $-8$ and $3$, with associated eigenvectors $(1, 0)$ and $(8, -11)$.

Near $(1, 3)$, we have $x = 1 + \xi$, $y = 3 + \eta$. So
$$\dot{\xi} = (1 + \xi)(8 - 2 - 2\xi - 6 - 2\eta) \approx -2\xi - 2\eta, \qquad \dot{\eta} = (3 + \eta)(1 + \xi - 1) \approx 3\xi.$$
So
$$\begin{pmatrix}\dot{\xi}\\ \dot{\eta}\end{pmatrix} = \begin{pmatrix}-2 & -2\\ 3 & 0\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
The eigenvalues are $-1 \pm i\sqrt{5}$. Since they are complex with a negative real part, $(1, 3)$ is a stable spiral.
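These linearizations can be generated mechanically. A small sympy sketch (mine, not from the notes) evaluates the Jacobian at each fixed point and reports its eigenvalues:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 8*x - 2*x**2 - 2*x*y
g = x*y - y

J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)],
               [sp.diff(g, x), sp.diff(g, y)]])

for p in [(0, 0), (4, 0), (1, 3)]:
    Jp = J.subs({x: p[0], y: p[1]})
    print(p, list(Jp.eigenvals()))
    # (0, 0): [8, -1]; (4, 0): [-8, 3]
    # (1, 3): [-1 - sqrt(5)*I, -1 + sqrt(5)*I]
```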
We can determine the chirality of the spiral by considering what happens to a small perturbation to the right, with $\xi > 0$ and $\eta = 0$. We have
$$\begin{pmatrix}\dot{\xi}\\ \dot{\eta}\end{pmatrix} = \begin{pmatrix}-2\xi\\ 3\xi\end{pmatrix}.$$
So the solution heads towards the top-left, and the spiral is counterclockwise ("positive").

We can patch the three equilibrium points together and obtain the full phase portrait, in which $(1, 3)$ is a stable solution that almost all solutions spiral towards.

[Figure: phase portrait of the predator-prey system on $0 \leq x, y \leq 5$, with trajectories spiralling into $(1, 3)$.]

9 Partial differential equations (PDEs)

9.1 First-order wave equation

Consider the equation
$$\frac{\partial y}{\partial t} = c\,\frac{\partial y}{\partial x},$$
with $c$ a constant and $y$ a function of $x$ and $t$. This is known as the (first-order) wave equation. We will later see that solutions correspond to waves travelling in one direction. We write this as
$$\frac{\partial y}{\partial t} - c\,\frac{\partial y}{\partial x} = 0.$$
Recall that along a path $x = x(t)$, so that $y = y(x(t), t)$,
$$\frac{\mathrm{d}y}{\mathrm{d}t} = \frac{\partial y}{\partial t} + \frac{\mathrm{d}x}{\mathrm{d}t}\,\frac{\partial y}{\partial x}$$
by the chain rule. Now we choose a path along which
$$\frac{\mathrm{d}x}{\mathrm{d}t} = -c. \tag{1}$$
Along such paths,
$$\frac{\mathrm{d}y}{\mathrm{d}t} = 0. \tag{2}$$
So we have replaced the original partial differential equation with a pair of ordinary differential equations. Each path that satisfies (1) can be described by $x = x_0 - ct$, where $x_0$ is a constant; we can write this as $x_0 = x + ct$. From (2), $y$ is constant along each of these paths. So suppose for each $x_0$, the value of $y$ along the path $x_0 = x + ct$ is given by $f(x_0)$, where $f$ is an arbitrary function. Then the solution to the wave equation is
$$y = f(x + ct).$$
By differentiating this directly, we can easily check that every function of this form is a solution to the wave equation.

[Figure: the $xt$ plane; the contours of $y$ are the straight lines $x + ct = \text{const}$.]
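A one-line symbolic check (my own sketch) that every $y = f(x + ct)$ solves the first-order wave equation:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')

y = f(x + c*t)
print(sp.simplify(sp.diff(y, t) - c*sp.diff(y, x)))  # 0
```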
Note that as we move up the time axis, we simply take the $t = 0$ solution and translate it to the left. The paths we have identified are called the "characteristics" of the wave equation. In this particular example, $y$ is constant along the characteristics (because the equation is unforced).

We usually have initial conditions, e.g. $y(x, 0) = x^2 - 3$. Since we know that $y = f(x + ct)$ and $f(x) = x^2 - 3$, $y$ must be given by
$$y = (x + ct)^2 - 3.$$
Plotting $y$ against $x$ for different values of $t$, we see that each solution is just a translation of the $t = 0$ version.

We can also solve forced equations, such as
$$\frac{\partial y}{\partial t} + 5\,\frac{\partial y}{\partial x} = e^{-t}, \qquad y(x, 0) = e^{-x^2}.$$
Along each path $x_0 = x - 5t$, we have $\frac{\mathrm{d}y}{\mathrm{d}t} = e^{-t}$. So
$$y = f(x_0) - e^{-t}$$
for some function $f$. Using the boundary condition provided: at $t = 0$, $y = f(x_0) - 1$ and $x = x_0$. So
$$f(x_0) - 1 = e^{-x_0^2}, \quad\text{i.e.}\quad f(x_0) = 1 + e^{-x_0^2}.$$
So
$$y = 1 + e^{-(x - 5t)^2} - e^{-t}.$$

9.2 Second-order wave equation

We consider equations of the form
$$\frac{\partial^2y}{\partial t^2} = c^2\,\frac{\partial^2y}{\partial x^2}.$$
This is the second-order wave equation, often known as the "hyperbolic equation", because its form resembles that of a hyperbola (which has the form $x^2 - b^2y^2 = 1$). However, the differential equation has no connection to hyperbolae whatsoever.

This equation models an actual wave in one dimension. Consider a horizontal string along the $x$ axis, and let $y(x, t)$ be the vertical displacement of the string at the point $x$ at time $t$. Suppose $\rho(x)$ is the mass per unit length of the string. Then the restoring force is proportional to the curvature, i.e. to the second derivative $\frac{\partial^2y}{\partial x^2}$, so Newton's law gives
$$\rho\,\frac{\partial^2y}{\partial t^2} \propto \frac{\partial^2y}{\partial x^2},$$
and we obtain the wave equation. (Why is the force proportional to the second derivative? It certainly cannot be proportional to $y$, because we get no force if we just move the whole string upwards. It also cannot be proportional to $\partial y/\partial x$: on a straight slope, the force pulling upwards is the same as the force pulling downwards, and we should get no net force. We have a force only if the string is curved, and curvature is measured by the second derivative.)

To solve the equation, suppose that $c$ is constant. Then we can factorize
$$\frac{\partial^2y}{\partial t^2} - c^2\,\frac{\partial^2y}{\partial x^2} = \left(\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} - c\,\frac{\partial}{\partial x}\right)y = 0.$$
If $y = f(x + ct)$, then the rightmost operator differentiates it to zero (as in the first-order wave equation), so $y = f(x + ct)$ is a solution. Since the operators commute, $y = g(x - ct)$ is also a solution. Since the equation is linear, the general solution is
$$y = f(x + ct) + g(x - ct).$$
This shows that the solution is a superposition of waves travelling to the left and waves travelling to the right.

We can show that this is indeed the most general solution by substituting $\xi = x + ct$ and $\eta = x - ct$. Using the chain rule, one shows that
$$y_{tt} - c^2y_{xx} \equiv -4c^2y_{\xi\eta} = 0.$$
Integrating twice gives $y = f(\xi) + g(\eta)$.

How many boundary conditions do we need for a unique solution? In ODEs, we simply count the order of the equation. In PDEs, we have to count over all variables. In this case, we need 2 boundary conditions and 2 initial conditions. For example, we can have:

– Initial conditions: at $t = 0$,
$$y = \frac{1}{1 + x^2} \quad\text{and}\quad \frac{\partial y}{\partial t} = 0.$$
– Boundary conditions: $y \to 0$ as $x \to \pm\infty$.

We know that the solution has the form $y = f(x + ct) + g(x - ct)$. The first initial condition gives
$$f(x) + g(x) = \frac{1}{1 + x^2}. \tag{1}$$
The second initial condition gives
$$\frac{\partial y}{\partial t} = cf'(x) - cg'(x) = 0. \tag{2}$$
From (2), we know that $f' = g'$. So $f$ and $g$ differ by a constant. Wlog, we can assume that they are indeed equal, since if we had, say, $f = g + 2$, then we could use $y = (f(x + ct) - 1) + (g(x - ct) + 1)$ instead, and the new $f$ and $g$ are equal. From (1), we must have
$$f(x) = g(x) = \frac{1}{2(1 + x^2)}.$$
So, overall,
$$y = \frac{1}{2}\left[\frac{1}{1 + (x + ct)^2} + \frac{1}{1 + (x - ct)^2}\right],$$
where we substituted $x + ct$ and $x - ct$ for $x$ in $f$ and $g$ respectively.

9.3 The diffusion equation

Heat conduction in a solid in one dimension is modelled by the diffusion equation
$$\frac{\partial T}{\partial t} = \kappa\,\frac{\partial^2T}{\partial x^2}.$$
This is known as a parabolic PDE (parabolic because it resembles $y = ax^2$). Here $T(x, t)$ is the temperature and the constant $\kappa$ is the diffusivity.

Note that in the diffusion equation the "velocity" $\partial T/\partial t$ is proportional to the curvature $\partial^2T/\partial x^2$, while in the wave equation it is the "acceleration" that is proportional to the curvature. Instead of being oscillatory, the diffusion equation is dissipative, and all unevenness simply decays away.

Example. Consider an infinitely long bar heated at one end ($x = 0$). Suppose
$$T(x, 0) = 0, \qquad T(0, t) = H(t), \qquad T(x, t) \to 0 \text{ as } x \to \infty,$$
where $H$ is the Heaviside step function. In words, this says that the rod is initially cool (at temperature $0$), and one end is heated up after $t = 0$.

There is a similarity solution of the diffusion equation, valid on an infinite (or our semi-infinite) domain, in which $T(x, t) = \theta(\eta)$, where
$$\eta = \frac{x}{2\sqrt{\kappa t}}.$$
Applying the chain rule, we have
$$\frac{\partial T}{\partial x} = \theta'(\eta)\,\frac{\partial\eta}{\partial x} = \frac{1}{2\sqrt{\kappa t}}\,\theta'(\eta), \qquad \frac{\partial^2T}{\partial x^2} = \frac{1}{4\kappa t}\,\theta''(\eta), \qquad \frac{\partial T}{\partial t} = \theta'(\eta)\,\frac{\partial\eta}{\partial t} = -\frac{\eta}{2t}\,\theta'(\eta).$$
Putting this into the diffusion equation yields
$$-\frac{\eta}{2t}\,\theta' = \kappa\,\frac{1}{4\kappa t}\,\theta'', \quad\text{i.e.}\quad \theta'' + 2\eta\theta' = 0.$$
This is an ordinary differential equation for $\theta(\eta)$, which we can view as a first-order equation for $\theta'$ with non-constant coefficients. Using the integrating factor
$$\mu = \exp\left(\int 2\eta\,\mathrm{d}\eta\right) = e^{\eta^2},$$
we get
$$(e^{\eta^2}\theta')' = 0 \implies \theta' = Ae^{-\eta^2} \implies \theta = A\int_0^\eta e^{-u^2}\,\mathrm{d}u + B = \alpha\operatorname{erf}(\eta) + B,$$
where
$$\operatorname{erf}(\eta) = \frac{2}{\sqrt{\pi}}\int_0^\eta e^{-u^2}\,\mathrm{d}u$$
(familiar from statistics), and $\operatorname{erf}(\eta) \to 1$ as $\eta \to \infty$.
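A quick symbolic check (mine, not from the notes) that $\theta = \alpha\operatorname{erf}(\eta) + B$ solves $\theta'' + 2\eta\theta' = 0$:

```python
import sympy as sp

eta, alpha, B = sp.symbols('eta alpha B')
theta = alpha * sp.erf(eta) + B

ode = sp.diff(theta, eta, 2) + 2*eta*sp.diff(theta, eta)
print(sp.simplify(ode))  # 0
```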
Now look at the boundary and initial conditions (recall $\eta = x/(2\sqrt{\kappa t})$) and express them in terms of $\eta$. As $x \to 0$, we have $\eta \to 0$; so $\theta = 1$ at $\eta = 0$. Also, if $x \to \infty$ or $t \to 0^+$, then $\eta \to \infty$; so $\theta \to 0$ as $\eta \to \infty$.

So $\theta(0) = 1$ gives $B = 1$, and, colloquially, $\theta(\infty) = 0$ gives $\alpha = -1$. So $\theta = 1 - \operatorname{erf}(\eta)$. This is also sometimes written as $\operatorname{erfc}(\eta)$, the error function complement of $\eta$. So
$$T = \operatorname{erfc}\left(\frac{x}{2\sqrt{\kappa t}}\right).$$
At any particular fixed time $t_0$, the profile $T(x)$ decays over a length of order $O(\sqrt{\kappa t})$. So if we actually have a finite bar of length $L$, we can treat it as infinite if $\sqrt{\kappa t} \ll L$, i.e. if $t \ll L^2/\kappa$.

Picking a basis, this is equivalent to saying
$$\dot{\gamma}_i(t) = \alpha_i(\gamma(t)), \qquad \gamma_i(0) = 0$$
for all $i$ and $t \in I$. By the general theory of ordinary differential equations, there is an interval $I$ and a solution $\gamma$, and any two solutions agree on their common domain. However, we need to do a bit more for uniqueness, since all we know is that there is a unique integral curve lying in this particular chart. It might be that there are integral curves that do wild things when they leave the chart.

So suppose $\gamma : I \to M$ and $\tilde{\gamma} : \tilde{I} \to M$ are both integral curves passing through the same point, i.e. $\gamma(0) = \tilde{\gamma}(0) = p$. We let
$$J = \{t \in I \cap \tilde{I} : \gamma(t) = \tilde{\gamma}(t)\}.$$
This is non-empty, since $0 \in J$, and $J$ is closed, since $\gamma$ and $\tilde{\gamma}$ are continuous. To show it is all of $I \cap \tilde{I}$, we only have to show it is open, since $I \cap \tilde{I}$ is connected. So let $t_0 \in J$, and consider $q = \gamma(t_0)$. Then $\gamma$ and $\tilde{\gamma}$ are integral curves of $X$ passing through $q$. So by the first part, they agree on some neighbourhood of $t_0$. So $J$ is open. So done.

Definition (Maximal integral curve). Let $p \in M$ and $X \in \operatorname{Vect}(M)$. Let $I_p$ be the union of all $I$ such that there is an integral curve $\gamma : I \to M$ with $\gamma(0) = p$. Then there exists a unique integral curve $\gamma : I_p \to M$, known as the maximal integral curve.

Note that $I_p$ does depend on the point.

Example. Consider the vector field $X = \frac{\partial}{\partial x}$ on $\mathbb{R}^2 \setminus \{0\}$. Then for any point $p = (x, y)$: if $y \neq 0$, we have $I_p = \mathbb{R}$; if $y = 0$ and $x < 0$, then $I_p = (-\infty, -x)$; similarly, if $y = 0$ and $x > 0$, then $I_p = (-x, \infty)$.

Definition (Complete vector field). A vector field is complete if $I_p = \mathbb{R}$ for all $p \in M$.

Given a complete vector field, we obtain a flow map as follows:

Theorem. Let $M$ be a manifold and $X$ a complete vector field on $M$. Define $\Theta : \mathbb{R} \times M \to M$ by $\Theta_t(p) = \gamma_p(t)$, where $\gamma_p$ is the maximal integral curve of $X$ through $p$ with $\gamma_p(0) = p$. Then $\Theta$ is a function smooth in $p$ and $t$, and
$$\Theta_0 = \operatorname{id}, \qquad \Theta_t \circ \Theta_s = \Theta_{s+t}.$$
Proof. This follows from uniqueness of integral curves and smooth dependence on initial conditions of ODEs.

In particular, since $\Theta_t \circ \Theta_{-t} = \Theta_0 = \operatorname{id}$, we know
$$\Theta_t^{-1} = \Theta_{-t},$$
so $\Theta_t$ is a diffeomorphism. More algebraically, if we write $\operatorname{Diff}(M)$ for the group of diffeomorphisms $M \to M$, then the map
$$\mathbb{R} \to \operatorname{Diff}(M), \qquad t \mapsto \Theta_t$$
is a homomorphism of groups. We call this a one-parameter subgroup of diffeomorphisms.
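To see concretely how $I_p$ can fail to be all of $\mathbb{R}$, here is a small numerical illustration (my own, on $\mathbb{R}$ rather than the example above): the field $X = x^2\,\partial/\partial x$ has integral curve $\gamma(t) = x_0/(1 - x_0t)$ through $x_0 > 0$, which blows up at $t = 1/x_0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

x0 = 1.0  # the exact solution x0/(1 - x0 t) blows up at t = 1/x0 = 1

sol = solve_ivp(lambda t, x: x**2, t_span=(0.0, 0.999), y0=[x0], rtol=1e-10)
exact = x0 / (1 - x0 * sol.t)
print(np.max(np.abs(sol.y[0] - exact)))  # small; the curve escapes as t -> 1
```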
What happens when we relax the completeness assumption? Everything is essentially the same whenever things are defined, but we have to take care of the domains of definition.

Theorem. Let $M$ be a manifold and $X \in \operatorname{Vect}(M)$. Define
$$D = \{(t, p) \in \mathbb{R} \times M : t \in I_p\}.$$
In other words, this is the set of all $(t, p)$ such that $\gamma_p(t)$ exists. We set
$$\Theta_t(p) = \Theta(t, p) = \gamma_p(t)$$
for all $(t, p) \in D$. Then

(i) $D$ is open and $\Theta : D \to M$ is smooth.
(ii) $\Theta(0, p) = p$ for all $p \in M$.
(iii) If $(s, p) \in D$ and $(t, \Theta(s, p)) \in D$, then $(s + t, p) \in D$ and $\Theta(t, \Theta(s, p)) = \Theta(t + s, p)$.
(iv) For any $t \in \mathbb{R}$, the set $M_t = \{p \in M : (t, p) \in D\}$ is open in $M$, and $\Theta_t : M_t \to M_{-t}$ is a diffeomorphism with inverse $\Theta_{-t}$.

This is really annoying. We now prove the following useful result, which saves us from worrying about these problems in nice cases:

Proposition. Let $M$ be a compact manifold. Then any $X \in \operatorname{Vect}(M)$ is complete.

Proof. Recall that $D = \{(t, p) : \Theta_t(p) \text{ is defined}\}$ is open. So given $p \in M$, there is some open neighbourhood $U \subseteq M$ of $p$ and an $\varepsilon > 0$ such that $(-\varepsilon, \varepsilon) \times U \subseteq D$. By compactness, we can find finitely many such $U$ that cover $M$, and find a small $\varepsilon$ such that $(-\varepsilon, \varepsilon) \times M \subseteq D$. In other words, $\Theta_t(p)$ exists for all $p \in M$ and $|t| < \varepsilon$.

Also, we know $\Theta_t \circ \Theta_s = \Theta_{t+s}$ whenever $|t|, |s| < \varepsilon$, and in particular $\Theta_{t+s}$ is defined. So $\Theta_{Nt} = (\Theta_t)^N$ is defined for all $N$ and $|t| < \varepsilon$, so $\Theta_t$ is defined for all $t$.

2.3 Lie derivative

We now want to look at the concept of a Lie derivative. If we have a function $f$ defined on all of $M$, and we have a vector field $X$, then we might want to ask what the derivative of $f$ in the direction of $X$ is at each point. If $f$ is a real-valued function, then this is by definition $X(f)$. If $f$ is more complicated, then this wouldn't work, but we can still differentiate things along $X$ using the flows.

Notation. Let $F : M \to M$ be a diffeomorphism, and $g \in C^\infty(M)$. We write $F^*g = g \circ F \in C^\infty(M)$.

We now define the Lie derivative of a function, i.e. the derivative of a function $f$ in the direction of a vector field $X$. Of course, we can obtain this by just applying $X(f)$, but we want a definition that we can generalize.

Definition (Lie derivative of a function). Let $X$ be a complete vector field, and $\Theta$ be its flow. We define the Lie derivative of $g$ along $X$ by
$$\mathcal{L}_X(g) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\Theta_t^*g.$$
Here this is defined pointwise, i.e. for all $p \in M$, we define
$$\mathcal{L}_X(g)(p) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\Theta_t^*(g)(p).$$

Lemma. $\mathcal{L}_X(g) = X(g)$. In particular, $\mathcal{L}_X(g) \in C^\infty(M, \mathbb{R})$.

Proof.
$$\mathcal{L}_X(g)(p) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\Theta_t^*(g)(p) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}g(\Theta_t(p)) = \mathrm{d}g|_p(X(p)) = X(g)(p).$$

So this is quite boring. However, we can do something more exciting by differentiating vector fields.

Notation. Let $Y \in \operatorname{Vect}(M)$, and $F : M \to M$ be a diffeomorphism. Then $DF^{-1}|_{F(p)} : T_{F(p)}M \to T_pM$, so we can write
$$F^*(Y)|_p = DF^{-1}|_{F(p)}(Y_{F(p)}) \in T_pM.$$
Then $F^*(Y) \in \operatorname{Vect}(M)$. If $g \in C^\infty(M)$, then
$$F^*(Y)|_p(g) = Y_{F(p)}(g \circ F^{-1}).$$
Alternatively, we have
$$F^*(Y)|_p(g \circ F) = Y_{F(p)}(g).$$
Removing the $p$'s, we have $F^*(Y)(g \circ F) = (Y(g)) \circ F$.

Definition (Lie derivative of a vector field). Let $X \in \operatorname{Vect}(M)$ be complete, and $Y \in \operatorname{Vect}(M)$ be a vector field. Then the Lie derivative is given pointwise by
$$\mathcal{L}_X(Y) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\Theta_t^*(Y).$$

Lemma. We have $\mathcal{L}_XY = [X, Y]$.

Proof. Let $g \in C^\infty(M, \mathbb{R})$. Then we have
$$\Theta_t^*(Y)(g \circ \Theta_t) = Y(g) \circ \Theta_t.$$
We now look at
$$\frac{\Theta_t^*(Y)(g) - Y(g)}{t} = \underbrace{\frac{\Theta_t^*(Y)(g) - \Theta_t^*(Y)(g \circ \Theta_t)}{t}}_{\alpha_t} + \underbrace{\frac{Y(g) \circ \Theta_t - Y(g)}{t}}_{\beta_t}.$$
We have
$$\lim_{t\to 0}\beta_t = \mathcal{L}_X(Y(g)) = XY(g)$$
by the previous lemma, and we have
$$\lim_{t\to 0}\alpha_t = \lim_{t\to 0}\Theta_t^*(Y)\left(\frac{g - g \circ \Theta_t}{t}\right) = Y(-\mathcal{L}_X(g)) = -YX(g).$$
So $\mathcal{L}_X(Y)(g) = XY(g) - YX(g) = [X, Y](g)$.
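In local coordinates the bracket is $[X, Y]^i = \sum_j\left(X^j\,\partial_jY^i - Y^j\,\partial_jX^i\right)$; a small sympy sketch (mine, not from the notes) computes it for two vector fields on $\mathbb{R}^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

def bracket(X, Y):
    """Commutator [X, Y]^i = sum_j (X^j dY^i/dx^j - Y^j dX^i/dx^j)."""
    return [sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                for j in range(2)) for i in range(2)]

X = [1, 0]    # X = d/dx
Y = [-y, x]   # Y = -y d/dx + x d/dy (rotation field)
print(bracket(X, Y))  # [0, 1], i.e. [X, Y] = d/dy
```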
Corollary. Let $X, Y, Z \in \operatorname{Vect}(M)$ and $f \in C^\infty(M, \mathbb{R})$. Then
(i) $\mathcal{L}_X(fY) = \mathcal{L}_X(f)Y + f\mathcal{L}_XY = X(f)Y + f\mathcal{L}_XY$;
(ii) $\mathcal{L}_XY = -\mathcal{L}_YX$;
(iii) $\mathcal{L}_X[Y, Z] = [\mathcal{L}_XY, Z] + [Y, \mathcal{L}_XZ]$.

Proof. Immediate from the properties of the Lie bracket.

3 Lie groups

We now have a short digression on Lie groups. Lie groups are manifolds with a group structure. They have an extraordinary amount of symmetry, since multiplication by any element of the group induces a diffeomorphism of the Lie group, and this action of the Lie group on itself is free and transitive. Effectively, this means that any two points on the Lie group, as a manifold, are "the same".

As a consequence, a lot of the study of a Lie group reduces to studying an infinitesimal neighbourhood of the identity, which in turn tells us about infinitesimal neighbourhoods of all points on the manifold. This is known as the Lie algebra.

We are not going to go deep into the theory of Lie groups, as our main focus is on differential geometry. However, we will state a few key results about Lie groups.

Definition (Lie group). A Lie group is a manifold $G$ with a group structure such that multiplication $m : G \times G \to G$ and inversion $i : G \to G$ are smooth maps.

Example. $\operatorname{GL}_n(\mathbb{R})$ and $\operatorname{GL}_n(\mathbb{C})$ are Lie groups.

Example. $M_n(\mathbb{R})$ under addition is also a Lie group.

Example. $\operatorname{O}(n)$ is a Lie group.

Notation. Let $G$ be a Lie group and $g \in G$. We write $L_g : G \to G$ for the diffeomorphism $L_g(h) = gh$.

This innocent-seeming translation map is what makes Lie groups nice. Given any local information near an element $g$, we can transfer it to local information near $h$ by applying the diffeomorphism $L_{hg^{-1}}$. In particular, the diffeomorphism $L_g : G \to G$ induces a linear isomorphism $DL_g|_e : T_eG \to T_gG$, so we have a canonical identification of all the tangent spaces.

Definition (Left invariant vector field). Let $X \in \operatorname{Vect}(G)$ be a vector field. It is left invariant if
$$DL_g|_h(X_h) = X_{gh}$$
for all $g, h \in G$. We write $\operatorname{Vect}^L(G)$ for the collection of all left invariant vector fields.

Using the fact that for a diffeomorphism $F$, we have $F^*[X, Y] = [F^*X, F^*Y]$, it follows that $\operatorname{Vect}^L(G)$ is a Lie subalgebra of $\operatorname{Vect}(G)$.

If we have a left invariant vector field, then we obtain a tangent vector at the identity. On the other hand, if we have a tangent vector at the identity, the definition of a left invariant vector field tells us how to extend it to a left invariant vector field. One would expect this to give an isomorphism between $T_eG$ and $\operatorname{Vect}^L(G)$, but we have to be slightly more careful and check that the induced vector field is indeed a vector field.

Lemma. Given $\xi \in T_eG$, we let
$$X_\xi|_g = DL_g|_e(\xi) \in T_gG.$$
Then the map $T_eG \to \operatorname{Vect}^L(G)$ given by $\xi \mapsto X_\xi$ is an isomorphism of vector spaces.

Proof. The inverse is given by $X \mapsto X|_e$. The only thing to check is that $X_\xi$ actually is a left invariant vector field. The left invariance follows from
$$DL_h|_g(X_\xi|_g) = DL_h|_g(DL_g|_e(\xi)) = DL_{hg}|_e(\xi) = X_\xi|_{hg}.$$
To check that $X_\xi$ is smooth, suppose $f \in C^\infty(U, \mathbb{R})$, where $U$ is open and contains $e$. We let $\gamma : (-\varepsilon, \varepsilon) \to U$ be smooth with $\dot{\gamma}(0) = \xi$. Then
$$X_\xi f|_g = DL_g(\xi)(f) = \xi(f \circ L_g) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}(f \circ L_g \circ \gamma).$$
But as $(t, g) \mapsto f \circ L_g \circ \gamma(t)$ is smooth, it follows that $X_\xi f$ is smooth. So $X_\xi \in \operatorname{Vect}^L(G)$.

Thus, instead of talking about $\operatorname{Vect}^L(G)$, we talk about $T_eG$, because it seems less scary. This isomorphism gives $T_eG$ the structure of a Lie algebra.

Definition (Lie algebra of a Lie group). Let $G$ be a Lie group. The Lie algebra $\mathfrak{g}$ of $G$ is the Lie algebra $T_eG$ whose Lie bracket is induced by the isomorphism with $\operatorname{Vect}^L(G)$, so that
$$[\xi, \eta] = [X_\xi, X_\eta]|_e.$$
We also write $\operatorname{Lie}(G)$ for $\mathfrak{g}$.

In general, if a Lie group is written in some capital letter, say $G$, then its Lie algebra is written in the same letter but in lower-case fraktur. Note that $\dim\mathfrak{g} = \dim G$ is finite.

Lemma. Let $G$ be an abelian Lie group. Then the bracket of $\mathfrak{g}$ vanishes.

Example. For any vector space $V$ and $v \in V$, we have $T_vV \cong V$. So $V$ as a Lie group has Lie algebra $V$ itself. The bracket vanishes because the group is commutative.

Example. Note that $G = \operatorname{GL}_n(\mathbb{R})$ is an open subset of $M_n$, so it is a manifold. It is then a Lie group under multiplication. We have
$$\mathfrak{gl}_n(\mathbb{R}) = \operatorname{Lie}(\operatorname{GL}_n(\mathbb{R})) = T_I\operatorname{GL}_n(\mathbb{R}) = T_IM_n \cong M_n.$$
If $A, B \in \operatorname{GL}_n(\mathbb{R})$, then $L_A(B) = AB$, so
$$DL_A|_B(H) = AH,$$
as $L_A$ is linear. We claim that under the identification, if $\xi, \eta \in \mathfrak{gl}_n(\mathbb{R}) = M_n$, then
$$[\xi, \eta] = \xi\eta - \eta\xi.$$
Indeed, on $G$ we have global coordinates $U^i_j : \operatorname{GL}_n(\mathbb{R}) \to \mathbb{R}$, where $U^i_j(A) = A^i_j$ for $A = (A^i_j) \in \operatorname{GL}_n(\mathbb{R})$. Under this chart, we have
$$X_\xi|_A = \sum_{i,j}(A\xi)^i_j\left.\frac{\partial}{\partial U^i_j}\right|_A = \sum_{i,j,k}A^i_k\xi^k_j\left.\frac{\partial}{\partial U^i_j}\right|_A.$$
So we have
$$X_\xi = \sum_{i,j,k}U^i_k\xi^k_j\,\frac{\partial}{\partial U^i_j},$$
and hence
$$[X_\xi, X_\eta] = \left[\sum_{i,j,k}U^i_k\xi^k_j\,\frac{\partial}{\partial U^i_j},\ \sum_{p,q,r}U^p_q\eta^q_r\,\frac{\partial}{\partial U^p_r}\right].$$
We now use the fact that
$$\frac{\partial}{\partial U^i_j}U^p_q = \delta^{ip}\delta_{jq}.$$
Expanding, we get
$$[X_\xi, X_\eta] = \sum_{i,j,k,r}\left(U^i_j\xi^j_k\eta^k_r - U^i_j\eta^j_k\xi^k_r\right)\frac{\partial}{\partial U^i_r}.$$
So we have
$$[X_\xi, X_\eta] = X_{\xi\eta - \eta\xi}.$$

Definition (Lie group homomorphism). Let $G, H$ be Lie groups. A Lie group homomorphism is a smooth map that is also a group homomorphism.

Definition (Lie algebra homomorphism). Let $\mathfrak{g}, \mathfrak{h}$ be Lie algebras. A Lie algebra homomorphism is a linear map $\beta : \mathfrak{g} \to \mathfrak{h}$ such that
$$\beta[\xi, \eta] = [\beta(\xi), \beta(\eta)]$$
for all $\xi, \eta \in \mathfrak{g}$.

Proposition. Let $G$ be a Lie group and $\xi \in \mathfrak{g}$. Then the integral curve $\gamma$ for $X_\xi$ through $e \in G$ exists for all time, and $\gamma : \mathbb{R} \to G$ is a Lie group homomorphism.

The idea is that once we have a small integral curve, we can use the Lie group structure to copy the curve and patch together a long integral curve.

Proof. Let $\gamma : I \to G$ be a maximal integral curve of $X_\xi$, say $(-\varepsilon, \varepsilon) \subseteq I$. We fix a $t_0$ with $|t_0| < \varepsilon$. Consider $g_0 = \gamma(t_0)$, and let
$$\tilde{\gamma}(t) = L_{g_0}(\gamma(t))$$
for $|t| < \varepsilon$. We claim that $\tilde{\gamma}$ is an integral curve of $X_\xi$ with $\tilde{\gamma}(0) = g_0$. Indeed, we have
$$\dot{\tilde{\gamma}}|_t = \frac{\mathrm{d}}{\mathrm{d}t}L_{g_0}\gamma(t) = DL_{g_0}\dot{\gamma}(t) = DL_{g_0}X_\xi|_{\gamma(t)} = X_\xi|_{g_0\cdot\gamma(t)} = X_\xi|_{\tilde{\gamma}(t)}.$$
By patching these together, we know $(t_0 - \varepsilon, t_0 + \varepsilon) \subseteq I$. Since we have a fixed $\varepsilon$ that works for all $t_0$, it follows that $I = \mathbb{R}$.

The fact that this is a Lie group homomorphism follows from general properties of flow maps.

Example. Let $G = \operatorname{GL}_n$. If $\xi \in \mathfrak{gl}_n$, we set
$$e^\xi = \sum_{k\geq 0}\frac{1}{k!}\xi^k.$$
We set $F(t) = e^{t\xi}$. We observe that this is in $\operatorname{GL}_n$, since $e^{t\xi}$ has inverse $e^{-t\xi}$ (alternatively, $\det(e^{t\xi}) = e^{\operatorname{tr}(t\xi)} \neq 0$). Then
$$F'(t) = \frac{\mathrm{d}}{\mathrm{d}t}\sum_k\frac{1}{k!}t^k\xi^k = e^{t\xi}\xi = L_{e^{t\xi}}\xi = L_{F(t)}\xi.$$
Also, $F(0) = I$. So $F(t)$ is an integral curve.

Definition (Exponential map). The exponential map of a Lie group $G$ is $\exp : \mathfrak{g} \to G$ given by
$$\exp(\xi) = \gamma_\xi(1),$$
where $\gamma_\xi$ is the integral curve of $X_\xi$ through $e \in G$.

So in the case of $G = \operatorname{GL}_n$, the exponential map is the matrix exponential.
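A numerical illustration (mine, not from the notes) using scipy's matrix exponential: $F(t) = e^{t\xi}$ satisfies the one-parameter group property, and a finite difference recovers $F'(0) = \xi$.

```python
import numpy as np
from scipy.linalg import expm

xi = np.random.default_rng(0).standard_normal((3, 3))  # an element of gl_3

s, t, h = 0.3, 0.5, 1e-6
F = lambda u: expm(u * xi)

print(np.allclose(F(s) @ F(t), F(s + t)))             # True: one-parameter subgroup
print(np.allclose((F(h) - F(0)) / h, xi, atol=1e-4))  # True: F'(0) = xi
```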
Proposition.
(i) $\exp$ is a smooth map.
(ii) If $F(t) = \exp(t\xi)$, then $F : \mathbb{R} \to G$ is a Lie group homomorphism and $DF|_0\left(\frac{\mathrm{d}}{\mathrm{d}t}\right) = \xi$.
(iii) The derivative
$$D\exp|_0 : T_0\mathfrak{g} \cong \mathfrak{g} \to T_eG \cong \mathfrak{g}$$
is the identity map.
(iv) $\exp$ is a local diffeomorphism around $0 \in \mathfrak{g}$, i.e. there exists an open $U \subseteq \mathfrak{g}$ containing $0$ such that $\exp : U \to \exp(U)$ is a diffeomorphism.
(v) $\exp$ is natural, i.e. if $f : G \to H$ is a Lie group homomorphism, then the diagram
$$\begin{array}{ccc}\mathfrak{g} & \xrightarrow{\ Df|_e\ } & \mathfrak{h}\\ \downarrow{\scriptstyle\exp} & & \downarrow{\scriptstyle\exp}\\ G & \xrightarrow{\ f\ } & H\end{array}$$
commutes.

Proof.
(i) This is the smoothness of ODEs with respect to parameters.
(ii) Exercise.
(iii) If $\xi \in \mathfrak{g}$, we let $\sigma(t) = t\xi$. So $\dot{\sigma}(0) = \xi \in T_0\mathfrak{g} \cong \mathfrak{g}$, and
$$D\exp|_0(\xi) = D\exp|_0(\dot{\sigma}(0)) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\exp(\sigma(t)) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\exp(t\xi) = X_\xi|_e = \xi.$$
(iv) Follows from (iii) by the inverse function theorem.
(v) Exercise.

Definition (Lie subgroup). A Lie subgroup of $G$ is a subgroup $H$ with a smooth structure on $H$ making $H$ an immersed submanifold.

Certainly, if $H \subseteq G$ is a Lie subgroup, then $\mathfrak{h} \subseteq \mathfrak{g}$ is a Lie subalgebra.

Theorem. If $\mathfrak{h} \subseteq \mathfrak{g}$ is a subalgebra, then there exists a unique connected Lie subgroup $H \subseteq G$ such that $\operatorname{Lie}(H) = \mathfrak{h}$.

Theorem. Let $\mathfrak{g}$ be a finite-dimensional Lie algebra. Then there exists a (unique) simply connected Lie group $G$ with Lie algebra $\mathfrak{g}$.

Theorem. Let $G, H$ be Lie groups with $G$ simply connected. Then every Lie algebra homomorphism $\mathfrak{g} \to \mathfrak{h}$ lifts to a Lie group homomorphism $G \to H$.

4 Vector bundles

Recall that we had the tangent bundle of a manifold. The tangent bundle gives us a vector space at each point in space, namely the tangent space. In general, a vector bundle is a vector space attached to each point of our manifold (in a smoothly varying way), which is what we are going to study in this chapter.

Before we start, we have a look at tensor products. These will provide us a way of constructing new vector spaces from old ones.

4.1 Tensors

The tensor product is a very important concept in Linear Algebra. It is something that is taught in no undergraduate courses and assumed knowledge in all graduate courses. For the benefit of the students, we will give a brief introduction to tensor products.

A motivation for tensors comes from the study of bilinear maps. A bilinear map is a function that takes in two vectors and returns a number, and is linear in both variables. One example is the inner product, and another is the volume form, which tells us the volume of the parallelepiped spanned by two vectors.

Definition (Bilinear map). Let $U, V, W$ be vector spaces. We define $\operatorname{Bilin}(V \times W, U)$ to be the set of functions $V \times W \to U$ that are bilinear, i.e.
$$\alpha(\lambda_1v_1 + \lambda_2v_2, w) = \lambda_1\alpha(v_1, w) + \lambda_2\alpha(v_2, w),$$
$$\alpha(v, \lambda_1w_1 + \lambda_2w_2) = \lambda_1\alpha(v, w_1) + \lambda_2\alpha(v, w_2).$$

It is important that a bilinear map is not a linear map. This is bad. We spent so much time studying linear maps, and we now have to go back to our linear algebra book and rewrite everything to talk about bilinear maps as well. But bilinear maps are not enough. We want to do them for multi-linear maps! But linear maps were already complicated enough, so this must be much worse. We want to die.

Tensors are a trick to turn the study of bilinear maps into the study of linear maps (from a different space).

Definition (Tensor product). A tensor product of two vector spaces $V, W$ is a vector space $V \otimes W$ and a bilinear map $\pi : V \times W \to V \otimes W$ such that a bilinear map from $V \times W$ is "the same as" a linear map from $V \otimes W$. More precisely, given any bilinear map $\alpha : V \times W \to U$, we can find a unique linear map $\tilde{\alpha} : V \otimes W \to U$ such that $\tilde{\alpha} \circ \pi = \alpha$, i.e. the diagram
$$\begin{array}{ccc}V \times W & \xrightarrow{\ \pi\ } & V \otimes W\\ & {\scriptstyle\alpha}\searrow & \downarrow{\scriptstyle\tilde{\alpha}}\\ & & U\end{array}$$
commutes. So we have
$$\operatorname{Bilin}(V \times W, U) \cong \operatorname{Hom}(V \otimes W, U).$$
Given $v \in V$ and $w \in W$, we obtain $\pi(v, w) \in V \otimes W$, called the tensor product of $v$ and $w$, written $v \otimes w$.

We say $V \otimes W$ represents bilinear maps from $V \times W$. It is important to note that not all elements of $V \otimes W$ are of the form $v \otimes w$.

Now the key thing we want to prove is the existence and uniqueness of tensor products.

Lemma. Tensor products exist (and are unique up to isomorphism) for all pairs of finite-dimensional vector spaces.

Proof. We can construct $V \otimes W = \operatorname{Bilin}(V \times W, \mathbb{R})^*$. The verification is left as an exercise on the example sheet.

We now write down some basic properties of tensor products.

Proposition. Given maps $f : V \to W$ and $g : V' \to W'$, we obtain a map $f \otimes g : V \otimes V' \to W \otimes W'$, given by the bilinear map
$$(f \otimes g)(v, w) = f(v) \otimes g(w).$$

Lemma. Given $v, v_i \in V$, $w, w_i \in W$ and $\lambda_i \in \mathbb{R}$, we have
$$(\lambda_1v_1 + \lambda_2v_2) \otimes w = \lambda_1(v_1 \otimes w) + \lambda_2(v_2 \otimes w),$$
$$v \otimes (\lambda_1w_1 + \lambda_2w_2) = \lambda_1(v \otimes w_1) + \lambda_2(v \otimes w_2).$$

Proof. Immediate from the definition of a bilinear map.

Lemma. If $v_1, \cdots, v_n$ is a basis for $V$ and $w_1, \cdots, w_m$ is a basis for $W$, then
$$\{v_i \otimes w_j : i = 1, \cdots, n;\ j = 1, \cdots, m\}$$
is a basis for $V \otimes W$. In particular, $\dim(V \otimes W) = \dim V \times \dim W$.

Proof. We have $V \otimes W = \operatorname{Bilin}(V \times W, \mathbb{R})^*$. We let $\alpha^{pq} : V \times W \to \mathbb{R}$ be given by
$$\alpha^{pq}\left(\sum a_iv_i, \sum b_jw_j\right) = a_pb_q.$$
Then $\alpha^{pq} \in \operatorname{Bilin}(V \times W, \mathbb{R})$, and the $v_i \otimes w_j$ are dual to the $\alpha^{pq}$. So it suffices to show that the $\alpha^{pq}$ form a basis. It is clear that they are independent, and any bilinear map can be written as
$$\alpha = \sum c_{pq}\alpha^{pq}, \quad\text{where}\quad c_{pq} = \alpha(v_p, w_q).$$
So done.

Proposition. For any vector spaces $V, W, U$, we have (natural) isomorphisms
(i) $V \otimes W \cong W \otimes V$;
(ii) $(V \otimes W) \otimes U \cong V \otimes (W \otimes U)$;
(iii) $(V \otimes W)^* \cong V^* \otimes W^*$.

Definition (Covariant tensor). A covariant tensor of rank $k$ on $V$ is an element of
$$\underbrace{V^* \otimes \cdots \otimes V^*}_{k\text{ times}},$$
i.e. $\alpha$ is a multilinear map $V \times \cdots \times V \to \mathbb{R}$.

Example. A covariant 1-tensor is an $\alpha \in V^*$, i.e. a linear map $\alpha : V \to \mathbb{R}$. A covariant 2-tensor is a $\beta \in V^* \otimes V^*$, i.e. a bilinear map $V \times V \to \mathbb{R}$, e.g. an inner product.

Example. If $\alpha, \beta \in V^*$, then $\alpha \otimes \beta \in V^* \otimes V^*$ is the covariant 2-tensor given by
$$(\alpha \otimes \beta)(v, w) = \alpha(v)\beta(w).$$
More generally, if $\alpha$ is a rank $k$ tensor and $\beta$ is a rank $\ell$ tensor, then $\alpha \otimes \beta$ is a rank $k + \ell$ tensor.

Definition (Tensor). A tensor of type $(k, \ell)$ is an element of
$$T^k_\ell(V) = \underbrace{V^* \otimes \cdots \otimes V^*}_{k\text{ times}} \otimes \underbrace{V \otimes \cdots \otimes V}_{\ell\text{ times}}.$$

We are interested in alternating bilinear maps, i.e. maps with $\alpha(v, w) = -\alpha(w, v)$, or equivalently $\alpha(v, v) = 0$ (if the characteristic is not 2).

Definition (Exterior product). Consider the tensor algebra
$$T(V) = \bigoplus_{k\geq 0}V^{\otimes k}$$
(with $V^{\otimes 0} = \mathbb{R}$), with multiplication given by the tensor product. We let $I(V)$ be the ideal (as algebras!) generated by $\{v \otimes v : v \in V\}$. We define
$$\Lambda(V) = T(V)/I(V),$$
with a projection map $\pi : T(V) \to \Lambda(V)$. This is known as the exterior algebra. We let
$$\Lambda^k(V) = \pi(V^{\otimes k}),$$
the $k$-th exterior product of $V$. We write $\alpha \wedge \beta$ for $\pi(\alpha \otimes \beta)$.

The idea is that $\Lambda^pV$ is the dual of the space of alternating multilinear maps $V \times \cdots \times V \to \mathbb{R}$.

Lemma.
(i) If $\alpha \in \Lambda^pV$ and $\beta \in \Lambda^qV$, then $\alpha \wedge \beta = (-1)^{pq}\beta \wedge \alpha$.
(ii) If $\dim V = n$, then $\dim\Lambda^0V = 1$, $\dim\Lambda^nV = 1$, and $\Lambda^pV = \{0\}$ for $p > n$.
(iii) The multilinear map $\det : V \times \cdots \times V \to \mathbb{R}$ spans $\Lambda^nV$.
(iv) If $v_1, \cdots, v_n$ is a basis for $V$, then
$$\{v_{i_1} \wedge \cdots \wedge v_{i_p} : i_1 < \cdots < i_p\}$$
is a basis for $\Lambda^pV$.

Proof.
(i) We clearly have $v \wedge v = 0$, so
$$v \wedge w = -w \wedge v.$$
Then
$$(v_1 \wedge \cdots \wedge v_p) \wedge (w_1 \wedge \cdots \wedge w_q) = (-1)^{pq}\,w_1 \wedge \cdots \wedge w_q \wedge v_1 \wedge \cdots \wedge v_p,$$
since we have $pq$ swaps. Since
$$\{v_{i_1} \wedge \cdots \wedge v_{i_p} : i_1, \cdots, i_p \in \{1, \cdots, n\}\} \subseteq \Lambda^pV$$
spans $\Lambda^pV$ (by the corresponding result for tensor products), the result follows from linearity.

(ii) Exercise.

(iii) The $\det$ map is non-zero. So the result follows from the above.

(iv) We know that
$$\{v_{i_1} \wedge \cdots \wedge v_{i_p} : i_1, \cdots, i_p \in \{1, \cdots, n\}\} \subseteq \Lambda^pV$$
spans, but these elements are not independent, since there is a lot of redundancy (e.g. $v_1 \wedge v_2 = -v_2 \wedge v_1$). By requiring $i_1 < \cdots < i_p$, we obtain a unique copy for each combination.

To check independence, we write $I = (i_1, \cdots, i_p)$ and let $v_I = v_{i_1} \wedge \cdots \wedge v_{i_p}$. Suppose
$$\sum_I a_Iv_I = 0$$
for some $a_I \in \mathbb{R}$. For each $I$, we let $J$ be the multi-index $J = \{1, \cdots, n\} \setminus I$. So if $I' \neq I$, then $v_{I'} \wedge v_J = 0$. So wedging with $v_J$ gives
$$\sum_{I'}a_{I'}v_{I'} \wedge v_J = a_Iv_I \wedge v_J = 0.$$
So $a_I = 0$. So done, by (ii).

If $F : V \to W$ is a linear map, then we get an induced linear map $\Lambda^pF : \Lambda^pV \to \Lambda^pW$ in the obvious way, making the following diagram commute:
$$\begin{array}{ccc}V^{\otimes p} & \xrightarrow{\ \pi\ } & \Lambda^pV\\ \downarrow{\scriptstyle F^{\otimes p}} & & \downarrow{\scriptstyle\Lambda^pF}\\ W^{\otimes p} & \xrightarrow{\ \pi\ } & \Lambda^pW\end{array}$$
More concretely, we have
$$\Lambda^pF(v_1 \wedge \cdots \wedge v_p) = F(v_1) \wedge \cdots \wedge F(v_p).$$

Lemma. Let $F : V \to V$ be a linear map. Then $\Lambda^nF : \Lambda^nV \to \Lambda^nV$ is multiplication by $\det F$.

Proof. Let $v_1, \cdots, v_n$ be a basis. Then $\Lambda^nV$ is spanned by $v_1 \wedge \cdots \wedge v_n$. So we have
$$(\Lambda^nF)(v_1 \wedge \cdots \wedge v_n) = \lambda\,v_1 \wedge \cdots \wedge v_n$$
for some $\lambda$. Write
$$F(v_i) = \sum_j A_{ji}v_j$$
for some $A_{ji} \in \mathbb{R}$, i.e. $A$ is the matrix representation of $F$. Then we have
$$(\Lambda^nF)(v_1 \wedge \cdots \wedge v_n) = \left(\sum_j A_{j1}v_j\right) \wedge \cdots \wedge \left(\sum_j A_{jn}v_j\right).$$
If we expand the product on the right, a lot of things die. The only terms that survive are those in which each of the $v_i$ appears exactly once in the wedge, in some order. So this becomes
$$\sum_{\sigma\in S_n}\varepsilon(\sigma)\left(A_{\sigma(1),1}\cdots A_{\sigma(n),n}\right)v_1 \wedge \cdots \wedge v_n = \det(F)\,v_1 \wedge \cdots \wedge v_n,$$
where $\varepsilon(\sigma)$ is the sign of the permutation, which comes from rearranging the $v_i$ into the right order.
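The expansion in this proof is exactly the Leibniz permutation-sum formula for the determinant. The following numpy sketch (mine, not from the notes) evaluates that sum directly and compares it with `np.linalg.det`:

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    """Sign of a permutation, counted via inversions."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

A = np.random.default_rng(1).standard_normal((4, 4))

# Leibniz sum over S_n, as in the wedge expansion above.
leibniz = sum(sign(s) * np.prod([A[s[i], i] for i in range(4)])
              for s in permutations(range(4)))
print(np.isclose(leibniz, np.linalg.det(A)))  # True
```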
4.2 Vector bundles

Our aim is to consider spaces $T_pM \otimes T_pM, \ldots, \Lambda^rT_pM$ etc. as $p$ varies, i.e. to construct a "tensor bundle" for these tensor products, similar to how we constructed the tangent bundle. Thus, we need a general notion of vector bundle.

Definition (Vector bundle). A vector bundle of rank $r$ on $M$ is a smooth manifold $E$ with a smooth map $\pi : E \to M$ such that

(i) For each $p \in M$, the fiber $\pi^{-1}(p) = E_p$ is an $r$-dimensional vector space.
(ii) For all $p \in M$, there is an open $U \subseteq M$ containing $p$ and a diffeomorphism
$$t : E|_U = \pi^{-1}(U) \to U \times \mathbb{R}^r$$
such that the diagram
$$\begin{array}{ccc}E|_U & \xrightarrow{\ t\ } & U \times \mathbb{R}^r\\ {\scriptstyle\pi}\searrow & & \swarrow{\scriptstyle p_1}\\ & U & \end{array}$$
commutes, and the induced map $E_q \to \{q\} \times \mathbb{R}^r$ is a linear isomorphism for all $q \in U$.

We call $t$ a trivialization of $E$ over $U$; call $E$ the total space; call $M$ the base space; and call $\pi$ the projection. Also, for each $q \in M$, the vector space $E_q = \pi^{-1}(\{q\})$ is called the fiber over $q$.

Note that the vector space structure on $E_p$ is part of the data of a vector bundle.

Alternatively, $t$ can be given by a collection of smooth maps $s_1, \cdots, s_r : U \to E$ with the property that for each $q \in U$, the vectors $s_1(q), \cdots, s_r(q)$ form a basis for $E_q$. Indeed, given such $s_1, \cdots, s_r$, we can define $t$ by
$$t(v_q) = (q, \alpha_1, \cdots, \alpha_r),$$
where $v_q \in E_q$ and the $\alpha_i$ are chosen such that
$$v_q = \sum_{i=1}^r\alpha_is_i(q).$$
The $s_1, \cdots, s_r$ are known as a frame for $E$ over $U$.

Example (Tangent bundle). The bundle $TM \to M$ is a vector bundle. Given any point $p$, find some coordinate chart around $p$ with coordinates $x_1, \cdots, x_n$. Then we get a frame $\frac{\partial}{\partial x_i}$, giving trivializations of $TM$ over $U$. So $TM$ is a vector bundle.

Definition (Section). A (smooth) section of a vector bundle $E \to M$ over some open $U \subseteq M$ is a smooth map $s : U \to E$ such that $s(p) \in E_p$ for all $p \in U$, that is, $\pi \circ s = \operatorname{id}$. We write $C^\infty(U, E)$ for the set of smooth sections of $E$ over $U$.

Example. $\operatorname{Vect}(M) = C^\infty(M, TM)$.

Definition (Transition function). Suppose that $t_\alpha : E|_{U_\alpha} \to U_\alpha \times \mathbb{R}^r$ and $t_\beta : E|_{U_\beta} \to U_\beta \times \mathbb{R}^r$ are trivializations of $E$. Then
$$t_\alpha \circ t_\beta^{-1} : (U_\alpha \cap U_\beta) \times \mathbb{R}^r \to (U_\alpha \cap U_\beta) \times \mathbb{R}^r$$
is fiberwise linear, i.e.
$$t_\alpha \circ t_\beta^{-1}(q, v) = (q, \varphi_{\alpha\beta}(q)v),$$
where $\varphi_{\alpha\beta}(q) \in \operatorname{GL}_r(\mathbb{R})$. In fact, $\varphi_{\alpha\beta} : U_\alpha \cap U_\beta \to \operatorname{GL}_r(\mathbb{R})$ is smooth.
The map $\varphi_{\alpha\beta}$ is known as the transition function from $\beta$ to $\alpha$.

Proposition. We have the following equalities whenever everything is defined:
(i) $\varphi_{\alpha\alpha} = \operatorname{id}$;
(ii) $\varphi_{\alpha\beta} = \varphi_{\beta\alpha}^{-1}$;
(iii) $\varphi_{\alpha\beta}\varphi_{\beta\gamma} = \varphi_{\alpha\gamma}$, where $\varphi_{\alpha\beta}\varphi_{\beta\gamma}$ is pointwise matrix multiplication.

These are known as the cocycle conditions.

We now consider general constructions that allow us to construct new vector bundles from old ones.

Proposition (Vector bundle construction). Suppose that for each $p \in M$, we have a vector space $E_p$. We set
$$E = \coprod_p E_p,$$
and let $\pi : E \to M$ be given by $\pi(v_p) = p$ for $v_p \in E_p$. Suppose there is an open cover $\{U_\alpha\}$ of $M$ such that for each $\alpha$, we have maps
$$t_\alpha : E|_{U_\alpha} = \pi^{-1}(U_\alpha) \to U_\alpha \times \mathbb{R}^r$$
over $U_\alpha$ that induce fiberwise linear isomorphisms. Suppose the transition functions $\varphi_{\alpha\beta}$ are smooth. Then there exists a unique smooth structure on $E$ making $\pi : E \to M$ a vector bundle such that the $t_\alpha$ are trivializations of $E$.

Proof. The same as in the case of the tangent bundle.

In particular, we can use this to perform the following constructions:

Definition (Direct sum of vector bundles). Let $E, \tilde{E}$ be vector bundles on $M$. Suppose $t_\alpha : E|_{U_\alpha} \cong U_\alpha \times \mathbb{R}^r$ is a trivialization of $E$ over $U_\alpha$, and $\tilde{t}_\alpha : \tilde{E}|_{U_\alpha} \cong U_\alpha \times \mathbb{R}^{\tilde{r}}$ is a trivialization of $\tilde{E}$ over $U_\alpha$. We let $\varphi_{\alpha\beta}$ be the transition functions of $\{t_\alpha\}$ and $\tilde{\varphi}_{\alpha\beta}$ the transition functions of $\{\tilde{t}_\alpha\}$. Define
$$E \oplus \tilde{E} = \coprod_p E_p \oplus \tilde{E}_p,$$
and let
$$T_\alpha : (E \oplus \tilde{E})|_{U_\alpha} = E|_{U_\alpha} \oplus \tilde{E}|_{U_\alpha} \to U_\alpha \times (\mathbb{R}^r \oplus \mathbb{R}^{\tilde{r}}) = U_\alpha \times \mathbb{R}^{r+\tilde{r}}$$
be the fiberwise direct sum of the two trivializations. Then $T_\alpha$ clearly gives a linear isomorphism $(E \oplus \tilde{E})_p \cong \mathbb{R}^{r+\tilde{r}}$, and the transition functions
$$T_\alpha \circ T_\beta^{-1} = \varphi_{\alpha\beta} \oplus \tilde{\varphi}_{\alpha\beta}$$
are clearly smooth. So this makes $E \oplus \tilde{E}$ into a vector bundle.

In terms of frames, if $\{s_1, \cdots, s_r\}$ is a frame for $E$ and $\{\tilde{s}_1, \cdots, \tilde{s}_{\tilde{r}}\}$ is a frame for $\tilde{E}$ over some $U \subseteq M$, then
$$\{s_i \oplus 0,\ 0 \oplus \tilde{s}_j : i = 1, \cdots, r;\ j = 1, \cdots, \tilde{r}\}$$
is a frame for $E \oplus \tilde{E}$.

Definition (Tensor product of vector bundles). Given two vector bundles $E, \tilde{E}$ over $M$, we can construct $E \otimes \tilde{E}$ similarly, with fibers $(E \otimes \tilde{E})|_p = E|_p \otimes \tilde{E}|_p$.

Similarly, we can construct the alternating product of vector bundles, $\Lambda^nE$. Finally, we have the dual vector bundle.

Definition (Dual vector bundle). Given a vector bundle $E \to M$, we define the dual vector bundle by
$$E^* = \coprod_{p\in M}(E_p)^*.$$
Suppose again that $t_\alpha : E|_{U_\alpha} \to U_\alpha \times \mathbb{R}^r$ is a local trivialization. Taking the dual of this map gives
$$t_\alpha^* : U_\alpha \times (\mathbb{R}^r)^* \to E|_{U_\alpha}^*,$$
since taking the dual reverses the direction of the map. We pick an isomorphism $(\mathbb{R}^r)^* \to \mathbb{R}^r$ once and for all, and then reverse the above isomorphism to get a map
$$E|_{U_\alpha}^* \to U_\alpha \times \mathbb{R}^r.$$
This gives a local trivialization.

If $\{s_1, \cdots, s_r\}$ is a frame for $E$ over $U$, then $\{s_1^*, \cdots, s_r^*\}$ is a frame for $E^*$ over $U$, where $\{s_1^*(p), \cdots, s_r^*(p)\}$ is the dual basis to $\{s_1(p), \cdots, s_r(p)\}$.

Definition (Cotangent bundle). The cotangent bundle of a manifold $M$ is
$$T^*M = (TM)^*.$$
In local coordinate charts, we have a frame $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}$ of $TM$ over $U$. The dual frame is written as $\mathrm{d}x_1, \cdots, \mathrm{d}x_n$. In other words, we have
$$\mathrm{d}x_i|_p \in (T_pM)^* \quad\text{and}\quad \mathrm{d}x_i|_p\left(\left.\frac{\partial}{\partial x_j}\right|_p\right) = \delta_{ij}.$$
Recall that previously, given a function $f \in C^\infty(U, \mathbb{R})$, we defined $\mathrm{d}f$ as the differential of $f$, given by
$$\mathrm{d}f|_p = Df|_p : T_pM \to T_{f(p)}\mathbb{R} \cong \mathbb{R}.$$
Thinking of $x_i$ as a function on a coordinate chart $U$, we have
$$Dx_i|_p\left(\left.\frac{\partial}{\partial x_j}\right|_p\right) = \frac{\partial}{\partial x_j}(x_i) = \delta_{ij}$$
for all $i, j$. So the two definitions of $\mathrm{d}x_i$ agree.

We can now take powers of this to get more interesting things.

Definition ($p$-form). A $p$-form on a manifold $M$ over $U$ is a smooth section of $\Lambda^pT^*M$, i.e. an element of $C^\infty(U, \Lambda^pT^*M)$.

Example. A 1-form is a section of $T^*M$. It is locally of the form
$$\alpha_1\,\mathrm{d}x_1 + \cdots + \alpha_n\,\mathrm{d}x_n$$
for some smooth functions $\alpha_1, \cdots, \alpha_n$. Similarly, if $\omega$ is a $p$-form, then locally it is of the form
$$\omega = \sum_I\omega_I\,\mathrm{d}x^I,$$
where $I = (i_1, \cdots, i_p)$ with $i_1 < \cdots < i_p$, and $\mathrm{d}x^I = \mathrm{d}x_{i_1} \wedge \cdots \wedge \mathrm{d}x_{i_p}$. It is important to note that these representations only work locally.

Definition (Tensors on manifolds). Let $M$ be a manifold. We define
$$T^k_\ell M = \underbrace{T^*M \otimes \cdots \otimes T^*M}_{k\text{ times}} \otimes \underbrace{TM \otimes \cdots \otimes TM}_{\ell\text{ times}}.$$
A tensor of type $(k, \ell)$ is an element of $C^\infty(M, T^k_\ell M)$. The convention when $k = \ell = 0$ is to set $T^0_0M = M \times \mathbb{R}$.

In local coordinates, we can write a $(k, \ell)$ tensor $\omega$ as
$$\omega = \sum\alpha^{j_1,\ldots,j_k}_{i_1,\ldots,i_\ell}\,\mathrm{d}x^{j_1} \otimes \cdots \otimes \mathrm{d}x^{j_k} \otimes \frac{\partial}{\partial x^{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{i_\ell}},$$
where the $\alpha$ are smooth functions.

Example. A tensor of type $(0, 1)$ is a vector field; a tensor of type $(1, 0)$ is a 1-form; a tensor of type $(0, 0)$ is a real-valued function.

Definition (Riemannian metric). A Riemannian metric on $M$ is a $(2, 0)$-tensor $g$ such that for all $p$, the bilinear map $g_p : T_pM \times T_pM \to \mathbb{R}$ is symmetric and positive definite, i.e. an inner product. Given such a $g$ and $v_p \in T_pM$, we write $|v_p| = \sqrt{g_p(v_p, v_p)}$.

Using these, we can work with things like length:

Definition (Length of curve). Let $\gamma : I \to M$ be a curve. The length of $\gamma$ is
$$\ell(\gamma) = \int_I|\dot{\gamma}(t)|\,\mathrm{d}t.$$
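For instance (my own sketch, with the Euclidean metric on $\mathbb{R}^2$), the length of the circle $\gamma(t) = (r\cos t, r\sin t)$, $t \in [0, 2\pi]$, computed numerically as $\int_I|\dot{\gamma}(t)|\,\mathrm{d}t$:

```python
import numpy as np

r = 2.0
t = np.linspace(0.0, 2*np.pi, 10_001)
gamma = np.stack([r*np.cos(t), r*np.sin(t)])   # curve in R^2

speed = np.linalg.norm(np.gradient(gamma, t, axis=1), axis=0)  # |gamma'(t)|
print(np.trapz(speed, t), 2*np.pi*r)           # both ~ 12.566
```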
Finally, we will talk about morphisms between vector bundles.

Definition (Vector bundle morphism). Let $E \to M$ and $E' \to M'$ be vector bundles. A bundle morphism from $E$ to $E'$ is a pair of smooth maps $(F : E \to E',\ f : M \to M')$ such that the diagram
$$\begin{array}{ccc}E & \xrightarrow{\ F\ } & E'\\ \downarrow & & \downarrow\\ M & \xrightarrow{\ f\ } & M'\end{array}$$
commutes, i.e. such that $F_p : E_p \to E'_{f(p)}$ is linear for each $p$.

Example. Let $E = TM$ and $E' = TM'$. If $f : M \to M'$ is smooth, then $(Df, f)$ is a bundle morphism.

Definition (Bundle morphism over $M$). Given two bundles $E, E'$ over the same base $M$, a bundle morphism over $M$ is a bundle morphism $E \to E'$ of the form $(F, \operatorname{id}_M)$.

Example. Given a Riemannian metric $g$, we get a bundle morphism $TM \to T^*M$ over $M$ by
$$v \mapsto F(v) = g(v, -).$$
Since $v \mapsto g(v, -)$ is an isomorphism on each fiber, we have a canonical bundle isomorphism $TM \cong T^*M$.

Note that the isomorphism between $TM$ and $T^*M$ requires the existence of a Riemannian metric.

5 Differential forms and de Rham cohomology

5.1 Differential forms

We are now going to restrict our focus to a special kind of tensors, known as differential forms. Recall that in $\mathbb{R}^n$ (as a vector space), an alternating $n$-linear map tells us the signed volume of the parallelepiped spanned by $n$ vectors. In general, a differential $p$-form is an alternating $p$-linear map on the tangent space at each point, so it tells us the volume of an "infinitesimal $p$-dimensional parallelepiped". In fact, we will later see that on an (oriented) $p$-dimensional manifold, we can integrate a $p$-form on the manifold to obtain the "volume" of the manifold.

Definition (Differential form). We write
$$\Omega^p(M) = C^\infty(M, \Lambda^pT^*M) = \{p\text{-forms on }M\}.$$
An element of $\Omega^p(M)$ is known as a differential $p$-form. In particular, we have
$$\Omega^0(M) = C^\infty(M, \mathbb{R}).$$
In local coordinates $x_1, \cdots, x_n$ on $U$, we can write $\omega \in \Omega^p(M)$ as
$$\omega = \sum_{i_1<\cdots<i_p}\omega_{i_1,\ldots,i_p}\,\mathrm{d}x_{i_1} \wedge \cdots \wedge \mathrm{d}x_{i_p}$$
for some smooth functions $\omega_{i_1,\ldots,i_p}$. We are usually lazy and just write
$$\omega = \sum_I\omega_I\,\mathrm{d}x^I.$$

Example. A 0-form is a smooth function.

Example. A 1-form is a section of $T^*M$. If $\omega \in \Omega^1(M)$ and $X \in \operatorname{Vect}(M)$, then $\omega(X) \in C^\infty(M, \mathbb{R})$. For example, if $f$ is a smooth function on $M$, then $\mathrm{d}f \in \Omega^1(M)$, with
$$\mathrm{d}f(X) = X(f)$$
for all $X \in \operatorname{Vect}(M)$. Locally, we can write
$$\mathrm{d}f = \sum_{i=1}^na_i\,\mathrm{d}x_i.$$
To work out what the $a_i$ are, we just hit this with $\frac{\partial}{\partial x_j}$:
$$a_j = \mathrm{d}f\left(\frac{\partial}{\partial x_j}\right) = \frac{\partial f}{\partial x_j}.$$
So we have
$$\mathrm{d}f = \sum_{i=1}^n\frac{\partial f}{\partial x_i}\,\mathrm{d}x_i.$$
This is essentially just the gradient of a function!

Example. If $\dim M = n$ and $\omega \in \Omega^n(M)$, then locally we can write
$$\omega = g\,\mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_n$$
for some smooth function $g$. This is an alternating form that assigns a real number to $n$ tangent vectors. So it measures volume! If $y_1, \cdots, y_n$ are any other coordinates, then
$$\mathrm{d}x_i = \sum_j\frac{\partial x_i}{\partial y_j}\,\mathrm{d}y_j,$$
so we have
$$\omega = g\det\left(\frac{\partial x_i}{\partial y_j}\right)_{i,j}\,\mathrm{d}y_1 \wedge \cdots \wedge \mathrm{d}y_n.$$

Now a motivating question is this: given an $\omega \in \Omega^1(M)$, can we find some $f \in \Omega^0(M)$ such that $\omega = \mathrm{d}f$? More concretely, let $U \subseteq \mathbb{R}^2$ be open, with coordinates $x, y$. Let
$$\omega = a\,\mathrm{d}x + b\,\mathrm{d}y.$$
If we have $\omega = \mathrm{d}f$ for some $f$, then
$$a = \frac{\partial f}{\partial x}, \qquad b = \frac{\partial f}{\partial y}.$$
So the symmetry of partial derivatives tells us that
$$\frac{\partial a}{\partial y} = \frac{\partial b}{\partial x}. \tag{$*$}$$
So equation $(*)$ is a necessary condition for solving $\omega = \mathrm{d}f$. Is it sufficient?

To begin with, we want to find a better way to express $(*)$ without resorting to local coordinates, and it turns out this construction will be very useful later on.

Theorem (Exterior derivative). There exists a unique linear map
$$\mathrm{d} = \mathrm{d}_{M,p} : \Omega^p(M) \to \Omega^{p+1}(M)$$
such that

(i) On $\Omega^0(M)$, $\mathrm{d}$ is as previously defined, i.e. $\mathrm{d}f(X) = X(f)$ for all $X \in \operatorname{Vect}(M)$.
(ii) We have
$$\mathrm{d} \circ \mathrm{d} = 0 : \Omega^p(M) \to \Omega^{p+2}(M).$$
(iii) It satisfies the Leibniz rule
$$\mathrm{d}(\omega \wedge \sigma) = \mathrm{d}\omega \wedge \sigma + (-1)^p\omega \wedge \mathrm{d}\sigma.$$

It follows from these assumptions that

(iv) $\mathrm{d}$ acts locally, i.e. if $\omega, \omega' \in \Omega^p(M)$ satisfy $\omega|_U = \omega'|_U$ for some $U \subseteq M$ open, then $\mathrm{d}\omega|_U = \mathrm{d}\omega'|_U$.
(v) We have
$$\mathrm{d}(\omega|_U) = (\mathrm{d}\omega)|_U$$
for all $U \subseteq M$.

What do the three rules tell us? The first rule tells us this is a generalization of what we previously had. The second rule will turn out to be a fancy way of saying that partial derivatives commute. The final, Leibniz rule tells us this $\mathrm{d}$ is some sort of derivative.

Example. If we have
$$\omega = a\,\mathrm{d}x + b\,\mathrm{d}y,$$
then we have
$$\mathrm{d}\omega = \mathrm{d}a \wedge \mathrm{d}x + a\,\mathrm{d}(\mathrm{d}x) + \mathrm{d}b \wedge \mathrm{d}y + b\,\mathrm{d}(\mathrm{d}y) = \mathrm{d}a \wedge \mathrm{d}x + \mathrm{d}b \wedge \mathrm{d}y = \left(\frac{\partial a}{\partial x}\mathrm{d}x + \frac{\partial a}{\partial y}\mathrm{d}y\right) \wedge \mathrm{d}x + \left(\frac{\partial b}{\partial x}\mathrm{d}x + \frac{\partial b}{\partial y}\mathrm{d}y\right) \wedge \mathrm{d}y = \left(\frac{\partial b}{\partial x} - \frac{\partial a}{\partial y}\right)\mathrm{d}x \wedge \mathrm{d}y.$$
So the condition $(*)$ says $\mathrm{d}\omega = 0$.
(M ) σ = g dxJ ∈ Ωq(M ). We then have d(ω ∧ σ) = d(f g dxI ∧ dxJ ) = d(f g) ∧ dxI ∧ dxJ = g df ∧ dxI ∧ dxJ + f dg ∧ dxI ∧ dxJ = g df ∧ dxI ∧ dxJ + f (−1)p dxI ∧ (dg ∧ dxJ ) = (dω) ∧ σ + (−1)pω ∧ dσ. So done. Finally, for (ii), if f ∈ Ω0(M ), then d2f = d i ∂f ∂xi dxi = ∂2f ∂xi∂xj i,j dxj ∧ dxi = 0, since partial derivatives commute. Then for general forms, we have d2ω = d2 ωI dxI dωI ∧ dxI dωI ∧ dxi1 ∧ · · · ∧ dxip = d = d = 0 using Leibniz rule. So this works. Certainly this has the extra properties. To claim uniqueness, if ∂ : Ωp(M ) → Ωp+1(M ) satisfies the above properties, then ∂ω = ∂ ωI dxI = = ∂ωI ∧ dxI + ωI ∧ ∂dxI dωI ∧ dxI, using the fact that ∂ = d on Ω0(M ) and induction. Finally, if M is covered by charts, we can define d : Ωp(M ) → Ωp+1(M ) by defining it to be the d above on any single chart. Then uniqueness implies this is well-defined. This gives existence of d, but doesn’t immediately give uniqueness, since we only proved local uniqueness. 47 5 Differential forms and de Rham cohomology III Differential Geometry So suppose ∂ : Ωp(M ) → Ωp+1(M ) again satisfies the three properties. We claim that ∂ is local. We let ω, ω ∈ |
Ωp(M ) be such that ω|U = ω|U for some U ⊆ M open. Let x ∈ U, and pick a bump function χ ∈ C∞(M ) such that χ ≡ 1 on some neighbourhood W of x, and supp(χ) ⊆ U. Then we have χ · (ω − ω) = 0. We then apply ∂ to get 0 = ∂(χ · (ω − ω)) = dχ ∧ (ω − ω) + χ(∂ω − ∂ω). But χ ≡ 1 on W. So dχ vanishes on W. So we must have ∂ω|W − ∂ω|W = 0. So ∂ω = ∂ω on W. Finally, to show that ∂ = d, if ω ∈ Ωp(M ), we take the same χ as before, and then on x, we have ∂ω = ∂ χ ωI dxI = ∂χ ωI dxI + χ ∂ωI ∧ dxI = χ dωI ∧ dxI = dω. So we get uniqueness. Since x was arbitrary, we have ∂ = d. One useful example of a differential form is a symplectic form. Definition (Non-degenerate form). A 2-form ω ∈ Ω2(M ) is non-degenerate if ω(Xp, Xp) = 0 implies Xp = 0. As in the case of an inner product, such an ω gives us an isomorphism TpM → T ∗ p M by α(Xp)(Yp) = ω(Xp, Yp). Definition (Symplectic form). A symplectic form is a non-degenerate 2-form ω such that dω = 0. Why did we work with covectors rather than vectors when defining differential forms? It happens that differential forms have nicer properties. If we have some F ∈ C∞(M, N ) and g ∈ Ω0(N ) = C∞(N, R), then we can form the pull |
back F ∗g = g ◦ F ∈ Ω0(M ). More generally, for x ∈ M, we have a map DF |x : TxM → TF (x)N. This does not allow us to pushforward a vector field on M to a vector field of N, as the map F might not be injective. However, we can use its dual (DF |x)∗ : T ∗ F (x)N → T ∗ x M to pull forms back. 48 5 Differential forms and de Rham cohomology III Differential Geometry Definition (Pullback of differential form). Let ω ∈ Ωp(N ) and F ∈ C∞(M, N ). We define the pullback of ω along F to be F ∗ω|x = Λp(DF |x)∗(ω|F (x)). In other words, for v1, · · ·, vp ∈ TxM, we have (F ∗ω|x)(v1, · · ·, vp) = ω|F (x)(DF |x(v1), · · ·, DF |x(vp)). Lemma. Let F ∈ C∞(M, N ). Let F ∗ be the associated pullback map. Then (i) F ∗ is a linear map Ωp(N ) → Ωp(M ). (ii) F ∗(ω ∧ σ) = F ∗ω ∧ F ∗σ. (iii) If G ∈ C∞(N, P ), then (G ◦ F )∗ = F ∗ ◦ G∗. (iv) We have dF ∗ = F ∗d. Proof. All but (iv) are clear. We first check that this holds for 0 forms. If g ∈ Ω0(N ), then we have (F ∗dg)|x(v) = dg|F (x)(DF |x(v)) = DF |x(v)(g) = v(g ◦ F ) = d(g ◦ F )(v) = d(F ∗g |
)(v). So we are done. Then the general result follows from (i) and (ii). Indeed, in local coordinates y1, · · ·, yn, if then we have Then we have ω = ωi1,...,ip dyi1 ∧ · · · ∧ dyip, F ∗ω = (F ∗ωi1,...,ip )(F ∗dyi1 ∧ · · · ∧ dyip ). dF ∗ω = F ∗dω = (F ∗dωi1,...,ip )(F ∗dyi1 ∧ · · · ∧ dyip ). 5.2 De Rham cohomology We now get to answer our original motivating question — given an ω ∈ Ωp(M ) with dω = 0, does it follow that there is some σ ∈ Ωp−1(M ) such that ω = dσ? The answer is “not necessarily”. In fact, the extent to which this fails tells us something interesting about the topology of the manifold. We are going to define certain vector spaces H p dR(M ) for each p, such that this vanishes if and only if all p forms ω with dω = 0 are of the form dθ. Afterwards, we will come up with techniques to compute this H p dR(M ), and then we can show that certain spaces have vanishing H p dR(M ). We start with some definitions. 49 5 Differential forms and de Rham cohomology III Differential Geometry Definition (Closed form). A p-form ω ∈ Ωp(M ) is closed if dω = 0. Definition (Exact form). A p-form ω ∈ Ωp(M ) is exact if there is some σ ∈ Ωp−1(M ) such that ω = dσ. We know that every exact form is closed. However, in general, not every closed form is exact. The extent to which this fails is given by the de Rham cohomology. Definition (de Rham cohomology). The pth de Rham cohomology is given by the R-vector |
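A concrete sanity check (mine, not from the notes) of the pullback on a top form: pulling $\mathrm{d}x \wedge \mathrm{d}y$ back along the polar-coordinate map $F(r, \theta) = (r\cos\theta, r\sin\theta)$ gives $r\,\mathrm{d}r \wedge \mathrm{d}\theta$, via the Jacobian determinant, as in the change-of-coordinates formula for $n$-forms above:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# F(r, theta) = (r cos theta, r sin theta)
F = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])

# For a top form, F*(dx ^ dy) = det(DF) dr ^ dtheta.
J = F.jacobian([r, th])
print(sp.simplify(J.det()))  # r
```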
space H p dR(M ) = ker d : Ωp(M ) → Ωp+1(M ) im d : Ωp−1(M ) → Ωp(M ) = closed forms exact forms. In particular, we have H 0 dR(M ) = ker d : Ω0(M ) → Ω1(M ). We could tautologically say that if dω = 0, then ω is exact iff it vanishes in H p dR(M ). But this is as useful as saying “Let S be the set of solutions to this differential equation. Then the differential equation has a solution iff S is non-empty”. So we want to study the properties of H p dR and find ways of computing them. Proposition. (i) Let M have k connected components. Then H 0 dR(M ) = Rk. (ii) If p > dim M, then H p dR(M ) = 0. (iii) If F ∈ C∞(M, N ), then this induces a map F ∗ : H p dR(N ) → H p dR(M ) given by F ∗[ω] = [F ∗ω]. (iv) (F ◦ G)∗ = G∗ ◦ F ∗. (v) If F : M → N is a diffeomorphism, then F ∗ : H p dR(N ) → H p dR(M ) is an isomorphism. Proof. (i) We have H 0 dR(M ) = {f ∈ C∞(M, R) : df = 0} = {locally constant functions f } = Rnumber of connected components. (ii) If p > dim M, then all p-forms are trivial. 50 5 Differential forms and de Rham cohomology III Differential Geometry (iii) We first show that F ∗ω indeed represents some member of H p dR(M ). Let [ω] ∈ H p dR(N ). Then dω = 0. So d(F ∗ω) = F ∗(dω) = 0. dR(M ). |
To see that it is well-defined, note that if $[\omega] = [\omega']$, then $\omega' - \omega = \mathrm{d}\sigma$ for some $\sigma$. So $F^*\omega' - F^*\omega = \mathrm{d}(F^*\sigma)$, and hence $[F^*\omega'] = [F^*\omega]$.
(iv) Follows from the corresponding fact for the pullback of differential forms.
(v) If $F^{-1}$ is an inverse to $F$, then $(F^{-1})^*$ is an inverse to $F^*$ by the above.

It turns out that de Rham cohomology satisfies a stronger property: it is homotopy invariant. To make sense of that, we need to define what it means to be homotopy invariant.

Definition (Smooth homotopy). Let $F_0, F_1 : M \to N$ be smooth maps. A smooth homotopy from $F_0$ to $F_1$ is a smooth map $F : [0,1] \times M \to N$ such that
\[ F_0(x) = F(0, x), \qquad F_1(x) = F(1, x). \]
If such a map exists, we say $F_0$ and $F_1$ are homotopic.

Note that here $F$ is defined on $[0,1] \times M$, which is not a manifold. So we need to be slightly annoying and say that $F$ is smooth if it can be extended to a smooth function $I \times M \to N$ for some open $I \supseteq [0,1]$.

We can now state what it means for de Rham cohomology to be homotopy invariant.

Theorem (Homotopy invariance). Let $F_0, F_1$ be homotopic maps. Then
\[ F_0^* = F_1^* : H^p_{\mathrm{dR}}(N) \to H^p_{\mathrm{dR}}(M). \]

Proof. Let $F : [0,1] \times M \to N$ be the homotopy, and $F_t(x) = F(t, x)$. We denote the exterior derivative on $M$ by $\mathrm{d}_M$ (and similarly $\mathrm{d}_N$), and that on $[0,1] \times M$ by $\mathrm{d}$. Let $\omega \in \Omega^p(N)$ be such that $\mathrm{d}_N \omega = 0$. We let $t$ be the coordinate on $[0,1]$.
We write
\[ F^*\omega = \sigma + \mathrm{d}t \wedge \gamma, \]
where $\sigma = \sigma(t) \in \Omega^p(M)$ and $\gamma = \gamma(t) \in \Omega^{p-1}(M)$. We claim that
\[ \sigma(t) = F_t^*\omega. \]
Indeed, we let $\iota : \{t\} \times M \to [0,1] \times M$ be the inclusion. Then we have
\[ F_t^*\omega|_{\{t\} \times M} = (F \circ \iota)^*\omega = \iota^* F^*\omega = \iota^*(\sigma + \mathrm{d}t \wedge \gamma) = \iota^*\sigma + \iota^*\mathrm{d}t \wedge \iota^*\gamma = \iota^*\sigma, \]
using the fact that $\iota^*\mathrm{d}t = 0$.

As $\mathrm{d}_N \omega = 0$, we have
\[ 0 = F^*\mathrm{d}_N\omega = \mathrm{d}F^*\omega = \mathrm{d}(\sigma + \mathrm{d}t \wedge \gamma) = \mathrm{d}_M\sigma + (-1)^p \frac{\partial \sigma}{\partial t} \wedge \mathrm{d}t - \mathrm{d}t \wedge \mathrm{d}_M\gamma = \mathrm{d}_M\sigma + (-1)^p \frac{\partial \sigma}{\partial t} \wedge \mathrm{d}t + (-1)^{p-1}\mathrm{d}_M\gamma \wedge \mathrm{d}t. \]
Looking at the $\mathrm{d}t$ components, we must have
\[ \frac{\partial \sigma}{\partial t} = \mathrm{d}_M\gamma. \]
So we have
\[ \sigma(1) - \sigma(0) = \int_0^1 \frac{\partial \sigma}{\partial t}\, \mathrm{d}t = \int_0^1 \mathrm{d}_M\gamma\, \mathrm{d}t = \mathrm{d}_M \int_0^1 \gamma(t)\, \mathrm{d}t. \]
So we know that
\[ [F_1^*\omega] = [F_0^*\omega], \]
and we are done.
Example. Suppose $U \subseteq \mathbb{R}^n$ is an open star-shaped subset, i.e. there is some $x_0 \in U$ such that for any $x \in U$ and $t \in [0,1]$, we have $tx + (1-t)x_0 \in U$. (Picture: a region $U$ with straight lines joining $x_0$ to every point $x \in U$.)

We define $F_t : U \to U$ by
\[ F_t(x) = tx + (1-t)x_0. \]
Then $F$ is a smooth homotopy from $F_0$, the constant map with value $x_0$, to $F_1$, the identity map. Clearly $F_1^*$ is the identity map, and $F_0^*$ is the zero map on $H^p_{\mathrm{dR}}(U)$ for all $p \geq 1$. So we have $H^p_{\mathrm{dR}}(U) = 0$ for all $p \geq 1$.

Corollary (Poincaré lemma). Let $U \subseteq \mathbb{R}^n$ be open and star-shaped. Suppose $\omega \in \Omega^p(U)$ with $p \geq 1$ is such that $\mathrm{d}\omega = 0$. Then there is some $\sigma \in \Omega^{p-1}(U)$ such that $\omega = \mathrm{d}\sigma$.

Proof. $H^p_{\mathrm{dR}}(U) = 0$ for $p \geq 1$.
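The proof of homotopy invariance is secretly constructive. For the contraction $F_t(x) = tx$ of a star-shaped $U$ about $0$, the identity $\sigma(1) - \sigma(0) = \mathrm{d}_M \int_0^1 \gamma(t)\,\mathrm{d}t$ specializes, for a closed 1-form $\omega = P\,\mathrm{d}x + Q\,\mathrm{d}y$ on $\mathbb{R}^2$, to the explicit primitive
\[ f(x, y) = \int_0^1 \big( x\,P(tx, ty) + y\,Q(tx, ty) \big)\, \mathrm{d}t. \]
A sketch verifying this on a made-up closed form (assuming sympy):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# A closed 1-form omega = P dx + Q dy on R^2 (star-shaped about 0)
P = 2*x*y
Q = x**2 + 3*y**2
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0     # d omega = 0

# Primitive from the homotopy F_t(x) = t x:
#   f(x, y) = int_0^1 [ x P(tx, ty) + y Q(tx, ty) ] dt
f = sp.integrate(x*P.subs({x: t*x, y: t*y}, simultaneous=True)
                 + y*Q.subs({x: t*x, y: t*y}, simultaneous=True),
                 (t, 0, 1))

print(sp.simplify(sp.diff(f, x) - P),
      sp.simplify(sp.diff(f, y) - Q))                      # 0 0, so df = omega
```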
More generally, we have the following notion.

Definition (Smooth homotopy equivalence). We say two manifolds $M, N$ are smoothly homotopy equivalent if there are smooth maps $F : M \to N$ and $G : N \to M$ such that both $F \circ G$ and $G \circ F$ are homotopic to the identity.

Corollary. If $M$ and $N$ are smoothly homotopy equivalent, then $H^p_{\mathrm{dR}}(M) \cong H^p_{\mathrm{dR}}(N)$.

Note that by approximation, it can be shown that if $M$ and $N$ are homotopy equivalent as topological spaces (i.e. the same definition where we drop the word "smooth"), then they are in fact smoothly homotopy equivalent. So the de Rham cohomology depends only on the homotopy type of the underlying topological space.

5.3 Homological algebra and the Mayer–Vietoris theorem

The main theorem we will have for computing de Rham cohomology is the Mayer–Vietoris theorem. Proving it involves quite a lot of setting up and hard work. In particular, we need to define some notions from homological algebra even to state the Mayer–Vietoris theorem. The actual proof is divided into two parts. The first part is a purely algebraic result known as the snake lemma, and the second part is a differential-geometric part that verifies the hypotheses of the snake lemma. We will not prove the snake lemma, whose proof can be found in standard algebraic topology texts (perhaps with arrows the wrong way round).

We start with some definitions.

Definition (Cochain complex and exact sequence). A sequence of vector spaces and linear maps
\[ \cdots \longrightarrow V^{p-1} \xrightarrow{\ \mathrm{d}^{p-1}\ } V^p \xrightarrow{\ \mathrm{d}^p\ } V^{p+1} \longrightarrow \cdots \]
is a cochain complex if $\mathrm{d}^p \circ \mathrm{d}^{p-1} = 0$ for all $p \in \mathbb{Z}$. Usually we have $V^p = 0$ for $p < 0$ and we do not write those terms out. Keeping these negative-degree $V^p$ around, rather than throwing them away completely, helps us state our theorems more nicely, so that we don't have to treat $V^0$ as a special case. The complex is exact at $p$ if $\ker \mathrm{d}^p = \operatorname{im} \mathrm{d}^{p-1}$, and exact if it is exact at every $p$.

There are, of course, chain complexes as well, but we will not need them for this course.

Example. The de Rham complex
\[ \Omega^0(M) \xrightarrow{\ \mathrm{d}\ } \Omega^1(M) \xrightarrow{\ \mathrm{d}\ } \Omega^2(M) \longrightarrow \cdots \]
is a cochain complex, as $\mathrm{d}^2 = 0$. It is exact at $p$ iff $H^p_{\mathrm{dR}}(M) = \{0\}$.

Example. If we have an exact sequence with $\dim V^p < \infty$ for all $p$, and $V^p = 0$ for all but finitely many $p$, then
\[ \sum_p (-1)^p \dim V^p = 0. \]

Definition (Cohomology). Let
\[ V^\bullet = \cdots \longrightarrow V^{p-1} \xrightarrow{\ \mathrm{d}^{p-1}\ } V^p \xrightarrow{\ \mathrm{d}^p\ } V^{p+1} \longrightarrow \cdots \]
be a cochain complex. The cohomology of $V^\bullet$ at $p$ is given by
\[ H^p(V^\bullet) = \frac{\ker \mathrm{d}^p}{\operatorname{im} \mathrm{d}^{p-1}}. \]

Example. The cohomology of the de Rham complex is the de Rham cohomology.

We can define maps between cochain complexes:

Definition (Cochain map). Let $V^\bullet$ and $W^\bullet$ be cochain complexes. A cochain map $V^\bullet \to W^\bullet$ is a collection of maps $f^p : V^p \to W^p$ such that
\[ f^{p+1} \circ \mathrm{d}^p = \mathrm{d}^p \circ f^p \]
for all $p$, i.e. the evident squares commute.
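Since all of this is linear algebra, for a finite-dimensional cochain complex the cohomology can be computed directly from ranks: $\dim H^p = \dim \ker \mathrm{d}^p - \operatorname{rank} \mathrm{d}^{p-1}$, with $\dim \ker \mathrm{d}^p$ given by rank–nullity. A toy sketch (assuming numpy; the matrices below are made up for illustration, subject only to $\mathrm{d} \circ \mathrm{d} = 0$):

```python
import numpy as np

# A toy cochain complex 0 -> R^2 -d0-> R^3 -d1-> R^2 -> 0
d0 = np.array([[1., -1.],
               [1., -1.],
               [1., -1.]])
d1 = np.array([[1., -1.,  0.],
               [0.,  1., -1.]])
assert np.allclose(d1 @ d0, 0)                 # d o d = 0

dims = [2, 3, 2]
ranks = [np.linalg.matrix_rank(d0), np.linalg.matrix_rank(d1)]
for p in range(3):
    r_out = ranks[p] if p < 2 else 0           # rank of d^p
    r_in = ranks[p - 1] if p > 0 else 0        # rank of d^{p-1}
    # dim H^p = dim ker d^p - dim im d^{p-1}, by rank-nullity
    print(f"dim H^{p} =", dims[p] - r_out - r_in)   # 1, 0, 0
```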
Proposition. A cochain map induces a well-defined homomorphism on the cohomology groups.

Definition (Short exact sequence). A short exact sequence is an exact sequence of the form
\[ 0 \longrightarrow V^1 \xrightarrow{\ \alpha\ } V^2 \xrightarrow{\ \beta\ } V^3 \longrightarrow 0. \]
This implies that $\alpha$ is injective, $\beta$ is surjective, and $\operatorname{im}(\alpha) = \ker(\beta)$. By the rank–nullity theorem, we know
\[ \dim V^2 = \operatorname{rank}(\beta) + \operatorname{null}(\beta) = \dim V^3 + \dim V^1. \]

We can now state the main technical lemma, which we shall not prove.

Theorem (Snake lemma). Suppose we have a short exact sequence of complexes
\[ 0 \longrightarrow A^\bullet \xrightarrow{\ i\ } B^\bullet \xrightarrow{\ q\ } C^\bullet \longrightarrow 0, \]
i.e. $i, q$ are cochain maps and we have a short exact sequence
\[ 0 \longrightarrow A^p \xrightarrow{\ i^p\ } B^p \xrightarrow{\ q^p\ } C^p \longrightarrow 0 \]
for each $p$. Then there are maps
\[ \delta : H^p(C^\bullet) \to H^{p+1}(A^\bullet) \]
such that there is a long exact sequence
\[ \cdots \to H^p(A^\bullet) \xrightarrow{\ i_*\ } H^p(B^\bullet) \xrightarrow{\ q_*\ } H^p(C^\bullet) \xrightarrow{\ \delta\ } H^{p+1}(A^\bullet) \xrightarrow{\ i_*\ } H^{p+1}(B^\bullet) \xrightarrow{\ q_*\ } H^{p+1}(C^\bullet) \to \cdots. \]

Using this, we can prove the Mayer–Vietoris theorem.

Theorem (Mayer–Vietoris theorem). Let $M$ be a manifold, and $M = U \cup V$, where $U, V$ are open. We denote the inclusion maps by
\[ i_1 : U \cap V \hookrightarrow U, \quad i_2 : U \cap V \hookrightarrow V, \quad j_1 : U \hookrightarrow M, \quad j_2 : V \hookrightarrow M. \]
Then there exists a natural linear map
\[ \delta : H^p_{\mathrm{dR}}(U \cap V) \to H^{p+1}_{\mathrm{dR}}(M) \]
such that the following sequence is exact:
\[ \cdots \to H^p_{\mathrm{dR}}(M) \xrightarrow{\ j_1^* \oplus j_2^*\ } H^p_{\mathrm{dR}}(U) \oplus H^p_{\mathrm{dR}}(V) \xrightarrow{\ i_1^* - i_2^*\ } H^p_{\mathrm{dR}}(U \cap V) \xrightarrow{\ \delta\ } H^{p+1}_{\mathrm{dR}}(M) \xrightarrow{\ j_1^* \oplus j_2^*\ } H^{p+1}_{\mathrm{dR}}(U) \oplus H^{p+1}_{\mathrm{dR}}(V) \to \cdots \]
Before we prove the theorem, we do a simple example.

Example. Consider $M = S^1$. We can cut the circle up into two overlapping arcs:
\[ S^1 = \{(x, y) : x^2 + y^2 = 1\}, \qquad U = S^1 \cap \{y > -\varepsilon\}, \qquad V = S^1 \cap \{y < \varepsilon\}. \]
(Picture: $U$ is slightly more than the upper semicircle, $V$ slightly more than the lower one, overlapping near $y = 0$.)

As $U, V$ are diffeomorphic to intervals, hence contractible, and $U \cap V$ is diffeomorphic to the disjoint union of two intervals, we know their de Rham cohomology. The Mayer–Vietoris sequence begins
\[ 0 \to H^0_{\mathrm{dR}}(S^1) \to H^0_{\mathrm{dR}}(U) \oplus H^0_{\mathrm{dR}}(V) \to H^0_{\mathrm{dR}}(U \cap V) \to H^1_{\mathrm{dR}}(S^1) \to H^1_{\mathrm{dR}}(U) \oplus H^1_{\mathrm{dR}}(V) \to \cdots \]
We can fill in the things we already know to get
\[ 0 \to \mathbb{R} \to \mathbb{R}^2 \to \mathbb{R}^2 \to H^1_{\mathrm{dR}}(S^1) \to 0. \]
Taking the alternating sum of the dimensions, we get
\[ 1 - 2 + 2 - \dim H^1_{\mathrm{dR}}(S^1) = 0, \]
so $\dim H^1_{\mathrm{dR}}(S^1) = 1$, i.e.
\[ H^1_{\mathrm{dR}}(S^1) \cong \mathbb{R}. \]
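We can also see this concretely: identifying $H^0$'s with constant functions, a pair of constants $(a, b)$ on $U$ and $V$ restricts to each of the two components of $U \cap V$, where $i_1^* - i_2^*$ takes the value $a - b$ on both. So $H^1_{\mathrm{dR}}(S^1)$ is the cokernel of an explicit $2 \times 2$ matrix. A sanity-check sketch (assuming numpy):

```python
import numpy as np

# (a, b) on U and V restrict to the two components of U n V,
# where i1* - i2* takes the value a - b on each component
A = np.array([[1., -1.],
              [1., -1.]])

# exactness gives H^1(S^1) = coker(A), since H^1(U) + H^1(V) = 0
print("dim H^1(S^1) =", 2 - np.linalg.matrix_rank(A))   # 1
```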
Now we prove Mayer–Vietoris.

Proof of Mayer–Vietoris. By the snake lemma, it suffices to prove that the following sequence is exact for all $p$:
\[ 0 \longrightarrow \Omega^p(U \cup V) \xrightarrow{\ j_1^* \oplus j_2^*\ } \Omega^p(U) \oplus \Omega^p(V) \xrightarrow{\ i_1^* - i_2^*\ } \Omega^p(U \cap V) \longrightarrow 0. \]
It is clear that the two maps compose to $0$, and that the first map is injective. For exactness in the middle, note that if $(\omega_U, \omega_V)$ lies in the kernel of $i_1^* - i_2^*$, then $\omega_U$ and $\omega_V$ agree on $U \cap V$, so they glue to a form on $M$. It remains to show that $i_1^* - i_2^*$ is surjective.

Indeed, let $\{\varphi_U, \varphi_V\}$ be a partition of unity subordinate to $\{U, V\}$, and let $\omega \in \Omega^p(U \cap V)$. We set $\sigma_U \in \Omega^p(U)$ to be
\[ \sigma_U = \begin{cases} \varphi_V \omega & \text{on } U \cap V \\ 0 & \text{on } U \setminus \operatorname{supp} \varphi_V. \end{cases} \]
Similarly, we define $\sigma_V \in \Omega^p(V)$ by
\[ \sigma_V = \begin{cases} -\varphi_U \omega & \text{on } U \cap V \\ 0 & \text{on } V \setminus \operatorname{supp} \varphi_U. \end{cases} \]
Then we have
\[ i_1^*\sigma_U - i_2^*\sigma_V = (\varphi_V \omega + \varphi_U \omega)|_{U \cap V} = \omega. \]
So $i_1^* - i_2^*$ is surjective.

6 Integration

As promised, we will be able to integrate differential forms on manifolds. However, there is a slight catch. We said that differential forms give us the signed volume of an infinitesimal parallelepiped, and we can integrate these infinitesimal volumes up to get the whole volume of the manifold. However, there is no canonical choice of the sign of the volume, so we do not, in general, get a well-defined volume. In order to fix this issue, our manifold needs to have an orientation.

6.1 Orientation

We start with the notion of an orientation of a vector space. After we have one, we can define an orientation of a manifold to be a smooth choice of orientation for each tangent space.

Informally, an orientation on a vector space $V$ is a choice of a collection of ordered bases that we declare to be "oriented". If $(e_1, \ldots, e_n)$ is an oriented basis, then changing the sign of one of the $e_i$ changes orientation, while scaling by a positive multiple does not. Similarly, swapping two elements of the basis changes orientation.

To encode this information, we pick some alternating form $\omega \in \Lambda^n(V^*)$. We can then say a basis $e_1, \ldots, e_n$ is oriented if $\omega(e_1, \ldots, e_n)$ is positive.

Definition (Orientation of vector space). Let $V$ be a vector space with $\dim V = n$. An orientation is an equivalence class of non-zero elements $\omega \in \Lambda^n(V^*)$, where we say $\omega \sim \omega'$ iff $\omega' = \lambda \omega$ for some $\lambda > 0$. A basis $(e_1, \ldots, e_n)$ is oriented if $\omega(e_1, \ldots, e_n) > 0$.
By convention, if $V = \{0\}$, an orientation is just a choice of number in $\{\pm 1\}$.

Suppose we have picked an oriented basis $e_1, \ldots, e_n$. If we have any other basis $\tilde e_1, \ldots, \tilde e_n$, we write
\[ \tilde e_i = \sum_j B_{ij} e_j. \]
Then, since $\omega$ is multilinear and alternating, we have
\[ \omega(\tilde e_1, \ldots, \tilde e_n) = \det(B)\, \omega(e_1, \ldots, e_n). \]
So $\tilde e_1, \ldots, \tilde e_n$ is oriented iff $\det B > 0$.

We now generalize this to manifolds, where we try to orient the tangent bundle smoothly.

Definition (Orientation of a manifold). An orientation of a manifold $M$ is an equivalence class of nowhere-vanishing forms $\omega \in \Omega^n(M)$, under the equivalence relation $\omega \sim \omega'$ if there is some smooth $f : M \to \mathbb{R}_{>0}$ such that $\omega' = f\omega$.

Definition (Orientable manifold). A manifold is orientable if it has some orientation.

If $M$ is a connected, orientable manifold, then it has precisely two possible orientations.

Definition (Oriented manifold). An oriented manifold is a manifold with a choice of orientation.

Definition (Oriented coordinates). Let $M$ be an oriented manifold. We say coordinates $x_1, \ldots, x_n$ on a chart $U$ are oriented coordinates if
\[ \left.\frac{\partial}{\partial x_1}\right|_p, \ldots, \left.\frac{\partial}{\partial x_n}\right|_p \]
is an oriented basis of $T_pM$ for all $p \in U$.

Note that we can always find enough oriented coordinates. Given any connected chart, either the chart is oriented, or $(-x_1, x_2, \ldots, x_n)$ is oriented. So any oriented $M$ is covered by oriented charts. By the previous discussion, if $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are both oriented coordinates, then the transition maps for the tangent spaces all have positive determinant.

Example. $\mathbb{R}^n$ is always assumed to have the standard orientation given by $\mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_n$.
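As a toy illustration of the determinant criterion, take $\omega = \det$ as the standard orientation form on $\mathbb{R}^3$ (a sketch assuming numpy):

```python
import numpy as np

# omega = det, the standard orientation form on R^3
def omega(*vecs):
    return np.linalg.det(np.column_stack(vecs))

e1, e2, e3 = np.eye(3)
print(omega(e1, e2, e3) > 0)        # True:  the standard basis is oriented
print(omega(e2, e1, e3) > 0)        # False: swapping two vectors flips it
print(omega(3*e1, e2, e3) > 0)      # True:  positive scaling preserves it
```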
Definition (Orientation-preserving diffeomorphism). Let $M, N$ be oriented manifolds, and let $F \in C^\infty(M, N)$ be a diffeomorphism. We say $F$ preserves orientation if $DF|_p : T_pM \to T_{F(p)}N$ takes an oriented basis to an oriented basis. Alternatively, this says the pullback of the orientation on $N$ is the orientation on $M$ (up to equivalence).

6.2 Integration

The idea is that to define integration, we first understand how to integrate on $\mathbb{R}^n$, and then patch things up using partitions of unity. We are going to allow ourselves to integrate over rather general domains.

Definition (Domain of integration). Let $D \subseteq \mathbb{R}^n$. We say $D$ is a domain of integration if $D$ is bounded and $\partial D$ has measure zero.

Since $D$ can be an arbitrary subset, we define an $n$-form on $D$ to be some $\omega \in \Omega^n(U)$ for some open $U$ containing $D$.

Definition (Integration on $\mathbb{R}^n$). Let $D$ be a compact domain of integration, and let
\[ \omega = f\, \mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_n \]
be an $n$-form on $D$. Then we define
\[ \int_D \omega = \int_D f(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n. \]
In general, let $U \subseteq \mathbb{R}^n$ be open and let $\omega \in \Omega^n(U)$ have compact support. We define
\[ \int_U \omega = \int_D \omega \]
for some compact domain of integration $D \subseteq U$ containing $\operatorname{supp} \omega$.

Note that we do not directly say we integrate over $\operatorname{supp} \omega$, since $\operatorname{supp} \omega$ need not have a nice boundary.

Now if we want to integrate on a manifold, we need to patch things up, and to do so, we need to know how these integrals behave when we change coordinates.

Definition (Smooth function). Let $D \subseteq \mathbb{R}^n$ and $f : D \to \mathbb{R}^m$. We say $f$ is smooth if it is the restriction of some smooth function $\tilde f : U \to \mathbb{R}^m$, where $U \supseteq D$ is open.

Lemma. Let $F : D \to E$ be a smooth map between compact domains of integration in $\mathbb{R}^n$, and assume that $F|_{\mathring D} : \mathring D \to \mathring E$ is an orientation-preserving diffeomorphism. Then
\[ \int_E \omega = \int_D F^*\omega. \]
This is exactly what we want.
Proof. Suppose we have coordinates $x_1, \ldots, x_n$ on $D$ and $y_1, \ldots, y_n$ on $E$. Write
\[ \omega = f\, \mathrm{d}y_1 \wedge \cdots \wedge \mathrm{d}y_n. \]
Then we have
\[ \int_E \omega = \int_E f\, \mathrm{d}y_1 \cdots \mathrm{d}y_n = \int_D (f \circ F)\, |\det DF|\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_D (f \circ F) \det DF\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_D F^*\omega. \]
Here we used the usual change-of-variables formula, together with the fact that $|\det DF| = \det DF$ because $F$ is orientation-preserving.

We can now define integration over manifolds.

Definition (Integration on manifolds). Let $M$ be an oriented manifold and $\omega \in \Omega^n(M)$. Suppose that $\operatorname{supp}(\omega)$ is a compact subset of some oriented chart $(U, \varphi)$. We set
\[ \int_M \omega = \int_{\varphi(U)} (\varphi^{-1})^*\omega. \]
By the previous lemma, this does not depend on the oriented chart $(U, \varphi)$.

If $\omega \in \Omega^n(M)$ is a general form with compact support, we do the following: cover the support by finitely many oriented charts $\{U_\alpha\}_{\alpha = 1, \ldots, m}$. Let $\{\chi_\alpha\}$ be a partition of unity subordinate to $\{U_\alpha\}$. We then set
\[ \int_M \omega = \sum_\alpha \int_{U_\alpha} \chi_\alpha \omega. \]

Lemma. This is well-defined, i.e. it is independent of the cover and the partition of unity.

We will not bother to go through the technicalities of proving this properly. Note that it is possible to define this for non-smooth forms, or not-everywhere-defined forms, or forms with non-compact support, etc., but we will not do so here.

Theoretically, our definition is perfectly fine and easy to work with. However, it is absolutely useless for computations, and there is no hope of evaluating such an integral directly from the definition.
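To see the change-of-variables lemma in action, take the polar-coordinates map $F(r, \theta) = (r\cos\theta, r\sin\theta)$ from $D = [0, 1] \times [0, 2\pi]$ onto the closed unit disk $E$, and $\omega = (x^2 + y^2)\,\mathrm{d}x \wedge \mathrm{d}y$; both sides come out to $\pi/2$. A symbolic sketch (assuming sympy):

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta')

f = x**2 + y**2                                # omega = f dx ^ dy on the disk E

# F : D = [0,1] x [0,2pi] -> E, polar coordinates (orientation-preserving)
F = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])
detDF = sp.simplify(F.jacobian([r, th]).det()) # = r > 0 on the interior

lhs = sp.integrate(f, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2)), (x, -1, 1))
rhs = sp.integrate(sp.simplify(f.subs({x: F[0], y: F[1]}) * detDF),
                   (r, 0, 1), (th, 0, 2*sp.pi))
print(lhs, rhs)                                # pi/2 pi/2
```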
Now how would we normally integrate things? In IA Vector Calculus, we probably did something like this — if we want to integrate something over a sphere, we cut the sphere up into the northern and southern hemispheres. We have coordinates for each of the hemispheres, so we integrate each hemisphere separately and then add up the results. This is all well, except we have actually missed out the equator in this process. But that doesn't really matter, because the equator has measure zero and doesn't contribute to the integral.

We now try to formalize our above approach. The definition below is not standard:

Definition (Parametrization). Let $M$ be either an oriented manifold of dimension $n$, or a domain of integration in $\mathbb{R}^n$. By a parametrization of $M$ we mean a decomposition
\[ M = S_1 \cup \cdots \cup S_n, \]
with smooth maps $F_i : D_i \to S_i$, where each $D_i$ is a compact domain of integration, such that
(i) $F_i|_{\mathring D_i} : \mathring D_i \to \mathring S_i$ is an orientation-preserving diffeomorphism;
(ii) $\partial S_i$ has measure zero (if $M$ is a manifold, this means $\varphi(\partial S_i \cap U)$ has measure zero for all charts $(U, \varphi)$);
(iii) for $i \neq j$, $S_i$ intersects $S_j$ only in their common boundary.

Theorem. Given a parametrization $\{S_i\}$ of $M$ and an $\omega \in \Omega^n(M)$ with compact support, we have
\[ \int_M \omega = \sum_i \int_{D_i} F_i^*\omega. \]

Proof. By using partitions of unity, we may consider the case where $\omega$ has support in a single chart, and thus we may wlog assume we are working on $\mathbb{R}^n$; then the result is obvious.

There is a problem — all our lives, we have been integrating functions, not forms. If we have a function $f : \mathbb{R} \to \mathbb{R}$, then we can take the integral $\int f\, \mathrm{d}x$. Now of course, we are not actually integrating $f$; we are integrating the differential form $f\, \mathrm{d}x$. The reason it looks like we are integrating functions is that we have a background form $\mathrm{d}x$. So if we have a manifold $M$ with a "background" $n$-form $\omega \in \Omega^n(M)$, then we can integrate $f \in C^\infty(M, \mathbb{R})$ by
\[ \int_M f\omega. \]
In general, a manifold does not come with such a canonical background form. However, in some cases it does.

Lemma. Let $M$ be an oriented manifold of dimension $n$, and $g$ a Riemannian metric on $M$. Then there is a unique $\omega \in \Omega^n(M)$ such that for all $p$, if $e_1, \ldots, e_n$ is an oriented orthonormal basis of $T_pM$, then
\[ \omega(e_1, \ldots, e_n) = 1. \]
We call this the Riemannian volume form, written $\mathrm{d}V_g$.

Note that $\mathrm{d}V_g$ is a single piece of notation; it is not the exterior derivative of some mysterious object $V_g$.

Proof. Uniqueness is clear, since if $\omega'$ is another such form, then $\omega'_p = \lambda \omega_p$ for some $\lambda$, and evaluating on an orthonormal basis shows that $\lambda = 1$.

To see existence, let $\sigma$ be any nowhere-vanishing $n$-form giving the orientation of $M$. On a small set $U$, pick a frame $s_1, \ldots, s_n$ for $TM|_U$ and apply the Gram–Schmidt process to obtain an orthonormal frame $e_1, \ldots, e_n$, which we may wlog assume is oriented. Then we set
\[ f = \sigma(e_1, \ldots, e_n), \]
which is non-vanishing because $\sigma$ is nowhere vanishing. Then set
\[ \omega = \frac{\sigma}{f}. \]
This proves existence locally, and the local pieces patch together globally by uniqueness.
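In oriented local coordinates one always has $\mathrm{d}V_g = \sqrt{\det g}\; \mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_n$. For example, for the round metric $g = \mathrm{d}\theta^2 + \sin^2\theta\, \mathrm{d}\varphi^2$ on $S^2$, we get $\mathrm{d}V_g = \sin\theta\, \mathrm{d}\theta \wedge \mathrm{d}\varphi$ on the chart (where $\sin\theta > 0$), and integrating recovers the familiar area $4\pi$. A sketch (assuming sympy):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')

# Round metric on S^2 in spherical coordinates: g = d theta^2 + sin^2(theta) d phi^2
g = sp.Matrix([[1, 0],
               [0, sp.sin(th)**2]])

# dV_g = sqrt(det g) d theta ^ d phi; on the chart sin(theta) > 0, so:
dV = sp.sin(th)
assert sp.simplify(dV**2 - g.det()) == 0
# Check the defining property on the orthonormal frame e1 = d/d theta,
# e2 = (1/sin theta) d/d phi:  dV(e1, e2) = sin(theta) * (1/sin(theta)) = 1.

# Integrating dV_g over the chart recovers the area of the unit sphere
print(sp.integrate(dV, (th, 0, sp.pi), (ph, 0, 2*sp.pi)))   # 4*pi
```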
6.3 Stokes' theorem

Recall from, say, IA Vector Calculus that Stokes' theorem relates an integral on a manifold to an integral on its boundary. However, our manifolds do not have boundaries, so we can't even talk about Stokes' theorem! We therefore now want to define what it means to be a manifold with boundary.

Definition (Manifold with boundary). Let
\[ \mathbb{H}^n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_n \geq 0\}. \]
A chart-with-boundary on a set $M$ is a bijection $\varphi : U \to \varphi(U)$ for some $U \subseteq M$ such that $\varphi(U) \subseteq \mathbb{H}^n$ is open. Note that this image may or may not hit the boundary of $\mathbb{H}^n$; so a "normal" chart is also a chart-with-boundary.

An atlas-with-boundary on $M$ is a cover by charts-with-boundary $(U_\alpha, \varphi_\alpha)$ such that the transition maps
\[ \varphi_\beta \circ \varphi_\alpha^{-1} : \varphi_\alpha(U_\alpha \cap U_\beta) \to \varphi_\beta(U_\alpha \cap U_\beta) \]
are smooth (in the usual sense) for all $\alpha, \beta$.

A manifold-with-boundary is a set $M$ with an (equivalence class of an) atlas-with-boundary whose induced topology is Hausdorff and second-countable.

Note that a manifold with boundary is not a manifold, but a manifold is a manifold with boundary. We will often be lazy and drop the "with boundary" description.

Definition (Boundary point). If $M$ is a manifold with boundary and $p \in M$, then we say $p$ is a boundary point if $\varphi(p) \in \partial\mathbb{H}^n$ for some (hence any) chart-with-boundary $(U, \varphi)$ containing $p$. We let $\partial M$ be the set of boundary points, and $\operatorname{Int}(M) = M \setminus \partial M$.

Note that these are not the topological notions of boundary and interior.

Proposition. Let $M$ be a manifold with boundary. Then $\operatorname{Int}(M)$ and $\partial M$ are naturally manifolds, with
\[ \dim \partial M = \dim \operatorname{Int}(M) - 1. \]

Example. The closed ball $\overline{B_1(0)}$ is a manifold with boundary, whose interior is $B_1(0)$ and whose boundary is $S^{n-1}$.

Note that the product of manifolds with boundary is not in general a manifold with boundary. For example, the interval $[0, 1]$ is a manifold with boundary, but $[0, 1]^2$ has corners. This is bad. One can develop a theory of manifolds with corners, but that is more subtle; we will not talk about them.

Everything we did for manifolds can be done for manifolds with boundary, e.g. smooth functions, tangent spaces, tangent bundles, etc. Note in particular that the definition of the tangent space as derivations still works word-for-word.

Lemma. Let $p \in \partial M$, say $p \in U \subseteq M$ where $(U, \varphi)$ is a chart (with boundary). Then
\[ \left.\frac{\partial}{\partial x_1}\right|_p, \ldots, \left.\frac{\partial}{\partial x_n}\right|_p \]
is a basis for $T_pM$. In particular, $\dim T_pM = n$.
Proof. Since this is a local claim, it suffices to prove it for $M = \mathbb{H}^n$. We write $C^\infty(\mathbb{H}^n, \mathbb{R})$ for the functions $f : \mathbb{H}^n \to \mathbb{R}$ that extend smoothly to an open neighbourhood of $\mathbb{H}^n$. We fix $a \in \partial\mathbb{H}^n$. Then by definition, we have
\[ T_a\mathbb{H}^n = \operatorname{Der}_a(C^\infty(\mathbb{H}^n, \mathbb{R})). \]
We let $i_* : T_a\mathbb{H}^n \to T_a\mathbb{R}^n$ be given by
\[ i_*(X)(g) = X(g|_{\mathbb{H}^n}). \]
We claim that $i_*$ is an isomorphism. For injectivity, suppose $i_*(X) = 0$. If $f \in C^\infty(\mathbb{H}^n)$, then $f$ extends to a smooth $g$ on some neighbourhood $U$ of $\mathbb{H}^n$. Then
\[ X(f) = X(g|_{\mathbb{H}^n}) = i_*(X)(g) = 0. \]
So $X(f) = 0$ for all $f$, and hence $X = 0$. So $i_*$ is injective.

To see surjectivity, let $Y \in T_a\mathbb{R}^n$, and let $X \in T_a\mathbb{H}^n$ be defined by
\[ X(f) = Y(g), \]
where $g \in C^\infty(U, \mathbb{R})$ is any extension of $f$. To see this is well-defined, we write
\[ Y = \sum_{i=1}^n \alpha_i \left.\frac{\partial}{\partial x_i}\right|_a. \]
Then
\[ Y(g) = \sum_{i=1}^n \alpha_i \frac{\partial g}{\partial x_i}(a), \]
which depends only on $g|_{\mathbb{H}^n}$, i.e. on $f$. So $X$ is a well-defined element of $T_a\mathbb{H}^n$, and $i_*(X) = Y$ by construction. So we are done.

Now we want to see how orientations behave. We can define them in exactly the same way as for manifolds, and everything works. However, something interesting happens: if a manifold with boundary has an orientation, this naturally induces an orientation of the boundary.

Definition (Outward/inward pointing). Let $p \in \partial M$. We then have an inclusion $T_p\partial M \subseteq T_pM$.
If $X_p \in T_pM$, then in a chart we can write
\[ X_p = \sum_{i=1}^n a_i \frac{\partial}{\partial x_i}, \]
where $a_i \in \mathbb{R}$ and $\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_{n-1}}$ form a basis for $T_p\partial M$. We say $X_p$ is outward pointing if $a_n < 0$, and inward pointing if $a_n > 0$.

Definition (Induced orientation). Let $M$ be an oriented manifold with boundary. We say a basis $e_1, \ldots, e_{n-1}$ is an oriented basis for $T_p\partial M$ if
\[ (X_p, e_1, \ldots, e_{n-1}) \]
is an oriented basis for $T_pM$, where $X_p$ is any outward pointing element of $T_pM$. This orientation is known as the induced orientation.

It is an exercise to see that these notions are all well-defined and do not depend on the choices made.

Example. We have an isomorphism
\[ \partial\mathbb{H}^n \cong \mathbb{R}^{n-1}, \qquad (x_1, \ldots, x_{n-1}, 0) \mapsto (x_1, \ldots, x_{n-1}). \]
On $\partial\mathbb{H}^n$, the vector $-\frac{\partial}{\partial x_n}$ is outward pointing. So $x_1, \ldots, x_{n-1}$ is an oriented chart for $\partial\mathbb{H}^n$ iff
\[ -\frac{\partial}{\partial x_n}, \frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_{n-1}} \]
is an oriented basis, which is true iff $n$ is even.

Example. If $n = 1$, say $M = [a, b] \subseteq \mathbb{R}$ with $a < b$, then $\partial M = \{a, b\}$ and $T_p\partial M = \{0\}$. So an orientation of $\partial M$ is a choice of a number $\pm 1$ attached to each point. The convention is that if $M$ has the standard orientation induced by $M \subseteq \mathbb{R}$, then the induced orientation is obtained by assigning $+1$ to $b$ and $-1$ to $a$.
Finally, we get to Stokes' theorem.

Theorem (Stokes' theorem). Let $M$ be an oriented manifold with boundary of dimension $n$, and let $\omega \in \Omega^{n-1}(M)$ have compact support. Then
\[ \int_M \mathrm{d}\omega = \int_{\partial M} \omega. \]
In particular, if $M$ has no boundary, then
\[ \int_M \mathrm{d}\omega = 0. \]

Note that this makes sense: $\mathrm{d}\omega$ is an $n$-form on $M$, so we can integrate it. On the right hand side, what we are really doing is integrating the restriction of $\omega$ to $\partial M$, i.e. the $(n-1)$-form $i^*\omega$, where $i : \partial M \hookrightarrow M$ is the inclusion, so that $i^*\omega \in \Omega^{n-1}(\partial M)$.

Note also that if $M = [a, b]$, then this is just the usual fundamental theorem of calculus. The hard part of the proof is keeping track of the signs.

Proof. We first do the case $M = \mathbb{H}^n$. Then we have
\[ \omega = \sum_{i=1}^n \omega_i\, \mathrm{d}x_1 \wedge \cdots \wedge \widehat{\mathrm{d}x_i} \wedge \cdots \wedge \mathrm{d}x_n, \]
where each $\omega_i$ is compactly supported and the hat denotes omission. So we have
\[ \mathrm{d}\omega = \sum_i \mathrm{d}\omega_i \wedge \mathrm{d}x_1 \wedge \cdots \wedge \widehat{\mathrm{d}x_i} \wedge \cdots \wedge \mathrm{d}x_n = \sum_i \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_i \wedge \mathrm{d}x_1 \wedge \cdots \wedge \widehat{\mathrm{d}x_i} \wedge \cdots \wedge \mathrm{d}x_n = \sum_i (-1)^{i-1} \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_n. \]
Let's say
\[ \operatorname{supp}(\omega) \subseteq A = \{x_j \in [-R, R] : j = 1, \ldots, n-1;\ x_n \in [0, R]\}. \]
Now suppose $i \neq n$. Then we have
\[ \int_{\mathbb{H}^n} \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_A \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_{-R}^R \cdots \int_{-R}^R \int_0^R \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_1 \cdots \mathrm{d}x_n. \]
By Fubini's theorem, we can integrate this in any order; we integrate with respect to $x_i$ first.
By the fundamental theorem of calculus, the inner integral is
\[ \int_{-R}^R \frac{\partial \omega_i}{\partial x_i}\, \mathrm{d}x_i = \omega_i(x_1, \ldots, x_{i-1}, R, x_{i+1}, \ldots, x_n) - \omega_i(x_1, \ldots, x_{i-1}, -R, x_{i+1}, \ldots, x_n) = 0 - 0 = 0. \]
So the integral vanishes, and we are only left with the $i = n$ term. So we have
\[ \int_{\mathbb{H}^n} \mathrm{d}\omega = (-1)^{n-1} \int_A \frac{\partial \omega_n}{\partial x_n}\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = (-1)^{n-1} \int_{-R}^R \cdots \int_{-R}^R \left( \int_0^R \frac{\partial \omega_n}{\partial x_n}\, \mathrm{d}x_n \right) \mathrm{d}x_1 \cdots \mathrm{d}x_{n-1}. \]
Now the inner integral is just
\[ \omega_n(x_1, \ldots, x_{n-1}, R) - \omega_n(x_1, \ldots, x_{n-1}, 0) = -\omega_n(x_1, \ldots, x_{n-1}, 0). \]
So this becomes
\[ \int_{\mathbb{H}^n} \mathrm{d}\omega = (-1)^n \int_{-R}^R \cdots \int_{-R}^R \omega_n(x_1, \ldots, x_{n-1}, 0)\, \mathrm{d}x_1 \cdots \mathrm{d}x_{n-1}. \]
Next we see that
\[ i^*\omega = \omega_n\, \mathrm{d}x_1 \wedge \cdots \wedge \mathrm{d}x_{n-1}, \]
as $i^*(\mathrm{d}x_n) = 0$. So we have
\[ \int_{\partial\mathbb{H}^n} \omega = \pm \int_{A \cap \partial\mathbb{H}^n} \omega_n(x_1, \ldots, x_{n-1}, 0)\, \mathrm{d}x_1 \cdots \mathrm{d}x_{n-1}. \]
Here the sign is a plus iff $x_1, \ldots, x_{n-1}$ are oriented coordinates for $\partial\mathbb{H}^n$, i.e. iff $n$ is even. So this is
\[ \int_{\partial\mathbb{H}^n} \omega = (-1)^n \int_{-R}^R \cdots \int_{-R}^R \omega_n(x_1, \ldots, x_{n-1}, 0)\, \mathrm{d}x_1 \cdots \mathrm{d}x_{n-1} = \int_{\mathbb{H}^n} \mathrm{d}\omega. \]

Now for a general manifold $M$, suppose first that $\omega \in \Omega^{n-1}(M)$ is compactly supported in a single oriented chart $(U, \varphi)$. Then the result holds by working in local coordinates. More explicitly, we have
\[ \int_M \mathrm{d}\omega = \int_{\mathbb{H}^n} (\varphi^{-1})^*\mathrm{d}\omega = \int_{\mathbb{H}^n} \mathrm{d}\big((\varphi^{-1})^*\omega\big) = \int_{\partial\mathbb{H}^n} (\varphi^{-1})^*\omega = \int_{\partial M} \omega. \]
Finally, for a general $\omega$, we just cover $M$ by oriented charts $(U_\alpha, \varphi_\alpha)$, and use a partition of unity $\chi_\alpha$ subordinate to $\{U_\alpha\}$, so that $\omega = \sum \chi_\alpha \omega$. Then
\[ \sum_\alpha \mathrm{d}(\chi_\alpha \omega) = \sum_\alpha (\mathrm{d}\chi_\alpha) \wedge \omega + \sum_\alpha \chi_\alpha\, \mathrm{d}\omega = \mathrm{d}\Big(\sum_\alpha \chi_\alpha\Big) \wedge \omega + \mathrm{d}\omega = \mathrm{d}\omega, \]
using the fact that $\sum_\alpha \chi_\alpha = 1$ is constant, hence its derivative vanishes. So we have
\[ \int_M \mathrm{d}\omega = \sum_\alpha \int_M \mathrm{d}(\chi_\alpha \omega) = \sum_\alpha \int_{\partial M} \chi_\alpha \omega = \int_{\partial M} \omega. \]
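As a sanity check, take $M$ the closed unit disk and $\omega = x\,\mathrm{d}y$, so $\mathrm{d}\omega = \mathrm{d}x \wedge \mathrm{d}y$; both sides of Stokes' theorem equal $\pi$. A symbolic sketch (assuming sympy):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# omega = x dy on the closed unit disk M, so d omega = dx ^ dy
lhs = sp.integrate(1, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2)), (x, -1, 1))

# the boundary circle with its induced (anticlockwise) orientation
rhs = sp.integrate(sp.cos(t)*sp.diff(sp.sin(t), t), (t, 0, 2*sp.pi))

print(lhs, rhs)     # pi pi
```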
Then all the things like Green's theorem and the divergence theorem follow from this.

Example. Let $M$ be a compact $2n$-dimensional manifold without boundary, with a symplectic form $\omega \in \Omega^2(M)$, i.e. a 2-form that is closed and non-degenerate. By basic linear algebra, non-degeneracy gives $\omega^n \neq 0$ pointwise, so $\int_M \omega^n \neq 0$. Since $\omega$ is closed, it defines an element $[\omega] \in H^2_{\mathrm{dR}}(M)$. Does this vanish? If $\omega = \mathrm{d}\tau$, then we have
\[ \mathrm{d}(\tau \wedge \omega \wedge \cdots \wedge \omega) = \omega^n. \]
So we would have
\[ \int_M \omega^n = \int_M \mathrm{d}(\tau \wedge \omega \wedge \cdots \wedge \omega) = 0 \]
by Stokes' theorem. This is a contradiction. So $[\omega]$ is non-zero in $H^2_{\mathrm{dR}}(M)$.

7 De Rham's theorem*

In this whole section, $M$ will be a compact manifold.

Theorem (de Rham's theorem). There exists a natural isomorphism
\[ H^p_{\mathrm{dR}}(M) \cong H^p(M; \mathbb{R}), \]
where $H^p(M; \mathbb{R})$ is the singular cohomology of $M$, and this is in fact an isomorphism of rings, where $H^p_{\mathrm{dR}}(M)$ has the product given by the wedge, and $H^p(M; \mathbb{R})$ has the cup product.

Recall that singular cohomology is defined as follows:

Definition (Singular $p$-simplex). Let $M$ be a manifold. A singular $p$-simplex is a continuous map
\[ \sigma : \Delta^p \to M, \]
where
\[ \Delta^p = \Big\{ \sum_{i=0}^p t_i e_i : \sum_i t_i = 1,\ t_i \geq 0 \Big\} \subseteq \mathbb{R}^{p+1}. \]
We define
\[ C_p(M) = \Big\{ \text{formal sums } \sum a_i \sigma_i : a_i \in \mathbb{R},\ \sigma_i \text{ a singular } p\text{-simplex} \Big\}, \]
and
\[ C_p^\infty(M) = \Big\{ \text{formal sums } \sum a_i \sigma_i : a_i \in \mathbb{R},\ \sigma_i \text{ a smooth singular } p\text{-simplex} \Big\}. \]

Definition (Boundary map). The boundary map
\[ \partial : C_p(M) \to C_{p-1}(M) \]
is the linear map such that if $\sigma : \Delta^p \to M$ is a $p$-simplex, then
\[ \partial\sigma = \sum_{i=0}^p (-1)^i\, \sigma \circ F_{i,p}, \]
where $F_{i,p}$ maps $\Delta^{p-1}$ affine-linearly to the face of $\Delta^p$ opposite the $i$th vertex. We similarly have $\partial : C_p^\infty(M) \to C_{p-1}^\infty(M)$.

We can then define singular homology.

Definition (Singular homology). The singular homology of $M$ is
\[ H_p(M; \mathbb{R}) = \frac{\ker(\partial : C_p(M) \to C_{p-1}(M))}{\operatorname{im}(\partial : C_{p+1}(M) \to C_p(M))}. \]
The smooth singular homology $H_p^\infty(M)$ is the same thing with $C_p(M)$ replaced by $C_p^\infty(M)$.

$H_p^\infty$ has the same properties as $H_p$, e.g. functoriality, (smooth) homotopy invariance, Mayer–Vietoris, etc., with no change in the proofs. Any smooth $p$-simplex is also continuous, giving a natural inclusion $i : C_p^\infty(M) \to C_p(M)$, which obviously commutes with $\partial$, giving
\[ i_* : H_p^\infty(M) \to H_p(M). \]

Theorem. The map $i_* : H_p^\infty(M) \to H_p(M)$ is an isomorphism.

There are three ways we can prove this
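To illustrate the boundary map and the homology groups on a finite model (a simplicial stand-in for the full singular complex, only for illustration — a sketch assuming numpy): the boundary of a triangle models $S^1$, and rank computations recover $H_0 \cong H_1 \cong \mathbb{R}$.

```python
import numpy as np

# The boundary of a triangle (vertices 0,1,2; edges [0,1], [1,2], [0,2])
# is a simplicial stand-in for S^1; the boundary of an edge [a,b] is b - a.
d1 = np.array([[-1.,  0., -1.],
               [ 1., -1.,  0.],
               [ 0.,  1.,  1.]])               # C_1 -> C_0

r = np.linalg.matrix_rank(d1)                  # = 2
print("dim H_0 =", 3 - r)                      # 1: one connected component
print("dim H_1 =", (3 - r) - 0)                # 1: one loop (C_2 = 0 here)
```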