Wikipedia:David Shale#0
David Winston Howard Shale (22 March 1932, New Zealand – 7 January 2016) was a New Zealand–American mathematician specializing in the mathematical foundations of quantum physics. He is known as one of the namesakes of the Segal–Shale–Weil representation. After secondary and undergraduate education in New Zealand, Shale became a graduate student in mathematics at the University of Chicago and received his Ph.D. there in 1960. His thesis, On certain groups of operators on Hilbert space, was written under the supervision of Irving Segal. Shale became an assistant professor at the University of California, Berkeley, and in 1964 became a professor at the University of Pennsylvania, where he continued teaching until his retirement. He was an expert in the mathematical foundations of quantum physics with many original ideas on the subject. In addition, he discovered what is now called the Shale–Weil representation in operator theory. He was also an expert in Bayesian probability theory, especially as applied to physics. According to Irving Segal: "... although contrary to common intuitive belief, Lorentz-invariance in itself is materially insufficient to characterize the vacuum for any free field (this remarkable fact is due to David Shale; it should perhaps be emphasized that this lack of uniqueness holds even in such a simple case as the conventional scalar meson field ...), none of the Lorentz-invariant states other than the conventional vacuum is consistent with the postulate of the positivity of the energy, when suitably and simply formulated."

== Selected publications ==
Shale, David (1962). "Linear Symmetries of Free Boson Fields". Transactions of the American Mathematical Society. 103 (1): 149–167. doi:10.2307/1993745. JSTOR 1993745.
Shale, David (1962). "A Note on the Scattering of Boson Fields". Journal of Mathematical Physics. 3 (5): 915–921. Bibcode:1962JMP.....3..915S. doi:10.1063/1.1724306.
Shale, David; Stinespring, W. Forrest (1964). "States of the Clifford Algebra". The Annals of Mathematics. 80 (2): 365. doi:10.2307/1970397. JSTOR 1970397.
Shale, David; Stinespring, W. Forrest (1965). "Spinor Representations of Infinite Orthogonal Groups". Journal of Mathematics and Mechanics. 14 (2): 315–322. JSTOR 24901279.
Shale, David (1966). "Invariant Integration over the Infinite Dimensional Orthogonal Group and Related Spaces". Transactions of the American Mathematical Society. 124 (1): 148–157. doi:10.2307/1994441. JSTOR 1994441.
Shale, David; Stinespring, W. Forrest (1966). "Integration over Non-Euclidean Geometries of Infinite Dimension". Journal of Mathematics and Mechanics. 16 (2): 135–146. JSTOR 24901475.
Shale, David; Stinespring, W. Forrest (1966). "Continuously splittable distributions in Hilbert space". Illinois Journal of Mathematics. 10 (4): 574–578. doi:10.1215/ijm/1256054896. ISSN 0019-2082.
Shale, David; Stinespring, W. Forrest (1967). "The quantum harmonic oscillator with hyperbolic phase space" (PDF). Journal of Functional Analysis. 1 (4): 492–502. doi:10.1016/0022-1236(67)90013-4. Archived from the original (PDF) on 2019-03-24. Retrieved 2019-09-28.
Shale, David; Stinespring, W. Forrest (1968). "Wiener processes" (PDF). Journal of Functional Analysis. 2 (4): 378–394. doi:10.1016/0022-1236(68)90002-5. Archived from the original (PDF) on 2019-04-15. Retrieved 2019-09-28.
Shale, David; Stinespring, W. Forrest (1970). "Wiener processes II" (PDF). Journal of Functional Analysis. 5 (3): 334–353. doi:10.1016/0022-1236(70)90013-3.
Shale, David (1973). "Absolute continuity of Wiener processes". Journal of Functional Analysis. 12 (3): 321–334. doi:10.1016/0022-1236(73)90083-9.
Shale, David (1974). "Analysis over Discrete spaces". Journal of Functional Analysis. 16 (3): 258–288. doi:10.1016/0022-1236(74)90074-3.
Shale, David (1979). "On geometric ideas which lie at the foundation of quantum theory". Advances in Mathematics. 32 (3): 175–203. doi:10.1016/0001-8708(79)90041-0.
Shale, David (1979). "Random functions of Poisson type". Journal of Functional Analysis. 33: 1–35. doi:10.1016/0022-1236(79)90015-6.
Shale, David (1982). "Discrete quantum theory". Foundations of Physics. 12 (7): 661–687. Bibcode:1982FoPh...12..661S. doi:10.1007/BF00729805. S2CID 119764527.

== References ==
Wikipedia:David Spence (mathematician)#0
David Allan Spence (3 January 1926 – 7 September 2003) was a mathematician who applied his mathematical skills in the aeronautics industry and to the understanding of geophysical problems. He was born and educated in New Zealand, later moving to England to take up a doctorate in engineering at Clare College, Cambridge. He was employed for a time at the then Royal Aircraft Establishment at Farnborough, after which he was appointed to the Engineering Department of the University of Oxford, where he stayed for twenty years. Later, he held the position of Professor of Mathematics at Imperial College London.

== Early life and education ==
Spence, the son of a lawyer, was born in Auckland, New Zealand on 3 January 1926. He was educated at King's College, Auckland and the University of Auckland, from which he graduated with both a Bachelor of Science and a Master of Science. He then moved to England, where he undertook research in engineering at Clare College, Cambridge. He was awarded his doctorate in 1952.

== Career and research ==
After completing his doctorate, Spence took up employment at the Royal Aircraft Establishment, where he undertook research relating to the design of aircraft wings. His paper The lift coefficient of a thin, jet-flapped wing, his most significant work of the period, was published in the Proceedings of the Royal Society in 1956. He also studied the propagation of shock waves, specifically how such waves are weakened by viscosity, which is of particular relevance to supersonic flight.

=== Research interests ===
In 1964, Spence left Farnborough to take up a position in the Engineering Department of the University of Oxford, becoming a Fellow of Lincoln College, Oxford. He remained at Oxford for about twenty years, applying his mathematical techniques to a range of topics, including the compression of solids. This latter work enabled him to apply his skills to the study of magma flow beneath the earth's surface, and how it behaves in the presence of fractures, thereby obtaining a better understanding of volcanic eruptions. Spence's final years were spent as Professor of Mathematics at Imperial College London, where he taught mathematics to engineering and science students. He continued his research across a range of disciplines, one of which involved the use of injected water to enhance oil recovery from a well, which is important for the conservation of depleting reserves in areas such as the North Sea. His research interests were energised by his many visits to universities in Australia, New Zealand and the United States.

=== 1991–2003 ===
In 1991, at the age of 65, Spence retired from his chair at Imperial College London. His intellectual interests in mathematics and science expanded to include political history and the law. David Allan Spence died in Headington, Oxford, on 7 September 2003, aged 77. He was survived by his wife, Isobel, and their two sons and two daughters. Spence's friend Robin Cooke, Baron Cooke of Thorndon, described Spence as perceptive and analytical of human motivation to an extent bordering on cynicism. While deeply respectful of significant achievement in any walk of life, he nevertheless had an "attitude towards authority markedly less than reverential – which may at times have hindered his career". Spence was elected a Fellow of the Royal Aeronautical Society (FRAeS) and of the Institute of Mathematics and its Applications (FIMA), and was a member of the London Mathematical Society. His former doctoral students included Hilary Ockendon and Frank T. Smith.

== References ==
Wikipedia:David Steurer#0
David Steurer is a German theoretical computer scientist working in approximation algorithms, hardness of approximation, sum-of-squares methods, and high-dimensional statistics. He is an associate professor of computer science at ETH Zurich.

== Biography ==
Steurer studied for bachelor's and master's degrees at Saarland University (2003–2006), and went on to study at Princeton University, where he obtained his PhD under the supervision of Sanjeev Arora in 2010. He then spent two years as a postdoc at Microsoft Research New England before joining Cornell University. In 2017 he moved to ETH Zurich, where he became an associate professor in 2020.

== Work ==
Steurer's work focuses on optimization using the sum-of-squares technique; he gave an invited talk on the topic, together with Prasad Raghavendra, at the 2018 ICM. With Raghavendra, he developed the small set expansion hypothesis, for which they won the Michael and Sheila Held Prize. With James Lee and Raghavendra, he showed that in some settings the sum-of-squares hierarchy is the most general kind of SDP hierarchy. With Irit Dinur, he introduced a new and simple approach to parallel repetition theorems.

== References ==

== External links ==
David Steurer publications indexed by Google Scholar
Wikipedia:David W. Lewis (mathematician)#0
David W. Lewis (21 February 1944, Douglas, Isle of Man – 20 August 2021, Dublin) was a Manx mathematician known for his contributions to the theory of quadratic forms. He spent his entire career at University College Dublin (UCD), where he was head of the Department of Mathematics (now the School of Mathematics and Statistics) from 1999 until 2002. After his retirement in 2009 he remained research-active for many years.

== Education and career ==
Lewis attended Douglas High School, where he developed an interest in physics and astronomy, and ultimately mathematics. He attended the University of Liverpool, and after completing his BSc degree in 1965 commenced doctoral studies in topology under the guidance of C. T. C. (Terry) Wall. When his PhD funding ended in 1968, he started as an assistant lecturer in the UCD Mathematics Department, while continuing the work on his doctoral thesis and shifting from topology to algebra, specifically to the area of quadratic and hermitian forms. During his first decade of lecturing at UCD, he completed his PhD thesis, Hermitian Forms over Algebras with Involution, under the supervision of Professor Wall and was awarded a doctorate by the National University of Ireland in 1979. He received a DSc from the NUI in 1992, and served as head of the Department of Mathematics from 1999 until 2002. He supervised four PhD students and authored one monograph.

== Books ==
Lewis, David W. (1991). Matrix Theory. River Edge, NJ: World Scientific Publishing Co., Inc. doi:10.1142/1424.
Bayer-Fluckiger, Eva; Lewis, David; Ranicki, Andrew, eds. (2000). Quadratic Forms and Their Applications. Conference on Quadratic Forms and Their Applications, July 5–9, 1999, University College Dublin. Contemporary Mathematics. Vol. 272. American Mathematical Society. doi:10.1090/conm/272.

== Selected papers ==
Cortella, Anne; Lewis, David W. (2013). "Sesquilinear Morita equivalence and orthogonal sum of algebras with antiautomorphism". Communications in Algebra. 41 (12): 4463–4490. doi:10.1080/00927872.2012.704462.
De Wannemacker, Stefan A.G.; Lewis, David W. (2007). "Structure theorems for AP rings". Journal of Algebra. 315 (1): 144–158. arXiv:math/0701830. doi:10.1016/j.jalgebra.2007.02.016.
Lewis, David W.; Unger, Thomas (2003). "A local-global principle for algebras with involution and Hermitian forms". Mathematische Zeitschrift. 244 (3): 469–477. doi:10.1007/s00209-003-0490-6.
Lewis, D. W.; McGarraghy, S. (2000). "Annihilating polynomials, étale algebras, trace forms and the Galois number". Archiv der Mathematik. 75 (2): 116–120. doi:10.1007/PL00000430.
Lewis, David W.; Tignol, J.-P. (1993). "On the signature of an involution". Archiv der Mathematik. 60 (2): 128–135. doi:10.1007/BF01199098.
Lewis, D. W. (1987). "Witt rings as integral rings". Inventiones Mathematicae. 90 (3): 631–633. doi:10.1007/BF01389181.
Lewis, D. W. (1985). "Periodicity of Clifford algebras and exact octagons of Witt groups". Mathematical Proceedings of the Cambridge Philosophical Society. 98 (2): 263–269. doi:10.1017/S0305004100063441.
Lewis, D. W. (1982). "New improved exact sequences of Witt groups". Journal of Algebra. 74 (1): 206–210. doi:10.1016/0021-8693(82)90013-8.
Lewis, D. W. (1977). "Forms over real algebras and the multisignature of a manifold". Advances in Mathematics. 23 (3): 272–284. doi:10.1016/S0001-8708(77)80030-3.

== References ==

== External links ==
The mathematics of David W. Lewis, Jean-Pierre Tignol, Université catholique de Louvain, The Lewisfest, 23 July 2009
Wikipedia:David William Boyd#0
David William Boyd (born 17 September 1941) is a Canadian mathematician who does research on harmonic and classical analysis, geometric inequalities, sphere packing, number theory (including Diophantine approximation and Mahler's measure), polynomial factorization, and mathematical computation.

Boyd received his B.Sc. with Honours from Carleton University in 1963, his M.A. from the University of Toronto in 1964, and his Ph.D. from Toronto in 1966 under Paul George Rooney, with the thesis The Hilbert transformation on rearrangement invariant Banach spaces. He was an assistant professor at the University of Alberta (1966–67), then an assistant professor (1967–70) and associate professor (1970–71) at the California Institute of Technology, and subsequently an associate professor (1971–74) and professor (1974–2007) at the University of British Columbia, where he has been professor emeritus since 2007.

Boyd has done research on classical and harmonic analysis, including interpolation spaces, integral transforms, and potential theory, and on inequalities involving geometry, number theory, and polynomials, with applications to polynomial factorization. He has also worked, especially in the 1970s, on sphere packing, in particular Apollonian packings and limit sets of Kleinian groups. Boyd has studied number theory, including Diophantine approximation, Pisot and Salem numbers, Pisot sequences, Mahler's measure, applications to symbolic dynamics, and special values of L-functions and polylogarithms. He is also interested in mathematical computation, including numerical analysis, symbolic computation, and computational number theory, as well as geometric topology, including hyperbolic manifolds and the computation of their invariants. His doctoral students include Peter Borwein.
== Awards and honours ==
Killam Senior Research Fellowship, 1976–77 and 1981–82
Steacie Prize, 1978
Coxeter–James Prize, Canadian Mathematical Society, 1979
Elected to Fellowship of the Royal Society of Canada, 1980
Jeffery–Williams Prize, Canadian Mathematical Society, 2001
CRM–Fields Prize, Centre de Recherches Mathématiques & Fields Institute, 2005
Elected to Fellowship of the American Mathematical Society, 2013
Inaugural Fellow of the Canadian Mathematical Society, 2018

== Editorships ==
Associate editor, Canadian Journal of Mathematics, 1981–1991
Associate editor, Mathematics of Computation, 1998–2007
Associate editor, Contributions to Discrete Mathematics, 2006–present

== Selected works ==
Mahler's measure and special values of L-functions, Experimental Mathematics, vol. 7, 1998, pp. 37–82
Mahler's measure and invariants of hyperbolic manifolds, in M. A. Bennett (ed.), Number theory for the Millennium, A. K. Peters, 2000, pp. 127–143
Mahler's measure, hyperbolic manifolds and the dilogarithm, Canadian Mathematical Society Notes, vol. 34, no. 2, 2002, pp. 3–4, 26–28 (Jeffery–Williams Lecture)
with F. Rodriguez Villegas: Mahler's measure and the dilogarithm, part 1, Canadian Journal of Mathematics, vol. 54, 2002, pp. 468–492

== References ==

== External links ==
Homepage
Laudatio at the Fields Institute/CRM/PIMS Prize 2005 by Andrew Granville
Wikipedia:David Wood (mathematician)#0
David Ronald Wood (born in Christchurch, New Zealand in 1971) is a Professor in the School of Mathematics at Monash University in Melbourne, Australia. His research area is discrete mathematics and theoretical computer science, especially structural graph theory, extremal graph theory, geometric graph theory, graph colouring, graph drawing, and combinatorial geometry.

Wood received a Ph.D. in computer science from Monash University in 2000. His thesis, "Three-Dimensional Orthogonal Graph Drawing", supervised by Graham Farr, was awarded a Mollie Holman Doctoral Medal. He held postdoctoral research positions at the University of Sydney, at Carleton University in Ottawa, at Charles University in Prague, at McGill University in Montreal, at the Universitat Politècnica de Catalunya in Barcelona, and at the University of Melbourne. Since 2012 he has been at Monash University, where he was promoted to Professor in 2016. He has been awarded distinguished research fellowships including a Marie Curie Fellowship from the European Commission (2006–2008), a QEII Fellowship from the Australian Research Council (2008–2012), and a Future Fellowship from the Australian Research Council (2014–2017). Wood was an invited speaker at the 9th European Congress of Mathematics.

Wood is a Fellow of the Australian Mathematical Society and a life member of the Combinatorial Mathematics Society of Australasia (CMSA). He was President of the CMSA in 2015–2016 and Vice-President in 2011–2014. He is a Deputy Director of The Mathematical Research Institute MATRIX. Wood is an Editor-in-Chief of the Electronic Journal of Combinatorics, Editor-in-Chief of the MATRIX Book Series, and an Editor of the Journal of Computational Geometry, the Journal of Graph Theory, and the SIAM Journal on Discrete Mathematics.
His main research contributions are in graph product structure theory, extremal graph minor theory, graph treewidth, graphs on surfaces, graph colouring, geometric graph theory, poset dimension, and graph drawing.

== Major publications ==
Vida Dujmović; Gwenaël Joret; Piotr Micek; Pat Morin; Torsten Ueckerdt; David R. Wood (2020). "Planar graphs have bounded queue-number". Journal of the Association for Computing Machinery. 67 (4). Article 22. arXiv:1904.04791. doi:10.1145/3385731.
Alex Scott; David R. Wood (2020). "Better bounds for poset dimension and boxicity". Transactions of the American Mathematical Society. 373 (3): 2157–2172. arXiv:1804.03271. doi:10.1090/tran/7962.
Jan van den Heuvel; David R. Wood (2018). "Improper colourings inspired by Hadwiger's conjecture". Journal of the London Mathematical Society. 98 (1): 129–148. arXiv:1704.06536. doi:10.1112/jlms.12127.
Kevin Hendrey; David R. Wood (2018). "The extremal function for Petersen minors". Journal of Combinatorial Theory, Series B. 131: 220–253. arXiv:1508.04541. doi:10.1016/j.jctb.2018.02.001.
Bruce Reed; David R. Wood (2016). "Forcing a sparse minor". Combinatorics, Probability and Computing. 25 (2): 300–322. arXiv:1402.0272. doi:10.1017/S0963548315000073. S2CID 993264.
Vida Dujmović; Pat Morin; David R. Wood (2005). "Layout of graphs with bounded tree-width". SIAM Journal on Computing. 34 (3): 553–579. arXiv:cs/0406024. doi:10.1137/S0097539702416141. S2CID 3264071.

== References ==

== External links ==
David Wood's home page
David Wood at the Mathematics Genealogy Project
David Wood publications indexed by Google Scholar
Wikipedia:Dawson–Gärtner theorem#0
In mathematics, the Dawson–Gärtner theorem is a result in large deviations theory. Heuristically speaking, the Dawson–Gärtner theorem allows one to transport a large deviation principle on a "smaller" topological space to a "larger" one.

== Statement of the theorem ==
Let (Yj)j∈J be a projective system of Hausdorff topological spaces with maps pij : Yj → Yi. Let X be the projective limit (also known as the inverse limit) of the system (Yj, pij)i,j∈J, i.e.

{\displaystyle X=\varprojlim _{j\in J}Y_{j}=\left\{\left.y=(y_{j})_{j\in J}\in Y=\prod _{j\in J}Y_{j}\right|i<j\implies y_{i}=p_{ij}(y_{j})\right\}.}

Let (με)ε>0 be a family of probability measures on X. Assume that, for each j ∈ J, the push-forward measures (pj∗με)ε>0 on Yj satisfy the large deviation principle with good rate function Ij : Yj → R ∪ {+∞}. Then the family (με)ε>0 satisfies the large deviation principle on X with good rate function I : X → R ∪ {+∞} given by

{\displaystyle I(x)=\sup _{j\in J}I_{j}(p_{j}(x)).}

== References ==
Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.6.1)
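A concrete special case may help fix ideas. The following illustration is our own (not spelled out in the cited reference, though it fits the general setting above): the projective system is taken to be the finite-dimensional truncations of sequence space.

```latex
% Special case: lifting finite-dimensional LDPs to sequence space.
% Take J = N ordered by size, Y_j = R^j, and truncation maps
%   p_{ij}(y_1, ..., y_j) = (y_1, ..., y_i)   for i <= j,
% so the projective limit is sequence space with the product topology:
\[
  X \;=\; \varprojlim_{j \in \mathbb{N}} \mathbb{R}^{j}
    \;\cong\; \mathbb{R}^{\mathbb{N}},
  \qquad
  p_{j}(x) \;=\; (x_{1}, \dots, x_{j}).
\]
% If, for every j, the finite-dimensional marginals (p_{j*} \mu_\epsilon)
% satisfy the large deviation principle on R^j with good rate function I_j,
% the Dawson--Gartner theorem yields the LDP on sequence space with
\[
  I(x) \;=\; \sup_{j \in \mathbb{N}} I_{j}(x_{1}, \dots, x_{j}).
\]
```

In words: a large deviation principle known only for every finite-dimensional projection is transported to the infinite-dimensional product, with the rate function obtained as the supremum over the finite-dimensional rate functions.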
Wikipedia:De Gradibus#0
De Gradibus is the Latinized title of an Arabic treatise written by the Arab physician Al-Kindi (c. 801–873 CE); an alternative title for the book was Quia Primos. In De Gradibus, Al-Kindi attempts to apply mathematics to pharmacology by quantifying the strength of drugs. According to Prioreschi, this was the first attempt at serious quantification in medicine. During the Arabic–Latin translation movement of the 12th century, De Gradibus was translated into Latin by Gerard of Cremona. Al-Kindi's mathematical reasoning was complex and hard to follow; Roger Bacon commented that his method of computing the strength of a drug was extremely difficult to use.

== References ==
Wikipedia:Deborah Hughes Hallett#0
Deborah J. Hughes Hallett is a mathematician who works as a professor of mathematics at the University of Arizona. Her expertise is in the undergraduate teaching of mathematics. She has also taught as Professor of the Practice in the Teaching of Mathematics at Harvard University, and continues to hold an affiliation with Harvard as Adjunct Professor of Public Policy in the John F. Kennedy School of Government.

== Education and career ==
Hughes Hallett earned a bachelor's degree in mathematics from the University of Cambridge in 1966, and a master's degree from Harvard in 1976. She worked as a preceptor and senior preceptor at Harvard from 1975 to 1991, as an instructor at the Middle East Technical University in Ankara, Turkey from 1981 to 1984, and as a faculty member at Harvard from 1986 to 1998. She served as Professor of the Practice in the Teaching of Mathematics at Harvard from 1991 to 1998. She moved to Arizona in 1998, and took on her adjunct position at the Kennedy School in 2001.

== Work on calculus reform ==
With Andrew M. Gleason at Harvard, she was a founder of the Calculus Consortium, a project for the reform of undergraduate teaching in calculus. Through the consortium, she is an author of a successful and influential sequence of high school and college mathematics textbooks. However, the project has also been criticized for omitting topics such as the mean value theorem, and for its perceived lack of mathematical rigor.

== Recognition ==
Hughes Hallett was an invited speaker at the International Congress of Mathematicians in 1994. She won the Louise Hay Award in 1998, and was named a Fellow of the American Association for the Advancement of Science the same year. She is a two-time winner of the ICTCM Award: in 1998 for her internet-based course "Information, Data and Decisions" and in 2000 for "Computer Texts for Business Mathematics". In 2005, she received a Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics. In October 2021, the American Mathematical Society named her as the recipient of the 2022 Award for Impact on the Teaching and Learning of Mathematics.

== References ==
Wikipedia:Deborah Kent#0
Deborah Anne Kent (born 1978) is an American mathematics educator, textbook author, historian of mathematics, and historian of astronomy, with particular interests in game theory, 19th-century mathematics, and historic observations of eclipses. She works in Scotland as Senior Lecturer in History of Mathematics at the University of St Andrews.

== Education and career ==
Kent is originally from the Pacific Northwest. After graduating magna cum laude from Hillsdale College, she completed her Ph.D. in 2005 at the University of Virginia. Her dissertation, Benjamin Peirce and the Promotion of Research-Level Mathematics in America: 1830–1880, was supervised by Karen Parshall. She returned to Hillsdale College as an assistant professor, also becoming the first mathematical research fellow at the Massachusetts Historical Society. After earning tenure at Drake University in Iowa, she moved to the University of St Andrews in 2020. She also serves as Librarian of the London Mathematical Society and a member of the council of the British Society for the History of Mathematics.

== Book ==
Kent is the coauthor of Game Theory: A Playful Introduction (with Matt DeVos, Student Mathematical Library 80, American Mathematical Society, 2016).

== Recognition ==
Kent was a 2017 recipient of the Paul R. Halmos – Lester R. Ford Award, with David Muraki, for their paper "A Geometric Solution of a Cubic by Omar Khayyam ... in Which Colored Diagrams Are Used Instead of Letters for the Greater Ease of Learners" in The American Mathematical Monthly. She was the 2017 recipient of the Women of Innovation Award in Academic Innovation and Leadership of the Technology Association of Iowa, recognizing both her mathematics writing and her innovative mathematics teaching. She is a HiMEd Lecturer of the British Society for the History of Mathematics.

== References ==
Wikipedia:Debra Boutin#0
Debra Lynn Boutin (born 1957 in Holyoke, Massachusetts) is an American mathematician, the Samuel F. Pratt Professor of Mathematics at Hamilton College, where she chairs the mathematics department. Her research involves the symmetries of graphs and distinguishing colorings of graphs.

== Education ==
Boutin is a 1975 graduate of Chicopee Comprehensive High School in Massachusetts. After high school, Boutin served in the United States Navy and United States Naval Reserve from 1975 to 1995, retiring as a Chief Petty Officer. She restarted her education, supported by the G.I. Bill, by studying data processing at Springfield Technical Community College in Massachusetts. Next, Boutin went to Smith College as an Ada Comstock Scholar. She was awarded the Science Achievement Award at Smith and an honorable mention for the Association for Women in Mathematics Alice T. Schafer Prize, both in 1991. She graduated summa cum laude in 1991 with a bachelor's degree in mathematics, and was elected to Phi Beta Kappa and Sigma Xi the same year. She completed her Ph.D. in mathematics in 1998 at Cornell University. Her doctoral dissertation, Centralizers of Finite Subgroups of Automorphisms and Outer Automorphisms of Free Groups, was supervised by Karen Vogtmann.

== Career ==
After a one-year visiting position at Trinity College (Connecticut), Boutin joined Hamilton College in Clinton, New York, as an assistant professor in 1999. In 1999–2000 she was selected as a Project NExT Fellow of the Mathematical Association of America. She was tenured as an associate professor in 2005 and promoted to full professor in 2010. Boutin was awarded a named professorship in 2019 and has served as departmental chair since 2022. In summers between 2010 and 2018, Boutin was a Research Adjunct at the Institute for Defense Analyses, Center for Communications Research.
== Research ==
Boutin has published over 30 research papers on topics ranging from finite group theory and geometric graph theory to graph symmetries, in which she developed the concepts of distinguishing costs and determining numbers of graphs. These papers appear in combinatorics, graph theory, algebra, and geometry journals. She has given invited presentations of her research at conferences nationally and internationally, and has many national and international co-authors. Her most frequent collaborator was Michael O. Albertson, the Smith College L. Clark Seelye Professor of Mathematics, to whom she was married (m. 1993) until his untimely death in 2009. She has been called upon by colleges for professional evaluations, has been a referee for over 20 professional journals, and has served on committees of the American Mathematical Society and the Society for Industrial and Applied Mathematics; she was elected Secretary of the latter's Discrete Mathematics Activity Group.

== Awards and honors ==
In 2008 Boutin was the inaugural recipient of the Dean's Scholarly Achievement Award for Early Career Achievement, and in 2023 the recipient of the Dean's Scholarly Achievement Award for Career Achievement, both from Hamilton College. Hamilton College named Boutin as the Samuel F. Pratt Professor of Mathematics in 2019.

== Selected publications ==
Boutin, Debra L. (1999). "When are Centralizers of Finite Subgroups of Out(F_n) Finite?". In Groups, Languages and Geometry. Contemporary Mathematics. Vol. 250. Providence, Rhode Island: American Mathematical Society. doi:10.1090/conm/250. ISBN 978-0-8218-1053-8. ISSN 1098-3627. MR 1732207.
Albertson, Michael O.; Boutin, Debra L. (2000). "Realizing Finite Groups in Euclidean Space". Journal of Algebra. 225 (2): 947–956. MR 1741572.
Boutin, Debra L. (2003). "Convex Geometric Graphs with No Short Self-intersecting Paths". Proceedings of the Thirty-Fourth Southeastern International Conference on Combinatorics, Graph Theory and Computing. Congressus Numerantium. 160. MR 2049115.
Boutin, Debra L. (2006). "Identifying Graph Automorphisms Using Determining Sets". Electronic Journal of Combinatorics. 13 (1), Research Paper 78. MR 2255420.

== References ==

== External links ==
Home page
Debra Boutin publications indexed by Google Scholar
Debra L. Boutin's author profile on MathSciNet
Wikipedia:Dedekind eta function#0
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory. == Definition == For any complex number τ with Im(τ) > 0, let q = e2πiτ; then the eta function is defined by, η ( τ ) = e π i τ 12 ∏ n = 1 ∞ ( 1 − e 2 n π i τ ) = q 1 24 ∏ n = 1 ∞ ( 1 − q n ) . {\displaystyle \eta (\tau )=e^{\frac {\pi i\tau }{12}}\prod _{n=1}^{\infty }\left(1-e^{2n\pi i\tau }\right)=q^{\frac {1}{24}}\prod _{n=1}^{\infty }\left(1-q^{n}\right).} Raising the eta equation to the 24th power and multiplying by (2π)12 gives Δ ( τ ) = ( 2 π ) 12 η 24 ( τ ) {\displaystyle \Delta (\tau )=(2\pi )^{12}\eta ^{24}(\tau )} where Δ is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice. The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it. The eta function satisfies the functional equations η ( τ + 1 ) = e π i 12 η ( τ ) , η ( − 1 τ ) = − i τ η ( τ ) . {\displaystyle {\begin{aligned}\eta (\tau +1)&=e^{\frac {\pi i}{12}}\eta (\tau ),\\\eta \left(-{\frac {1}{\tau }}\right)&={\sqrt {-i\tau }}\,\eta (\tau ).\,\end{aligned}}} In the second equation the branch of the square root is chosen such that √−iτ = 1 when τ = i. More generally, suppose a, b, c, d are integers with ad − bc = 1, so that τ ↦ a τ + b c τ + d {\displaystyle \tau \mapsto {\frac {a\tau +b}{c\tau +d}}} is a transformation belonging to the modular group. We may assume that either c > 0, or c = 0 and d = 1. Then η ( a τ + b c τ + d ) = ϵ ( a , b , c , d ) ( c τ + d ) 1 2 η ( τ ) , {\displaystyle \eta \left({\frac {a\tau +b}{c\tau +d}}\right)=\epsilon (a,b,c,d)\left(c\tau +d\right)^{\frac {1}{2}}\eta (\tau ),} where ϵ ( a , b , c , d ) = { e b i π 12 c = 0 , d = 1 , e i π ( a + d 12 c − s ( d , c ) − 1 4 ) c > 0. 
{\displaystyle \epsilon (a,b,c,d)={\begin{cases}e^{\frac {bi\pi }{12}}&c=0,\,d=1,\\e^{i\pi \left({\frac {a+d}{12c}}-s(d,c)-{\frac {1}{4}}\right)}&c>0.\end{cases}}} Here s(h,k) is the Dedekind sum s ( h , k ) = ∑ n = 1 k − 1 n k ( h n k − ⌊ h n k ⌋ − 1 2 ) . {\displaystyle s(h,k)=\sum _{n=1}^{k-1}{\frac {n}{k}}\left({\frac {hn}{k}}-\left\lfloor {\frac {hn}{k}}\right\rfloor -{\frac {1}{2}}\right).} Because of these functional equations the eta function is a modular form of weight 1/2 and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular the modular discriminant of the Weierstrass elliptic function with ω 2 = τ ω 1 {\displaystyle \omega _{2}=\tau \omega _{1}} can be defined as Δ ( τ ) = ( 2 π ω 1 ) 12 η ( τ ) 24 {\displaystyle \Delta (\tau )=(2\pi \omega _{1})^{12}\eta (\tau )^{24}\,} and is a modular form of weight 12. Some authors omit the factor of (2π)12, so that the series expansion has integral coefficients. The Jacobi triple product implies that the eta is (up to a factor) a Jacobi theta function for special values of the arguments: η ( τ ) = ∑ n = 1 ∞ χ ( n ) exp ( π i n 2 τ 12 ) , {\displaystyle \eta (\tau )=\sum _{n=1}^{\infty }\chi (n)\exp \left({\frac {\pi in^{2}\tau }{12}}\right),} where χ(n) is "the" Dirichlet character modulo 12 with χ(±1) = 1 and χ(±5) = −1. Explicitly, η ( τ ) = e π i τ 12 ϑ ( τ + 1 2 ; 3 τ ) . {\displaystyle \eta (\tau )=e^{\frac {\pi i\tau }{12}}\vartheta \left({\frac {\tau +1}{2}};3\tau \right).} The Euler function ϕ ( q ) = ∏ n = 1 ∞ ( 1 − q n ) = q − 1 24 η ( τ ) , {\displaystyle {\begin{aligned}\phi (q)&=\prod _{n=1}^{\infty }\left(1-q^{n}\right)\\&=q^{-{\frac {1}{24}}}\eta (\tau ),\end{aligned}}} has a power series by the Euler identity: ϕ ( q ) = ∑ n = − ∞ ∞ ( − 1 ) n q 3 n 2 − n 2 . 
{\displaystyle \phi (q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{\frac {3n^{2}-n}{2}}.} Note that by using the Euler pentagonal number theorem for I ( τ ) > 0 {\displaystyle {\mathfrak {I}}(\tau )>0} , the eta function can be expressed as η ( τ ) = ∑ n = − ∞ ∞ e π i n e 3 π i ( n + 1 6 ) 2 τ . {\displaystyle \eta (\tau )=\sum _{n=-\infty }^{\infty }e^{\pi in}e^{3\pi i\left(n+{\frac {1}{6}}\right)^{2}\tau }.} This can be proved by using x = 2 π i τ {\displaystyle x=2\pi i\tau } in the Euler pentagonal number theorem together with the definition of the eta function. Another way to express the eta function is through the limit lim z → 0 ϑ 1 ( z | τ ) z = 2 π η 3 ( τ ) {\displaystyle \lim _{z\to 0}{\frac {\vartheta _{1}(z|\tau )}{z}}=2\pi \eta ^{3}(\tau )} which can alternatively be written as ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) q ( 2 n + 1 ) 2 8 = η 3 ( τ ) {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}(2n+1)q^{\frac {(2n+1)^{2}}{8}}=\eta ^{3}(\tau )} where ϑ 1 ( z | τ ) {\displaystyle \vartheta _{1}(z|\tau )} is the Jacobi theta function and ϑ 1 ( z | τ ) = − ϑ 11 ( z ; τ ) {\displaystyle \vartheta _{1}(z|\tau )=-\vartheta _{11}(z;\tau )} . Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms. The picture on this page shows the modulus of the Euler function: the additional factor of q1/24 between this and eta makes almost no visual difference. Thus, this picture can be taken as a picture of eta as a function of q. == Combinatorial identities == The theory of the algebraic characters of the affine Lie algebras gives rise to a large class of previously unknown identities for the eta function. These identities follow from the Weyl–Kac character formula, and more specifically from the so-called "denominator identities". 
The characters themselves allow the construction of generalizations of the Jacobi theta function which transform under the modular group; this is what leads to the identities. An example of one such new identity is η ( 8 τ ) η ( 16 τ ) = ∑ m , n ∈ Z m ≤ | 3 n | ( − 1 ) m q ( 2 m + 1 ) 2 − 32 n 2 {\displaystyle \eta (8\tau )\eta (16\tau )=\sum _{m,n\in \mathbb {Z} \atop m\leq |3n|}(-1)^{m}q^{(2m+1)^{2}-32n^{2}}} where q = e2πiτ is the q-analog or "deformation" of the highest weight of a module. == Special values == From the above connection with the Euler function together with the special values of the latter, it can be easily deduced that η ( i ) = Γ ( 1 4 ) 2 π 3 4 η ( 1 2 i ) = Γ ( 1 4 ) 2 7 8 π 3 4 η ( 2 i ) = Γ ( 1 4 ) 2 11 8 π 3 4 η ( 3 i ) = Γ ( 1 4 ) 2 3 3 ( 3 + 2 3 ) 1 12 π 3 4 η ( 4 i ) = − 1 + 2 4 Γ ( 1 4 ) 2 29 16 π 3 4 η ( e 2 π i 3 ) = e − π i 24 3 8 Γ ( 1 3 ) 3 2 2 π {\displaystyle {\begin{aligned}\eta (i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2\pi ^{\frac {3}{4}}}}\\[6pt]\eta \left({\tfrac {1}{2}}i\right)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {7}{8}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (2i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {11}{8}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (3i)&={\frac {\Gamma \left({\frac {1}{4}}\right)}{2{\sqrt[{3}]{3}}\left(3+2{\sqrt {3}}\right)^{\frac {1}{12}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta (4i)&={\frac {{\sqrt[{4}]{-1+{\sqrt {2}}}}\,\Gamma \left({\frac {1}{4}}\right)}{2^{\frac {29}{16}}\pi ^{\frac {3}{4}}}}\\[6pt]\eta \left(e^{\frac {2\pi i}{3}}\right)&=e^{-{\frac {\pi i}{24}}}{\frac {{\sqrt[{8}]{3}}\,\Gamma \left({\frac {1}{3}}\right)^{\frac {3}{2}}}{2\pi }}\end{aligned}}} == Eta quotients == Eta quotients are defined by quotients of the form ∏ 0 < d ∣ N η ( d τ ) r d {\displaystyle \prod _{0<d\mid N}\eta (d\tau )^{r_{d}}} where N is a positive integer, d ranges over the positive divisors of N, and each rd is an integer. 
Linear combinations of eta quotients at imaginary quadratic arguments may be algebraic, while combinations of eta quotients may even be integral. For example, define, j ( τ ) = ( ( η ( τ ) η ( 2 τ ) ) 8 + 2 8 ( η ( 2 τ ) η ( τ ) ) 16 ) 3 j 2 A ( τ ) = ( ( η ( τ ) η ( 2 τ ) ) 12 + 2 6 ( η ( 2 τ ) η ( τ ) ) 12 ) 2 j 3 A ( τ ) = ( ( η ( τ ) η ( 3 τ ) ) 6 + 3 3 ( η ( 3 τ ) η ( τ ) ) 6 ) 2 j 4 A ( τ ) = ( ( η ( τ ) η ( 4 τ ) ) 4 + 4 2 ( η ( 4 τ ) η ( τ ) ) 4 ) 2 = ( η 2 ( 2 τ ) η ( τ ) η ( 4 τ ) ) 24 {\displaystyle {\begin{aligned}j(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{8}+2^{8}\left({\frac {\eta (2\tau )}{\eta (\tau )}}\right)^{16}\right)^{3}\\[6pt]j_{2A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{12}+2^{6}\left({\frac {\eta (2\tau )}{\eta (\tau )}}\right)^{12}\right)^{2}\\[6pt]j_{3A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (3\tau )}}\right)^{6}+3^{3}\left({\frac {\eta (3\tau )}{\eta (\tau )}}\right)^{6}\right)^{2}\\[6pt]j_{4A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (4\tau )}}\right)^{4}+4^{2}\left({\frac {\eta (4\tau )}{\eta (\tau )}}\right)^{4}\right)^{2}=\left({\frac {\eta ^{2}(2\tau )}{\eta (\tau )\,\eta (4\tau )}}\right)^{24}\end{aligned}}} with the 24th power of the Weber modular function 𝔣(τ). 
Then, j ( 1 + − 163 2 ) = − 640320 3 , e π 163 ≈ 640320 3 + 743.99999999999925 … j 2 A ( − 58 2 ) = 396 4 , e π 58 ≈ 396 4 − 104.00000017 … j 3 A ( 1 + − 89 3 2 ) = − 300 3 , e π 89 3 ≈ 300 3 + 41.999971 … j 4 A ( − 7 2 ) = 2 12 , e π 7 ≈ 2 12 − 24.06 … {\displaystyle {\begin{aligned}j\left({\frac {1+{\sqrt {-163}}}{2}}\right)&=-640320^{3},&e^{\pi {\sqrt {163}}}&\approx 640320^{3}+743.99999999999925\dots \\[6pt]j_{2A}\left({\frac {\sqrt {-58}}{2}}\right)&=396^{4},&e^{\pi {\sqrt {58}}}&\approx 396^{4}-104.00000017\dots \\[6pt]j_{3A}\left({\frac {1+{\sqrt {-{\frac {89}{3}}}}}{2}}\right)&=-300^{3},&e^{\pi {\sqrt {\frac {89}{3}}}}&\approx 300^{3}+41.999971\dots \\[6pt]j_{4A}\left({\frac {\sqrt {-7}}{2}}\right)&=2^{12},&e^{\pi {\sqrt {7}}}&\approx 2^{12}-24.06\dots \end{aligned}}} and so on, values which appear in Ramanujan–Sato series. Eta quotients may also be a useful tool for describing bases of modular forms, which are notoriously difficult to compute and express directly. In 1993 Basil Gordon and Kim Hughes proved that if an eta quotient ηg of the form given above, namely ∏ 0 < d ∣ N η ( d τ ) r d {\displaystyle \prod _{0<d\mid N}\eta (d\tau )^{r_{d}}} satisfies ∑ 0 < d ∣ N d r d ≡ 0 ( mod 24 ) and ∑ 0 < d ∣ N N d r d ≡ 0 ( mod 24 ) , {\displaystyle \sum _{0<d\mid N}dr_{d}\equiv 0{\pmod {24}}\quad {\text{and}}\quad \sum _{0<d\mid N}{\frac {N}{d}}r_{d}\equiv 0{\pmod {24}},} then ηg is a weight k modular form for the congruence subgroup Γ0(N) (up to holomorphicity) where k = 1 2 ∑ 0 < d ∣ N r d . {\displaystyle k={\frac {1}{2}}\sum _{0<d\mid N}r_{d}.} This result was extended in 2019 to show that the converse holds when N is coprime to 6; it remains open whether the original theorem is sharp for all integers N. The result also extends to show that any modular eta quotient on a level n congruence subgroup is likewise a modular form for the group Γ(N). 
While these theorems characterize modular eta quotients, the condition of holomorphicity must be checked separately using a theorem that emerged from the work of Gérard Ligozat and Yves Martin: if ηg is an eta quotient satisfying the above conditions for the integer N, and c and d are coprime integers, then the order of vanishing at the cusp c/d relative to Γ0(N) is N 24 ∑ 0 < δ | N gcd ( d , δ ) 2 r δ gcd ( d , N δ ) d δ . {\displaystyle {\frac {N}{24}}\sum _{0<\delta |N}{\frac {\gcd \left(d,\delta \right)^{2}r_{\delta }}{\gcd \left(d,{\frac {N}{\delta }}\right)d\delta }}.} These theorems provide an effective means of creating holomorphic modular eta quotients; however, this may not be sufficient to construct a basis for a vector space of modular forms and cusp forms. A useful theorem for limiting the number of modular eta quotients to consider states that a holomorphic weight k modular eta quotient on Γ0(N) must satisfy ∑ 0 < d ∣ N | r d | ≤ ∏ p ∣ N ( p + 1 p − 1 ) min ( 2 , ord p ( N ) ) , {\displaystyle \sum _{0<d\mid N}|r_{d}|\leq \prod _{p\mid N}\left({\frac {p+1}{p-1}}\right)^{\min {\bigl (}2,{\text{ord}}_{p}(N){\bigr )}},} where ordp(N) denotes the largest integer m such that pm divides N. These results lead to several characterizations of spaces of modular forms that can be spanned by modular eta quotients. Using the graded ring structure on the ring of modular forms, one can compute bases of vector spaces of modular forms composed of C {\displaystyle \mathbb {C} } -linear combinations of eta-quotients. For example, when N = pq is a semiprime, such a procedure can be used to compute an eta-quotient basis of Mk(Γ0(N)). A collection of over 6300 product identities for the Dedekind eta function in a canonical, standardized form is available at the Wayback Machine of Michael Somos' website. 
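As a concrete illustration of the earlier remark that the eta function is easy to compute numerically from its power series, the special values η(i) = Γ(1/4)/(2π^(3/4)) and η(2i) = Γ(1/4)/(2^(11/8)π^(3/4)) quoted above can be checked directly from the q-product. A minimal Python sketch (the function name is ours):

```python
import math

def eta(t, terms=200):
    """Dedekind eta at tau = i*t (real t > 0), via q^(1/24) * prod(1 - q^n)."""
    q = math.exp(-2 * math.pi * t)   # q = e^(2*pi*i*tau) is real for tau = i*t
    value = q ** (1 / 24)
    for n in range(1, terms + 1):
        value *= 1 - q ** n
    return value

# special values from the table above
assert abs(eta(1) - math.gamma(1 / 4) / (2 * math.pi ** (3 / 4))) < 1e-12
assert abs(eta(2) - math.gamma(1 / 4) / (2 ** (11 / 8) * math.pi ** (3 / 4))) < 1e-12
```

Since q = e^(−2π) ≈ 0.0019 at τ = i, the product converges extremely fast; 200 factors is far more than needed for double precision.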
== See also == Chowla–Selberg formula Ramanujan–Sato series q-series Weierstrass elliptic function Partition function Kronecker limit formula Affine Lie algebra == References == == Further reading == Apostol, Tom M. (1990). Modular functions and Dirichlet Series in Number Theory. Graduate Texts in Mathematics. Vol. 41 (2nd ed.). Springer-Verlag. ch. 3. ISBN 3-540-97127-0. Koblitz, Neal (1993). Introduction to Elliptic Curves and Modular Forms. Graduate Texts in Mathematics. Vol. 97 (2nd ed.). Springer-Verlag. ISBN 3-540-97966-2.
|
Wikipedia:Defective matrix#0
|
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n {\displaystyle n\times n} matrix is defective if and only if it does not have n {\displaystyle n} linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems. An n × n {\displaystyle n\times n} defective matrix always has fewer than n {\displaystyle n} distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ {\displaystyle \lambda } with algebraic multiplicity m > 1 {\displaystyle m>1} (that is, they are multiple roots of the characteristic polynomial), but fewer than m {\displaystyle m} linearly independent eigenvectors associated with λ {\displaystyle \lambda } . If the algebraic multiplicity of λ {\displaystyle \lambda } exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with λ {\displaystyle \lambda } ), then λ {\displaystyle \lambda } is said to be a defective eigenvalue. However, every eigenvalue with algebraic multiplicity m {\displaystyle m} always has m {\displaystyle m} linearly independent generalized eigenvectors. A real symmetric matrix, and more generally a Hermitian matrix or a unitary matrix, is never defective; more generally, a normal matrix (which includes Hermitian and unitary matrices as special cases) is never defective. == Jordan block == Any nontrivial Jordan block of size 2 × 2 {\displaystyle 2\times 2} or larger (that is, not completely diagonal) is defective. (A diagonal matrix is a special case of the Jordan normal form with all trivial Jordan blocks of size 1 × 1 {\displaystyle 1\times 1} and is not defective.) 
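The defect of a Jordan block is easy to observe numerically; a quick sketch assuming NumPy, for a 2 × 2 block with a double eigenvalue but a one-dimensional eigenspace:

```python
import numpy as np

# 2x2 Jordan block: eigenvalue 3 with algebraic multiplicity 2
J = np.array([[3.0, 1.0],
              [0.0, 3.0]])

# geometric multiplicity = dim null(J - 3I) = n - rank(J - 3I)
geometric = J.shape[0] - np.linalg.matrix_rank(J - 3.0 * np.eye(2))
assert geometric == 1   # only one independent eigenvector, so J is defective

# the second standard basis vector is a generalized eigenvector: (J - 3I) e2 = e1
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose((J - 3.0 * np.eye(2)) @ e2, e1)
```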
For example, the n × n {\displaystyle n\times n} Jordan block J = [ λ 1 λ ⋱ ⋱ 1 λ ] , {\displaystyle J={\begin{bmatrix}\lambda &1&\;&\;\\\;&\lambda &\ddots &\;\\\;&\;&\ddots &1\\\;&\;&\;&\lambda \end{bmatrix}},} has an eigenvalue, λ {\displaystyle \lambda } with algebraic multiplicity n {\displaystyle n} (or greater if there are other Jordan blocks with the same eigenvalue), but only one distinct eigenvector J v 1 = λ v 1 {\displaystyle Jv_{1}=\lambda v_{1}} , where v 1 = [ 1 0 ⋮ 0 ] . {\displaystyle v_{1}={\begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix}}.} The other canonical basis vectors v 2 = [ 0 1 ⋮ 0 ] , … , v n = [ 0 0 ⋮ 1 ] {\displaystyle v_{2}={\begin{bmatrix}0\\1\\\vdots \\0\end{bmatrix}},~\ldots ,~v_{n}={\begin{bmatrix}0\\0\\\vdots \\1\end{bmatrix}}} form a chain of generalized eigenvectors such that J v k = λ v k + v k − 1 {\displaystyle Jv_{k}=\lambda v_{k}+v_{k-1}} for k = 2 , … , n {\displaystyle k=2,\ldots ,n} . Any defective matrix has a nontrivial Jordan normal form, which is as close as one can come to diagonalization of such a matrix. == Example == A simple example of a defective matrix is [ 3 1 0 3 ] , {\displaystyle {\begin{bmatrix}3&1\\0&3\end{bmatrix}},} which has a double eigenvalue of 3 but only one distinct eigenvector [ 1 0 ] {\displaystyle {\begin{bmatrix}1\\0\end{bmatrix}}} (and constant multiples thereof). == See also == Jordan normal form – Form of a matrix indicating its eigenvalues and their algebraic multiplicities == Notes == == References == Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 978-0-8018-5414-9 Strang, Gilbert (1988). Linear Algebra and Its Applications (3rd ed.). San Diego: Harcourt. ISBN 978-970-686-609-7.
|
Wikipedia:Definite quadratic form#0
|
In mathematics, a definite quadratic form is a quadratic form over some real vector space V that has the same sign (always positive or always negative) for every non-zero vector of V. According to that sign, the quadratic form is called positive-definite or negative-definite. A semidefinite (or semi-definite) quadratic form is defined in much the same way, except that "always positive" and "always negative" are replaced by "never negative" and "never positive", respectively. In other words, it may take on zero values for some non-zero vectors of V. An indefinite quadratic form takes on both positive and negative values and is called an isotropic quadratic form. More generally, these definitions apply to any vector space over an ordered field. == Associated symmetric bilinear form == Quadratic forms correspond one-to-one to symmetric bilinear forms over the same space. A symmetric bilinear form is also described as definite, semidefinite, etc. according to its associated quadratic form. A quadratic form Q and its associated symmetric bilinear form B are related by the following equations: Q ( x ) = B ( x , x ) B ( x , y ) = B ( y , x ) = 1 2 [ Q ( x + y ) − Q ( x ) − Q ( y ) ] . {\displaystyle {\begin{aligned}Q(x)&=B(x,x)\\B(x,y)&=B(y,x)={\tfrac {1}{2}}[Q(x+y)-Q(x)-Q(y)]~.\end{aligned}}} The latter formula arises from expanding Q ( x + y ) = B ( x + y , x + y ) . {\displaystyle \;Q(x+y)=B(x+y,x+y)~.} == Examples == As an example, let V = R 2 {\displaystyle V=\mathbb {R} ^{2}} , and consider the quadratic form Q ( x ) = c 1 x 1 2 + c 2 x 2 2 {\displaystyle Q(x)=c_{1}{x_{1}}^{2}+c_{2}{x_{2}}^{2}} where x = [ x 1 , x 2 ] ∈ V {\displaystyle ~x=[x_{1},x_{2}]\in V} and c1 and c2 are constants. If c1 > 0 and c2 > 0 , the quadratic form Q is positive-definite, so Q evaluates to a positive number whenever [ x 1 , x 2 ] ≠ [ 0 , 0 ] . 
{\displaystyle \;[x_{1},x_{2}]\neq [0,0]~.} If one of the constants is positive and the other is 0, then Q is positive semidefinite and always evaluates to either 0 or a positive number. If c1 > 0 and c2 < 0 , or vice versa, then Q is indefinite and sometimes evaluates to a positive number and sometimes to a negative number. If c1 < 0 and c2 < 0 , the quadratic form is negative-definite and always evaluates to a negative number whenever [ x 1 , x 2 ] ≠ [ 0 , 0 ] . {\displaystyle \;[x_{1},x_{2}]\neq [0,0]~.} And if one of the constants is negative and the other is 0, then Q is negative semidefinite and always evaluates to either 0 or a negative number. In general a quadratic form in two variables will also involve a cross-product term in x1·x2: Q ( x ) = c 1 x 1 2 + c 2 x 2 2 + 2 c 3 x 1 x 2 . {\displaystyle Q(x)=c_{1}{x_{1}}^{2}+c_{2}{x_{2}}^{2}+2c_{3}x_{1}x_{2}~.} This quadratic form is positive-definite if c 1 > 0 {\displaystyle \;c_{1}>0\;} and c 1 c 2 − c 3 2 > 0 , {\displaystyle \,c_{1}c_{2}-{c_{3}}^{2}>0\;,} negative-definite if c 1 < 0 {\displaystyle \;c_{1}<0\;} and c 1 c 2 − c 3 2 > 0 , {\displaystyle \,c_{1}c_{2}-{c_{3}}^{2}>0\;,} and indefinite if c 1 c 2 − c 3 2 < 0 . {\displaystyle \;c_{1}c_{2}-{c_{3}}^{2}<0~.} It is positive or negative semidefinite if c 1 c 2 − c 3 2 = 0 , {\displaystyle \;c_{1}c_{2}-{c_{3}}^{2}=0\;,} with the sign of the semidefiniteness coinciding with the sign of c 1 . {\displaystyle \;c_{1}~.} This bivariate quadratic form appears in the context of conic sections centered on the origin. If the general quadratic form above is equated to 0, the resulting equation is that of an ellipse if the quadratic form is positive or negative-definite, a hyperbola if it is indefinite, and a parabola if c 1 c 2 − c 3 2 = 0 . {\displaystyle \;c_{1}c_{2}-{c_{3}}^{2}=0~.} The square of the Euclidean norm in n-dimensional space, the most commonly used measure of distance, is x 1 2 + ⋯ + x n 2 . 
{\displaystyle {x_{1}}^{2}+\cdots +{x_{n}}^{2}~.} In two dimensions this means that the distance between two points is the square root of the sum of the squared distances along the x 1 {\displaystyle x_{1}} axis and the x 2 {\displaystyle x_{2}} axis. == Matrix form == A quadratic form can be written in terms of matrices as x T A x {\displaystyle x^{\mathsf {T}}A\,x} where x is any n×1 Cartesian vector [ x 1 , ⋯ , x n ] T {\displaystyle \;[x_{1},\cdots ,x_{n}]^{\mathsf {T}}\;} in which at least one element is not 0; A is an n × n symmetric matrix; and superscript T denotes a matrix transpose. If A is diagonal this is equivalent to a non-matrix form containing solely terms involving squared variables; but if A has any non-zero off-diagonal elements, the non-matrix form will also contain some terms involving products of two different variables. Positive or negative-definiteness or semi-definiteness, or indefiniteness, of this quadratic form is equivalent to the same property of A, which can be checked by considering all eigenvalues of A or by checking the signs of all of its principal minors. == Optimization == Definite quadratic forms lend themselves readily to optimization problems. Suppose the matrix quadratic form is augmented with linear terms, as x T A x + b T x , {\displaystyle x^{\mathsf {T}}A\,x+b^{\mathsf {T}}x\;,} where b is an n×1 vector of constants. The first-order conditions for a maximum or minimum are found by setting the matrix derivative to the zero vector: 2 A x + b = 0 → , {\displaystyle 2A\,x+b={\vec {0}}\;,} giving x = − 1 2 A − 1 b , {\displaystyle x=-{\tfrac {1}{2}}\,A^{-1}b\;,} assuming A is nonsingular. If the quadratic form, and hence A, is positive-definite, the second-order conditions for a minimum are met at this point. If the quadratic form is negative-definite, the second-order conditions for a maximum are met. 
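The first- and second-order conditions above can be verified numerically on a small positive-definite example; a sketch assuming NumPy (the particular A and b below are ours):

```python
import numpy as np

# quadratic objective x^T A x + b^T x with a positive-definite A
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

# definiteness check via eigenvalues of the symmetric matrix A
assert np.all(np.linalg.eigvalsh(A) > 0)

# first-order condition 2 A x + b = 0  =>  x* = -(1/2) A^{-1} b
x_star = -0.5 * np.linalg.solve(A, b)
assert np.allclose(2 * A @ x_star + b, 0)

def f(x):
    return x @ A @ x + b @ x

# since A is positive-definite, x* is a minimum: perturbations only increase f
rng = np.random.default_rng(0)
for _ in range(100):
    assert f(x_star + 0.01 * rng.standard_normal(2)) >= f(x_star)
```

The last loop works because f(x* + d) − f(x*) = dᵀAd, which is nonnegative exactly when A is positive (semi)definite.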
An important example of such an optimization arises in multiple regression, in which a vector of estimated parameters is sought which minimizes the sum of squared deviations from a perfect fit within the dataset. == See also == Isotropic quadratic form Positive-definite function Positive-definite matrix Polarization identity == Notes == == References == Kitaoka, Yoshiyuki (1993). Arithmetic of quadratic forms. Cambridge Tracts in Mathematics. Vol. 106. Cambridge University Press. ISBN 0-521-40475-4. Zbl 0785.11021. Lang, Serge (2004), Algebra, Graduate Texts in Mathematics, vol. 211 (Corrected fourth printing, revised third ed.), New York: Springer-Verlag, p. 578, ISBN 978-0-387-95385-4. Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer. ISBN 3-540-06009-X. Zbl 0292.10016.
|
Wikipedia:Deformation ring#0
|
In mathematics, a deformation ring is a ring that controls liftings of a representation of a Galois group from a finite field to a local field. In particular for any such lifting problem there is often a universal deformation ring that classifies all such liftings, and whose spectrum is the universal deformation space. A key step in Wiles's proof of the modularity theorem was to study the relation between universal deformation rings and Hecke algebras. == See also == Deformation (mathematics) Galois module == References == Cornell, Gary; Silverman, Joseph H.; Stevens, Glenn, eds. (1997), Modular forms and Fermat's last theorem, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94609-2, MR 1638473
|
Wikipedia:Degree matrix#0
|
In the mathematical field of algebraic graph theory, the degree matrix of an undirected graph is a diagonal matrix which contains information about the degree of each vertex—that is, the number of edges attached to each vertex. It is used together with the adjacency matrix to construct the Laplacian matrix of a graph: the Laplacian matrix is the difference of the degree matrix and the adjacency matrix. == Definition == Given a graph G = ( V , E ) {\displaystyle G=(V,E)} with | V | = n {\displaystyle |V|=n} , the degree matrix D {\displaystyle D} for G {\displaystyle G} is an n × n {\displaystyle n\times n} diagonal matrix defined as D i , j := { deg ( v i ) if i = j 0 otherwise {\displaystyle D_{i,j}:=\left\{{\begin{matrix}\deg(v_{i})&{\mbox{if}}\ i=j\\0&{\mbox{otherwise}}\end{matrix}}\right.} where the degree deg ( v i ) {\displaystyle \deg(v_{i})} of a vertex counts the number of times an edge terminates at that vertex. In an undirected graph, this means that each loop increases the degree of a vertex by two. In a directed graph, the term degree may refer either to indegree (the number of incoming edges at each vertex) or outdegree (the number of outgoing edges at each vertex). == Example == The following undirected graph has a 6 × 6 degree matrix with values: Note that in the case of undirected graphs, an edge that starts and ends in the same node increases the corresponding degree value by 2 (i.e. it is counted twice). == Properties == The degree matrix of a k-regular graph has a constant diagonal of k {\displaystyle k} . According to the degree sum formula, the trace of the degree matrix is twice the number of edges of the considered graph. == References ==
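The definition, the Laplacian construction, and the trace property can all be checked on a small graph; a sketch assuming NumPy (the 4-vertex example graph is ours, not the 6-vertex graph pictured in the article):

```python
import numpy as np

# adjacency matrix of an undirected graph with edges 0-1, 0-2, 1-2, 2-3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

D = np.diag(A.sum(axis=1))   # degree matrix: vertex degrees on the diagonal
L = D - A                    # Laplacian matrix

assert np.array_equal(np.diag(D), [2, 2, 3, 1])
assert np.array_equal(L.sum(axis=1), [0, 0, 0, 0])  # Laplacian rows sum to zero
assert np.trace(D) == 2 * 4  # degree sum formula: trace = twice the edge count
```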
|
Wikipedia:Deirdre Smeltzer#0
|
Deirdre Longacher Smeltzer (born 1964) is an American mathematician, mathematics educator, textbook author, and academic administrator. A former professor, dean, and vice president at Eastern Mennonite University, she is Senior Director for Programs at the Mathematical Association of America. == Education and career == Smeltzer was a mathematics major at Eastern Mennonite University, graduating in 1987 with a minor in Bible study. At Eastern Mennonite, mathematicians Millard Showalter and Del Snyder became faculty mentors, encouraging her to continue in advanced mathematics. She went on to graduate study in mathematics at the University of Virginia, earning a master's degree and completing her Ph.D. in 1994, with the dissertation Topics in Difference Sets in 2-Groups on difference sets in group theory, supervised by Harold Ward. She became a faculty member at the University of St. Thomas, a Catholic university in Saint Paul, Minnesota, before returning to Eastern Mennonite University as a faculty member in 1998. She chaired the mathematical sciences department from 2005 to 2012. As an undergraduate at Eastern Mennonite, she had participated in a cross-cultural visit to China, and as a faculty member, she led another such visit in 2013, and became director of cross-cultural programs for the university. In 2013, she was named the university's vice president and undergraduate dean. In that position, she led the university's creation of new programs in political science and global studies, among others. She stepped down from her administrative positions at Eastern Mennonite in 2019, and joined the Mathematical Association of America as Senior Director for Programs in 2020. == Textbooks == Smeltzer is the coauthor of two undergraduate textbooks in mathematics: Methods for Euclidean Geometry (with Owen Byer and Felix Lazebnik, Mathematical Association of America, 2010) and Journey into Discrete Mathematics (with Owen Byer and Kenneth Wantz, MAA Press, 2018). 
== References ==
|
Wikipedia:Delta operator#0
|
In mathematics, a delta operator is a shift-equivariant linear operator Q : K [ x ] ⟶ K [ x ] {\displaystyle Q\colon \mathbb {K} [x]\longrightarrow \mathbb {K} [x]} on the vector space of polynomials in a variable x {\displaystyle x} over a field K {\displaystyle \mathbb {K} } that reduces degrees by one. To say that Q {\displaystyle Q} is shift-equivariant means that if g ( x ) = f ( x + a ) {\displaystyle g(x)=f(x+a)} , then ( Q g ) ( x ) = ( Q f ) ( x + a ) . {\displaystyle {(Qg)(x)=(Qf)(x+a)}.\,} In other words, if f {\displaystyle f} is a "shift" of g {\displaystyle g} , then Q f {\displaystyle Qf} is also a shift of Q g {\displaystyle Qg} , and has the same "shifting vector" a {\displaystyle a} . To say that an operator reduces degree by one means that if f {\displaystyle f} is a polynomial of degree n {\displaystyle n} , then Q f {\displaystyle Qf} is either a polynomial of degree n − 1 {\displaystyle n-1} , or, in case n = 0 {\displaystyle n=0} , Q f {\displaystyle Qf} is 0. Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in x {\displaystyle x} that maps x {\displaystyle x} to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition when K {\displaystyle \mathbb {K} } has characteristic zero, since shift-equivariance is a fairly strong condition. == Examples == The forward difference operator ( Δ f ) ( x ) = f ( x + 1 ) − f ( x ) {\displaystyle (\Delta f)(x)=f(x+1)-f(x)\,} is a delta operator. Differentiation with respect to x, written as D, is also a delta operator. Any operator of the form ∑ k = 1 ∞ c k D k {\displaystyle \sum _{k=1}^{\infty }c_{k}D^{k}} (where Dn(ƒ) = ƒ(n) is the nth derivative) with c 1 ≠ 0 {\displaystyle c_{1}\neq 0} is a delta operator. It can be shown that all delta operators can be written in this form. 
For example, the difference operator given above can be expanded as Δ = e D − 1 = ∑ k = 1 ∞ D k k ! . {\displaystyle \Delta =e^{D}-1=\sum _{k=1}^{\infty }{\frac {D^{k}}{k!}}.} The generalized derivative of time scale calculus which unifies the forward difference operator with the derivative of standard calculus is a delta operator. In computer science and cybernetics, the term "discrete-time delta operator" (δ) is generally taken to mean a difference operator ( δ f ) ( x ) = f ( x + Δ t ) − f ( x ) Δ t , {\displaystyle {(\delta f)(x)={{f(x+\Delta t)-f(x)} \over {\Delta t}}},} the Euler approximation of the usual derivative with a discrete sample time Δ t {\displaystyle \Delta t} . The delta-formulation obtains a significant number of numerical advantages compared to the shift-operator at fast sampling. == Basic polynomials == Every delta operator Q {\displaystyle Q} has a unique sequence of "basic polynomials", a polynomial sequence defined by three conditions: p 0 ( x ) = 1 ; {\displaystyle p_{0}(x)=1;} p n ( 0 ) = 0 ; {\displaystyle p_{n}(0)=0;} ( Q p n ) ( x ) = n p n − 1 ( x ) for all n ∈ N . {\displaystyle (Qp_{n})(x)=np_{n-1}(x){\text{ for all }}n\in \mathbb {N} .} Such a sequence of basic polynomials is always of binomial type, and it can be shown that no other sequences of binomial type exist. If the first two conditions above are dropped, then the third condition says this polynomial sequence is a Sheffer sequence—a more general concept. == See also == Pincherle derivative Shift operator Umbral calculus == References == Nikol'Skii, Nikolai Kapitonovich (1986), Treatise on the shift operator: spectral function theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-15021-5 == External links == Weisstein, Eric W. "Delta Operator". MathWorld.
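Both defining properties of the forward difference operator, and its sequence of basic polynomials (the falling factorials), can be checked directly on sample polynomials; a small Python sketch (helper names are ours):

```python
def delta(f):
    # forward difference: (delta f)(x) = f(x + 1) - f(x)
    return lambda x: f(x + 1) - f(x)

# degree reduction and shift-equivariance on f(x) = x^3
f = lambda x: x**3
df = delta(f)                       # (x+1)^3 - x^3 = 3x^2 + 3x + 1, degree 2
assert all(df(x) == 3*x**2 + 3*x + 1 for x in range(-5, 6))
a = 7
g = lambda x: f(x + a)              # a shift of f
assert all(delta(g)(x) == df(x + a) for x in range(-5, 6))

def falling(n):
    # falling factorial x(x-1)...(x-n+1): the basic polynomial p_n for delta
    def p(x):
        out = 1
        for k in range(n):
            out *= x - k
        return out
    return p

# the three defining conditions: p_0 = 1, p_n(0) = 0, delta p_n = n p_{n-1}
assert falling(0)(5) == 1
assert all(falling(n)(0) == 0 for n in range(1, 6))
assert all(delta(falling(n))(x) == n * falling(n - 1)(x)
           for n in range(1, 6) for x in range(-4, 5))
```

The last check is the discrete analogue of D xⁿ = n xⁿ⁻¹, which is why the falling factorials play the role of the monomials in the calculus of finite differences.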
|
Wikipedia:Demonic composition#0
|
In mathematics, demonic composition is an operation on binary relations that is similar to the ordinary composition of relations but is robust to refinement of the relations into (partial) functions or injective relations. Unlike ordinary composition of relations, demonic composition is not associative. == Definition == Suppose R {\displaystyle R} is a binary relation between X {\displaystyle X} and Y {\displaystyle Y} and S {\displaystyle S} is a relation between Y {\displaystyle Y} and Z . {\displaystyle Z.} Their right demonic composition R ; → S {\displaystyle R{\textbf {;}}^{\to }S} is a relation between X {\displaystyle X} and Z . {\displaystyle Z.} Its graph is defined as { ( x , z ) : x ( S ∘ R ) z and for all y ∈ Y ( x R y implies y S z ) } . {\displaystyle \{(x,z)\ :\ x\mathrel {(S\circ R)} z{\text{ and for all }}y\in Y\ (x\mathrel {R} y{\text{ implies }}y\mathrel {S} z)\}.} Conversely, their left demonic composition R ; ← S {\displaystyle R{\textbf {;}}^{\leftarrow }S} is defined by { ( x , z ) : x ( S ∘ R ) z and for all y ∈ Y ( y S z implies x R y ) } . {\displaystyle \{(x,z)\ :\ x\mathrel {(S\circ R)} z{\text{ and for all }}y\in Y\ (y\mathrel {S} z{\text{ implies }}x\mathrel {R} y)\}.} == References == Backhouse, Roland; van der Woude, Jaap (1993), "Demonic operators and monotype factors", Mathematical Structures in Computer Science, 3 (4): 417–433, CiteSeerX 10.1.1.40.9602, doi:10.1017/S096012950000030X, MR 1249420.
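With relations represented as sets of pairs, the right demonic composition can be computed directly from the definition above; a small sketch (function names and the example relations are ours):

```python
def compose(R, S):
    # ordinary relational composition S o R
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def right_demonic(R, S):
    # keep (x, z) only if every R-successor y of x satisfies y S z
    return {(x, z) for (x, z) in compose(R, S)
            if all((y, z) in S for (x2, y) in R if x2 == x)}

R = {(1, 'a'), (1, 'b'), (2, 'a')}
S = {('a', 10), ('b', 20)}

# (1, 10) survives ordinary composition but not demonic composition,
# because 1 R b while b is not S-related to 10
assert (1, 10) in compose(R, S)
assert right_demonic(R, S) == {(2, 10)}
```

This is the "robustness" mentioned above: a pair (x, z) is kept only when every way of refining R at x still lands in the domain where S can reach z.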
|
Wikipedia:Denis Higgs#0
|
A Higgs prime, named after Denis Higgs, is a prime number with a totient (one less than the prime) that evenly divides the square of the product of the smaller Higgs primes. (This can be generalized to cubes, fourth powers, etc.) To put it algebraically, given an exponent a, a Higgs prime Hpn satisfies ϕ ( H p n ) | ∏ i = 1 n − 1 H p i a and H p n > H p n − 1 {\displaystyle \phi (Hp_{n})|\prod _{i=1}^{n-1}{Hp_{i}}^{a}{\mbox{ and }}Hp_{n}>Hp_{n-1}} where ϕ(x) is Euler's totient function. For squares, the first few Higgs primes are 2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47, ... (sequence A007459 in the OEIS). So, for example, 13 is a Higgs prime because the square of the product of the smaller Higgs primes is 5336100, and divided by 12 this is 444675. But 17 is not a Higgs prime because the square of the product of the smaller primes is 901800900, which leaves a remainder of 4 when divided by 16. From observation of the first few Higgs primes for squares through seventh powers, it would seem more compact to list those primes that are not Higgs primes: Observation further reveals that a Fermat prime 2 2 n + 1 {\displaystyle 2^{2^{n}}+1} cannot be a Higgs prime for the ath power if a is less than 2n. It is not known if there are infinitely many Higgs primes for any exponent a greater than 1. The situation is quite different for a = 1. There are only four of them: 2, 3, 7 and 43 (a sequence suspiciously similar to Sylvester's sequence). Burris & Lee (1993) found that about a fifth of the primes below a million are Higgs primes, and they concluded that even if the sequence of Higgs primes for squares is finite, "a computer enumeration is not feasible." == References == Burris, S.; Lee, S. (1993). "Tarski's high school identities". Amer. Math. Monthly. 100 (3): 231–236 [p. 233]. doi:10.1080/00029890.1993.11990393. JSTOR 2324454. Sloane, N.; Plouffe, S. (1995). The Encyclopedia of Integer Sequences. New York: Academic Press. ISBN 0-12-558630-2. M0660
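The defining divisibility test translates directly into a short search that reproduces the values discussed above, including the four Higgs primes for a = 1; a Python sketch (function names are ours):

```python
def is_prime(n):
    # trial division, sufficient for small n
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def higgs_primes(limit, a=2):
    """Higgs primes below `limit` for the a-th power (a=2 gives squares)."""
    primes = []
    prod_pow = 1   # (product of the Higgs primes found so far) ** a
    for p in range(2, limit):
        if is_prime(p) and prod_pow % (p - 1) == 0:
            primes.append(p)
            prod_pow *= p ** a
    return primes

# squares: matches the listed sequence; 17 fails since 901800900 mod 16 == 4
assert higgs_primes(50) == [2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47]
# a = 1: exactly the four primes 2, 3, 7, 43
assert higgs_primes(50, a=1) == [2, 3, 7, 43]
```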
|
Wikipedia:Denis Miéville#0
|
Denis Miéville (15 September 1946 – 27 October 2018) was a Swiss expert on the logic of Stanislaw Lesniewski and natural logic. == Biography == Denis Miéville was raised in the towns of Colombier (Canton of Neuchâtel) and Essert-Pittet (Canton of Vaud). After studying mathematics and logic at the University of Neuchâtel (Switzerland) and Bowling Green University (Ohio, United States), Denis Miéville developed an interest in the development and formalization of natural logic that led him to study both the theory of collective classes and the foundations of maximal predicates in propositional logic. These interests were integrated in the doctoral thesis that he defended in 1984 ("A Development of Stanislaw Lesniewski's logical systems: Protothetic, ontology and mereology") at the University of Neuchâtel, supervised by the eminent logician Jean-Blaise Grize. Appointed Professor at the University of Neuchâtel in 1987 (he would later serve as its rector from 1999 to 2003), he taught logic and chaired the Semiologic Research Centre. Professor Miéville also taught at various institutions such as the University of Geneva, the University of Rennes in France and the University of Iasi in Romania. The University of Iasi awarded him an honorary degree (honoris causa) in 2003. He also received the Honorary Certificate of the Francophonie in 2001. == Research interests == Professor Miéville developed an expertise on Lesniewski's logic due to his interest in developmental systems, which have the advantage of being dynamic, universal, free and of a higher order. Moreover, by developing a theory of syntactic-semantic categories, he focussed on a methodology able to identify logico-discursive indices in texts, by representing them as argumentative and reasoned structures. His interests became more and more inclined towards understanding the way in which discursive thought creates meaning, by inscribing it into reasoning networks. 
This is one of the reasons that led Professor Miéville to specifically examine the discursive procedures from which new knowledge is developed, such as those proceeding by analogy, and those structuring creative definitions. Very sensitive to the epistemological dimension of knowledge, he was interested in how concepts develop gradually and crystallize into stable entities. == Publications == === Books === Introduction à l'œuvre de S. Leśniewski. VI: La métalangue d'une syntaxe inscriptionnelle, Neuchâtel, Travaux de logique, 2009. Introduction à l'œuvre de S. Leśniewski. II. L'Ontologie. Neuchâtel, Travaux de logique, 2004. Introduction à l'œuvre de S. Leśniewski. I. La Protothétique. Neuchâtel, Travaux de logique, 2001. Pensée logico-mathématique. Nouveaux objets interdisciplinaires. Paris: P.U.F., 1993 (Collaboration with O. Houdé). Essai de logique naturelle. Berne, Lang, 1992 (Collaboration with J.-B. Grize and M.-J. Borel) === Scientific editor === Stanislaw Leśniewski aujourd’hui (Ed., with Denis Vernant), Grenoble/Neuchâtel, Groupe de Recherches sur la philosophie et le langage/ Centre de Recherches Sémiologiques 1995. == References == Faculty of Letters and Human Sciences of the UNINE Institute of Philosophy of the UNINE Communalis Cursus and main publications
|
Wikipedia:Denjoy–Carleman–Ahlfors theorem#0
|
Lars Valerian Ahlfors (18 April 1907 – 11 October 1996) was a Finnish mathematician, remembered for his work in the field of Riemann surfaces and his textbook on complex analysis. == Background == Ahlfors was born in Helsinki, Finland. His mother, Sievä Helander, died at his birth. His father, Axel Ahlfors, was a professor of engineering at the Helsinki University of Technology. The Ahlfors family was Swedish-speaking, so he first attended the private school Nya svenska samskolan, where all classes were taught in Swedish. Ahlfors studied at the University of Helsinki from 1924, graduating in 1928 having studied under Ernst Lindelöf and Rolf Nevanlinna. He assisted Nevanlinna in 1929 with his work on Denjoy's conjecture on the number of asymptotic values of an entire function. In 1929 Ahlfors published the first proof of this conjecture, now known as the Denjoy–Carleman–Ahlfors theorem. It states that the number of asymptotic values approached by an entire function of order ρ along curves in the complex plane going toward infinity is less than or equal to 2ρ. He completed his doctorate at the University of Helsinki in 1930. == Career == Ahlfors worked as an associate professor at the University of Helsinki from 1933 to 1936. In 1936 he was one of the first two people to be awarded the Fields Medal (the other was Jesse Douglas). In 1935 Ahlfors visited Harvard University. He returned to Finland in 1938 to take up a professorship at the University of Helsinki. The outbreak of war in 1939 caused difficulties, although Ahlfors was unfit for military service. He was offered a position at the Swiss Federal Institute of Technology in Zurich in 1944 and finally managed to travel there in March 1945. He did not enjoy his time in Switzerland, so in 1946 he jumped at a chance to leave, returning to work at Harvard, where he remained until his retirement in 1977; he was William Caspar Graustein Professor of Mathematics from 1964. 
Ahlfors was a visiting scholar at the Institute for Advanced Study in 1962 and again in 1966. He was awarded the Wihuri Prize in 1968 and the Wolf Prize in Mathematics in 1981. He served as the Honorary President of the International Congress of Mathematicians in 1986 at Berkeley, California, in celebration of the 50th anniversary of the award of his Fields Medal. His book Complex Analysis (1953) is the classic text on the subject and is almost certainly referenced in any more recent text which makes heavy use of complex analysis. Ahlfors wrote several other significant books, including Riemann surfaces (1960) and Conformal invariants (1973). He made decisive contributions to meromorphic curves, value distribution theory, Riemann surfaces, conformal geometry, quasiconformal mappings and other areas during his career. == Personal life == In 1933, he married Erna Lehnert, an Austrian who with her parents had first settled in Sweden and then in Finland. The couple had three daughters. Ahlfors died of pneumonia at the Willowwood nursing home in Pittsfield, Massachusetts in 1996. == See also == Ahlfors finiteness theorem Ahlfors function Ahlfors measure conjecture Beurling–Ahlfors transform Schwarz–Ahlfors–Pick theorem Measurable Riemann mapping theorem == Bibliography == Articles Ahlfors, Lars V. An extension of Schwarz's lemma. Trans. Amer. Math. Soc. 43 (1938), no. 3, 359–364. doi:10.2307/1990065 Ahlfors, Lars; Beurling, Arne. Conformal invariants and function-theoretic null-sets. Acta Math. 83 (1950), 101–129. doi:10.1007/BF02392634 Beurling, A.; Ahlfors, L. The boundary correspondence under quasiconformal mappings. Acta Math. 96 (1956), 125–142. doi:10.1007/BF02392360 Ahlfors, Lars; Bers, Lipman. Riemann's mapping theorem for variable metrics. Ann. of Math. (2) 72 (1960), 385–404. doi:10.2307/1970141 Ahlfors, Lars Valerian. Collected papers. Vol. 1. 1929–1955. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. 
xix+520 pp. ISBN 3-7643-3075-9 Ahlfors, Lars Valerian. Collected papers. Vol. 2. 1954–1979. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. xix+515 pp. ISBN 3-7643-3076-7 Books Ahlfors, Lars V. Complex analysis. An introduction to the theory of analytic functions of one complex variable. Third edition. International Series in Pure and Applied Mathematics. McGraw-Hill Book Co., New York, 1978. xi+331 pp. ISBN 0-07-000657-1 Ahlfors, Lars V. Conformal invariants. Topics in geometric function theory. Reprint of the 1973 original. With a foreword by Peter Duren, F. W. Gehring and Brad Osgood. AMS Chelsea Publishing, Providence, RI, 2010. xii+162 pp. ISBN 978-0-8218-5270-5 Ahlfors, Lars V. Lectures on quasiconformal mappings. Second edition. With supplemental chapters by C. J. Earle, I. Kra, M. Shishikura and J. H. Hubbard. University Lecture Series, 38. American Mathematical Society, Providence, RI, 2006. viii+162 pp. ISBN 0-8218-3644-7 Ahlfors, Lars V. Möbius transformations in several dimensions. Ordway Professorship Lectures in Mathematics. University of Minnesota, School of Mathematics, Minneapolis, Minn., 1981. ii+150 pp. Ahlfors, Lars V.; Sario, Leo. Riemann surfaces. Princeton Mathematical Series, No. 26 Princeton University Press, Princeton, N.J. 1960 xi+382 pp. == References == == External links == Media related to Lars Ahlfors at Wikimedia Commons Lars Ahlfors at the Mathematics Genealogy Project Ahlfors entry on Harvard University Mathematics department web site. Papers of Lars Valerian Ahlfors : an inventory (Harvard University Archives) Lars Valerian Ahlfors The MacTutor History of Mathematics page about Ahlfors The Mathematics of Lars Valerian Ahlfors, Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). Lars Valerian Ahlfors (1907–1996), Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). Frederick Gehring (2005). 
"Lars Valerian Ahlfors: a biographical memoir" (PDF). Biographical Memoirs. 87. National Academy of Sciences Biographical Memoir Author profile in the database zbMATH
|
Wikipedia:Denjoy–Koksma inequality#0
|
In mathematics, the Denjoy–Koksma inequality, introduced by Herman (1979, p.73) as a combination of work of Arnaud Denjoy and the Koksma–Hlawka inequality of Jurjen Ferdinand Koksma, is a bound for Weyl sums {\displaystyle \sum _{k=0}^{m-1}f(x+k\omega )} of functions f of bounded variation. == Statement == Suppose that a map f from the circle T to itself has irrational rotation number α, and p/q is a rational approximation to α with p and q coprime, |α − p/q| < 1/q². Suppose that φ is a function of bounded variation, and μ a probability measure on the circle invariant under f. Then {\displaystyle \left|\sum _{i=0}^{q-1}\phi \circ f^{i}(x)-q\int _{T}\phi \,d\mu \right|\leqslant \operatorname {Var} (\phi )} (Herman 1979, p.73) == References == Herman, Michael-Robert (1979), "Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations", Publications Mathématiques de l'IHÉS (49): 5–233, ISSN 1618-1913, MR 0538680 Kuipers, L.; Niederreiter, H. (1974), Uniform distribution of sequences, New York: Wiley-Interscience [John Wiley & Sons], ISBN 978-0-486-45019-3, MR 0419394, Reprinted by Dover 2006
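For the concrete case of the rigid rotation f(x) = x + α (mod 1), whose invariant probability measure is Lebesgue measure, the inequality can be checked numerically. A sketch in Python (function names are ours), using the golden-mean rotation and its Fibonacci convergents p/q as the test case:

```python
import math

def denjoy_koksma_holds(alpha, p, q, phi, var_phi, mean_phi, x=0.0):
    """Check the Denjoy-Koksma bound for the circle rotation x -> x + alpha:
    |sum_{i=0}^{q-1} phi(f^i(x)) - q * integral(phi)| <= Var(phi),
    where p/q approximates alpha with p, q coprime and |alpha - p/q| < 1/q^2."""
    assert math.gcd(p, q) == 1 and abs(alpha - p / q) < 1 / q**2
    birkhoff = sum(phi((x + i * alpha) % 1.0) for i in range(q))
    return abs(birkhoff - q * mean_phi) <= var_phi

# phi(x) = cos(2*pi*x): total variation 4 on the circle, Lebesgue mean 0.
golden = (math.sqrt(5) - 1) / 2
phi = lambda t: math.cos(2 * math.pi * t)
for p, q in [(1, 2), (2, 3), (3, 5), (5, 8), (8, 13), (55, 89)]:
    assert denjoy_koksma_holds(golden, p, q, phi, var_phi=4.0, mean_phi=0.0, x=0.3)
```

The Birkhoff sums here are in fact far below the bound Var(φ) = 4, reflecting that the inequality holds uniformly over all starting points x.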
|
Wikipedia:Denjoy–Luzin theorem#0
|
In mathematics, the Denjoy–Luzin theorem, introduced independently by Denjoy (1912) and Luzin (1912), states that if a trigonometric series converges absolutely on a set of positive measure, then the series of its coefficients converges absolutely, and in particular the trigonometric series converges absolutely everywhere. == References == Denjoy, Arnaud (1912), "Sur l'absolue convergence des séries trigonométriques", C. R. Acad. Sci., 155: 135–136 "Denjoy-Luzin theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Luzin, N. N. (1912), "On the convergence of trigonometric series", Moskau Math. Samml. (in Russian), 28: 461–472, JFM 43.0319.03
|
Wikipedia:Denjoy–Luzin–Saks theorem#0
|
In mathematics, the Denjoy–Luzin–Saks theorem states that a function of generalized bounded variation in the restricted sense has a derivative almost everywhere, and gives further conditions on the set of values of the function where the derivative does not exist. N. N. Luzin and A. Denjoy proved a weaker form of the theorem, and Saks (1937, theorem 7.2, page 230) later strengthened their theorem. == References == Saks, Stanisław (1937), Theory of the Integral, Monografie Matematyczne, vol. 7 (2nd ed.), Warsaw-Lwów: G.E. Stechert & Co., JFM 63.0183.05, Zbl 0017.30004, archived from the original on 2006-12-12
|
Wikipedia:Denjoy–Young–Saks theorem#0
|
In mathematics, the Denjoy–Young–Saks theorem gives some possibilities for the Dini derivatives of a function that hold almost everywhere. Denjoy (1915) proved the theorem for continuous functions, Young (1917) extended it to measurable functions, and Saks (1924) extended it to arbitrary functions. Saks (1937, Chapter IX, section 4) and Bruckner (1978, chapter IV, theorem 4.4) give historical accounts of the theorem. == Statement == If f is a real valued function defined on an interval, then with the possible exception of a set of measure 0 on the interval, the Dini derivatives of f (writing D⁺f, D₊f for the upper and lower right derivatives and D⁻f, D₋f for the upper and lower left ones) satisfy one of the following four conditions at each point: f has a finite derivative; D⁺f = D₋f is finite, D⁻f = ∞, D₊f = −∞; D⁻f = D₊f is finite, D⁺f = ∞, D₋f = −∞; D⁻f = D⁺f = ∞, D₋f = D₊f = −∞. == References == Bruckner, Andrew M. (1978), Differentiation of real functions, Lecture Notes in Mathematics, vol. 659, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0069821, ISBN 978-3-540-08910-0, MR 0507448 Saks, Stanisław (1937), Theory of the Integral, Monografie Matematyczne, vol. 7 (2nd ed.), Warsaw-Lwów: G.E. Stechert & Co., JFM 63.0183.05, Zbl 0017.30004, archived from the original on 2006-12-12 Young, Grace Chisholm (1917), "On the Derivates of a Function" (PDF), Proc. London Math. Soc., 15 (1): 360–384, doi:10.1112/plms/s2-15.1.360
|
Wikipedia:Dennis Gaitsgory#0
|
Dennis Gaitsgory (born 17 November 1973) is an Israeli-American mathematician. He is a mathematician at the Max Planck Institute for Mathematics (MPIM) in Bonn and is known for his research on the geometric Langlands program. == Life and career == Born in Chișinău (now in Moldova), he grew up in Tajikistan before studying at Tel Aviv University under Joseph Bernstein (1990–1996). He received his doctorate in 1997 for a thesis entitled "Automorphic Sheaves and Eisenstein Series". He has been awarded a Harvard Junior Fellowship, a Clay Research Fellowship, and the prize of the European Mathematical Society for his work. His work in geometric Langlands culminated in a joint 2002 paper with Edward Frenkel and Kari Vilonen, establishing the conjecture for finite fields, and a separate 2004 paper, generalizing the proof to include the field of complex numbers as well. Prior to his current appointment at MPIM Bonn, he was a professor of mathematics at Harvard and an associate professor at the University of Chicago from 2001 to 2005. == Honors and awards == In 2025, he received the Breakthrough Prize in Mathematics. == Selected publications == Gaitsgory, Dennis; Rozenblyum, Nick (2017). A Study in Derived Algebraic Geometry. American Mathematical Society. ISBN 978-1-4704-3569-1. Gaitsgory, Dennis; Lurie, Jacob (19 February 2019). Weil's Conjecture for Function Fields: Volume I (AMS-199). Princeton University Press. ISBN 978-0-691-18443-2. == References == == External links == Dennis Gaitsgory at the Mathematics Genealogy Project Gaitsgory's thesis
|
Wikipedia:Denny Gulick#0
|
Denny Gulick (born Sidney Lewis Gulick III, July 29, 1936) is an emeritus professor of mathematics at University of Maryland, College Park. == Life == Gulick graduated from Oberlin College, Ohio, then obtained his PhD from Yale University; his main interest was operator theory. He is the leader of College Mathematics in Maryland, and is active in statewide college education and policies. He has written several textbooks, including Encounters with Chaos (1992) and six editions of Calculus with Analytic Geometry, with fellow University of Maryland math professor Robert Ellis. == Works == with Robert Ellis (2002) [1982]. Calculus with analytic geometry (sixth ed.). Saunders College Publishing. ISBN 978-0-12-237400-5. Denny Gulick (1992). Encounters with chaos. McGraw-Hill. ISBN 978-0-07-025203-5. == References == == External links == "Denny Gulick". faculty web page. University of Maryland, College Park Department of Mathematics. Retrieved November 30, 2010.
|
Wikipedia:Dependence relation#0
|
In mathematics, a dependence relation is a binary relation which generalizes the relation of linear dependence. Let X be a set. A (binary) relation ◃ between an element a of X and a subset S of X is called a dependence relation, written a ◃ S, if it satisfies the following properties: if a ∈ S, then a ◃ S; if a ◃ S, then there is a finite subset S₀ of S, such that a ◃ S₀; if T is a subset of X such that b ∈ S implies b ◃ T, then a ◃ S implies a ◃ T; if a ◃ S but a ⋪ S − {b} for some b ∈ S, then b ◃ (S − {b}) ∪ {a}. Given a dependence relation ◃ on X, a subset S of X is said to be independent if a ⋪ S − {a} for all a ∈ S. If S ⊆ T, then S is said to span T if t ◃ S for every t ∈ T. S is said to be a basis of X if S is independent and S spans X. If X is a non-empty set with a dependence relation ◃, then X always has a basis with respect to ◃. Furthermore, any two bases of X have the same cardinality. Note that if a ◃ S and S ⊆ T, then a ◃ T, using properties 3 and 1. == Examples == Let V be a vector space over a field F. The relation ◃, defined by υ ◃ S if υ is in the subspace spanned by S, is a dependence relation. This is equivalent to the definition of linear dependence. Let K be a field extension of F. Define ◃ by α ◃ S if α is algebraic over F(S). Then ◃ is a dependence relation. This is equivalent to the definition of algebraic dependence. == See also == matroid This article incorporates material from Dependence relation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
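The vector-space example can be tested mechanically: υ ◃ S holds exactly when Gaussian elimination expresses υ in terms of the vectors in S. A minimal sketch over the rationals (the function name is ours, and exact rational coordinates are assumed):

```python
from fractions import Fraction

def depends(v, S):
    """Return True iff vector v lies in the span of the vectors in S,
    i.e. v <| S for the dependence relation of the vector-space example.
    Uses Gauss-Jordan elimination over the rationals."""
    rows = [[Fraction(x) for x in s] for s in S]
    target = [Fraction(x) for x in v]
    r = 0  # index of the next pivot row
    for col in range(len(target)):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = rows[r][col]
        rows[r] = [x / inv for x in rows[r]]  # normalize the pivot row
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        if target[col] != 0:  # eliminate the pivot column from the target
            f = target[col]
            target = [a - f * b for a, b in zip(target, rows[r])]
        r += 1
    return all(x == 0 for x in target)  # v reduced to zero <=> v in span(S)
```

With this relation, "independent" in the sense above coincides with linear independence: no vector of S reduces to zero against the others.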
|
Wikipedia:Derangement#0
|
In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points. The number of derangements of a set of size n is known as the subfactorial of n or the nth derangement number or nth de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, Dn, dn, or n¡. For n > 0, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e ≈ 2.718281828... is Euler's number. The problem of counting derangements was first considered by Pierre Remond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time. == Example == Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student receives their own test back? Out of the 24 possible permutations (4!) for handing back the tests, there are only 9 derangements; in every other permutation of this 4-member set, at least one student gets their own test back. Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope. == Counting derangements == Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner. Each person may receive any of the n − 1 hats that is not their own. 
Call the hat which the person P1 receives hi and consider hi's owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases: Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj. Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received hi's hat and Pi received h1's hat, effectively putting both out of further consideration. For each of the n − 1 hats that P1 may receive, the number of ways that P2, ..., Pn may all receive hats is the sum of the counts for the two cases. This gives us the solution to the hat-check problem: stated algebraically, the number !n of derangements of an n-element set is {\displaystyle !n=\left(n-1\right){\bigl (}{!\left(n-1\right)}+{!\left(n-2\right)}{\bigr )}} for {\displaystyle n\geq 2}, where {\displaystyle !0=1} and {\displaystyle !1=0.} The number of derangements of small lengths is given in the table below. There are various other expressions for !n, equivalent to the formula given above. These include {\displaystyle !n=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}} for {\displaystyle n\geq 0} and {\displaystyle !n=\left[{\frac {n!}{e}}\right]=\left\lfloor {\frac {n!}{e}}+{\frac {1}{2}}\right\rfloor } for {\displaystyle n\geq 1,} where [x] is the nearest integer function and ⌊x⌋ is the floor function. 
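The two-term recurrence and the nearest-integer formula can be cross-checked in a few lines. A sketch in Python (the function name is ours):

```python
import math

def subfactorial(n):
    """Derangement number !n via the recurrence
    !n = (n - 1) * (!(n - 1) + !(n - 2)), with !0 = 1 and !1 = 0."""
    a, b = 1, 0  # !0 and !1
    if n == 0:
        return a
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return b

# Agrees with the nearest-integer formula !n = [n!/e] for n >= 1
# (kept to moderate n so that floating-point rounding stays harmless):
for n in range(1, 15):
    assert subfactorial(n) == round(math.factorial(n) / math.e)
```

The iterative form keeps only the last two values, so it runs in O(n) time with exact integer arithmetic.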
Other related formulas include {\displaystyle !n=\left\lfloor {\frac {n!+1}{e}}\right\rfloor ,\quad \ n\geq 1,} {\displaystyle !n=\left\lfloor \left(e+e^{-1}\right)n!\right\rfloor -\lfloor en!\rfloor ,\quad n\geq 2,} and {\displaystyle !n=n!-\sum _{i=1}^{n}{n \choose i}\cdot {!(n-i)},\quad \ n\geq 1.} The following recurrence also holds: {\displaystyle !n={\begin{cases}1&{\text{if }}n=0,\\n\cdot \left(!(n-1)\right)+(-1)^{n}&{\text{if }}n>0.\end{cases}}} === Derivation by inclusion–exclusion principle === One may derive a non-recursive formula for the number of derangements of an n-set, as well. For {\displaystyle 1\leq k\leq n} we define {\displaystyle S_{k}} to be the set of permutations of n objects that fix the kth object. Any intersection of a collection of i of these sets fixes a particular set of i objects and therefore contains {\displaystyle (n-i)!} permutations. There are {\displaystyle {n \choose i}} such collections, so the inclusion–exclusion principle yields {\displaystyle {\begin{aligned}|S_{1}\cup \dotsm \cup S_{n}|&=\sum _{i}\left|S_{i}\right|-\sum _{i<j}\left|S_{i}\cap S_{j}\right|+\sum _{i<j<k}\left|S_{i}\cap S_{j}\cap S_{k}\right|+\cdots +(-1)^{n+1}\left|S_{1}\cap \dotsm \cap S_{n}\right|\\&={n \choose 1}(n-1)!-{n \choose 2}(n-2)!+{n \choose 3}(n-3)!-\cdots +(-1)^{n+1}{n \choose n}0!\\&=\sum _{i=1}^{n}(-1)^{i+1}{n \choose i}(n-i)!\\&=n!\ \sum _{i=1}^{n}{(-1)^{i+1} \over i!},\end{aligned}}} and since a derangement is a permutation that leaves none of the n objects fixed, this implies {\displaystyle !n=n!-\left|S_{1}\cup \dotsm \cup S_{n}\right|=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}~.} On the other hand, {\displaystyle n!=\sum _{i=0}^{n}{\binom {n}{i}}\ !i} since we can choose n − i elements to be in their own place and derange the other i elements in just !i ways, by definition. == Growth of number of derangements as n approaches ∞ == From {\displaystyle !n=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}} and {\displaystyle e^{x}=\sum _{i=0}^{\infty }{x^{i} \over i!}} by substituting {\displaystyle x=-1} one immediately obtains that {\displaystyle \lim _{n\to \infty }{!n \over n!}=\lim _{n\to \infty }\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}=e^{-1}\approx 0.367879\ldots .} This is the limit of the probability that a randomly selected permutation of a large number of objects is a derangement. The probability converges to this limit extremely quickly as n increases, which is why !n is the nearest integer to n!/e. The above semi-log graph shows that the derangement graph lags the permutation graph by an almost constant value. More information about this calculation and the above limit may be found in the article on the statistics of random permutations. 
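The inclusion–exclusion formula and the rapid convergence of !n/n! toward 1/e are easy to verify with exact rational arithmetic. A sketch (the function name is ours):

```python
import math
from fractions import Fraction

def derangements(n):
    """!n from the inclusion-exclusion formula n! * sum_{i=0}^{n} (-1)^i / i!,
    computed exactly with rational arithmetic before converting to int."""
    total = sum(Fraction((-1) ** i, math.factorial(i)) for i in range(n + 1))
    return int(math.factorial(n) * total)

# The truncation error of the alternating series is below 1/(n+1)!,
# so !n/n! is already within 1/13! of 1/e at n = 12:
ratio = derangements(12) / math.factorial(12)
assert abs(ratio - 1 / math.e) < 1 / math.factorial(13)
```

Because the partial sums of the alternating series for 1/e bracket the limit, this also explains why !n is the integer nearest to n!/e.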
=== Asymptotic expansion in terms of Bell numbers === An asymptotic expansion for the number of derangements in terms of Bell numbers is as follows: {\displaystyle !n={\frac {n!}{e}}+\sum _{k=1}^{m}\left(-1\right)^{n+k-1}{\frac {B_{k}}{n^{k}}}+O\left({\frac {1}{n^{m+1}}}\right),} where m is any fixed positive integer, and B_k denotes the k-th Bell number. Moreover, the constant implied by the big O-term does not exceed B_{m+1}. == Generalizations == The problème des rencontres asks how many permutations of a size-n set have exactly k fixed points. Derangements are an example of the wider field of constrained permutations. For example, the ménage problem asks if n opposite-sex couples are seated man-woman-man-woman-... around a table, how many ways can they be seated so that nobody is seated next to his or her partner? More formally, given sets A and S, and some sets U and V of surjections A → S, we often wish to know the number of pairs of functions (f, g) such that f is in U and g is in V, and for all a in A, f(a) ≠ g(a); in other words, where for each f and g, there exists a derangement φ of S such that f(a) = φ(g(a)). Another generalization is the following problem: How many anagrams with no fixed letters of a given word are there? For instance, for a word made of only two different letters, say n letters A and m letters B, the answer is, of course, 1 or 0 according to whether n = m or not, for the only way to form an anagram without fixed letters is to exchange all the A with B, which is possible if and only if n = m. 
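The two-letter case, and the fact that a word with all letters distinct recovers the ordinary derangement numbers, can be checked by brute force over short words. A sketch (the function name is ours; the enumeration is exponential in the word length, so it is only for illustration):

```python
from itertools import permutations

def anagrams_no_fixed_letters(word):
    """Count distinct anagrams of `word` in which no position holds the
    same letter as in `word` itself (brute force over distinct anagrams)."""
    return sum(
        all(a != b for a, b in zip(perm, word))
        for perm in set(permutations(word))
    )

# n A's and m B's: exactly one such anagram if n == m, none otherwise.
assert anagrams_no_fixed_letters("AABB") == 1
assert anagrams_no_fixed_letters("AAB") == 0
# All letters distinct: this is just the derangement number !4 = 9.
assert anagrams_no_fixed_letters("ABCD") == 9
```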
In the general case, for a word with n1 letters X1, n2 letters X2, ..., nr letters Xr, it turns out (after a proper use of the inclusion-exclusion formula) that the answer has the form {\displaystyle \int _{0}^{\infty }P_{n_{1}}(x)P_{n_{2}}(x)\cdots P_{n_{r}}(x)\ e^{-x}dx,} for a certain sequence of polynomials Pn, where Pn has degree n. But the above answer for the case r = 2 gives an orthogonality relation, whence the Pn's are the Laguerre polynomials (up to a sign that is easily decided). In particular, for the classical derangements, one has that {\displaystyle !n={\frac {\Gamma (n+1,-1)}{e}}=\int _{0}^{\infty }(x-1)^{n}e^{-x}dx} where Γ(s, x) is the upper incomplete gamma function. == Computational complexity == It is NP-complete to determine whether a given permutation group (described by a given set of permutations that generate it) contains any derangements. == Footnotes == == References == == External links == Baez, John (2003). "Let's get deranged!" (PDF) – via math.ucr.edu. Bogart, Kenneth P.; Doyle, Peter G. (1985). "Non-sexist solution of the ménage problem" – via math.dartmouth.edu. Weisstein, E.W. "Derangement". MathWorld / Wolfram Research – via mathworld.wolfram.com.
|
Wikipedia:Derek Corneil#0
|
Derek Gordon Corneil is a Canadian mathematician and computer scientist, a professor emeritus of computer science at the University of Toronto, and an expert in graph algorithms and graph theory. == Life == When he was leaving high school, Corneil was told by his English teacher that doing a degree in mathematics and physics was a bad idea, and that the best he could hope for was to go to a technical college. His interest in computer science began when, as an undergraduate student at Queen's University, he heard that a computer had been purchased by the London Life insurance company in London, Ontario, where his father worked. As a freshman, he took a summer job operating the UNIVAC Mark II at the company. One of his main responsibilities was to operate a printer. An opportunity for a programming job with the company sponsoring his college scholarship appeared soon after. It was a chance that Corneil jumped at after being denied a similar position at London Life. There was an initial mix-up at his job, as his supervisor thought that he knew how to program the UNIVAC Mark II and so would easily transition to doing the same for the company's newly acquired IBM 1401 machine. However, Corneil did not have the assumed programming background. Thus, in the two-week window he had been given, Corneil learned how to program the IBM 1401 from scratch by relying heavily on the instruction manual. This experience pushed him further on his way, as did a number of projects he worked on in that position later on. Corneil went on to earn a bachelor's degree in mathematics and physics from Queen's University in 1964. Initially he had planned to do his graduate studies before becoming a high school teacher, but his acceptance into the brand new graduate program in computer science at the University of Toronto changed that. 
At the University of Toronto, Corneil earned a master's degree and then in 1968 a doctorate in computer science under the supervision of Calvin Gotlieb. (His post-doctoral supervisor was Jaap Seidel.) It was during this time that Corneil became interested in graph theory. He and Gotlieb eventually became good friends. After postdoctoral studies at the Eindhoven University of Technology, Corneil returned to Toronto as a faculty member in 1970. Before his retirement in 2010, Corneil held many positions at the University of Toronto, including Department Chair of the Computer Science department (July 1985 to June 1990), Director of Research Initiatives of the Faculty of Arts and Science (July 1991 to March 1998), and Acting Vice President of Research and International Relations (September to December 1993). During his time as a professor, he was also a visiting professor at universities such as the University of British Columbia, Simon Fraser University, the Université de Grenoble and the Université de Montpellier. == Work == Corneil did his research in algorithmic graph theory and graph theory in general. He has overseen 49 theses and published over 100 papers on his own or with co-authors. These papers include: A proof that recognizing graphs of small treewidth is NP-complete. The discovery of the cotree representation for cographs and of fast recognition algorithms for cographs. Generating algorithms for graph isomorphism. Algorithmic and structural properties of complement reducible graphs. Properties of asteroidal triple-free graphs. An algorithm to solve the problem of determining whether a graph is a partial graph of a k-tree. Results addressing graph theoretic, algorithmic, and complexity issues with regard to tree spanners. An explanation of the relationship between tree width and clique-width. Determining the diameter of restricted graph families. Outlining the structure of trapezoid graphs. 
As a professor emeritus, Corneil still does research and is also an editor of several publications such as Ars Combinatoria and SIAM Monographs on Discrete Mathematics and Applications. == Awards == He was inducted as a Fields Institute Fellow in 2004. == References == == External links == Interview with Corneil, Stephen Ibaraki, 13 June 2011 List of publications at DBLP
|
Wikipedia:Derek W. Moore#0
|
Derek William Moore (19 April 1931 – 15 July 2008) was a British mathematician. He was born in South Shields, where his father was a head of department at the nautical college. He was educated at the local grammar school and Jesus College, Cambridge. In 1956 he began research into theoretical fluid dynamics at the Cavendish Laboratory, followed by spells at Bristol University and the NASA Goddard Space Flight Center in New York. In 1967 he moved to Imperial College London to become a Professor of Applied Mathematics, holding the post for the rest of his career. In 2001, he was awarded the Senior Whitehead Prize by the London Mathematical Society. He was a foreign honorary member of the American Academy of Arts and Sciences and a Fellow of the Royal Society. == References ==
|
Wikipedia:Derek W. Robinson#0
|
Derek William Robinson (25 June 1935 – 31 August 2021) was a British-Australian theoretical mathematician and physicist. He was a researcher at the Australian National University. == Early life == Derek W. Robinson was born in southern England. He attended grammar school, followed by the University of Oxford, where he earned a Bachelor of Arts with honours in mathematics in 1957 and a PhD in nuclear physics in 1960 with the dissertation Multiple Coulomb Excitations in Deformed Nuclei. His PhD advisor was David M. Brink. == Research == His academic focus became the mathematics behind quantum mechanics, which led him to research facilities all over the world. From 1960 to 1962, he was at the ETH Zurich, Switzerland. He then served as a research associate at the University of Illinois for two years, after which he was a research associate at the Max Planck Institute in Munich, Germany from 1964 to 1965. He also spent a year as a professor at Aix-Marseille University, followed by two years as a research associate at CERN in Geneva, Switzerland, and then another stint as a professor at Aix-Marseille University between 1968 and 1977. He served as the president of the Department of Physics from 1973 to 1975 and the assistant director of the Centre de Physique at CNRS in Marseille from 1974 to 1978. === Australia === In 1978, he moved his family to Sydney, Australia, where he became a Professor of Pure Mathematics at the University of New South Wales until 1982. From 1982 until his retirement in 2000, he was a Professor of Mathematics at the Centre for Mathematics and its Applications at the Australian National University. From 2000, he continued doing grant-funded research based at the Australian National University until his death in 2021. He also served as Chairman of the Board for the Institute for Advanced Studies from 1988 to 1992. In 1980, he was inducted as a Fellow of the Australian Academy of Science.
== Papers and accomplishments == Robinson is best known for the discovery of Lieb-Robinson bounds, the theoretical upper limit for the speed of information propagation in a non-relativistic quantum system. He is also known for writing, with Ola Bratteli, the two-volume work Operator Algebras and Quantum Statistical Mechanics. He received the Thomas Ranken Lyle Medal from the Australian Academy of Science in 1981. In 2001, he received the Centenary Medal. He was also a world-class cyclist, winning the time trial championship in the Men's 65–69 category at the 2002 International Masters Games in Melbourne. == References == == External links == Barnsley, Michael; Nachtergaele, Bruno; Simon, Barry; Connes, Alain; Evans, David; Gallavotti, Giovanni; Glashow, Sheldon; Jaffe, Arthur M.; Jorgensen, Palle; Kishimoto, Aki; Lieb, Elliott; Narnhofer, Heide; Ruelle, David; Ruskai, Mary Beth; Sikora, Adam; Ter Elst, A. F. M. (September 2023). "Remembrances of Derek William Robinson, June 25, 1935–August 31, 2021" (PDF). Notices of the American Mathematical Society. 70 (8): 1252–1267. doi:10.1090/noti2765.
|
Wikipedia:Derivative algebra (abstract algebra)#0
|
In mathematics, a derivation is a function on an algebra that generalizes certain features of the derivative operator. Specifically, given an algebra A over a ring or a field K, a K-derivation is a K-linear map D : A → A that satisfies Leibniz's law: D ( a b ) = a D ( b ) + D ( a ) b . {\displaystyle D(ab)=aD(b)+D(a)b.} More generally, if M is an A-bimodule, a K-linear map D : A → M that satisfies the Leibniz law is also called a derivation. The collection of all K-derivations of A to itself is denoted by DerK(A). The collection of K-derivations of A into an A-module M is denoted by DerK(A, M). Derivations occur in many different contexts in diverse areas of mathematics. The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on Rⁿ. The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra. The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra A is noncommutative, then the commutator with respect to an element of the algebra A defines a linear endomorphism of A to itself, which is a derivation over K. That is, [ F G , N ] = [ F , N ] G + F [ G , N ] , {\displaystyle [FG,N]=[F,N]G+F[G,N],} where [ ⋅ , N ] {\displaystyle [\cdot ,N]} is the commutator with respect to N {\displaystyle N} . An algebra A equipped with a distinguished derivation d forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory. == Properties == If A is a K-algebra, for K a ring, and D: A → A is a K-derivation, then If A has a unit 1, then D(1) = D(1²) = 2D(1), so that D(1) = 0. Thus by K-linearity, D(k) = 0 for all k ∈ K.
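For illustration (a minimal sketch, not part of the standard exposition): the formal derivative on the polynomial algebra K[x] is a K-derivation. Representing polynomials as coefficient lists in Python, Leibniz's law and the fact that D(1) = 0 can be checked directly; all helper names here are illustrative.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def D(p):
    """Formal derivative: D(sum a_k x^k) = sum k a_k x^(k-1)."""
    return [k * a for k, a in enumerate(p)][1:] or [0]

a = [1, 2, 0, 3]   # 1 + 2x + 3x^3
b = [5, -1, 4]     # 5 - x + 4x^2

# Leibniz's law: D(ab) = a D(b) + D(a) b
lhs = D(poly_mul(a, b))
rhs = poly_add(poly_mul(a, D(b)), poly_mul(D(a), b))
assert lhs == rhs

# D(1) = 0, as forced by D(1) = D(1*1) = 2 D(1)
assert D([1]) == [0]
```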
If A is commutative, D(x²) = xD(x) + D(x)x = 2xD(x), and D(xⁿ) = nxⁿ⁻¹D(x), by the Leibniz rule. More generally, for any x1, x2, …, xn ∈ A, it follows by induction that D ( x 1 x 2 ⋯ x n ) = ∑ i x 1 ⋯ x i − 1 D ( x i ) x i + 1 ⋯ x n {\displaystyle D(x_{1}x_{2}\cdots x_{n})=\sum _{i}x_{1}\cdots x_{i-1}D(x_{i})x_{i+1}\cdots x_{n}} which is ∑ i D ( x i ) ∏ j ≠ i x j {\textstyle \sum _{i}D(x_{i})\prod _{j\neq i}x_{j}} if for all i, D(xi) commutes with x 1 , x 2 , … , x i − 1 {\displaystyle x_{1},x_{2},\ldots ,x_{i-1}} . For n > 1, Dⁿ is not a derivation, instead satisfying a higher-order Leibniz rule: D n ( u v ) = ∑ k = 0 n ( n k ) ⋅ D n − k ( u ) ⋅ D k ( v ) . {\displaystyle D^{n}(uv)=\sum _{k=0}^{n}{\binom {n}{k}}\cdot D^{n-k}(u)\cdot D^{k}(v).} Moreover, if M is an A-bimodule, write Der K ( A , M ) {\displaystyle \operatorname {Der} _{K}(A,M)} for the set of K-derivations from A to M. DerK(A, M) is a module over K. DerK(A) is a Lie algebra with Lie bracket defined by the commutator: [ D 1 , D 2 ] = D 1 ∘ D 2 − D 2 ∘ D 1 . {\displaystyle [D_{1},D_{2}]=D_{1}\circ D_{2}-D_{2}\circ D_{1}.} since it is readily verified that the commutator of two derivations is again a derivation. There is an A-module ΩA/K (called the Kähler differentials) with a K-derivation d: A → ΩA/K through which any derivation D: A → M factors.
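The higher-order Leibniz rule above can be checked concretely for the formal derivative on polynomials (a sketch with illustrative names, using the coefficient-list representation):

```python
from math import comb

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def D(p, n=1):
    """n-fold formal derivative of a coefficient-list polynomial."""
    for _ in range(n):
        p = [k * a for k, a in enumerate(p)][1:] or [0]
    return p

def strip(p):
    """Drop trailing zero coefficients so equal polynomials compare equal."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

u = [0, 2, 0, 1]   # 2x + x^3
v = [7, -1, 5]     # 7 - x + 5x^2
n = 3

# D^n(uv) = sum_k C(n, k) D^(n-k)(u) D^k(v)
lhs = D(pmul(u, v), n)
rhs = [0]
for k in range(n + 1):
    term = [comb(n, k) * c for c in pmul(D(u, n - k), D(v, k))]
    rhs = padd(rhs, term)

assert strip(lhs) == strip(rhs)
```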
That is, for any derivation D there is an A-module map φ with D : A ⟶ d Ω A / K ⟶ φ M {\displaystyle D:A{\stackrel {d}{\longrightarrow }}\Omega _{A/K}{\stackrel {\varphi }{\longrightarrow }}M} The correspondence D ↔ φ {\displaystyle D\leftrightarrow \varphi } is an isomorphism of A-modules: Der K ( A , M ) ≃ Hom A ( Ω A / K , M ) {\displaystyle \operatorname {Der} _{K}(A,M)\simeq \operatorname {Hom} _{A}(\Omega _{A/K},M)} If k ⊂ K is a subring, then A inherits a k-algebra structure, so there is an inclusion Der K ( A , M ) ⊂ Der k ( A , M ) , {\displaystyle \operatorname {Der} _{K}(A,M)\subset \operatorname {Der} _{k}(A,M),} since any K-derivation is a fortiori a k-derivation. == Graded derivations == Given a graded algebra A and a homogeneous linear map D of grade |D| on A, D is a homogeneous derivation if D ( a b ) = D ( a ) b + ε | a | | D | a D ( b ) {\displaystyle {D(ab)=D(a)b+\varepsilon ^{|a||D|}aD(b)}} for every homogeneous element a and every element b of A, for a commutator factor ε = ±1. A graded derivation is a sum of homogeneous derivations with the same ε. If ε = 1, this definition reduces to the usual case. If ε = −1, however, then D ( a b ) = D ( a ) b + ( − 1 ) | a | | D | a D ( b ) {\displaystyle {D(ab)=D(a)b+(-1)^{|a||D|}aD(b)}} for odd |D|, and D is called an anti-derivation. Examples of anti-derivations include the exterior derivative and the interior product acting on differential forms. Graded derivations of superalgebras (i.e. Z2-graded algebras) are often called superderivations. == Related notions == Hasse–Schmidt derivations are K-algebra homomorphisms A → A [ [ t ] ] . {\displaystyle A\to A[[t]].} Composing further with the map that sends a formal power series ∑ a n t n {\displaystyle \sum a_{n}t^{n}} to the coefficient a 1 {\displaystyle a_{1}} gives a derivation.
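A concrete Hasse–Schmidt derivation on Q[x] is the Taylor shift a(x) ↦ a(x+t), whose t-coefficients are Dᵏ(a)/k!. The sketch below (illustrative representation: a power series in t whose coefficients are x-polynomials) checks that the shift is a ring homomorphism and that the coefficient of t¹ is the ordinary derivative:

```python
from fractions import Fraction
from math import factorial

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [Fraction(p[i] if i < len(p) else 0) +
            Fraction(q[i] if i < len(q) else 0) for i in range(n)]

def D(p, n=1):
    for _ in range(n):
        p = [k * a for k, a in enumerate(p)][1:] or [0]
    return p

def shift(p):
    """Taylor shift p(x) -> p(x+t): entry k is the x-polynomial
    coefficient of t^k, namely D^k(p)/k!."""
    return [[Fraction(c, factorial(k)) for c in D(p, k)] for k in range(len(p))]

def series_mul(P, Q, order):
    """Multiply two truncated t-series whose coefficients are x-polynomials."""
    out = [[Fraction(0)] for _ in range(order)]
    for i, pi in enumerate(P):
        for j, qj in enumerate(Q):
            if i + j < order:
                out[i + j] = padd(out[i + j], pmul(pi, qj))
    return out

def strip(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

p = [1, 0, 2]   # 1 + 2x^2
q = [0, 3, 1]   # 3x + x^2
order = len(p) + len(q) - 1

# shift is a ring homomorphism: (pq)(x+t) = p(x+t) q(x+t)
lhs = [strip(c) for c in shift(pmul(p, q))]
rhs = [strip(c) for c in series_mul(shift(p), shift(q), order)]
assert lhs == rhs

# reading off the coefficient of t^1 gives the ordinary derivative
assert lhs[1] == strip([Fraction(c) for c in D(pmul(p, q))])
```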
== See also == In differential geometry derivations are tangent vectors Kähler differential Hasse derivative p-derivation Wirtinger derivatives Derivative of the exponential map == References == Bourbaki, Nicolas (1989), Algebra I, Elements of mathematics, Springer-Verlag, ISBN 3-540-64243-9. Eisenbud, David (1999), Commutative algebra with a view toward algebraic geometry (3rd. ed.), Springer-Verlag, ISBN 978-0-387-94269-8. Matsumura, Hideyuki (1970), Commutative algebra, Mathematics lecture note series, W. A. Benjamin, ISBN 978-0-8053-7025-6. Kolař, Ivan; Slovák, Jan; Michor, Peter W. (1993), Natural operations in differential geometry, Springer-Verlag.
|
Wikipedia:Determinant#0
|
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries. The determinant of a 2 × 2 matrix is | a b c d | = a d − b c , {\displaystyle {\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc,} and the determinant of a 3 × 3 matrix is | a b c d e f g h i | = a e i + b f g + c d h − c e g − b d i − a f h . {\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=aei+bfg+cdh-ceg-bdi-afh.} The determinant of an n × n matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of n ! {\displaystyle n!} (the factorial of n) signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form. Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the n × n matrices that has the four following properties: The determinant of the identity matrix is 1. The exchange of two rows multiplies the determinant by −1. Multiplying a row by a number multiplies the determinant by this number. Adding a multiple of one row to another row does not change the determinant.
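The four defining properties can be checked numerically on a small example (a minimal Python sketch using the 2 × 2 and 3 × 3 formulas quoted above; the sample matrix is arbitrary):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]

# 1. the identity matrix has determinant 1
assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
# 2. exchanging two rows multiplies the determinant by -1
assert det3([A[1], A[0], A[2]]) == -det3(A)
# 3. multiplying a row by a number multiplies the determinant by it
assert det3([[5 * x for x in A[0]], A[1], A[2]]) == 5 * det3(A)
# 4. adding a multiple of one row to another leaves it unchanged
row0 = [x + 3 * y for x, y in zip(A[0], A[1])]
assert det3([row0, A[1], A[2]]) == det3(A)

assert det2([[3, 7], [1, -4]]) == -19
```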
The above properties relating to rows (properties 2–4) may be replaced by the corresponding statements with respect to columns. The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the determinant of a linear endomorphism, which does not depend on the choice of a coordinate system. Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the n-dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals. == Two by two matrices == The determinant of a 2 × 2 matrix ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} is denoted either by "det" or by vertical bars around the matrix, and is defined as det ( a b c d ) = | a b c d | = a d − b c . {\displaystyle \det {\begin{pmatrix}a&b\\c&d\end{pmatrix}}={\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc.} For example, det ( 3 7 1 − 4 ) = | 3 7 1 − 4 | = ( 3 ⋅ ( − 4 ) ) − ( 7 ⋅ 1 ) = − 19.
{\displaystyle \det {\begin{pmatrix}3&7\\1&-4\end{pmatrix}}={\begin{vmatrix}3&7\\1&{-4}\end{vmatrix}}=(3\cdot (-4))-(7\cdot 1)=-19.} === First properties === The determinant has several key properties that can be proved by direct evaluation of the definition for 2 × 2 {\displaystyle 2\times 2} -matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix ( 1 0 0 1 ) {\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}}} is 1. Second, the determinant is zero if two rows are the same: | a b a b | = a b − b a = 0. {\displaystyle {\begin{vmatrix}a&b\\a&b\end{vmatrix}}=ab-ba=0.} This holds similarly if the two columns are the same. Moreover, | a b + b ′ c d + d ′ | = a ( d + d ′ ) − ( b + b ′ ) c = | a b c d | + | a b ′ c d ′ | . {\displaystyle {\begin{vmatrix}a&b+b'\\c&d+d'\end{vmatrix}}=a(d+d')-(b+b')c={\begin{vmatrix}a&b\\c&d\end{vmatrix}}+{\begin{vmatrix}a&b'\\c&d'\end{vmatrix}}.} Finally, if any column is multiplied by some number r {\displaystyle r} (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number: | r ⋅ a b r ⋅ c d | = r a d − b r c = r ( a d − b c ) = r ⋅ | a b c d | . {\displaystyle {\begin{vmatrix}r\cdot a&b\\r\cdot c&d\end{vmatrix}}=rad-brc=r(ad-bc)=r\cdot {\begin{vmatrix}a&b\\c&d\end{vmatrix}}.} == Geometric meaning == If the matrix entries are real numbers, the matrix A represents the linear map that maps the basis vectors to the columns of A. The images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the columns of the above matrix is the one with vertices at (0, 0), (a, c), (a + b, c + d), and (b, d), as shown in the accompanying diagram. The absolute value of ad − bc is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. 
The absolute value of the determinant together with the sign becomes the signed area of the parallelogram. The signed area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix). To show that ad − bc is the signed area, one may consider a matrix containing two vectors u ≡ (a, c) and v ≡ (b, d) representing the parallelogram's sides. The signed area can be expressed as |u| |v| sin θ for the angle θ between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. u⊥ = (−c, a), so that |u⊥| |v| cos θ′ becomes the signed area in question, which can be determined by the pattern of the scalar product to be equal to ad − bc according to the following equations: Signed area = | u | | v | sin θ = | u ⊥ | | v | cos θ ′ = ( − c a ) ⋅ ( b d ) = a d − b c . {\displaystyle {\text{Signed area}}=|{\boldsymbol {u}}|\,|{\boldsymbol {v}}|\,\sin \,\theta =\left|{\boldsymbol {u}}^{\perp }\right|\,\left|{\boldsymbol {v}}\right|\,\cos \,\theta '={\begin{pmatrix}-c\\a\end{pmatrix}}\cdot {\begin{pmatrix}b\\d\end{pmatrix}}=ad-bc.} Thus the determinant gives the area scale factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix preserves area and orientation. If an n × n real matrix A is written in terms of its column vectors A = [ a 1 a 2 ⋯ a n ] {\displaystyle A=\left[{\begin{array}{c|c|c|c}\mathbf {a} _{1}&\mathbf {a} _{2}&\cdots &\mathbf {a} _{n}\end{array}}\right]} , then A ( 1 0 ⋮ 0 ) = a 1 , A ( 0 1 ⋮ 0 ) = a 2 , … , A ( 0 0 ⋮ 1 ) = a n . 
{\displaystyle A{\begin{pmatrix}1\\0\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{1},\quad A{\begin{pmatrix}0\\1\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{2},\quad \ldots ,\quad A{\begin{pmatrix}0\\0\\\vdots \\1\end{pmatrix}}=\mathbf {a} _{n}.} This means that A {\displaystyle A} maps the unit n-cube to the n-dimensional parallelotope defined by the vectors a 1 , a 2 , … , a n , {\displaystyle \mathbf {a} _{1},\mathbf {a} _{2},\ldots ,\mathbf {a} _{n},} the region P = { c 1 a 1 + ⋯ + c n a n ∣ 0 ≤ c i ≤ 1 ∀ i } {\displaystyle P=\left\{c_{1}\mathbf {a} _{1}+\cdots +c_{n}\mathbf {a} _{n}\mid 0\leq c_{i}\leq 1\ \forall i\right\}} ( ∀ {\textstyle \forall } stands for "for all" as a logical symbol.) The determinant gives the signed n-dimensional volume of this parallelotope, det ( A ) = ± vol ( P ) , {\displaystyle \det(A)=\pm {\text{vol}}(P),} and hence describes more generally the n-dimensional volume scale factor of the linear transformation produced by A. (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully n-dimensional, which indicates that the dimension of the image of A is less than n. This means that A produces a linear transformation which is neither onto nor one-to-one, and so is not invertible. == Definition == Let A be a square matrix with n rows and n columns, so that it can be written as A = [ a 1 , 1 a 1 , 2 ⋯ a 1 , n a 2 , 1 a 2 , 2 ⋯ a 2 , n ⋮ ⋮ ⋱ ⋮ a n , 1 a n , 2 ⋯ a n , n ] . {\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{bmatrix}}.} The entries a 1 , 1 {\displaystyle a_{1,1}} etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring. 
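The area interpretation can be verified numerically: the sketch below (illustrative, for the 2 × 2 case) computes the signed area of the parallelogram with vertices (0, 0), (a, c), (a + b, c + d), (b, d) via the shoelace formula and compares it with ad − bc.

```python
def shoelace(pts):
    """Signed area of a polygon from its vertices, listed counter-clockwise."""
    s = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s += x0 * y1 - x1 * y0
    return s / 2

a, b, c, d = 3.0, 1.0, 1.0, 2.0          # matrix columns u = (a, c), v = (b, d)
verts = [(0, 0), (a, c), (a + b, c + d), (b, d)]
assert abs(shoelace(verts) - (a * d - b * c)) < 1e-12

# reversing the orientation (swapping the columns) flips the sign
verts_cw = [(0, 0), (b, d), (a + b, c + d), (a, c)]
assert abs(shoelace(verts_cw) + (a * d - b * c)) < 1e-12
```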
The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets: | a 1 , 1 a 1 , 2 ⋯ a 1 , n a 2 , 1 a 2 , 2 ⋯ a 2 , n ⋮ ⋮ ⋱ ⋮ a n , 1 a n , 2 ⋯ a n , n | . {\displaystyle {\begin{vmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{vmatrix}}.} There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question. === Leibniz formula === ==== 3 × 3 matrices ==== The Leibniz formula for the determinant of a 3 × 3 matrix is the following: | a b c d e f g h i | = a e i + b f g + c d h − c e g − b d i − a f h . {\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=aei+bfg+cdh-ceg-bdi-afh.\ } In this expression, each term has one factor from each row, all in different columns, arranged in increasing row order. For example, bdi has b from the first row second column, d from the second row first column, and i from the third row third column. The signs are determined by how many transpositions of factors are necessary to arrange the factors in increasing order of their columns (given that the terms are arranged left-to-right in increasing row order): positive for an even number of transpositions and negative for an odd number. For the example of bdi, the single transposition of bd to db gives dbi, whose three factors are from the first, second and third columns respectively; this is an odd number of transpositions, so the term appears with negative sign. 
The rule of Sarrus is a mnemonic for the expanded form of this determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions. ==== n × n matrices ==== Generalizing the above to higher dimensions, the determinant of an n × n {\displaystyle n\times n} matrix is an expression involving permutations and their signatures. A permutation of the set { 1 , 2 , … , n } {\displaystyle \{1,2,\dots ,n\}} is a bijective function σ {\displaystyle \sigma } from this set to itself, with values σ ( 1 ) , σ ( 2 ) , … , σ ( n ) {\displaystyle \sigma (1),\sigma (2),\ldots ,\sigma (n)} exhausting the entire set. The set of all such permutations, called the symmetric group, is commonly denoted S n {\displaystyle S_{n}} . The signature sgn ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} of a permutation σ {\displaystyle \sigma } is + 1 , {\displaystyle +1,} if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is − 1. {\displaystyle -1.} Given a matrix A = [ a 1 , 1 … a 1 , n ⋮ ⋮ a n , 1 … a n , n ] , {\displaystyle A={\begin{bmatrix}a_{1,1}\ldots a_{1,n}\\\vdots \qquad \vdots \\a_{n,1}\ldots a_{n,n}\end{bmatrix}},} the Leibniz formula for its determinant is, using sigma notation for the sum, det ( A ) = | a 1 , 1 … a 1 , n ⋮ ⋮ a n , 1 … a n , n | = ∑ σ ∈ S n sgn ( σ ) a 1 , σ ( 1 ) ⋯ a n , σ ( n ) . 
{\displaystyle \det(A)={\begin{vmatrix}a_{1,1}\ldots a_{1,n}\\\vdots \qquad \vdots \\a_{n,1}\ldots a_{n,n}\end{vmatrix}}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )a_{1,\sigma (1)}\cdots a_{n,\sigma (n)}.} Using pi notation for the product, this can be shortened into det ( A ) = ∑ σ ∈ S n ( sgn ( σ ) ∏ i = 1 n a i , σ ( i ) ) {\displaystyle \det(A)=\sum _{\sigma \in S_{n}}\left(\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{i,\sigma (i)}\right)} . The Levi-Civita symbol ε i 1 , … , i n {\displaystyle \varepsilon _{i_{1},\ldots ,i_{n}}} is defined on the n-tuples of integers in { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} as 0 if two of the integers are equal, and otherwise as the signature of the permutation defined by the n-tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes det ( A ) = ∑ i 1 , i 2 , … , i n ε i 1 ⋯ i n a 1 , i 1 ⋯ a n , i n , {\displaystyle \det(A)=\sum _{i_{1},i_{2},\ldots ,i_{n}}\varepsilon _{i_{1}\cdots i_{n}}a_{1,i_{1}}\!\cdots a_{n,i_{n}},} where the sum is taken over all n-tuples of integers in { 1 , … , n } . {\displaystyle \{1,\ldots ,n\}.} == Properties == === Characterization of the determinant === The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an n × n {\displaystyle n\times n} matrix A as being composed of its n {\displaystyle n} columns, so denoted as A = ( a 1 , … , a n ) , {\displaystyle A={\big (}a_{1},\dots ,a_{n}{\big )},} where the column vector a i {\displaystyle a_{i}} (for each i) is composed of the entries of the matrix in the i-th column. det ( I ) = 1 {\displaystyle \det \left(I\right)=1} , where I {\displaystyle I} is an identity matrix. 
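The Leibniz formula can be transcribed into code directly (an O(n · n!) sketch, usable only for small n; the signature is computed by counting inversions):

```python
from itertools import permutations

def sgn(sigma):
    """Signature of a permutation: (-1) to the number of inversions."""
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]   # one factor from each row
        total += term
    return total

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]
assert det_leibniz(A) == 54
assert det_leibniz([[3, 7], [1, -4]]) == -19
```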
The determinant is multilinear: if the jth column of a matrix A {\displaystyle A} is written as a linear combination a j = r ⋅ v + w {\displaystyle a_{j}=r\cdot v+w} of two column vectors v and w and a number r, then the determinant of A is expressible as a similar linear combination: | A | = | a 1 , … , a j − 1 , r ⋅ v + w , a j + 1 , … , a n | = r ⋅ | a 1 , … , v , … a n | + | a 1 , … , w , … , a n | {\displaystyle {\begin{aligned}|A|&={\big |}a_{1},\dots ,a_{j-1},r\cdot v+w,a_{j+1},\dots ,a_{n}|\\&=r\cdot |a_{1},\dots ,v,\dots a_{n}|+|a_{1},\dots ,w,\dots ,a_{n}|\end{aligned}}} The determinant is alternating: whenever two columns of a matrix are identical, its determinant is 0: | a 1 , … , v , … , v , … , a n | = 0. {\displaystyle |a_{1},\dots ,v,\dots ,v,\dots ,a_{n}|=0.} If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any n × n {\displaystyle n\times n} matrix A a number that satisfies these three properties. This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula. To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (when two columns are equal, by the alternating property) or else ±1 (by the identity-matrix property, together with the sign change under column exchanges described below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
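Multilinearity and the alternating property can also be checked column-wise on a concrete matrix (a sketch; `with_col` is an illustrative helper that replaces one column):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

def with_col(A, j, col):
    """Copy of the 3x3 matrix A with column j replaced by col."""
    return [[col[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]
v, w, r, j = [1, 0, 2], [3, -1, 1], 4, 1

# multilinearity in column j: the determinant splits over a_j = r*v + w
lhs = det3(with_col(A, j, [r * vi + wi for vi, wi in zip(v, w)]))
rhs = r * det3(with_col(A, j, v)) + det3(with_col(A, j, w))
assert lhs == rhs

# alternating: two equal columns force determinant 0
col1 = [A[i][1] for i in range(3)]
assert det3(with_col(A, 0, col1)) == 0
```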
=== Immediate consequences === These rules have several further consequences: The determinant is a homogeneous function, i.e., det ( c A ) = c n det ( A ) {\displaystyle \det(cA)=c^{n}\det(A)} (for an n × n {\displaystyle n\times n} matrix A {\displaystyle A} ). Interchanging any pair of columns of a matrix multiplies its determinant by −1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above): | a 1 , … , a j , … a i , … , a n | = − | a 1 , … , a i , … , a j , … , a n | . {\displaystyle |a_{1},\dots ,a_{j},\dots a_{i},\dots ,a_{n}|=-|a_{1},\dots ,a_{i},\dots ,a_{j},\dots ,a_{n}|.} This formula can be applied iteratively when several columns are swapped. For example | a 3 , a 1 , a 2 , a 4 … , a n | = − | a 1 , a 3 , a 2 , a 4 , … , a n | = | a 1 , a 2 , a 3 , a 4 , … , a n | . {\displaystyle |a_{3},a_{1},a_{2},a_{4}\dots ,a_{n}|=-|a_{1},a_{3},a_{2},a_{4},\dots ,a_{n}|=|a_{1},a_{2},a_{3},a_{4},\dots ,a_{n}|.} Yet more generally, any permutation of the columns multiplies the determinant by the sign of the permutation. If some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), the determinant is 0. As a special case, this includes: if some column is such that all its entries are zero, then the determinant of that matrix is 0. Adding a scalar multiple of one column to another column does not change the value of the determinant. This is a consequence of multilinearity and the alternating property: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, which is 0 since the determinant is alternating. If A {\displaystyle A} is a triangular matrix, i.e.
a i j = 0 {\displaystyle a_{ij}=0} , whenever i > j {\displaystyle i>j} or, alternatively, whenever i < j {\displaystyle i<j} , then its determinant equals the product of the diagonal entries: det ( A ) = a 11 a 22 ⋯ a n n = ∏ i = 1 n a i i . {\displaystyle \det(A)=a_{11}a_{22}\cdots a_{nn}=\prod _{i=1}^{n}a_{ii}.} Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a diagonal matrix (without changing the determinant). For such a diagonal matrix, using the linearity in each column reduces the computation to the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation σ {\displaystyle \sigma } which gives a non-zero contribution is the identity permutation. ==== Example ==== These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix A {\displaystyle A} using that method: A = [ − 2 − 1 2 2 1 4 − 3 3 − 1 ] . {\displaystyle A={\begin{bmatrix}-2&-1&2\\2&1&4\\-3&3&-1\end{bmatrix}}.} Elimination (involving one exchange, which flips the sign of the determinant, and additions of multiples of one row or column to another, which leave it unchanged) transforms A into an upper triangular matrix E with diagonal entries 18, 3 and −1. Combining these equalities gives | A | = − | E | = − ( 18 ⋅ 3 ⋅ ( − 1 ) ) = 54. {\displaystyle |A|=-|E|=-(18\cdot 3\cdot (-1))=54.}
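The elimination method above is the practical O(n³) algorithm for determinants. A minimal sketch, reducing to upper triangular form with partial pivoting while tracking row swaps, and checked against the example matrix:

```python
def det_gauss(A):
    """Determinant via Gaussian elimination: signed product of the pivots."""
    A = [row[:] for row in A]           # work on a copy
    n = len(A)
    sign = 1.0
    for k in range(n):
        # partial pivoting: pick the largest entry in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0                  # a zero column => determinant 0
        if p != k:
            A[k], A[p] = A[p], A[k]     # a row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]  # does not change the determinant
    prod = sign
    for k in range(n):
        prod *= A[k][k]
    return prod

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]
assert abs(det_gauss(A) - 54) < 1e-9
```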
For example, viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function. === Multiplicativity and matrix groups === The determinant is a multiplicative map, i.e., for square matrices A {\displaystyle A} and B {\displaystyle B} of equal size, the determinant of a matrix product equals the product of their determinants: det ( A B ) = det ( A ) det ( B ) {\displaystyle \det(AB)=\det(A)\det(B)} This key fact can be proven by observing that, for a fixed matrix B {\displaystyle B} , both sides of the equation are alternating and multilinear as a function depending on the columns of A {\displaystyle A} . Moreover, they both take the value det B {\displaystyle \det B} when A {\displaystyle A} is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim. A matrix A {\displaystyle A} with entries in a field is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by det ( A − 1 ) = 1 det ( A ) = [ det ( A ) ] − 1 {\displaystyle \det \left(A^{-1}\right)={\frac {1}{\det(A)}}=[\det(A)]^{-1}} . In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size n {\displaystyle n} over a field K {\displaystyle K} ) forms a group known as the general linear group GL n ( K ) {\displaystyle \operatorname {GL} _{n}(K)} (respectively, a subgroup called the special linear group SL n ( K ) ⊂ GL n ( K ) {\displaystyle \operatorname {SL} _{n}(K)\subset \operatorname {GL} _{n}(K)} ). More generally, the word "special" indicates the subgroup of another matrix group of matrices of determinant one.
Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group. Because the determinant respects multiplication and inverses, it is in fact a group homomorphism from GL n ( K ) {\displaystyle \operatorname {GL} _{n}(K)} into the multiplicative group K × {\displaystyle K^{\times }} of nonzero elements of K {\displaystyle K} . This homomorphism is surjective and its kernel is SL n ( K ) {\displaystyle \operatorname {SL} _{n}(K)} (the matrices with determinant one). Hence, by the first isomorphism theorem, this shows that SL n ( K ) {\displaystyle \operatorname {SL} _{n}(K)} is a normal subgroup of GL n ( K ) {\displaystyle \operatorname {GL} _{n}(K)} , and that the quotient group GL n ( K ) / SL n ( K ) {\displaystyle \operatorname {GL} _{n}(K)/\operatorname {SL} _{n}(K)} is isomorphic to K × {\displaystyle K^{\times }} . The Cauchy–Binet formula is a generalization of that product formula for rectangular matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all square submatrices of a given matrix. === Laplace expansion === Laplace expansion expresses the determinant of a matrix A {\displaystyle A} recursively in terms of determinants of smaller matrices, known as its minors. The minor M i , j {\displaystyle M_{i,j}} is defined to be the determinant of the ( n − 1 ) × ( n − 1 ) {\displaystyle (n-1)\times (n-1)} matrix that results from A {\displaystyle A} by removing the i {\displaystyle i} -th row and the j {\displaystyle j} -th column. The expression ( − 1 ) i + j M i , j {\displaystyle (-1)^{i+j}M_{i,j}} is known as a cofactor. For every i {\displaystyle i} , one has the equality det ( A ) = ∑ j = 1 n ( − 1 ) i + j a i , j M i , j , {\displaystyle \det(A)=\sum _{j=1}^{n}(-1)^{i+j}a_{i,j}M_{i,j},} which is called the Laplace expansion along the ith row.
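This recursion translates directly into code. The following minimal sketch (the function name det_laplace is ours; no attempt is made at efficiency) expands along the first row at every level.

```python
def det_laplace(m):
    """Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j; (-1)**j is the cofactor sign
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

print(det_laplace([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))  # 54
```

On the earlier example matrix this again yields 54, matching the elimination computation.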
For example, the Laplace expansion along the first row ( i = 1 {\displaystyle i=1} ) gives the following formula: | a b c d e f g h i | = a | e f h i | − b | d f g i | + c | d e g h | {\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=a{\begin{vmatrix}e&f\\h&i\end{vmatrix}}-b{\begin{vmatrix}d&f\\g&i\end{vmatrix}}+c{\begin{vmatrix}d&e\\g&h\end{vmatrix}}} Unwinding the determinants of these 2 × 2 {\displaystyle 2\times 2} -matrices gives back the Leibniz formula mentioned above. Similarly, the Laplace expansion along the j {\displaystyle j} -th column is the equality det ( A ) = ∑ i = 1 n ( − 1 ) i + j a i , j M i , j . {\displaystyle \det(A)=\sum _{i=1}^{n}(-1)^{i+j}a_{i,j}M_{i,j}.} Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of a highly symmetric matrix such as the Vandermonde matrix | 1 1 1 ⋯ 1 x 1 x 2 x 3 ⋯ x n x 1 2 x 2 2 x 3 2 ⋯ x n 2 ⋮ ⋮ ⋮ ⋱ ⋮ x 1 n − 1 x 2 n − 1 x 3 n − 1 ⋯ x n n − 1 | = ∏ 1 ≤ i < j ≤ n ( x j − x i ) . {\displaystyle {\begin{vmatrix}1&1&1&\cdots &1\\x_{1}&x_{2}&x_{3}&\cdots &x_{n}\\x_{1}^{2}&x_{2}^{2}&x_{3}^{2}&\cdots &x_{n}^{2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\x_{1}^{n-1}&x_{2}^{n-1}&x_{3}^{n-1}&\cdots &x_{n}^{n-1}\end{vmatrix}}=\prod _{1\leq i<j\leq n}\left(x_{j}-x_{i}\right).} The n-term Laplace expansion along a row or column can be generalized to write an n × n determinant as a sum of ( n k ) {\displaystyle {\tbinom {n}{k}}} terms, each the product of the determinant of a k × k submatrix and the determinant of the complementary ( n − k ) × ( n − k ) submatrix. === Adjugate matrix === The adjugate matrix adj ( A ) {\displaystyle \operatorname {adj} (A)} is the transpose of the matrix of the cofactors, that is, ( adj ( A ) ) i , j = ( − 1 ) i + j M j i . {\displaystyle (\operatorname {adj} (A))_{i,j}=(-1)^{i+j}M_{ji}.} For every matrix, one has ( det A ) I = A adj A = ( adj A ) A .
{\displaystyle (\det A)I=A\operatorname {adj} A=(\operatorname {adj} A)\,A.} Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix: A − 1 = 1 det A adj A . {\displaystyle A^{-1}={\frac {1}{\det A}}\operatorname {adj} A.} === Block matrices === The formula for the determinant of a 2 × 2 {\displaystyle 2\times 2} matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices A , B , C , D {\displaystyle A,B,C,D} of dimension m × m {\displaystyle m\times m} , m × n {\displaystyle m\times n} , n × m {\displaystyle n\times m} and n × n {\displaystyle n\times n} , respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is det ( A 0 C D ) = det ( A ) det ( D ) = det ( A B 0 D ) . {\displaystyle \det {\begin{pmatrix}A&0\\C&D\end{pmatrix}}=\det(A)\det(D)=\det {\begin{pmatrix}A&B\\0&D\end{pmatrix}}.} If A {\displaystyle A} is invertible, then it follows with results from the section on multiplicativity that det ( A B C D ) = det ( A ) det ( A B C D ) det ( A − 1 − A − 1 B 0 I n ) ⏟ = det ( A − 1 ) = ( det A ) − 1 = det ( A ) det ( I m 0 C A − 1 D − C A − 1 B ) = det ( A ) det ( D − C A − 1 B ) , {\displaystyle {\begin{aligned}\det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}&=\det(A)\det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}\underbrace {\det {\begin{pmatrix}A^{-1}&-A^{-1}B\\0&I_{n}\end{pmatrix}}} _{=\,\det(A^{-1})\,=\,(\det A)^{-1}}\\&=\det(A)\det {\begin{pmatrix}I_{m}&0\\CA^{-1}&D-CA^{-1}B\end{pmatrix}}\\&=\det(A)\det(D-CA^{-1}B),\end{aligned}}} which simplifies to det ( A ) ( D − C A − 1 B ) {\displaystyle \det(A)(D-CA^{-1}B)} when D {\displaystyle D} is a 1 × 1 {\displaystyle 1\times 1} matrix. 
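The Schur-complement identity just derived can be checked on explicit blocks. In this sketch (the helper names det and matmul are ours, and the matrices are ad hoc examples chosen so that A has an easy inverse), the 4 × 4 block matrix is assembled and both sides of det(A) det(D − CA⁻¹B) are compared.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det(m):
    n = len(m)
    return sum(
        (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        * prod(m[i][p[i]] for i in range(n))
        for p in permutations(range(n)))

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[2, 1], [1, 1]]            # det(A) = 1, so A is invertible
B = [[1, 2], [0, 1]]
C = [[0, 1], [1, 0]]
D = [[3, 0], [1, 2]]
A_inv = [[Fraction(1), Fraction(-1)], [Fraction(-1), Fraction(2)]]  # A^{-1}

# Schur complement D - C A^{-1} B
CAB = matmul(matmul(C, A_inv), B)
schur = [[D[i][j] - CAB[i][j] for j in range(2)] for i in range(2)]

# assemble the 4x4 block matrix [[A, B], [C, D]] row by row
M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
print(det(M), det(A) * det(schur))  # 4 4
```

The two values coincide, as the factorization predicts.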
A similar result holds when D {\displaystyle D} is invertible, namely det ( A B C D ) = det ( D ) det ( A B C D ) det ( I m 0 − D − 1 C D − 1 ) ⏟ = det ( D − 1 ) = ( det D ) − 1 = det ( D ) det ( A − B D − 1 C B D − 1 0 I n ) = det ( D ) det ( A − B D − 1 C ) . {\displaystyle {\begin{aligned}\det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}&=\det(D)\det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}\underbrace {\det {\begin{pmatrix}I_{m}&0\\-D^{-1}C&D^{-1}\end{pmatrix}}} _{=\,\det(D^{-1})\,=\,(\det D)^{-1}}\\&=\det(D)\det {\begin{pmatrix}A-BD^{-1}C&BD^{-1}\\0&I_{n}\end{pmatrix}}\\&=\det(D)\det(A-BD^{-1}C).\end{aligned}}} Both results can be combined to derive Sylvester's determinant theorem, which is also stated below. If the blocks are square matrices of the same size, further formulas hold. For example, if C {\displaystyle C} and D {\displaystyle D} commute (i.e., C D = D C {\displaystyle CD=DC} ), then det ( A B C D ) = det ( A D − B C ) . {\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(AD-BC).} This formula has been generalized to matrices composed of more than 2 × 2 {\displaystyle 2\times 2} blocks, again under appropriate commutativity conditions among the individual blocks. For A = D {\displaystyle A=D} and B = C {\displaystyle B=C} , the following formula holds (even if A {\displaystyle A} and B {\displaystyle B} do not commute). det ( A B B A ) = det ( A + B B B + A A ) = det ( A + B B 0 A − B ) = det ( A + B ) det ( A − B ) .
{\displaystyle \det {\begin{pmatrix}A&B\\B&A\end{pmatrix}}=\det {\begin{pmatrix}A+B&B\\B+A&A\end{pmatrix}}=\det {\begin{pmatrix}A+B&B\\0&A-B\end{pmatrix}}=\det(A+B)\det(A-B).} === Sylvester's determinant theorem === Sylvester's determinant theorem states that for A, an m × n matrix, and B, an n × m matrix (so that A and B have dimensions allowing them to be multiplied in either order forming a square matrix): det ( I m + A B ) = det ( I n + B A ) , {\displaystyle \det \left(I_{\mathit {m}}+AB\right)=\det \left(I_{\mathit {n}}+BA\right),} where Im and In are the m × m and n × n identity matrices, respectively. From this general result several consequences follow. A generalization is det ( Z + A W B ) = det ( Z ) det ( W ) det ( W − 1 + B Z − 1 A ) {\displaystyle \det \left(Z+AWB\right)=\det \left(Z\right)\det \left(W\right)\det \left(W^{-1}+BZ^{-1}A\right)} (see Matrix determinant lemma), where Z is an m × m invertible matrix and W is an n × n invertible matrix. === Sum === The determinant of the sum A + B {\displaystyle A+B} of two square matrices of the same size is not in general expressible in terms of the determinants of A and of B. However, for positive semidefinite matrices A {\displaystyle A} , B {\displaystyle B} and C {\displaystyle C} of equal size, det ( A + B + C ) + det ( C ) ≥ det ( A + C ) + det ( B + C ) , {\displaystyle \det(A+B+C)+\det(C)\geq \det(A+C)+\det(B+C){\text{,}}} with the corollary det ( A + B ) ≥ det ( A ) + det ( B ) . {\displaystyle \det(A+B)\geq \det(A)+\det(B){\text{.}}} The Brunn–Minkowski theorem implies that the nth root of the determinant is a concave function, when restricted to Hermitian positive-definite n × n {\displaystyle n\times n} matrices.
Therefore, if A and B are Hermitian positive-definite n × n {\displaystyle n\times n} matrices, one has det ( A + B ) n ≥ det ( A ) n + det ( B ) n , {\displaystyle {\sqrt[{n}]{\det(A+B)}}\geq {\sqrt[{n}]{\det(A)}}+{\sqrt[{n}]{\det(B)}},} since the nth root of the determinant is, by the above, concave and homogeneous of degree one, and such functions are superadditive. ==== Sum identity for 2×2 matrices ==== For the special case of 2 × 2 {\displaystyle 2\times 2} matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity: det ( A + B ) = det ( A ) + det ( B ) + tr ( A ) tr ( B ) − tr ( A B ) . {\displaystyle \det(A+B)=\det(A)+\det(B)+{\text{tr}}(A){\text{tr}}(B)-{\text{tr}}(AB).} == Properties of the determinant in relation to other notions == === Eigenvalues and characteristic polynomial === The determinant is closely related to two other central concepts in linear algebra, the eigenvalues and the characteristic polynomial of a matrix. Let A {\displaystyle A} be an n × n {\displaystyle n\times n} matrix with complex entries. Then, by the Fundamental Theorem of Algebra, A {\displaystyle A} must have exactly n eigenvalues λ 1 , λ 2 , … , λ n {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}} . (Here it is understood that an eigenvalue with algebraic multiplicity μ occurs μ times in this list.) Then, it turns out that the determinant of A is equal to the product of these eigenvalues, det ( A ) = ∏ i = 1 n λ i = λ 1 λ 2 ⋯ λ n . {\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.} The product of all non-zero eigenvalues is referred to as pseudo-determinant. From this, one immediately sees that the determinant of a matrix A {\displaystyle A} is zero if and only if 0 {\displaystyle 0} is an eigenvalue of A {\displaystyle A} . In other words, A {\displaystyle A} is invertible if and only if 0 {\displaystyle 0} is not an eigenvalue of A {\displaystyle A} .
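The 2 × 2 sum identity above is easy to verify on a concrete pair of matrices. The sketch below is a throwaway check (the helper names det2, tr2 and matmul2 are ours).

```python
def det2(m): return m[0][0] * m[1][1] - m[0][1] * m[1][0]
def tr2(m): return m[0][0] + m[1][1]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
lhs = det2(S)
rhs = det2(A) + det2(B) + tr2(A) * tr2(B) - tr2(matmul2(A, B))
print(lhs, rhs)  # -22 -22
```

Both sides evaluate to −22, confirming the identity for this example.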
The characteristic polynomial is defined as χ A ( t ) = det ( t ⋅ I − A ) . {\displaystyle \chi _{A}(t)=\det(t\cdot I-A).} Here, t {\displaystyle t} is the indeterminate of the polynomial and I {\displaystyle I} is the identity matrix of the same size as A {\displaystyle A} . By means of this polynomial, determinants can be used to find the eigenvalues of the matrix A {\displaystyle A} : they are precisely the roots of this polynomial, i.e., those complex numbers λ {\displaystyle \lambda } such that χ A ( λ ) = 0. {\displaystyle \chi _{A}(\lambda )=0.} A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices A k := [ a 1 , 1 a 1 , 2 ⋯ a 1 , k a 2 , 1 a 2 , 2 ⋯ a 2 , k ⋮ ⋮ ⋱ ⋮ a k , 1 a k , 2 ⋯ a k , k ] {\displaystyle A_{k}:={\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,k}\\a_{2,1}&a_{2,2}&\cdots &a_{2,k}\\\vdots &\vdots &\ddots &\vdots \\a_{k,1}&a_{k,2}&\cdots &a_{k,k}\end{bmatrix}}} being positive, for all k {\displaystyle k} between 1 {\displaystyle 1} and n {\displaystyle n} . === Trace === The trace tr(A) is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A, det ( exp ( A ) ) = exp ( tr ( A ) ) {\displaystyle \det(\exp(A))=\exp(\operatorname {tr} (A))} or, for real matrices A, tr ( A ) = log ( det ( exp ( A ) ) ) . {\displaystyle \operatorname {tr} (A)=\log(\det(\exp(A))).} Here exp(A) denotes the matrix exponential of A; the identity holds because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying exp ( L ) = A {\displaystyle \exp(L)=A} the determinant of A is given by det ( A ) = exp ( tr ( L ) ) .
{\displaystyle \det(A)=\exp(\operatorname {tr} (L)).} For example, for n = 2, n = 3, and n = 4, respectively, det ( A ) = 1 2 ( ( tr ( A ) ) 2 − tr ( A 2 ) ) , det ( A ) = 1 6 ( ( tr ( A ) ) 3 − 3 tr ( A ) tr ( A 2 ) + 2 tr ( A 3 ) ) , det ( A ) = 1 24 ( ( tr ( A ) ) 4 − 6 tr ( A 2 ) ( tr ( A ) ) 2 + 3 ( tr ( A 2 ) ) 2 + 8 tr ( A 3 ) tr ( A ) − 6 tr ( A 4 ) ) . {\displaystyle {\begin{aligned}\det(A)&={\frac {1}{2}}\left(\left(\operatorname {tr} (A)\right)^{2}-\operatorname {tr} \left(A^{2}\right)\right),\\\det(A)&={\frac {1}{6}}\left(\left(\operatorname {tr} (A)\right)^{3}-3\operatorname {tr} (A)~\operatorname {tr} \left(A^{2}\right)+2\operatorname {tr} \left(A^{3}\right)\right),\\\det(A)&={\frac {1}{24}}\left(\left(\operatorname {tr} (A)\right)^{4}-6\operatorname {tr} \left(A^{2}\right)\left(\operatorname {tr} (A)\right)^{2}+3\left(\operatorname {tr} \left(A^{2}\right)\right)^{2}+8\operatorname {tr} \left(A^{3}\right)~\operatorname {tr} (A)-6\operatorname {tr} \left(A^{4}\right)\right).\end{aligned}}} cf. the Cayley–Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic n, det A = ( − 1 ) n c 0 {\displaystyle \det A=(-1)^{n}c_{0}} , the signed constant term of the characteristic polynomial, determined recursively from c n = 1 ; c n − m = − 1 m ∑ k = 1 m c n − m + k tr ( A k ) ( 1 ≤ m ≤ n ) . {\displaystyle c_{n}=1;~~~c_{n-m}=-{\frac {1}{m}}\sum _{k=1}^{m}c_{n-m+k}\operatorname {tr} \left(A^{k}\right)~~(1\leq m\leq n)~.} In the general case, this may also be obtained from det ( A ) = ∑ k 1 , k 2 , … , k n ≥ 0 k 1 + 2 k 2 + ⋯ + n k n = n ∏ l = 1 n ( − 1 ) k l + 1 l k l k l ! tr ( A l ) k l , {\displaystyle \det(A)=\sum _{\begin{array}{c}k_{1},k_{2},\ldots ,k_{n}\geq 0\\k_{1}+2k_{2}+\cdots +nk_{n}=n\end{array}}\prod _{l=1}^{n}{\frac {(-1)^{k_{l}+1}}{l^{k_{l}}k_{l}!}}\operatorname {tr} \left(A^{l}\right)^{k_{l}},} where the sum is taken over the set of all integers kl ≥ 0 satisfying the equation ∑ l = 1 n l k l = n .
{\displaystyle \sum _{l=1}^{n}lk_{l}=n.} The formula can be expressed in terms of the complete exponential Bell polynomial of the n arguments s l = − ( l − 1 ) ! tr ( A l ) {\displaystyle s_{l}=-(l-1)!\operatorname {tr} \left(A^{l}\right)} as det ( A ) = ( − 1 ) n n ! B n ( s 1 , s 2 , … , s n ) . {\displaystyle \det(A)={\frac {(-1)^{n}}{n!}}B_{n}(s_{1},s_{2},\ldots ,s_{n}).} This formula can also be used to find the determinant of a matrix AIJ with multidimensional indices I = (i1, i2, ..., ir) and J = (j1, j2, ..., jr). The product and trace of such matrices are defined in a natural way as ( A B ) J I = ∑ K A K I B J K , tr ( A ) = ∑ I A I I . {\displaystyle (AB)_{J}^{I}=\sum _{K}A_{K}^{I}B_{J}^{K},\operatorname {tr} (A)=\sum _{I}A_{I}^{I}.} An important arbitrary dimension n identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of A is less than 1 in absolute value, det ( I + A ) = ∑ k = 0 ∞ 1 k ! ( − ∑ j = 1 ∞ ( − 1 ) j j tr ( A j ) ) k , {\displaystyle \det(I+A)=\sum _{k=0}^{\infty }{\frac {1}{k!}}\left(-\sum _{j=1}^{\infty }{\frac {(-1)^{j}}{j}}\operatorname {tr} \left(A^{j}\right)\right)^{k}\,,} where I is the identity matrix. More generally, if ∑ k = 0 ∞ 1 k ! ( − ∑ j = 1 ∞ ( − 1 ) j s j j tr ( A j ) ) k , {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{k!}}\left(-\sum _{j=1}^{\infty }{\frac {(-1)^{j}s^{j}}{j}}\operatorname {tr} \left(A^{j}\right)\right)^{k}\,,} is expanded as a formal power series in s then all coefficients of sm for m > n are zero and the remaining polynomial is det(I + sA). === Upper and lower bounds === For a positive definite matrix A, the trace operator gives the following tight lower and upper bounds on the log determinant tr ( I − A − 1 ) ≤ log det ( A ) ≤ tr ( A − I ) {\displaystyle \operatorname {tr} \left(I-A^{-1}\right)\leq \log \det(A)\leq \operatorname {tr} (A-I)} with equality if and only if A = I. This relationship can be derived via the formula for the Kullback–Leibler divergence between two multivariate normal distributions.
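For a diagonal positive definite matrix, trace, inverse, logarithm and determinant all act entry-wise on the diagonal, so the bounds are easy to check numerically (an illustrative sketch with an arbitrary choice of diagonal entries):

```python
from math import log

# A = diag(2, 3, 0.5), positive definite; the three quantities reduce to
# sums over the diagonal entries
diag = [2.0, 3.0, 0.5]
lower = sum(1 - 1 / a for a in diag)   # tr(I - A^{-1})
logdet = sum(log(a) for a in diag)     # log det(A)
upper = sum(a - 1 for a in diag)       # tr(A - I)
print(lower <= logdet <= upper)  # True
```

Since A is not the identity here, both inequalities are strict.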
Also, n tr ( A − 1 ) ≤ det ( A ) 1 n ≤ 1 n tr ( A ) ≤ 1 n tr ( A 2 ) . {\displaystyle {\frac {n}{\operatorname {tr} \left(A^{-1}\right)}}\leq \det(A)^{\frac {1}{n}}\leq {\frac {1}{n}}\operatorname {tr} (A)\leq {\sqrt {{\frac {1}{n}}\operatorname {tr} \left(A^{2}\right)}}.} These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square. === Derivative === The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a polynomial function from R n × n {\displaystyle \mathbf {R} ^{n\times n}} to R {\displaystyle \mathbf {R} } . In particular, it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula: d det ( A ) d α = tr ( adj ( A ) d A d α ) . {\displaystyle {\frac {d\det(A)}{d\alpha }}=\operatorname {tr} \left(\operatorname {adj} (A){\frac {dA}{d\alpha }}\right).} where adj ( A ) {\displaystyle \operatorname {adj} (A)} denotes the adjugate of A {\displaystyle A} . In particular, if A {\displaystyle A} is invertible, we have d det ( A ) d α = det ( A ) tr ( A − 1 d A d α ) . {\displaystyle {\frac {d\det(A)}{d\alpha }}=\det(A)\operatorname {tr} \left(A^{-1}{\frac {dA}{d\alpha }}\right).} Expressed in terms of the entries of A {\displaystyle A} , these are ∂ det ( A ) ∂ A i j = adj ( A ) j i = det ( A ) ( A − 1 ) j i . 
{\displaystyle {\frac {\partial \det(A)}{\partial A_{ij}}}=\operatorname {adj} (A)_{ji}=\det(A)\left(A^{-1}\right)_{ji}.} Yet another equivalent formulation is det ( A + ϵ X ) − det ( A ) = tr ( adj ( A ) X ) ϵ + O ( ϵ 2 ) = det ( A ) tr ( A − 1 X ) ϵ + O ( ϵ 2 ) {\displaystyle \det(A+\epsilon X)-\det(A)=\operatorname {tr} (\operatorname {adj} (A)X)\epsilon +O\left(\epsilon ^{2}\right)=\det(A)\operatorname {tr} \left(A^{-1}X\right)\epsilon +O\left(\epsilon ^{2}\right)} , using big O notation. The special case where A = I {\displaystyle A=I} , the identity matrix, yields det ( I + ϵ X ) = 1 + tr ( X ) ϵ + O ( ϵ 2 ) . {\displaystyle \det(I+\epsilon X)=1+\operatorname {tr} (X)\epsilon +O\left(\epsilon ^{2}\right).} This identity is used in describing Lie algebras associated to certain matrix Lie groups. For example, the special linear group SL n {\displaystyle \operatorname {SL} _{n}} is defined by the equation det A = 1 {\displaystyle \det A=1} . The above formula shows that its Lie algebra is the special linear Lie algebra s l n {\displaystyle {\mathfrak {sl}}_{n}} consisting of those matrices having trace zero. Writing a 3 × 3 {\displaystyle 3\times 3} matrix as A = [ a b c ] {\displaystyle A={\begin{bmatrix}a&b&c\end{bmatrix}}} where a , b , c {\displaystyle a,b,c} are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two: ∇ a det ( A ) = b × c ∇ b det ( A ) = c × a ∇ c det ( A ) = a × b . {\displaystyle {\begin{aligned}\nabla _{\mathbf {a} }\det(A)&=\mathbf {b} \times \mathbf {c} \\\nabla _{\mathbf {b} }\det(A)&=\mathbf {c} \times \mathbf {a} \\\nabla _{\mathbf {c} }\det(A)&=\mathbf {a} \times \mathbf {b} .\end{aligned}}} == History == Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations. 
The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity. Determinants proper originated separately from the work of Seki Takakazu in 1683 in Japan and, in parallel, from that of Leibniz in 1693. Cramer (1750) stated, without proof, Cramer's rule. Both Cramer and Bézout (1779) were led to determinants by the question of plane curves passing through a given set of points. Vandermonde (1771) first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied them to questions of elimination theory; he proved many special cases of general identities. Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quadratic form. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem. The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.)
In this he used the word "determinant" in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality. Jacobi (1841) used the functional determinant which Sylvester later called the Jacobian. In his memoirs in Crelle's Journal for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work. Cayley (1841) introduced the modern notation for the determinant using vertical bars. The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises. == Applications == === Cramer's rule === Determinants can be used to describe the solutions of a linear system of equations, written in matrix form as A x = b {\displaystyle Ax=b} . This equation has a unique solution x {\displaystyle x} if and only if det ( A ) {\displaystyle \det(A)} is nonzero.
In this case, the solution is given by Cramer's rule: x i = det ( A i ) det ( A ) i = 1 , 2 , 3 , … , n {\displaystyle x_{i}={\frac {\det(A_{i})}{\det(A)}}\qquad i=1,2,3,\ldots ,n} where A i {\displaystyle A_{i}} is the matrix formed by replacing the i {\displaystyle i} -th column of A {\displaystyle A} by the column vector b {\displaystyle b} . This follows immediately by column expansion of the determinant, i.e. det ( A i ) = det [ a 1 … b … a n ] {\displaystyle \det(A_{i})=\det {\begin{bmatrix}a_{1}&\ldots &b&\ldots &a_{n}\end{bmatrix}}} = ∑ j = 1 n x j det [ a 1 … a i − 1 a j a i + 1 … a n ] = x i det ( A ) {\displaystyle =\sum _{j=1}^{n}x_{j}\det {\begin{bmatrix}a_{1}&\ldots &a_{i-1}&a_{j}&a_{i+1}&\ldots &a_{n}\end{bmatrix}}=x_{i}\det(A)} where the vectors a j {\displaystyle a_{j}} are the columns of A. The rule is also implied by the identity A adj ( A ) = adj ( A ) A = det ( A ) I n . {\displaystyle A\,\operatorname {adj} (A)=\operatorname {adj} (A)\,A=\det(A)\,I_{n}.} Cramer's rule can be implemented in O ( n 3 ) {\displaystyle \operatorname {O} (n^{3})} time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition. === Linear independence === Determinants can be used to characterize linearly dependent vectors: det A {\displaystyle \det A} is zero if and only if the column vectors of the matrix A {\displaystyle A} are linearly dependent. For example, given two linearly independent vectors v 1 , v 2 ∈ R 3 {\displaystyle v_{1},v_{2}\in \mathbf {R} ^{3}} , a third vector v 3 {\displaystyle v_{3}} lies in the plane spanned by the former two vectors exactly if the determinant of the 3 × 3 {\displaystyle 3\times 3} matrix consisting of the three vectors is zero. 
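The coplanarity test just described is immediate to carry out. In the sketch below (the helper names det3 and columns are ours), a third vector built as a linear combination of v1 and v2 yields determinant zero, while a vector outside their plane does not.

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def columns(u, v, w):
    """Matrix whose columns are the three given vectors."""
    return [[u[i], v[i], w[i]] for i in range(3)]

v1 = (1, 0, 2)
v2 = (0, 1, 1)
in_plane = tuple(3 * a - 2 * b for a, b in zip(v1, v2))  # 3*v1 - 2*v2
off_plane = (0, 0, 1)
print(det3(columns(v1, v2, in_plane)))   # 0, so the three are dependent
print(det3(columns(v1, v2, off_plane)))  # 1, so they are independent
```

A zero determinant signals linear dependence; any nonzero value certifies independence.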
The same idea is also used in the theory of differential equations: given functions f 1 ( x ) , … , f n ( x ) {\displaystyle f_{1}(x),\dots ,f_{n}(x)} (supposed to be n − 1 {\displaystyle n-1} times differentiable), the Wronskian is defined to be W ( f 1 , … , f n ) ( x ) = | f 1 ( x ) f 2 ( x ) ⋯ f n ( x ) f 1 ′ ( x ) f 2 ′ ( x ) ⋯ f n ′ ( x ) ⋮ ⋮ ⋱ ⋮ f 1 ( n − 1 ) ( x ) f 2 ( n − 1 ) ( x ) ⋯ f n ( n − 1 ) ( x ) | . {\displaystyle W(f_{1},\ldots ,f_{n})(x)={\begin{vmatrix}f_{1}(x)&f_{2}(x)&\cdots &f_{n}(x)\\f_{1}'(x)&f_{2}'(x)&\cdots &f_{n}'(x)\\\vdots &\vdots &\ddots &\vdots \\f_{1}^{(n-1)}(x)&f_{2}^{(n-1)}(x)&\cdots &f_{n}^{(n-1)}(x)\end{vmatrix}}.} It is non-zero (for some x {\displaystyle x} ) in a specified interval if and only if the given functions and all their derivatives up to order n − 1 {\displaystyle n-1} are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. Another such use of the determinant is the resultant, which gives a criterion when two polynomials have a common root. === Orientation of a basis === The determinant can be thought of as assigning a number to every sequence of n vectors in Rn, by using the square matrix whose columns are the given vectors. The determinant will be nonzero if and only if the sequence of vectors is a basis for Rn. In that case, the sign of the determinant determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. In the case of an orthogonal basis, the magnitude of the determinant is equal to the product of the lengths of the basis vectors. For instance, an orthogonal matrix with entries in Rn represents an orthonormal basis in Euclidean space, and hence has determinant of ±1 (since all the vectors have length 1). 
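For instance, a 2 × 2 rotation matrix has determinant +1 (it preserves orientation) while a reflection has determinant −1 (it reverses orientation), which a short computation confirms (an illustrative sketch; the helper det2 is ours):

```python
from math import cos, sin, pi, isclose

def det2(m): return m[0][0] * m[1][1] - m[0][1] * m[1][0]

t = pi / 5
rotation = [[cos(t), -sin(t)], [sin(t), cos(t)]]    # orientation-preserving
reflection = [[cos(t), sin(t)], [sin(t), -cos(t)]]  # orientation-reversing
print(isclose(det2(rotation), 1.0), isclose(det2(reflection), -1.0))  # True True
```

In both cases the determinant is ±1 because the columns form an orthonormal basis.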
The determinant is +1 if and only if the basis has the same orientation. It is −1 if and only if the basis has the opposite orientation. More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis. === Volume and Jacobian determinant === As pointed out above, the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if f : R n → R n {\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} ^{n}} is the linear map given by multiplication with a matrix A {\displaystyle A} , and S ⊂ R n {\displaystyle S\subset \mathbf {R} ^{n}} is any measurable subset, then the volume of f ( S ) {\displaystyle f(S)} is given by | det ( A ) | {\displaystyle |\det(A)|} times the volume of S {\displaystyle S} . More generally, if the linear map f : R n → R m {\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} ^{m}} is represented by the m × n {\displaystyle m\times n} matrix A {\displaystyle A} , then the n {\displaystyle n} -dimensional volume of f ( S ) {\displaystyle f(S)} is given by: volume ( f ( S ) ) = det ( A T A ) volume ( S ) . {\displaystyle \operatorname {volume} (f(S))={\sqrt {\det \left(A^{\textsf {T}}A\right)}}\operatorname {volume} (S).} By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines. The volume of any tetrahedron, given its vertices a , b , c , d {\displaystyle a,b,c,d} , is 1 6 ⋅ | det ( a − b , b − c , c − d ) | {\displaystyle {\frac {1}{6}}\cdot |\det(a-b,b-c,c-d)|} , where the three arguments may be replaced by any other combination of pairs of vertices that forms a spanning tree over the vertices. For a general differentiable function, much of the above carries over by considering the Jacobian matrix of f.
For f : R n → R n , {\displaystyle f:\mathbf {R} ^{n}\rightarrow \mathbf {R} ^{n},} the Jacobian matrix is the n × n matrix whose entries are given by the partial derivatives D ( f ) = ( ∂ f i ∂ x j ) 1 ≤ i , j ≤ n . {\displaystyle D(f)=\left({\frac {\partial f_{i}}{\partial x_{j}}}\right)_{1\leq i,j\leq n}.} Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions f and an open subset U of Rn (the domain of f), the integral over f(U) of some other function φ : Rn → Rm is given by ∫ f ( U ) ϕ ( v ) d v = ∫ U ϕ ( f ( u ) ) | det ( D f ) ( u ) | d u . {\displaystyle \int _{f(U)}\phi (\mathbf {v} )\,d\mathbf {v} =\int _{U}\phi (f(\mathbf {u} ))\left|\det(\operatorname {D} f)(\mathbf {u} )\right|\,d\mathbf {u} .} The Jacobian also occurs in the inverse function theorem. When applied to the field of Cartography, the determinant can be used to measure the rate of expansion of a map near the poles. == Abstract algebraic aspects == === Determinant of an endomorphism === The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices A and B are similar, if there exists an invertible matrix X such that A = X−1BX. Indeed, repeatedly applying the above identities yields det ( A ) = det ( X ) − 1 det ( B ) det ( X ) = det ( B ) det ( X ) − 1 det ( X ) = det ( B ) . {\displaystyle \det(A)=\det(X)^{-1}\det(B)\det(X)=\det(B)\det(X)^{-1}\det(X)=\det(B).} The determinant is therefore also called a similarity invariant. The determinant of a linear transformation T : V → V {\displaystyle T:V\to V} for some finite-dimensional vector space V is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in V. By the similarity invariance, this determinant is independent of the choice of the basis for V and therefore only depends on the endomorphism T. 
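Similarity invariance is easy to confirm with exact arithmetic. In the sketch below (helper names det2 and matmul are ours, and X is chosen with determinant 1 so that its inverse has integer entries), the matrix A = X⁻¹BX has the same determinant as B.

```python
from fractions import Fraction

def det2(m): return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
X = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]       # det(X) = 1
X_inv = [[Fraction(1), Fraction(-1)], [Fraction(-1), Fraction(2)]]  # X^{-1}

A = matmul(matmul(X_inv, B), X)   # A = X^{-1} B X is similar to B
print(det2(A) == det2(B))  # True
```

Both determinants equal −2, exactly as the similarity-invariance argument predicts.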
=== Square matrices over commutative rings === The above definition of the determinant using the Leibniz rule holds more generally when the entries of the matrix are elements of a commutative ring R {\displaystyle R} , such as the integers Z {\displaystyle \mathbf {Z} } , as opposed to the field of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies det ( I ) = 1 {\displaystyle \det(I)=1} still holds, as do all the properties that result from that characterization. A matrix A ∈ Mat n × n ( R ) {\displaystyle A\in \operatorname {Mat} _{n\times n}(R)} is invertible (in the sense that there is an inverse matrix whose entries are in R {\displaystyle R} ) if and only if its determinant is an invertible element in R {\displaystyle R} . For R = Z {\displaystyle R=\mathbf {Z} } , this means that the determinant is +1 or −1. Such a matrix is called unimodular. Since the determinant is multiplicative, it defines a group homomorphism GL n ( R ) → R × , {\displaystyle \operatorname {GL} _{n}(R)\rightarrow R^{\times },} between the general linear group (the group of invertible n × n {\displaystyle n\times n} -matrices with entries in R {\displaystyle R} ) and the multiplicative group of units in R {\displaystyle R} . Given a ring homomorphism f : R → S {\displaystyle f:R\to S} , there is a map GL n ( f ) : GL n ( R ) → GL n ( S ) {\displaystyle \operatorname {GL} _{n}(f):\operatorname {GL} _{n}(R)\to \operatorname {GL} _{n}(S)} given by replacing all entries in R {\displaystyle R} by their images under f {\displaystyle f} . The determinant respects these maps, i.e., the identity f ( det ( ( a i , j ) ) ) = det ( ( f ( a i , j ) ) ) {\displaystyle f(\det((a_{i,j})))=\det((f(a_{i,j})))} holds. In other words, the displayed commutative diagram commutes.
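Taking f to be reduction modulo m, this naturality can be checked directly: the determinant of an integer matrix, reduced modulo m, equals the determinant of the entry-wise reduced matrix computed in modular arithmetic (a sketch; the helper det implements the Leibniz formula and the matrix is an arbitrary example).

```python
from itertools import permutations
from math import prod

def det(mat):
    n = len(mat)
    return sum(
        (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        * prod(mat[i][p[i]] for i in range(n))
        for p in permutations(range(n)))

A = [[7, 3, 1], [4, 9, 2], [5, 8, 6]]
m = 5
reduced = [[x % m for x in row] for row in A]   # apply f entry-wise
print(det(A) % m == det(reduced) % m)  # True
```

Here det(A) = 211 and det(reduced) = −4, and both are congruent to 1 modulo 5.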
For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo m {\displaystyle m} of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m {\displaystyle m} (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors GL n {\displaystyle \operatorname {GL} _{n}} and ( − ) × {\displaystyle (-)^{\times }} . Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group, det : GL n → G m . {\displaystyle \det :\operatorname {GL} _{n}\to \mathbb {G} _{m}.} === Exterior algebra === The determinant of a linear transformation T : V → V {\displaystyle T:V\to V} of an n {\displaystyle n} -dimensional vector space V {\displaystyle V} or, more generally a free module of (finite) rank n {\displaystyle n} over a commutative ring R {\displaystyle R} can be formulated in a coordinate-free manner by considering the n {\displaystyle n} -th exterior power ⋀ n V {\displaystyle \bigwedge ^{n}V} of V {\displaystyle V} . The map T {\displaystyle T} induces a linear map ⋀ n T : ⋀ n V → ⋀ n V v 1 ∧ v 2 ∧ ⋯ ∧ v n ↦ T v 1 ∧ T v 2 ∧ ⋯ ∧ T v n . {\displaystyle {\begin{aligned}\bigwedge ^{n}T:\bigwedge ^{n}V&\rightarrow \bigwedge ^{n}V\\v_{1}\wedge v_{2}\wedge \dots \wedge v_{n}&\mapsto Tv_{1}\wedge Tv_{2}\wedge \dots \wedge Tv_{n}.\end{aligned}}} As ⋀ n V {\displaystyle \bigwedge ^{n}V} is one-dimensional, the map ⋀ n T {\displaystyle \bigwedge ^{n}T} is given by multiplying with some scalar, i.e., an element in R {\displaystyle R} . 
Some authors such as (Bourbaki 1998) use this fact to define the determinant to be the element in R {\displaystyle R} satisfying the following identity (for all v i ∈ V {\displaystyle v_{i}\in V} ): ( ⋀ n T ) ( v 1 ∧ ⋯ ∧ v n ) = det ( T ) ⋅ v 1 ∧ ⋯ ∧ v n . {\displaystyle \left(\bigwedge ^{n}T\right)\left(v_{1}\wedge \dots \wedge v_{n}\right)=\det(T)\cdot v_{1}\wedge \dots \wedge v_{n}.} This definition agrees with the more concrete coordinate-dependent definition. This can be shown using the uniqueness of a multilinear alternating form on n {\displaystyle n} -tuples of vectors in R n {\displaystyle R^{n}} . For this reason, the highest non-zero exterior power ⋀ n V {\displaystyle \bigwedge ^{n}V} (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of V {\displaystyle V} and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms ⋀ k V {\displaystyle \bigwedge ^{k}V} with k < n {\displaystyle k<n} . == Berezin integral == The conventional definition of the determinant, as a sum over permutations over a product of matrix elements, can be written using the somewhat surprising notation of the Berezin integral. In this notation, the determinant can be written as ∫ exp [ − θ T A η ] d θ d η = det A {\displaystyle \int \exp \left[-\theta ^{T}A\eta \right]\,d\theta \,d\eta =\det A} This holds for any n × n {\displaystyle n\times n} -dimensional matrix A . {\displaystyle A.} The symbols θ , η {\displaystyle \theta ,\eta } are two n {\displaystyle n} -dimensional vectors of anti-commuting Grassmann numbers (aka "supernumbers"), taken from the Grassmann algebra. The exp {\displaystyle \exp } here is the exponential function. The integral sign is meant to be understood as the Berezin integral. Despite the use of the integral symbol, this expression is in fact an entirely finite sum. 
This unusual-looking expression can be understood as a notational trick that rewrites the conventional expression for the determinant det A = ∑ σ ∈ S n sgn ( σ ) a 1 , σ ( 1 ) ⋯ a n , σ ( n ) . {\displaystyle \det A=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )a_{1,\sigma (1)}\cdots a_{n,\sigma (n)}.} by using some novel notation. The anti-commuting property of the Grassmann numbers captures the sign (signature) of the permutation, while the integral combined with the exp {\displaystyle \exp } ensures that all permutations are explored. That is, the Taylor series for exp {\displaystyle \exp } terminates after exactly n {\displaystyle n} terms, because the square of a Grassmann number is zero, and there are exactly n {\displaystyle n} distinct Grassmann variables. Meanwhile, the integral is defined to vanish if the corresponding Grassmann number does not appear in the integrand. Thus, the integral selects out only those terms in the exp {\displaystyle \exp } series that have exactly n {\displaystyle n} distinct variables; all lower-order terms vanish. In this way, the somewhat magical combination of the integral sign, the use of anti-commuting variables, and the Taylor series for exp {\displaystyle \exp } just encodes a finite sum, identical to the conventional summation. This form is popular in physics, where it is often used as a stand-in for the Jacobian determinant. The appeal is that, notationally, the integral takes the form of a path integral, such as in the path integral formulation for quantized Hamiltonian mechanics. An example can be found in the theory of Faddeev–Popov ghosts; although the theory may seem rather abstruse, the use of the ghost fields is little more than a notational trick to express a Jacobian determinant. The Pfaffian P f A {\displaystyle \mathrm {Pf} \,A} of a skew-symmetric matrix A {\displaystyle A} is the square root of the determinant: that is, ( P f A ) 2 = det A . 
{\displaystyle \left(\mathrm {Pf} \,A\right)^{2}=\det A.} The Berezin integral form for the Pfaffian is even more suggestive; it is ∫ exp [ − 1 2 θ T A θ ] d θ = P f A {\displaystyle \int \exp \left[-{\tfrac {1}{2}}\theta ^{T}A\theta \right]\,d\theta =\mathrm {Pf} \,A} The integrand has exactly the same formal structure as a normal Gaussian distribution, albeit with Grassmann numbers instead of real numbers. This formal resemblance accounts for the occasional appearance of supernumbers in the theory of stochastic dynamics and stochastic differential equations. == Generalizations and related notions == Determinants as treated above admit several variants: the permanent of a matrix is defined as the determinant, except that the factors sgn ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group S n {\displaystyle S_{n}} in Leibniz's rule. === Determinants for finite-dimensional algebras === For any associative algebra A {\displaystyle A} that is finite-dimensional as a vector space over a field F {\displaystyle F} , there is a determinant map det : A → F . {\displaystyle \det :A\to F.} This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest order term of this polynomial. This general definition recovers the determinant for the matrix algebra A = Mat n × n ( F ) {\displaystyle A=\operatorname {Mat} _{n\times n}(F)} , but also includes several further cases: the determinant of a quaternion, det ( a + i b + j c + k d ) = a 2 + b 2 + c 2 + d 2 {\displaystyle \det(a+ib+jc+kd)=a^{2}+b^{2}+c^{2}+d^{2}} , the norm N L / F : L → F {\displaystyle N_{L/F}:L\to F} of a field extension, the Pfaffian of a skew-symmetric matrix, and the reduced norm of a central simple algebra all arise as special cases of this construction. 
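The identity (Pf A)² = det A mentioned above can be checked directly for a small example. The sketch below computes the Pfaffian by its standard expansion along the first row and the determinant by the Leibniz rule; the entries of the skew-symmetric matrix are arbitrary:

```python
from itertools import permutations
from math import prod

def sgn(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(m):
    # Leibniz rule (fine for small matrices)
    n = len(m)
    return sum(sgn(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def pf(m):
    # Pfaffian of a skew-symmetric matrix, expanding along row 0:
    # Pf(A) = sum_j (-1)^(j-1) * A[0][j] * Pf(A with rows/cols 0 and j removed)
    if not m:
        return 1
    total = 0
    for j in range(1, len(m)):
        keep = [k for k in range(1, len(m)) if k != j]
        sub = [[m[r][c] for c in keep] for r in keep]
        total += (-1) ** (j - 1) * m[0][j] * pf(sub)
    return total

# skew-symmetric 4x4 with arbitrary upper-triangular entries
a = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]
print(pf(a), det(a))  # 8 64, and indeed 8**2 == 64
```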
=== Infinite matrices === For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators. The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula det ( I + A ) = exp ( tr ( log ( I + A ) ) ) . {\displaystyle \det(I+A)=\exp(\operatorname {tr} (\log(I+A))).} Another infinite-dimensional notion of determinant is the functional determinant. === Operators in von Neumann algebras === For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede−Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede−Kadison determinant. === Related notions for non-commutative rings === For matrices over non-commutative rings, multilinearity and alternating properties are incompatible for n ≥ 2, so there is no good definition of the determinant in this setting. For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or that the determinant is unchanged under transposition of the matrix. 
Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of R as value on some pair of arguments implies that R is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the q-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of Z 2 {\displaystyle \mathbb {Z} _{2}} -graded rings). Manin matrices form the class closest to matrices with commutative elements. == Calculation == Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques. Computational geometry, however, does frequently use calculations related to determinants. While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating n ! {\displaystyle n!} ( n {\displaystyle n} factorial) products for an n × n {\displaystyle n\times n} matrix. Thus, the number of required operations grows very quickly: it is of order n ! {\displaystyle n!} . The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants. === Gaussian elimination === Gaussian elimination consists of left-multiplying a matrix by elementary matrices to obtain a matrix in row echelon form. 
One can restrict the computation to elementary matrices of determinant 1. In this case, the determinant of the resulting row echelon form equals the determinant of the initial matrix. As a row echelon form is a triangular matrix, its determinant is the product of the entries of its diagonal. So, the determinant can be computed almost for free from the result of a Gaussian elimination. === Decomposition methods === Some methods compute det ( A ) {\displaystyle \det(A)} by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order O ( n 3 ) {\displaystyle \operatorname {O} (n^{3})} , which is a significant improvement over O ( n ! ) {\displaystyle \operatorname {O} (n!)} . For example, LU decomposition expresses A {\displaystyle A} as a product A = P L U . {\displaystyle A=PLU.} of a permutation matrix P {\displaystyle P} (which has exactly a single 1 {\displaystyle 1} in each column, and otherwise zeros), a lower triangular matrix L {\displaystyle L} and an upper triangular matrix U {\displaystyle U} . The determinants of the two triangular matrices L {\displaystyle L} and U {\displaystyle U} can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of P {\displaystyle P} is just the sign ε {\displaystyle \varepsilon } of the corresponding permutation (which is + 1 {\displaystyle +1} for an even permutation and − 1 {\displaystyle -1} for an odd permutation). Once such an LU decomposition is known for A {\displaystyle A} , its determinant is readily computed as det ( A ) = ε det ( L ) ⋅ det ( U ) . 
{\displaystyle \det(A)=\varepsilon \det(L)\cdot \det(U).} === Further methods === The order O ( n 3 ) {\displaystyle \operatorname {O} (n^{3})} reached by decomposition methods has been improved by different methods. If two matrices of order n {\displaystyle n} can be multiplied in time M ( n ) {\displaystyle M(n)} , where M ( n ) ≥ n a {\displaystyle M(n)\geq n^{a}} for some a > 2 {\displaystyle a>2} , then there is an algorithm computing the determinant in time O ( M ( n ) ) {\displaystyle O(M(n))} . This means, for example, that an O ( n 2.376 ) {\displaystyle \operatorname {O} (n^{2.376})} algorithm for computing the determinant exists based on the Coppersmith–Winograd algorithm. This exponent has been further lowered, as of 2016, to 2.373. In addition to the complexity of the algorithm, further criteria can be used to compare algorithms. Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gauss elimination requires divisions.) One such algorithm, having complexity O ( n 4 ) {\displaystyle \operatorname {O} (n^{4})} is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called closed ordered walks, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule. Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order O ( n 3 ) {\displaystyle \operatorname {O} (n^{3})} , but the bit length of intermediate values can become exponentially long. 
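The elimination-based computation described above can be sketched in a few lines of Python. Exact rational arithmetic is used so that the result is the exact determinant; row additions correspond to elementary matrices of determinant 1, and each row swap flips the sign:

```python
from fractions import Fraction

def det_gauss(mat):
    # determinant by Gaussian elimination with exact rational arithmetic
    a = [[Fraction(x) for x in row] for row in mat]
    n = len(a)
    sign = 1
    for k in range(n):
        # find a nonzero pivot in column k
        piv = next((i for i in range(k, n) if a[i][k] != 0), None)
        if piv is None:
            return Fraction(0)   # no pivot: the matrix is singular
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            sign = -sign         # a row swap flips the sign
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]   # determinant-1 row operation
    result = Fraction(sign)
    for k in range(n):           # triangular form: product of the diagonal
        result *= a[k][k]
    return result

print(det_gauss([[2, 3, 1], [4, 1, -3], [0, 5, 2]]))  # 30
```

Note that with exact Fraction arithmetic the numerators and denominators of intermediate entries can grow large, which is precisely the bit-complexity issue discussed above; floating-point versions avoid this growth at the cost of rounding error.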
By comparison, the Bareiss algorithm is an exact-division method (it does use division, but only in cases where these divisions can be performed without remainder); it is of the same order, but the bit complexity is roughly the bit size of the original entries in the matrix times n {\displaystyle n} . If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of A + uvT, where u and v are column vectors. Charles Dodgson (i.e. Lewis Carroll of Alice's Adventures in Wonderland fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form. == See also == == Notes == == References == Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0. Bareiss, Erwin (1968), "Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination" (PDF), Mathematics of Computation, 22 (102): 565–578, doi:10.2307/2004533, JSTOR 2004533, archived (PDF) from the original on 2012-10-25 de Boor, Carl (1990), "An empty exercise" (PDF), ACM SIGNUM Newsletter, 25 (2): 3–7, doi:10.1145/122272.122273, S2CID 62780452, archived (PDF) from the original on 2006-09-01 Bourbaki, Nicolas (1998), Algebra I, Chapters 1-3, Springer, ISBN 9783540642435 Bunch, J. R.; Hopcroft, J. E. (1974). "Triangular Factorization and Inversion by Fast Matrix Multiplication". Mathematics of Computation. 28 (125): 231–236. doi:10.1090/S0025-5718-1974-0331751-8. hdl:1813/6003. Dummit, David S.; Foote, Richard M. 
(2004), Abstract algebra (3rd ed.), Hoboken, NJ: Wiley, ISBN 9780471452348, OCLC 248917264 Fisikopoulos, Vissarion; Peñaranda, Luis (2016), "Faster geometric algorithms via dynamic determinant computation", Computational Geometry, 54: 1–16, arXiv:1206.7067, doi:10.1016/j.comgeo.2015.12.001 Garibaldi, Skip (2004), "The characteristic polynomial and determinant are not ad hoc constructions", American Mathematical Monthly, 111 (9): 761–778, arXiv:math/0203276, doi:10.2307/4145188, JSTOR 4145188, MR 2104048 Habgood, Ken; Arel, Itamar (2012). "A condensation-based application of Cramer's rule for solving large-scale linear systems" (PDF). Journal of Discrete Algorithms. 10: 98–109. doi:10.1016/j.jda.2011.06.007. Archived (PDF) from the original on 2019-05-05. Harris, Frank E. (2014), Mathematics for Physical Science and Engineering, Elsevier, ISBN 9780128010495 Kleiner, Israel (2007), Kleiner, Israel (ed.), A history of abstract algebra, Birkhäuser, doi:10.1007/978-0-8176-4685-1, ISBN 978-0-8176-4684-4, MR 2347309 Kung, Joseph P.S.; Rota, Gian-Carlo; Yan, Catherine (2009), Combinatorics: The Rota Way, Cambridge University Press, ISBN 9780521883894 Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 Lombardi, Henri; Quitté, Claude (2015), Commutative Algebra: Constructive Methods, Springer, ISBN 9789401799447 Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (2nd ed.), Springer-Verlag, ISBN 0-387-98403-8 Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on 2009-10-31 Muir, Thomas (1960) [1933], A treatise on the theory of determinants, Revised and enlarged by William H. Metzler, New York, NY: Dover Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 G. 
Baley Price (1947) "Some identities in the theory of determinants", American Mathematical Monthly 54:75–90 MR0019078 Horn, Roger Alan; Johnson, Charles Royal (2018) [1985]. Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978-0-521-54823-6. Lang, Serge (1985), Introduction to Linear Algebra, Undergraduate Texts in Mathematics (2 ed.), Springer, ISBN 9780387962054 Lang, Serge (1987), Linear Algebra, Undergraduate Texts in Mathematics (3 ed.), Springer, ISBN 9780387964126 Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. New York, NY: Springer. ISBN 978-0-387-95385-4. Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall Rote, Günter (2001), "Division-free algorithms for the determinant and the Pfaffian: algebraic and combinatorial approaches" (PDF), Computational discrete mathematics, Lecture Notes in Comput. Sci., vol. 2122, Springer, pp. 119–135, doi:10.1007/3-540-45506-X_9, ISBN 978-3-540-42775-9, MR 1911585, archived from the original (PDF) on 2007-02-01, retrieved 2020-06-04 Trefethen, Lloyd; Bau III, David (1997), Numerical Linear Algebra (1st ed.), Philadelphia: SIAM, ISBN 978-0-89871-361-9 === Historical references === Bourbaki, Nicolas (1994), Elements of the history of mathematics, translated by Meldrum, John, Springer, doi:10.1007/978-3-642-61693-8, ISBN 3-540-19376-6 Cajori, Florian (1993), A history of mathematical notations: Including Vol. I. Notations in elementary mathematics; Vol. II. Notations mainly in higher mathematics, Reprint of the 1928 and 1929 originals, Dover, ISBN 0-486-67766-4, MR 3363427 Bézout, Étienne (1779), Théorie générale des equations algébriques, Paris Cayley, Arthur (1841), "On a theorem in the geometry of position", Cambridge Mathematical Journal, 2: 267–271 Cramer, Gabriel (1750), Introduction à l'analyse des lignes courbes algébriques, Genève: Frères Cramer & Cl. 
Philibert, doi:10.3931/e-rara-4048 Eves, Howard (1990), An introduction to the history of mathematics (6 ed.), Saunders College Publishing, ISBN 0-03-029558-0, MR 1104435 Grattan-Guinness, I., ed. (2003), Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, vol. 1, Johns Hopkins University Press, ISBN 9780801873966 Jacobi, Carl Gustav Jakob (1841), "De Determinantibus functionalibus", Journal für die reine und angewandte Mathematik, 1841 (22): 320–359, doi:10.1515/crll.1841.22.319, S2CID 123637858 Laplace, Pierre-Simon de (1772), "Recherches sur le calcul intégral et sur le systéme du monde", Histoire de l'Académie Royale des Sciences (seconde partie), Paris: 267–376 Robert Forsyth Scott (1880): A Treatise on the Theory of Determinants and Their Applications in Analysis and Geometry, Cambridge University Press E. R. Hedrick: On Three Dimensional Determinants, Annals of Mathematics, Vol. 1, No. 1/4 (1899–1900), pp. 49–67. https://doi.org/10.2307/1967268 (Note: this is not the ordinary determinant.) == External links == Suprunenko, D.A. (2001) [1994], "Determinant", Encyclopedia of Mathematics, EMS Press Weisstein, Eric W. "Determinant". MathWorld. O'Connor, John J.; Robertson, Edmund F., "Matrices and determinants", MacTutor History of Mathematics Archive, University of St Andrews Determinant Interactive Program and Tutorial Linear algebra: determinants. Archived 2008-12-04 at the Wayback Machine Compute determinants of matrices up to order 6 using Laplace expansion along a row or column you choose. Determinant Calculator Calculator for matrix determinants, up to the 8th order. Matrices and Linear Algebra on the Earliest Uses Pages Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course.
Wikipedia:Detrended fluctuation analysis#0
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes (diverging correlation time, e.g. power-law decaying autocorrelation function) or 1/f noise. The obtained exponent is similar to the Hurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics are non-stationary (changing with time). It is related to measures based upon spectral techniques such as autocorrelation and Fourier transform. Peng et al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022 and represents an extension of the (ordinary) fluctuation analysis (FA), which is affected by non-stationarities. Systematic studies of the advantages and limitations of the DFA method were performed by PCh Ivanov et al. in a series of papers focusing on the effects of different types of nonstationarities in real-world signals: (1) types of trends; (2) random outliers/spikes, noisy segments, signals composed of parts with different correlation; (3) nonlinear filters; (4) missing data; (5) signal coarse-graining procedures and comparing DFA performance with moving average techniques (cumulative citations > 4,000). Datasets generated to test DFA are available on PhysioNet. == Definition == === Algorithm === Given: a time series x 1 , x 2 , . . . , x N {\displaystyle x_{1},x_{2},...,x_{N}} . Compute its average value ⟨ x ⟩ = 1 N ∑ t = 1 N x t {\displaystyle \langle x\rangle ={\frac {1}{N}}\sum _{t=1}^{N}x_{t}} . Sum it into a process X t = ∑ i = 1 t ( x i − ⟨ x ⟩ ) {\displaystyle X_{t}=\sum _{i=1}^{t}(x_{i}-\langle x\rangle )} . This is the cumulative sum, or profile, of the original time series. For example, the profile of an i.i.d. white noise is a standard random walk. Select a set T = { n 1 , . . . 
, n k } {\displaystyle T=\{n_{1},...,n_{k}\}} of integers, such that n 1 < n 2 < ⋯ < n k {\displaystyle n_{1}<n_{2}<\cdots <n_{k}} , the smallest n 1 ≈ 4 {\displaystyle n_{1}\approx 4} , the largest n k ≈ N {\displaystyle n_{k}\approx N} , and the sequence is roughly distributed evenly in log-scale: log ( n 2 ) − log ( n 1 ) ≈ log ( n 3 ) − log ( n 2 ) ≈ ⋯ {\displaystyle \log(n_{2})-\log(n_{1})\approx \log(n_{3})-\log(n_{2})\approx \cdots } . In other words, it is approximately a geometric progression. For each n ∈ T {\displaystyle n\in T} , divide the sequence X t {\displaystyle X_{t}} into consecutive segments of length n {\displaystyle n} . Within each segment, compute the least squares straight-line fit (the local trend). Let Y 1 , n , Y 2 , n , . . . , Y N , n {\displaystyle Y_{1,n},Y_{2,n},...,Y_{N,n}} be the resulting piecewise-linear fit. Compute the root-mean-square deviation from the local trend (local fluctuation): F ( n , i ) = 1 n ∑ t = i n + 1 i n + n ( X t − Y t , n ) 2 . {\displaystyle F(n,i)={\sqrt {{\frac {1}{n}}\sum _{t=in+1}^{in+n}\left(X_{t}-Y_{t,n}\right)^{2}}}.} And their root-mean-square is the total fluctuation: F ( n ) = 1 N / n ∑ i = 1 N / n F ( n , i ) 2 . {\displaystyle F(n)={\sqrt {{\frac {1}{N/n}}\sum _{i=1}^{N/n}F(n,i)^{2}}}.} (If N {\displaystyle N} is not divisible by n {\displaystyle n} , then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence, then take their root-mean-square.) Make the log-log plot log n − log F ( n ) {\displaystyle \log n-\log F(n)} . === Interpretation === A straight line of slope α {\displaystyle \alpha } on the log-log plot indicates a statistical self-affinity of form F ( n ) ∝ n α {\displaystyle F(n)\propto n^{\alpha }} . Since F ( n ) {\displaystyle F(n)} monotonically increases with n {\displaystyle n} , we always have α > 0 {\displaystyle \alpha >0} . 
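The steps above can be sketched in Python as follows (DFA1, i.e. straight-line detrending, with α estimated as the least-squares slope of the log-log plot; the white-noise input series is an illustrative choice):

```python
import math
import random

def dfa(x, scales):
    # profile: cumulative sum of the mean-centered series
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:
        s += v - mean
        profile.append(s)
    fluctuations = []
    for n in scales:
        segs = len(profile) // n
        msq = 0.0
        for i in range(segs):
            seg = profile[i * n:(i + 1) * n]
            # least-squares straight-line fit within the segment (local trend)
            tbar = (n - 1) / 2.0
            ybar = sum(seg) / n
            num = sum((t - tbar) * (seg[t] - ybar) for t in range(n))
            den = sum((t - tbar) ** 2 for t in range(n))
            slope = num / den
            # mean-square deviation from the local trend
            msq += sum((seg[t] - (ybar + slope * (t - tbar))) ** 2
                       for t in range(n)) / n
        # root-mean-square over all segments gives the total fluctuation F(n)
        fluctuations.append(math.sqrt(msq / segs))
    return fluctuations

def scaling_exponent(scales, F):
    # slope of the log-log plot log n vs. log F(n), fitted by least squares
    lx = [math.log(n) for n in scales]
    ly = [math.log(f) for f in F]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]  # i.i.d. white noise
scales = [4, 8, 16, 32, 64, 128]
alpha = scaling_exponent(scales, dfa(white, scales))
# for white noise, alpha comes out close to the theoretical value 1/2
```

Feeding in the cumulative sum of the same series (a random walk) instead yields an estimate close to α = 3/2, in line with the interpretation table below.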
The scaling exponent α {\displaystyle \alpha } is a generalization of the Hurst exponent, with the precise value giving information about the series self-correlations: α < 1 / 2 {\displaystyle \alpha <1/2} : anti-correlated α ≃ 1 / 2 {\displaystyle \alpha \simeq 1/2} : uncorrelated, white noise α > 1 / 2 {\displaystyle \alpha >1/2} : correlated α ≃ 1 {\displaystyle \alpha \simeq 1} : 1/f-noise, pink noise α > 1 {\displaystyle \alpha >1} : non-stationary, unbounded α ≃ 3 / 2 {\displaystyle \alpha \simeq 3/2} : Brownian noise Because the expected displacement in an uncorrelated random walk of length N grows like N {\displaystyle {\sqrt {N}}} , an exponent of 1 2 {\displaystyle {\tfrac {1}{2}}} would correspond to uncorrelated white noise. When the exponent is between 0 and 1, the result is fractional Gaussian noise. === Pitfalls in interpretation === Though the DFA algorithm always produces a positive number α {\displaystyle \alpha } for any time series, it does not necessarily imply that the time series is self-similar. Self-similarity requires the log-log graph to be sufficiently linear over a wide range of n {\displaystyle n} . Furthermore, a combination of techniques including maximum likelihood estimation (MLE), rather than least-squares has been shown to better approximate the scaling, or power-law, exponent. Also, there are many scaling exponent-like quantities that can be measured for a self-similar time series, including the divider dimension and Hurst exponent. Therefore, the DFA scaling exponent α {\displaystyle \alpha } is not a fractal dimension, and does not have certain desirable properties that the Hausdorff dimension has, though in certain special cases it is related to the box-counting dimension for the graph of a time series. == Generalizations == === Generalization to polynomial trends (higher order DFA) === The standard DFA algorithm given above removes a linear trend in each segment. 
If we remove a degree-n polynomial trend in each segment, it is called DFAn, or higher order DFA. Since X t {\displaystyle X_{t}} is a cumulative sum of x t − ⟨ x ⟩ {\displaystyle x_{t}-\langle x\rangle } , a linear trend in X t {\displaystyle X_{t}} is a constant trend in x t − ⟨ x ⟩ {\displaystyle x_{t}-\langle x\rangle } , which is a constant trend in x t {\displaystyle x_{t}} (visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time series x t {\displaystyle x_{t}} before quantifying the fluctuation. Similarly, a degree n trend in X t {\displaystyle X_{t}} is a degree (n-1) trend in x t {\displaystyle x_{t}} . For example, DFA2 removes linear trends from segments of the time series x t {\displaystyle x_{t}} before quantifying the fluctuation, DFA3 removes parabolic trends from x t {\displaystyle x_{t}} , and so on. The Hurst R/S analysis removes constant trends in the original sequence and thus, in its detrending, is equivalent to DFA1. === Generalization to different moments (multifractal DFA) === DFA can be generalized by computing F q ( n ) = ( 1 N / n ∑ i = 1 N / n F ( n , i ) q ) 1 / q {\displaystyle F_{q}(n)=\left({\frac {1}{N/n}}\sum _{i=1}^{N/n}F(n,i)^{q}\right)^{1/q}} then making the log-log plot of log n − log F q ( n ) {\displaystyle \log n-\log F_{q}(n)} . If there is a strong linearity in the plot of log n − log F q ( n ) {\displaystyle \log n-\log F_{q}(n)} , then that slope is α ( q ) {\displaystyle \alpha (q)} . DFA is the special case where q = 2 {\displaystyle q=2} . Multifractal systems scale as a function F q ( n ) ∝ n α ( q ) {\displaystyle F_{q}(n)\propto n^{\alpha (q)}} . Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling behavior of the second-moment fluctuations. Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. 
The classical Hurst exponent corresponds to H = α ( 2 ) {\displaystyle H=\alpha (2)} for stationary cases, and H = α ( 2 ) − 1 {\displaystyle H=\alpha (2)-1} for nonstationary cases. == Applications == The DFA method has been applied to many systems, e.g. DNA sequences; heartbeat dynamics in sleep and wake, sleep stages, rest and exercise, and across circadian phases; locomotor gait and wrist dynamics, neuronal oscillations, speech pathology detection, and animal behavior pattern analysis. == Relations to other methods, for specific types of signal == === For signals with power-law-decaying autocorrelation === In the case of power-law decaying auto-correlations, the correlation function decays with an exponent γ {\displaystyle \gamma } : C ( L ) ∼ L − γ {\displaystyle C(L)\sim L^{-\gamma }\!\ } . In addition, the power spectrum decays as P ( f ) ∼ f − β {\displaystyle P(f)\sim f^{-\beta }\!\ } . The three exponents are related by: γ = 2 − 2 α {\displaystyle \gamma =2-2\alpha } β = 2 α − 1 {\displaystyle \beta =2\alpha -1} and γ = 1 − β {\displaystyle \gamma =1-\beta } . The relations can be derived using the Wiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied. Thus, α {\displaystyle \alpha } is tied to the slope of the power spectrum β {\displaystyle \beta } and is used to describe the color of noise by this relationship: α = ( β + 1 ) / 2 {\displaystyle \alpha =(\beta +1)/2} . === For fractional Gaussian noise === For fractional Gaussian noise (FGN), we have β ∈ [ − 1 , 1 ] {\displaystyle \beta \in [-1,1]} , and thus α ∈ [ 0 , 1 ] {\displaystyle \alpha \in [0,1]} , and β = 2 H − 1 {\displaystyle \beta =2H-1} , where H {\displaystyle H} is the Hurst exponent. α {\displaystyle \alpha } for FGN is equal to H {\displaystyle H} . 
=== For fractional Brownian motion === For fractional Brownian motion (FBM), we have β ∈ [1, 3], and thus α ∈ [1, 2], and β = 2H + 1, where H is the Hurst exponent. α for FBM is therefore equal to H + 1. In this context, FBM is the cumulative sum (the integral) of FGN, so the exponents of their power spectra differ by 2. == See also == Multifractal system – System with multiple fractal dimensions Self-organized criticality – Concept in physics Self-affinity – Whole of an object being mathematically similar to part of itself Time series analysis – Sequence of data points over time Hurst exponent – A measure of the long-range dependence of a time series == References == == External links == Tutorial on how to calculate detrended fluctuation analysis Archived 2019-02-03 at the Wayback Machine in Matlab using the Neurophysiological Biomarker Toolbox. FastDFA MATLAB code for rapidly calculating the DFA scaling exponent on very large datasets. Physionet A good overview of DFA and C code to calculate it. MFDFA Python implementation of (Multifractal) Detrended Fluctuation Analysis.
|
Wikipedia:Deutsche Mathematik#0
|
Deutsche Mathematik (German Mathematics) was a mathematics journal founded in 1936 by Ludwig Bieberbach and Theodor Vahlen. Vahlen was publisher on behalf of the German Research Foundation (DFG), and Bieberbach was chief editor. Other editors were Fritz Kubach, Erich Schönhardt, Werner Weber (all volumes), Ernst August Weiß (volumes 1–6), Karl Dörge, Wilhelm Süss (volumes 1–5), Günther Schulz, Erhard Tornier (volumes 1–4), Georg Feigl, Gerhard Kowalewski (volumes 2–6), Maximilian Krafft, Willi Rinow, Max Zacharias (volumes 2–5), and Oswald Teichmüller (volumes 3–7). In February 1936, the journal was declared the official organ of the German Student Union (DSt) by its Reichsführer, and all local DSt mathematics departments were requested to subscribe and actively contribute. In the 1940s, issues appeared increasingly delayed and bunched; the journal ended with a triple issue (due December 1942) in June 1944. Deutsche Mathematik is also the name of a movement closely associated with the journal, whose aim was to promote "German mathematics" and eliminate "Jewish influence" in mathematics, similar to the Deutsche Physik movement. As well as articles on mathematics, the journal published propaganda articles giving the Nazi viewpoint on the relation between mathematics and race (though these political articles mostly disappeared after the first two volumes). As a result, many mathematics libraries outside Germany did not subscribe to it, so copies of the journal can be hard to find. This caused some problems in Teichmüller theory, as Oswald Teichmüller published several of his foundational papers in the journal. == References == == Further reading == Mehrtens, Herbert (1987), "Ludwig Bieberbach and "Deutsche Mathematik"", in Phillips, Esther R. (ed.), Studies in the history of mathematics, MAA Stud. Math., vol. 26, Washington, DC: Math. Assoc. America, pp. 195–241, ISBN 978-0-88385-128-9, MR 0913104 M. A. H. N. 
(1936), "Deutsche Mathematik", Nature, 137 (3467): 596–597, Bibcode:1936Natur.137..596M, doi:10.1038/137596a0, S2CID 37098502, Book review Segal, Sanford L. (2003), "Chapter seven: Ludwig Bieberbach and Deutsche Mathematik", Mathematicians under the Nazis, Princeton University Press, pp. 334–418, ISBN 978-0-691-00451-8, MR 1991149 The title page, the table of contents, and some article pages of the journal's volume 1, issue 2 (1936) are linked from the blog Mathematicians are human beings (scientopia.org, 19 Sep 2011).
|
Wikipedia:Devācārya#0
|
The Nimbarka Sampradaya (IAST: Nimbārka Sampradāya, Sanskrit निम्बार्क सम्प्रदाय), also known as the Kumāra Sampradāya, Hamsa Sampradāya, and Sanakādi Sampradāya (सनकादि सम्प्रदाय), is the oldest Vaiṣṇava sect. It was founded by Nimbarka, a Telugu Brahmin yogi and philosopher. It propounds the Vaishnava Bhedabheda theology of Dvaitadvaita (dvaita-advaita) or dualistic non-dualism. Dvaitadvaita states that humans are both different and non-different from Isvara, God or the Supreme Being. Specifically, this Sampradaya is a part of Krishnaism, i.e. Krishna-centric traditions. == Guru Parampara == The Nimbarka Sampradaya is also known as the Kumāra Sampradāya, Hamsa Sampradāya, and Sanakādi Sampradāya. According to tradition, the Dvaita-advaita philosophy of the Nimbarka Sampradaya was revealed by Śrī Hansa Bhagavān to Sri Sanakadi Bhagawan, one of the Four Kumaras; who passed it to Sri Narada Muni; and then on to Nimbarka. The Four Kumaras: Sanaka, Sanandana, Sanātana, and Sanat Kumāra, are traditionally regarded as the four mind-born sons of Lord Brahmā. They were created by Brahmā in order to advance creation, but chose to undertake lifelong vows of celibacy (brahmacarya), becoming renowned yogis, who requested from Brahma the boon of remaining perpetually five years old. The Śrī Sanat Kumāra Samhitā, a treatise on the worship of Śrī Rādhā Kṛṣṇa, is attributed to the brothers, as is the Śrī Sanat Kumāra Tantra, which is part of the Pancarātra literature. In the creation of this universe as narrated by the Paurāṇika literature, Śrī Nārada Muni is the younger brother of the Four Kumāras, who took initiation from his older brothers. Their discussions as guru and disciple are recorded in the Upaniṣads, with a famous conversation in the Chāndogya Upaniṣad, and in the Śrī Nārada Purāṇa and the Pañcarātra literature. Nārada Muni is recorded as a main teacher in all four of the Vaiṣṇava Sampradāyas. 
According to tradition, he initiated Śrī Nimbārkācārya into the sacred 18-syllabled Śrī Gopāla Mantra (Klim Krishnaya Govindaya Gopijanavallabhaya Svaha), and introduced him to the philosophy of the Yugala upāsana, the devotional worship of the divine couple Śrī Rādhā Kṛṣṇa. According to tradition, this was the first time that Śrī Rādhā Kṛṣṇa were worshipped together by anyone on earth other than the Gopis of Vṛndāvana. Śrī Nārada Muni then taught Nimbarka the essence of devotional service in the Śrī Nārada Bhakti Sūtras. Śrī Nimbārkācārya already knew the Vedas, Upaniṣads and the rest of the scriptures, but perfection was found in the teachings of Śrī Nārada Muni. == Nimbarka == === Dating === Nimbarka is conventionally dated at the 7th or 11th century, but this dating has been questioned, with some suggesting that Nimbarka lived somewhat earlier than Shankara, in the 6th or 7th century CE. According to Roma Bose, Nimbarka lived in the 13th century, on the presupposition that Śrī Nimbārkāchārya was the author of the work Madhvamukhamardana. Vijay Ramnarace, however, concluded that the Madhvamukhamardana has been wrongly attributed to Nimbarkacharya, a view also supported by traditional scholars. Bhandarkar placed him after Ramanuja, suggesting 1162 AD as the date of his demise. S. N. Dasgupta, on the other hand, dates Nimbārka to the mid-14th century, basing this dating on the absence of Nimbārka's mention in the Sarvadarśanasaṅgraha, a doxography by the 14th-century author Mādhava Vidyāraṇya. However, none of the Bhedābhedins, whether Bhartṛprapañca, Nimbārka, Bhāskara, or Yādavaprakāśa, are referenced in the Sarvadarśanasaṅgraha. S. A. A. Rizvi assigns a date of c. 1130–1200 AD. According to Satyanand, Bose's dating of the 13th century is an erroneous attribution. 
Malkovsky, following Satyanand, notes that in Bhandarkar's own work it is clearly stated that his dating of Nimbarka was an approximation based on an extremely flimsy calculation; yet most scholars chose to honour his suggested date, even until modern times. According to Malkovsky, Satyanand has convincingly demonstrated that Nimbarka and his immediate disciple Srinivasacharya flourished well before Ramanuja (1017–1137 CE), arguing that Srinivasacharya was a contemporary of, or lived just after, Sankaracarya (early 8th century). According to Ramnarace, summarising the available research, Nimbarka must be dated in the 7th century CE. === Traditional accounts === According to the Bhavishya Purana and his eponymous tradition, the Nimbārka Sampradāya, Śrī Nimbārkāchārya appeared in the year 3096 BCE, when the grandson of Arjuna was on the throne. According to tradition, Nimbārka was born in Vaidūryapattanam, the present-day Mungi Village, Paithan, in eastern Maharashtra. His parents were Aruṇa Ṛṣi and Jayantī Devī. Together, they migrated to Mathurā and settled at what is now known as Nimbagrāma (Neemgaon), situated between Barsānā and Govardhan. == Philosophy == === Dvaitādvaita === The Nimbarka Sampradaya follows the doctrine of Svābhāvika Bhedābheda, also known as Dvaitādvaita. The doctrine is primarily elaborated in the works of Nimbārka and Srinivasacharya, particularly Nimbarka's Vedānta Pārijāta Saurabha and Srinivasacharya's Vedānta Kaustubha, commentaries on the Brahma Sūtras. Svābhāvika Bhedābheda discerns three foundational elements of reality: Brahman, the metaphysical ultimate reality, the controller; Chit, representing the Jivātman, the sentient individual soul, the enjoyer; and Achit, the non-sentient universe, the object to be enjoyed. Svābhāvika Bhedābheda holds that the individual soul (jīva) and the non-sentient universe (jagat) are both distinct from and identical to Brahman, the ultimate reality, depending on the perspective. 
Brahman alone is svatantra tattva (independent reality), while the other two realities, whose activities and existence depend on Brahman, are regarded as paratantra tattva (dependent realities). In this approach the relation between Atman and Brahman is "svābhāvika or natural, not brought about by any external agency, and therefore it cannot be dispensed with. An adventitious relation can be finished away by removing the cause or agency which has brought it, but what is inherent or more appropriately natural cannot be taken away." Brahman pervades the entire universe and is immanent in all beings, yet they retain their individuality. The non-sentient universe is not considered an illusion (māyā), but a real manifestation of Brahman's power. The philosophy draws on metaphors like the sun and its rays, or fire and its sparks, to demonstrate the natural, inherent connection between Brahman and its manifestations. === Brahman === The school regards Brahman as the universal soul, both transcendent and immanent, referred to by various names such as Śrī Kṛṣṇa, Viṣnu, Vāsudeva, Purushottama, Nārāyaņa, Paramatman and Bhagawan. Similarly, Nimbārkācārya, in his Vedanta Kamadhenu Daśaślokī, refers to Śrī Kṛṣṇa alongside his consort Rādhā. Brahman is the supreme being, the source of all auspicious qualities, and possesses unfathomable attributes. It is omnipresent, omniscient, the lord of all, and greater than all. None can be equal or superior to Brahman. He is the creator, and the cause of the creation, maintenance and destruction of the universe. In Dvaitādvaita, Brahman is saguṇa (with qualities). The school therefore interprets scriptural passages that describe Brahman as nirguṇa (without qualities) differently, arguing that nirguṇa, when applied to Brahman, signifies the absence of inauspicious qualities rather than the complete negation of all attributes. Similarly, terms like nirākāra (formless) are understood to denote the absence of an undesirable or inauspicious form. 
It upholds the view that Śrī Kṛṣṇa possesses all auspicious attributes and that relative qualities such as virtue and vice, or auspiciousness and inauspiciousness, do not affect him. Nimbārkācārya describes the worship of the divine couple in his Daśa Ślokī (verse 5). === Jivātman (chit) === The Jivātman is different from the physical body, sense organs, mind, prāṇa and buddhi; all of these are dependent on the individual soul and serve as instruments in such actions as seeing, hearing and so on. The individual soul (Jivātman) is eternal, being of the nature of knowledge, and is a knower (it possesses the attribute of knowledge). The attribute of knowledge extends beyond the soul, occupying a larger space, just as smell occupies a larger space than the flower from which it emanates. == Practices == The basic practice consists of the worship of Sri Radha Madhav, with Sri Radha being personified as the inseparable part of Sri Krishna. The Nimbarka Sampradaya became the first Krishnaite tradition in the late medieval period. Nimbarka refers to five means to salvation, namely karma (ritual action); vidya (knowledge); upasana or dhyana (meditation); prapatti (surrender to the Lord/devotion); and gurupasatti (devotion and self-surrender to God as Shri Radha Krsna). === Karma (ritual action) === Performed conscientiously in a proper spirit, in accordance with one's varna and asrama (phase of life), karma gives rise to knowledge, which is a means to salvation. === Vidya (knowledge) === Not a subordinate factor of karma, but also not an independent means for everyone; only for those inclined to spending vast lengths of time in scriptural study and reflection on deeper meanings. === Upasana or dhyana (meditation) === It is of three types. First is meditation on the Lord as one's self, i.e. meditation on the Lord as the Inner Controller of the sentient. Second is meditation on the Lord as the Inner Controller of the non-sentient. 
The final type is meditation on the Lord Himself, as different from the sentient and non-sentient. This is again not an independent means to salvation for all, as only those qualified to perform the upasana (with Yajnopavitam) can perform this sadhana. === Śaraṇāgati === Śaraṇāgati is the complete entrusting of one's own self to the infinitely merciful Lord through the means recommended by the good, when one is convinced of one's incapacity for resorting to other sādhanas such as knowledge and the rest. In this tradition there are six constituent elements of Śaraṇāgati (total surrender) in the Vedāntaratnamañjūṣā: The resolve to treat everyone with good will and friendliness, being convinced of the great truth that everyone and everything, down to a tuft of grass, deserves respect. Discarding what is contrary to the above solemn determination, i.e. refraining from all violence, malice, backbiting, falsehood, etc. Strong faith in the protection of the Lord. Praying to the Lord for protection, being aware of the fact that the Lord, though all-merciful, does not release anyone who does not pray to Him but is, on the contrary, averse to Him. Discarding all false pride and sense of egoity, i.e. assuming an attitude of utter humility. Complete entrusting of one's own self and whatever belongs to one's self to the Lord, being convinced that only such a complete resignation of the 'I' and the 'mine' to the Lord induces the mercy and grace of the Lord. == Literature == The literature of the Nimbarka Sampradaya reflects its theological, philosophical, and devotional aspects. === Commentaries on Brahmasūtras === The Brahmasūtras of Bādarāyaṇa have been extensively interpreted and commented upon by several distinguished scholars. Among the six primary commentaries are: Vedānta Pārijāta Saurabha by Śrī Nimbārkāchārya. Vedānta Kaustubha by Śrī Śrīnivāsāchārya. Siddhānta Jahnavi by Śrī Devāchārya. Siddhānta Setukā by Śrī Sundara Bhaṭṭāchārya. 
Vedānta Kaustubha Prabhā by Śrī Keśava Kāśmīrī Bhaṭṭāchārya. Vedānta Kaustubha Prabhā Bhāvadipikā by Śrī Pandita Amolakrama Śāstrī. === Vedāntakāmadhenu Daśaślokī === A small work of Nimbārkāchārya containing ten stanzas. The Daśaślokī has been extensively commented upon by several scholars. Among them, the three primary commentaries are: Vedāntaratnamañjūṣā of Śrī Puruṣottamāchārya, Vedānta Siddhāntaratnāñjali of Śrī Harivyāsa Devāchārya, and Vedāntalaghumañjūṣā of Śrī Giridhara dāsa. == Nimbarka Sampradaya Devachāryas == === Sri Bhatta === As themes of Radha and Krishna gained popularity, Keshava Kashmiri's disciple Sribhatta, in the 15th century, amplified Nimbarka's insights and brought Radha Krishna once more into the theological forefront through the medium of Brajbhasha. A range of poets and theologians who flourished in the milieu of Vrindavana (Vallabha, Surdas, the rest of Vallabha's disciples, Svami Haridas, Chaitanya Mahaprabhu and the Six Goswamis of Vrindavana) were influenced in some manner by Sribhatta. The theological insights of this particular teacher were developed by his disciple Harivyasa, whose works reveal not only the theology of Radha Krishna, the sakhis, and the nitya nikunja lilas of Goloka Vrindavana, but also embody a fairly developed Vedantic theory propagating this unique branch of Bhedabheda philosophy, ultimately the legacy of Nimbarka's original re-envisaging of the role of Radha. === Śrī Harivyāsa Devacārya (c. 1443–1543 CE) === Harivyāsa Devācārya (c. 15th century CE) was an Indian philosopher, theologian and poet. He was born into a Gaud brahmin family. He was the 35th āchārya of the Nimbārka Sampradāya. He lived in Vrindavana. He was a disciple of Śrī Śrībhaṭṭa Devāchārya ji and his nom-de-plume was Hari Priyā. He sent his twelve main disciples on missionary work throughout India, each of whom founded his own sub-lineage, a few of which exist today. The most famous are Svāmī Paraśurāma Devācārya (c. 1525–1610 CE) and Svāmī Svabhūrāma Devācārya (fl. 
16th century). === Svāmī Svabhūrāma Devācārya (fl. 16th century CE) === Svāmī Svabhūrāma Devācārya (fl. 16th century CE) was born in Budhiya Village, outside Jagadhri and Yamunanagar near Kurukshetra in modern Haryana, India. He established over 52 temples in Punjab, Haryana and Vraja during his lifetime; his current followers are found mostly in Vṛndāvana, Haryana, Punjab, Bengal, Rajasthan, Orissa, Assam, Sikkim, Bihar, other regions of Uttar Pradesh and Maharashtra, and in significant numbers in Nepal. His sub-lineage has many branches. Notable saints of this sub-branch include: Saint Swami Chatur Chintamani Nagaji Maharaj, who started the Vraja Parikrama, a tradition that has been continuously maintained for over 528 years by the Acharyas of the Svabhurāma-Dwara (sub-lineage). Swami Brindaban Bihari Das Mahanta Maharaj at Kathia Baba ka Ashram, Shivala, Varanasi, Uttar Pradesh and Sukhchar, 24-Parganas (North), West Bengal, who has undertaken projects for orphans and aged persons, building schools and elderly care homes. He travels extensively to spread Nimbarka philosophy through world religion conferences held in the US, UK, Sweden, Africa, Bangladesh and other countries across the globe. The Sukhchar Kathiababar Ashram was originally established by Swami Dhananjaya Das Kathiababa and is presently headed by Swami Brindabanbiharidas Mahanta Maharaj. === Svāmī Haripriyā Śaraṇa Devācārya === The famous teacher and leader Svāmī Haripriyā Śaraṇa Devācārya founded the temple and monastery at Bihari Ji Ka Bageecha, Vṛndāvana, sponsored by his disciple, the philanthropic Shri Hargulal Beriwala, and the Beriwala Trust in the 19th century. === Svāmī Lalitā Śaraṇa Devācārya === The predecessor of the current successor was Svāmī Lalitā Śaraṇa Devācārya, who died in July 2005 at the age of 103. 
One of his other disciples is the world-renowned Svāmī Gopāla Śaraṇa Devācārya, who has founded the monastery and temple known as the Shri Golok Dham Ashram in New Delhi and Vṛndāvana. He has also helped ordinary Hindus who are not Vaiṣṇava to establish temples overseas. Of note are the Glasgow Hindu Mandir, Scotland, UK; the Lakshmi Narayan Hindu Mandir, Bradford, UK; and the Valley Hindu Temple, Northridge, California. He has also facilitated major festivals at the Hindu Sabha Mandir in Brampton, Canada. === Svāmī Rādhā Śarveshavara Śaraṇa Devācārya === The 48th leader of the Nimbārka Sampradāya is H.D.H. Jagadguru Nimbārkācārya Svāmī Śrī Rādhā Śarveshavara Śaraṇa Devācārya, known in reverence as Śrī Śrījī Māhārāja by his followers. His followers are mainly in Rajasthan and Vṛndāvana, Mathura. He established the Mandir at the birth site of Śrī Nimbārkācārya in Mungi Village, Paithan, Maharashtra in 2005. In addition, he oversees the maintenance of thousands of temples, hundreds of monasteries, schools, hospitals, orphanages, cow-shelters, environmental projects, memorial shrines, etc., and arranges various scholarly conventions, religious conferences, medical camps and outreach, etc. === Śrī Śrījī Māhārāja (present) === The 49th and current leader of the entire Nimbārka Sampradāya is H.D.H. Jagadguru Nimbārkācārya Svāmī Śrī Shyām Śaraṇa Devācārya, known in reverence as Śrī Śrījī Māhārāja by his followers. He is based in Nimbārka Tīrtha, Rajasthan, India. He worships the śālagrāma deity known as Śrī Sarveśvara. His followers are mainly in Rajasthan, Madhya Pradesh, Maharashtra, Gujarat, Vrindavan and Mathura. == See also == Svayam Bhagavan Vrindavan Srinivasacharya == Notes == == References == == Bibliography == Beck, Guy L. 
(2005), Beck, Guy (ed.), "Krishna as Loving Husband of God", Alternative Krishnas: Regional and Vernacular Variations on a Hindu Deity, SUNY Press, doi:10.1353/book4933, ISBN 978-0-7914-6415-1, S2CID 130088637, archived from the original on 17 July 2023, retrieved 12 April 2008 Bose, Roma (1940), Vedanta Parijata Saurabha of Nimbarka and Vedanta Kaustubha of Srinivasa (Commentaries on the Brahma-Sutras) – Doctrines of Nimbarka and his followers, vol.3, Asiatic Society of Bengal Hardy, Friedhelm E. (1987). "Kṛṣṇaism". In Mircea Eliade (ed.). The Encyclopedia of Religion. Vol. 8. New York: MacMillan. pp. 387–392. ISBN 978-0-02897-135-3. Malkovsky, B. (2001), The Role of Divine Grace in the Soteriology of Śaṁkarācārya, BRILL Agrawal, Madan Mohan (2013). Encyclopedia of Indian philosophies, Bhedābheda and Dvaitādvaita systems. Encyclopedia of Indian philosophies / general ed.: Karl H. Potter. Delhi: Motilal Banarsidass. ISBN 978-81-208-3637-2. Bhandarkar, R. G. (2014). Vaisnavism, Saivism and Minor Religious Systems (Routledge Revivals). Routledge. ISBN 978-1-317-58933-4. Ramnarace, Vijay (2014). Rādhā-Kṛṣṇa's Vedāntic Debut: Chronology & Rationalisation in the Nimbārka Sampradāya (PDF) (PhD thesis). University of Edinburgh. Archived (PDF) from the original on 9 October 2022. Retrieved 12 January 2019. Sri Sarvesvara (1972), Sri Nimbarkacarya Aur Unka Sampraday, Akhila Bharatiya Nimbarkacarya Pitha, Salemabad, Rajasthan, India Dasgupta, Surendranath (1988). A history of Indian philosophy. Delhi: Motilal Banarsidass. ISBN 978-81-208-0408-1. Prakash, Dr Ravi (2022). Religious Debates in Indian Philosophy. K.K. Publications. Hastings, James (1909). Encyclopaedia of Religion and Ethics, Vol. 2: Arthur-Bunyan. FB&C Limited. ISBN 978-0-332-41345-7. {{cite book}}: ISBN / Date incompatibility (help) Catherine, Clémentin-Ojha (1990). "La renaissance du Nimbarka Sampradaya au XVIe siècle. Contribution à l'étude d'une secte Krsnaïte". Journal Asiatique (in French). 278. 
doi:10.2143/JA.278.3.2011219. Kaviraj, Gopinath (1965). काशी की सारस्वत साधना (in Hindi). Bihāra-Rāshṭrabhāshā-Parishad. Gupta, Tripta (2000). Vedānta-Kaustubha, a study (in English and Sanskrit). Delhi: Sanjay Prakashan. ISBN 978-81-7453-043-1. Radhakrishnan, Sarvepalli (2011). The Brahma Sutra: The Philosophy Of Spiritual Life. Literary Licensing, LLC. ISBN 978-1-258-00753-9. Upadhyay, Baladeva (1978). Vaishnava Sampradayon ka Siddhanta aur Sahitya. Varanasi: Chowkhamba Amarbharati Prakashan. Ramkrishnadev Garga, Nabha das ji, Priya Das ji (2004). Bhaktamāla of Nābhādāsa, with Bhaktirasabodhinī commentary of Priyādāsa, Hindi translation and gloss by Ramkrishnadev Garga (in Sanskrit and Hindi). Vṛndāvana.{{cite book}}: CS1 maint: location missing publisher (link) CS1 maint: multiple names: authors list (link) Klostermaier, Klaus K. (2014). A Concise Encyclopedia of Hinduism. Oneworld Publications. ISBN 978-1-78074-672-2. Bose, Roma (2004). Vedānta-pārijāta-saurabha of Nimbārka and Vedānta-kaustubha of Śrīnivāsa: commentaries on the Brahma-sutras; English translation. New Delhi: Munshiram Manoharlal Publishers. ISBN 978-81-215-1121-6. == External links == Brahma Sutras (Nimbarka commentary) English translation by Roma Bose [proofread] (includes glossary) http://www.shrijagatgurunimbarkacharyapeeth.org http://internationalnimbarkasociety.org http://www.golokdham.org http://www.sriradhabhakti.org https://web.archive.org/web/20090419071328/http://nimbark.org/ http://www.kathiababa.in/nimbarka Archived 3 November 2017 at the Wayback Machine Works by or about Nimbarka Sampradaya at the Internet Archive Nimbarka at Encyclopædia Britannica Teachers and Pupils of the Nimbārka School, Surendranath Dasgupta, 1940
|
Wikipedia:Dialgebra#0
|
In abstract algebra, a dialgebra is the generalization of both an algebra and a coalgebra. The notion was originally introduced by Lambek as "subequalizers", and named dialgebras by Tatsuya Hagino. Many algebraic notions have previously been generalized to dialgebras. Dialgebras have also been used in attempts to obtain Lie algebras from associative algebras. == See also == F-algebra == References == == Further reading == dialgebra in nLab
|
Wikipedia:Diamond-square algorithm#0
|
The diamond-square algorithm is a method for generating heightmaps for computer graphics. It is a slightly better algorithm than the three-dimensional implementation of the midpoint displacement algorithm, which produces two-dimensional landscapes. It is also known as the random midpoint displacement fractal, the cloud fractal or the plasma fractal, because of the plasma effect produced when applied. The idea was first introduced by Fournier, Fussell and Carpenter at SIGGRAPH in 1982. The diamond-square algorithm starts with a two-dimensional grid, then randomly generates terrain height from four seed values arranged in a grid of points so that the entire plane is covered in squares. == Description == The diamond-square algorithm begins with a two-dimensional square array of width and height 2^n + 1. The four corner points of the array must first be set to initial values. The diamond and square steps are then performed alternately until all array values have been set. The diamond step: For each square in the array, set the midpoint of that square to be the average of the four corner points plus a random value. The square step: For each diamond in the array, set the midpoint of that diamond to be the average of the four corner points plus a random value. Each random value is multiplied by a scale constant, which decreases with each iteration by a factor of 2^(−h), where h is a value between 0.0 and 1.0 (lower values produce rougher terrain). During the square steps, points located on the edges of the array will have only three adjacent values set, rather than four. There are a number of ways to handle this complication; the simplest is to take the average of just the three adjacent values. Another option is to 'wrap around', taking the fourth value from the other side of the array. When used with consistent initial corner values, this method also allows generated fractals to be stitched together without discontinuities. 
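The description above translates almost line for line into code. The sketch below is illustrative rather than a reference implementation: the function name, the uniform noise distribution, and the random corner seeding are assumptions, and edge midpoints simply average their three in-bounds neighbours (the simplest option mentioned above).

```python
import random

def diamond_square(n, roughness=1.0, h=0.7, seed=None):
    """Generate a (2**n + 1) x (2**n + 1) heightmap with diamond-square.

    `h` in (0.0, 1.0] controls roughness: the random-offset scale shrinks
    by a factor of 2**-h at each iteration (lower h -> rougher terrain).
    """
    rng = random.Random(seed)
    size = 2 ** n + 1
    grid = [[0.0] * size for _ in range(size)]
    # seed the four corner points with initial values
    for r in (0, size - 1):
        for c in (0, size - 1):
            grid[r][c] = rng.uniform(-roughness, roughness)

    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: centre of each square = mean of its 4 corners + noise
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (grid[r - half][c - half] + grid[r - half][c + half] +
                       grid[r + half][c - half] + grid[r + half][c + half]) / 4.0
                grid[r][c] = avg + rng.uniform(-scale, scale)
        # square step: centre of each diamond = mean of in-bounds neighbours + noise
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                vals = [grid[rr][cc]
                        for rr, cc in ((r - half, c), (r + half, c),
                                       (r, c - half), (r, c + half))
                        if 0 <= rr < size and 0 <= cc < size]
                grid[r][c] = sum(vals) / len(vals) + rng.uniform(-scale, scale)
        step = half
        scale *= 2 ** (-h)   # shrink the random perturbation each pass
    return grid
```

Passing a seed makes the output reproducible, which is what makes the "consistent initial corner values" stitching trick mentioned above practical.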
== Visualization == The image below shows the steps involved in running the diamond-square algorithm on a 5 × 5 array. == Applications == This algorithm can be used to generate realistic-looking landscapes, and different implementations are used in computer graphics software such as Terragen. It is also applicable as a common component in procedural textures. == Artifacts and extensions == The diamond-square algorithm was analyzed by Gavin S. P. Miller in SIGGRAPH 1986, who described it as flawed because the algorithm produces noticeable vertical and horizontal "creases" due to the most significant perturbation taking place in a rectangular grid. The grid artifacts were addressed in a generalized algorithm introduced by J. P. Lewis. In this variant the weights on the neighboring points are obtained by solving a small linear system motivated by estimation theory, rather than being fixed. The Lewis algorithm also allows the synthesis of non-fractal heightmaps such as rolling hills or ocean waves. Similar results can be efficiently obtained with Fourier synthesis, although the possibility of adaptive refinement is lost. The diamond-square algorithm and its refinements are reviewed in Peitgen and Saupe's book "The Science of Fractal Images". == References == == External links == Simple open source heightmap module for Lua using diamond-square algorithm Generating Random Fractal Terrain: The Diamond-Square Algorithm from GameProgrammer.com Plasma Fractal from Justin Seyster's web page Plasma fractals from Patrick Hahn's home page Terrain Tutorial from Lighthouse3d.com Random Midpoint Displacement with Canvas Random midpoint displacement method Diamond And Square algorithm on Github (PHP) An example of test-driving an implementation of the algorithm on Uncle Bob's Clean Coder blog Xmountains classic sideways scrolling X11 implementation. Algorithm details. A Python implementation, short and straightforward. Handles both fixed and periodic boundary conditions.
|
Wikipedia:Didier Dubois (mathematician)#0
|
Didier Dubois (born 1952) is a French mathematician. Since 1999, he has been a co-editor-in-chief of the journal Fuzzy Sets and Systems. From 1993 to 1997 he served as vice-president and then president of the International Fuzzy Systems Association. His research interests include fuzzy set theory, possibility theory, and knowledge representation. Most of his works are co-authored with Henri Prade. == Selected bibliography == Dubois, Didier and Prade, Henri (1980). Fuzzy Sets & Systems: Theory and Applications. Academic Press (APNet). ISBN 0-122-22750-6 Dubois, Didier and Prade, Henri (1988). Possibility Theory: An Approach to Computerized Processing of Uncertainty. New York: Plenum Press. == See also == Construction of t-norms for Dubois–Prade t-norms == External links == Didier Dubois' home page at IRIT "On the use of aggregation operations in information fusion process" Didier Dubois, Henri Prade (2004) "Interval-valued Fuzzy Sets, Possibility Theory and Imprecise Probability" Didier Dubois, Henri Prade
|
Wikipedia:Diego Rodríguez (mathematician)#0
|
Diego Rodríguez (c. 1596, Atitalaquia – 1668, Mexico City) was a mathematician, astronomer, educator, and technological innovator in New Spain. He was one of the most important figures in the scientific field in the colony in the second half of the seventeenth century. == Background == In 1613 he entered the Order of the Blessed Virgin Mary of Mercy. == Scientific revolution == For thirty years Father Rodríguez maintained in his writing and teaching the separation of the exact sciences from metaphysics and theology. He tried to propound the heliocentric theory of Nicolaus Copernicus without, in his writings, openly breaking with the scholastic tradition. He wrote on the astronomical findings of Galileo Galilei, but without directly endorsing them or attacking the classical cosmology. Nevertheless, these were radical steps, and the scientific community he headed in Mexico accepted them about 30 years before their colleagues in Spain. One reason for this surprising difference is that the books of modern science originating in Protestant countries were refused entry into Spain by the censors. Booksellers, in order not to lose their investments, often sent the contraband books on to America. Because of this aspect of Rodríguez's work, he was a target of the Mexican Inquisition. Rodríguez was at the center of a small circle of intellectuals that met semi-clandestinely in private homes to discuss the new ideas. The 1640s, however, brought them to the attention of the Inquisition. A series of investigations and trials followed, continuing into the mid-1650s. A frantic hiding of books followed the Inquisition's 1647 edict imposing careful censorship on scientific works. In July 1655 the Inquisition required all six of Mexico City's booksellers to submit their book lists to the Holy Office for approval, on pain of fine and excommunication. 
Melchor Pérez de Soto, one of the group of scientific modernizers headed by Diego Rodríguez and chief architect at the cathedral, was subjected to the Inquisition. Thanks to this process, a catalog of his library, more than 1,660 volumes, has come down to us. Many of the works dealt with the modern science of contemporary Europe; many others had more traditional content. == Works == Rodríguez wrote many works, some of them truly revolutionary contributions to mathematics (like his treatise on logarithms), astronomy and engineering. He also wrote treatises on technology, such as the one dealing with the construction of precise clocks. Many of these works were developed for his own courses in the university; others were written to support his own investigations. In the latter category is the report on the prediction and exact measurement of eclipses, which is fundamental for calculation of exact geographic positions (longitude), because an eclipse permits synchronization of the local time with that in other geographic localities. This and his work on the improvement of clocks allowed him to measure the longitude of Mexico City with a precision greater than that achieved by Alexander von Humboldt a century and a half later, even with improved methods. Rodríguez's Peruvian student and correspondent, Francisco Ruiz Lozano, used the same technique to measure the position of his birthplace, Lima, Peru. == Evaluation == It is strange that the many valuable contributions of Rodríguez and his students did not make a bigger impact on the history of the colony. His methods of calculating positions were not used by Spanish navigators, who could have benefited greatly from them. Most of his writings were never published, remaining in manuscript. In New Spain it was difficult to print them, not only because of high costs but also because special type faces were unavailable, for example, for mathematical symbols. And there was no market for the published works.
For that reason some of his manuscripts were sent to Spain, but they attracted little interest there and were ignored. At his death in 1668, most of his manuscripts were buried in the library of his order; the rest were dispersed in private collections or were irretrievably lost. Rodríguez's successors in the chair of astronomy and mathematics occupied the position only briefly, and are of little note, until Carlos de Sigüenza y Góngora took over the position in 1672. == References == This article is a free translation of the article at the Spanish Wikipedia, accessed on July 13, 2007, with a little additional information.
|
Wikipedia:Dietrich Stoyan#0
|
Dietrich Stoyan (born November 26, 1940, Berlin) is a German mathematician and statistician who made contributions to queueing theory, stochastic geometry, and spatial statistics. == Education and career == Stoyan studied mathematics at the Technical University of Dresden and then did applied research at the Deutsches Brennstoffinstitut in Freiberg; he received his PhD in 1967 and his Habilitation in 1975. Since 1976 he has been at the TU Bergakademie Freiberg, and was Rektor of that university from 1991 to 1997. He became widely known for his statistical research on the diffusion of euro coins in Germany and Europe after the introduction of the euro in 2002. In 2024, together with Sung Nok Chiu, he criticized as flawed a proof, published in the journal Science, of the hypothesis that the origin of the COVID-19 pandemic was a market in Wuhan. Dietrich Stoyan is an honorary doctor of the Technical University of Dresden (2000) and the University of Jyväskylä (2004). He was a member of the Academy of Sciences of the GDR (1990), which was dissolved in 1991. At present he is a member of the Academia Europaea (since 1992), the Berlin-Brandenburg Academy of Sciences (since 2000) and the German Academy of Sciences Leopoldina (since 2002). He is also a Fellow of the Institute of Mathematical Statistics (since 1997). In 2018 he published his autobiography In two times. == Research == === Queueing Theory === Qualitative theory, in particular inequalities, for queueing systems and related stochastic models. The books D. Stoyan: Comparison Methods for Queues and other Stochastic Models (J. Wiley and Sons, Chichester, 1983) and A. Mueller and D. Stoyan: Comparison Methods for Stochastic Models and Risks (J. Wiley and Sons, Chichester, 2002) report on the results. The work goes back to 1969, when he discovered the monotonicity of the GI/G/1 waiting times with respect to the convex order. === Stochastic Geometry === Stereological formulae, applications for marked point processes, development of stochastic models.
Successful joint work with Joseph Mecke led to the first exact proof of the fundamental stereological formulae. A book entitled "Stochastic Geometry and its Applications" reports on these results. Its 3rd edition is the key reference for applied stochastic geometry. === Spatial Statistics === Statistical methods for point processes, random sets and many other random geometrical structures such as fibre processes. Results can be found in the 2013 book on stochastic geometry and in the book Fractals, Random Shapes and Point Fields by D. and H. Stoyan (J. Wiley and Sons, Chichester, 1994). A particular strength of Stoyan's is second-order methods. He is also the main author of the book Statistical analysis and modelling of spatial point patterns, which treats the statistics of point patterns with methods of point process theory. Stoyan is very active in demonstrating to non-mathematicians and non-statisticians the potential of statistical and stochastic geometrical methods. He has published many papers in journals of physics, materials science, forestry, and geology. One topic of particular interest has been random packings of hard spheres. Together with Klaus Mecke he co-organized conferences where physicists, geometers and statisticians met. See the books Mecke, Klaus R. and Stoyan, D. (eds.): Statistical Physics and Spatial Statistics. Lecture Notes in Physics 554, Springer-Verlag, 2000 and Mecke, Klaus R. and Stoyan, D. (eds.): Morphology of Condensed Matter. Lecture Notes in Physics 600, Springer-Verlag, 2002. == External links == Homepage == References ==
|
Wikipedia:Dieudonné determinant#0
|
In linear algebra, the Dieudonné determinant is a generalization of the determinant of a matrix to matrices over division rings and local rings. It was introduced by Dieudonné (1943). If K is a division ring, then the Dieudonné determinant is a group homomorphism from the group GLn(K ) of invertible n-by-n matrices over K onto the abelianization K ×/ [K ×, K ×] of the multiplicative group K × of K. For example, the Dieudonné determinant for a 2-by-2 matrix is the residue class, in K ×/ [K ×, K ×], of det ( a b c d ) = { − c b if a = 0 a d − a c a − 1 b if a ≠ 0. {\displaystyle \det \left({\begin{array}{*{20}c}a&b\\c&d\end{array}}\right)=\left\lbrace {\begin{array}{*{20}c}-cb&{\text{if }}a=0\\ad-aca^{-1}b&{\text{if }}a\neq 0.\end{array}}\right.} == Properties == Let R be a local ring. There is a determinant map from the group GL(R ) of invertible matrices to the abelianised unit group R ×ab with the following properties: The determinant is invariant under elementary row operations. The determinant of the identity matrix is 1. If a row is left multiplied by a in R × then the determinant is left multiplied by a. The determinant is multiplicative: det(AB) = det(A)det(B). If two rows are exchanged, the determinant is multiplied by −1. If R is commutative, then the determinant is invariant under transposition. == Tannaka–Artin problem == Assume that K is finite over its center F. The reduced norm gives a homomorphism Nn from GLn(K ) to F ×. We also have a homomorphism from GLn(K ) to F × obtained by composing the Dieudonné determinant from GLn(K ) to K ×/ [K ×, K ×] with the reduced norm N1 from GL1(K ) = K × to F × via the abelianization. The Tannaka–Artin problem is whether these two maps have the same kernel SLn(K ). This is true when F is locally compact but false in general.
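As an illustration (not from the source), the case formula above can be evaluated over the quaternions, the classic example of a division ring. Every commutator of nonzero quaternions has norm 1, so the norm of any representative of the Dieudonné determinant is well defined on the abelianization, and the multiplicativity property can be checked numerically on it:

```python
import random

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def qnorm2(q):
    # squared quaternion norm; multiplicative and trivial on commutators
    return sum(t * t for t in q)

def qinv(q):
    n = qnorm2(q)
    w, x, y, z = q
    return (w / n, -x / n, -y / n, -z / n)

def ddet2(A):
    # Representative in K^x of the Dieudonné determinant of the 2x2
    # quaternionic matrix [[a, b], [c, d]], using the case formula above.
    (a, b), (c, d) = A
    if qnorm2(a) == 0:
        return tuple(-t for t in qmul(c, b))
    return qsub(qmul(a, d), qmul(qmul(qmul(a, c), qinv(a)), b))

def matmul2(A, B):
    return [[qadd(qmul(A[i][0], B[0][j]), qmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

# Multiplicativity holds only modulo commutators, so compare norms:
rng = random.Random(0)
rq = lambda: tuple(rng.uniform(-1, 1) for _ in range(4))
A = [[rq(), rq()], [rq(), rq()]]
B = [[rq(), rq()], [rq(), rq()]]
lhs = qnorm2(ddet2(matmul2(A, B)))
rhs = qnorm2(ddet2(A)) * qnorm2(ddet2(B))
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Comparing the representatives themselves would fail, since the formula is only well defined up to commutators; the norm collapses exactly that ambiguity.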
== See also == Moore determinant over a division algebra == References == Dieudonné, Jean (1943), "Les déterminants sur un corps non commutatif", Bulletin de la Société Mathématique de France, 71: 27–45, doi:10.24033/bsmf.1345, ISSN 0037-9484, MR 0012273, Zbl 0028.33904 Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata Serre, Jean-Pierre (2003), Trees, Springer, p. 74, ISBN 3-540-44237-5, Zbl 1013.20001 Suprunenko, D.A. (2001) [1994], "Determinant", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Diffeology#0
|
In mathematics, a diffeology on a set generalizes the concept of a smooth atlas of a differentiable manifold, by declaring only what constitutes the "smooth parametrizations" into the set. A diffeological space is a set equipped with a diffeology. Many of the standard tools of differential geometry extend to diffeological spaces, which beyond manifolds include arbitrary quotients of manifolds, arbitrary subsets of manifolds, and spaces of mappings between manifolds. == Introduction == === Calculus on "smooth spaces" === The differential calculus on R n {\displaystyle \mathbb {R} ^{n}} , or, more generally, on finite dimensional vector spaces, is one of the most impactful successes of modern mathematics. Fundamental to its basic definitions and theorems is the linear structure of the underlying space. The field of differential geometry establishes and studies the extension of the classical differential calculus to non-linear spaces. This extension is made possible by the definition of a smooth manifold, which is also the starting point for diffeological spaces. A smooth n {\displaystyle n} -dimensional manifold is a set M {\displaystyle M} equipped with a maximal smooth atlas, which consists of injective functions, called charts, of the form ϕ : U → M {\displaystyle \phi :U\to M} , where U {\displaystyle U} is an open subset of R n {\displaystyle \mathbb {R} ^{n}} , satisfying some mutual-compatibility relations. The charts of a manifold perform two distinct functions, which are often syncretized: They dictate the local structure of the manifold. The chart ϕ : U → M {\displaystyle \phi :U\to M} identifies its image in M {\displaystyle M} with its domain U {\displaystyle U} . This is convenient because the latter is simply an open subset of a Euclidean space. They define the class of smooth maps between manifolds. These are the maps to which the differential calculus extends. 
In particular, the charts determine smooth functions (smooth maps M → R {\displaystyle M\to \mathbb {R} } ), smooth curves (smooth maps R → M {\displaystyle \mathbb {R} \to M} ), smooth homotopies (smooth maps R 2 → M {\displaystyle \mathbb {R} ^{2}\to M} ), etc. A diffeology generalizes the structure of a smooth manifold by abandoning the first requirement for an atlas, namely that the charts give a local model of the space, while retaining the ability to discuss smooth maps into the space. === Informal definition === A diffeological space is a set X {\displaystyle X} equipped with a diffeology: a collection of maps { p : U → X ∣ U is an open subset of R n , and n ≥ 0 } , {\displaystyle \{p:U\to X\mid U{\text{ is an open subset of }}\mathbb {R} ^{n},{\text{ and }}n\geq 0\},} whose members are called plots, that satisfies some axioms. The plots are not required to be injective, and can (indeed, must) have as domains the open subsets of arbitrary Euclidean spaces. A smooth manifold can be viewed as a diffeological space which is locally diffeomorphic to R n {\displaystyle \mathbb {R} ^{n}} . In general, while not giving local models for the space, the axioms of a diffeology still ensure that the plots induce a coherent notion of smooth functions, smooth curves, smooth homotopies, etc. Diffeology is therefore suitable to treat objects more general than manifolds. === Motivating example === Let M {\displaystyle M} and N {\displaystyle N} be smooth manifolds. A smooth homotopy of maps M → N {\displaystyle M\to N} is a smooth map H : R × M → N {\displaystyle H:\mathbb {R} \times M\to N} . For each t ∈ R {\displaystyle t\in \mathbb {R} } , the map H t := H ( t , ⋅ ) : M → N {\displaystyle H_{t}:=H(t,\cdot ):M\to N} is smooth, and the intuition behind a smooth homotopy is that it is a smooth curve into the space of smooth functions C ∞ ( M , N ) {\displaystyle {\mathcal {C}}^{\infty }(M,N)} connecting, say, H 0 {\displaystyle H_{0}} and H 1 {\displaystyle H_{1}} . 
But C ∞ ( M , N ) {\displaystyle {\mathcal {C}}^{\infty }(M,N)} is not a finite-dimensional smooth manifold, so formally we cannot yet speak of smooth curves into it. On the other hand, the collection of maps { p : U → C ∞ ( M , N ) ∣ the map U × M → N , ( r , x ) ↦ p ( r ) ( x ) is smooth } {\displaystyle \{p:U\to {\mathcal {C}}^{\infty }(M,N)\mid {\text{ the map }}U\times M\to N,\ (r,x)\mapsto p(r)(x){\text{ is smooth}}\}} is a diffeology on C ∞ ( M , N ) {\displaystyle {\mathcal {C}}^{\infty }(M,N)} . With this structure, the smooth curves (a notion which is now rigorously defined) correspond precisely to the smooth homotopies. === History === The concept of diffeology was first introduced by Jean-Marie Souriau in the 1980s under the name espace différentiel. Souriau's motivating application for diffeology was to uniformly handle the infinite-dimensional groups arising from his work in geometric quantization. Thus the notion of diffeological group preceded the more general concept of a diffeological space. Souriau's diffeological program was taken up by his students, particularly Paul Donato and Patrick Iglesias-Zemmour, who completed early pioneering work in the field. A structure similar to diffeology was introduced by Kuo-Tsaï Chen (陳國才, Chen Guocai) in the 1970s, in order to formalize certain computations with path integrals. Chen's definition used convex sets instead of open sets for the domains of the plots. The similarity between diffeological and "Chen" structures can be made precise by viewing both as concrete sheaves over the appropriate concrete site. == Formal definition == A diffeology on a set X {\displaystyle X} consists of a collection of maps, called plots or parametrizations, from open subsets of R n {\displaystyle \mathbb {R} ^{n}} (for all n ≥ 0 {\displaystyle n\geq 0} ) to X {\displaystyle X} such that the following axioms hold: Covering axiom: every constant map is a plot. 
Locality axiom: for a given map p : U → X {\displaystyle p:U\to X} , if every point in U {\displaystyle U} has a neighborhood V ⊂ U {\displaystyle V\subset U} such that p | V {\displaystyle p|_{V}} is a plot, then p {\displaystyle p} itself is a plot. Smooth compatibility axiom: if p {\displaystyle p} is a plot, and F {\displaystyle F} is a smooth function from an open subset of some R m {\displaystyle \mathbb {R} ^{m}} into the domain of p {\displaystyle p} , then the composite p ∘ F {\displaystyle p\circ F} is a plot. Note that the domains of different plots can be subsets of R n {\displaystyle \mathbb {R} ^{n}} for different values of n {\displaystyle n} ; in particular, any diffeology contains the elements of its underlying set as the plots with n = 0 {\displaystyle n=0} . A set together with a diffeology is called a diffeological space. More abstractly, a diffeological space is a concrete sheaf on the site of open subsets of R n {\displaystyle \mathbb {R} ^{n}} , for all n ≥ 0 {\displaystyle n\geq 0} , and open covers. === Morphisms === A map between diffeological spaces is called smooth if and only if its composite with any plot of the first space is a plot of the second space. It is called a diffeomorphism if it is smooth, bijective, and its inverse is also smooth. Equipping the open subsets of Euclidean spaces with their standard diffeology (as defined in the next section), the plots into a diffeological space X {\displaystyle X} are precisely the smooth maps from U {\displaystyle U} to X {\displaystyle X} . Diffeological spaces constitute the objects of a category, denoted by D f l g {\displaystyle {\mathsf {Dflg}}} , whose morphisms are smooth maps. The category D f l g {\displaystyle {\mathsf {Dflg}}} is closed under many categorical operations: for instance, it is Cartesian closed, complete and cocomplete, and more generally it is a quasitopos. 
=== D-topology === Any diffeological space is a topological space when equipped with the D-topology: the final topology such that all plots are continuous (with respect to the Euclidean topology on R n {\displaystyle \mathbb {R} ^{n}} ). In other words, a subset U ⊂ X {\displaystyle U\subset X} is open if and only if p − 1 ( U ) {\displaystyle p^{-1}(U)} is open for any plot p {\displaystyle p} on X {\displaystyle X} . Actually, the D-topology is completely determined by smooth curves, i.e. a subset U ⊂ X {\displaystyle U\subset X} is open if and only if c − 1 ( U ) {\displaystyle c^{-1}(U)} is open for any smooth map c : R → X {\displaystyle c:\mathbb {R} \to X} . The D-topology is automatically locally path-connected. A smooth map between diffeological spaces is automatically continuous between their D-topologies. Therefore we have the functor D : D f l g → T o p {\displaystyle D:{\mathsf {Dflg}}\to {\mathsf {Top}}} , from the category of diffeological spaces to the category of topological spaces, which assigns to a diffeological space its D-topology. This functor realizes D f l g {\displaystyle {\mathsf {Dflg}}} as a concrete category over T o p {\displaystyle {\mathsf {Top}}} . === Additional structures === A Cartan-De Rham calculus can be developed in the framework of diffeologies, as well as a suitable adaptation of the notions of fiber bundles, homotopy, etc. However, there is not a canonical definition of tangent spaces and tangent bundles for diffeological spaces. == Examples == === First examples === Any set carries at least two diffeologies: the coarse (or trivial, or indiscrete) diffeology, consisting of every map into the set. This is the largest possible diffeology. The corresponding D-topology is the trivial topology. the discrete (or fine) diffeology, consisting of the locally constant maps into the set. This is the smallest possible diffeology. The corresponding D-topology is the discrete topology.
Any topological space can be endowed with the continuous diffeology, whose plots are the continuous maps. The Euclidean space R n {\displaystyle \mathbb {R} ^{n}} admits several diffeologies beyond those listed above. The standard diffeology on R n {\displaystyle \mathbb {R} ^{n}} consists of those maps p : U → R n {\displaystyle p:U\to \mathbb {R} ^{n}} which are smooth in the usual sense of multivariable calculus. The wire (or spaghetti) diffeology on R n {\displaystyle \mathbb {R} ^{n}} is the diffeology whose plots factor locally through R {\displaystyle \mathbb {R} } . More precisely, a map p : U → R n {\displaystyle p:U\to \mathbb {R} ^{n}} is a plot if and only if for every u ∈ U {\displaystyle u\in U} there is an open neighbourhood V ⊆ U {\displaystyle V\subseteq U} of u {\displaystyle u} such that p | V = q ∘ F {\displaystyle p|_{V}=q\circ F} for two smooth functions F : V → R {\displaystyle F:V\to \mathbb {R} } and q : R → R n {\displaystyle q:\mathbb {R} \to \mathbb {R} ^{n}} . This diffeology does not coincide with the standard diffeology on R n {\displaystyle \mathbb {R} ^{n}} when n ≥ 2 {\displaystyle n\geq 2} : for instance, the identity R n → X = R n {\textstyle \mathbb {R} ^{n}\to X=\mathbb {R} ^{n}} is not a plot for the wire diffeology. The previous example can be enlarged to diffeologies whose plots factor locally through R r {\displaystyle \mathbb {R} ^{r}} , yielding the rank- r {\displaystyle r} -restricted diffeology on a smooth manifold M {\displaystyle M} : a map U → M {\displaystyle U\to M} is a plot if and only if it is smooth and the rank of its differential is less than or equal to r {\displaystyle r} . For r = 1 {\displaystyle r=1} one recovers the wire diffeology. === Relation to other smooth spaces === Diffeological spaces generalize manifolds, but they are far from the only mathematical objects to do so.
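The failure of the identity to be a plot for the wire diffeology comes down to rank: any map factoring locally through R {\displaystyle \mathbb {R} } has a differential of rank at most 1, while the identity on the plane has rank 2 everywhere. A numerical sketch (the finite-difference helpers and the sample maps are illustrative, not from the source):

```python
import numpy as np

def jacobian(f, u, eps=1e-6):
    # forward-difference Jacobian of f: R^m -> R^n at the point u
    u = np.asarray(u, dtype=float)
    f0 = np.asarray(f(u), dtype=float)
    J = np.empty((f0.size, u.size))
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = eps
        J[:, i] = (np.asarray(f(u + du), dtype=float) - f0) / eps
    return J

def max_rank(f, dim, samples=20, tol=1e-4, seed=0):
    # largest numerical rank of the differential over a few random points
    rng = np.random.default_rng(seed)
    return max(np.linalg.matrix_rank(jacobian(f, rng.normal(size=dim)), tol=tol)
               for _ in range(samples))

# the identity on R^2 has rank 2: it cannot be a plot for the wire diffeology
identity = lambda u: u

def wire_map(u):
    # factors through R via s = u[0] + u[1], so its rank is at most 1
    s = u[0] + u[1]
    return np.array([np.sin(s), s * s])

assert max_rank(identity, 2) == 2
assert max_rank(wire_map, 2) == 1
```

The same rank test, with `tol` replaced by an exact symbolic computation, is how one checks membership in the rank-r-restricted diffeology in general.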
For instance manifolds with corners, orbifolds, and infinite-dimensional Fréchet manifolds are all well-established alternatives. This subsection makes precise the extent to which these spaces are diffeological. We view D f l g {\displaystyle {\mathsf {Dflg}}} as a concrete category over the category of topological spaces T o p {\displaystyle {\mathsf {Top}}} via the D-topology functor D : D f l g → T o p {\displaystyle D:{\mathsf {Dflg}}\to {\mathsf {Top}}} . If U : C → T o p {\displaystyle U:{\mathsf {C}}\to {\mathsf {Top}}} is another concrete category over T o p {\displaystyle {\mathsf {Top}}} , we say that a functor E : C → D f l g {\displaystyle E:{\mathsf {C}}\to {\mathsf {Dflg}}} is an embedding (of concrete categories) if it is injective on objects and faithful, and D ∘ E = U {\displaystyle D\circ E=U} . To specify an embedding, we need only describe it on objects; it is necessarily the identity map on arrows. We will say that a diffeological space X {\displaystyle X} is locally modeled by a collection of diffeological spaces E {\displaystyle {\mathcal {E}}} if around every point x ∈ X {\displaystyle x\in X} , there is a D-open neighbourhood U {\displaystyle U} , a D-open subset V {\displaystyle V} of some E ∈ E {\displaystyle E\in {\mathcal {E}}} , and a diffeological diffeomorphism U → V {\displaystyle U\to V} . ==== Manifolds ==== The category of finite-dimensional smooth manifolds (allowing those with connected components of different dimensions) fully embeds into D f l g {\displaystyle {\mathsf {Dflg}}} . The embedding y {\displaystyle y} assigns to a smooth manifold M {\displaystyle M} the canonical diffeology { p : U → M ∣ p is smooth in the usual sense } . {\displaystyle \{p:U\to M\mid p{\text{ is smooth in the usual sense}}\}.} In particular, a diffeologically smooth map between manifolds is smooth in the usual sense, and the D-topology of y ( M ) {\displaystyle y(M)} is the original topology of M {\displaystyle M} . 
The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection { y ( R n ) } {\displaystyle \{y(\mathbb {R} ^{n})\}} , and whose D-topology is Hausdorff and second-countable. ==== Manifolds with boundary or corners ==== The category of finite-dimensional smooth manifolds with boundary (allowing those with connected components of different dimensions) similarly fully embeds into D f l g {\displaystyle {\mathsf {Dflg}}} . The embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between manifolds with boundary. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection { y ( O ) ∣ O is a half-space } {\displaystyle \{y(O)\mid O{\text{ is a half-space}}\}} , and whose D-topology is Hausdorff and second-countable. The same can be done in more generality for manifolds with corners, using the collection { y ( O ) ∣ O is an orthant } {\displaystyle \{y(O)\mid O{\text{ is an orthant}}\}} . ==== Fréchet and Banach manifolds ==== The category of Fréchet manifolds similarly fully embeds into D f l g {\displaystyle {\mathsf {Dflg}}} . Once again, the embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between Fréchet spaces. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection { y ( E ) ∣ E is a Fréchet space } {\displaystyle \{y(E)\mid E{\text{ is a Fréchet space}}\}} , and whose D-topology is Hausdorff. The embedding restricts to one of the category of Banach manifolds. Historically, the case of Banach manifolds was proved first, by Hain, and the case of Fréchet manifolds was treated later, by Losik. 
The category of manifolds modeled on convenient vector spaces also similarly embeds into D f l g {\displaystyle {\mathsf {Dflg}}} . ==== Orbifolds ==== A (classical) orbifold X {\displaystyle X} is a space that is locally modeled by quotients of the form R n / Γ {\displaystyle \mathbb {R} ^{n}/\Gamma } , where Γ {\displaystyle \Gamma } is a finite subgroup of linear transformations. On the other hand, each model R n / Γ {\displaystyle \mathbb {R} ^{n}/\Gamma } is naturally a diffeological space (with the quotient diffeology discussed below), and therefore the orbifold charts generate a diffeology on X {\displaystyle X} . This diffeology is uniquely determined by the orbifold structure of X {\displaystyle X} . Conversely, a diffeological space that is locally modeled by the collection { R n / Γ } {\displaystyle \{\mathbb {R} ^{n}/\Gamma \}} (and with Hausdorff D-topology) carries a classical orbifold structure that induces the original diffeology, wherein the local diffeomorphisms are the orbifold charts. Such a space is called a diffeological orbifold. Whereas diffeological orbifolds automatically have a notion of smooth map between them (namely diffeologically smooth maps in D f l g {\displaystyle {\mathsf {Dflg}}} ), the notion of a smooth map between classical orbifolds is not standardized. If orbifolds are viewed as differentiable stacks presented by étale proper Lie groupoids, then there is a functor from the underlying 1-category of orbifolds, and equivalent maps-of-stacks between them, to D f l g {\displaystyle {\mathsf {Dflg}}} . Its essential image consists of diffeological orbifolds, but the functor is neither faithful nor full. == Constructions == === Intersections === If a set X {\displaystyle X} is given two different diffeologies, their intersection is a diffeology on X {\displaystyle X} , called the intersection diffeology, which is finer than both starting diffeologies. 
The D-topology of the intersection diffeology is finer than the intersection of the D-topologies of the original diffeologies. === Products === If X {\displaystyle X} and Y {\displaystyle Y} are diffeological spaces, then the product diffeology on the Cartesian product X × Y {\displaystyle X\times Y} is the diffeology generated by all products of plots of X {\displaystyle X} and of Y {\displaystyle Y} . Precisely, a map p : U → X × Y {\displaystyle p:U\to X\times Y} necessarily has the form p ( u ) = ( x ( u ) , y ( u ) ) {\displaystyle p(u)=(x(u),y(u))} for maps x : U → X {\displaystyle x:U\to X} and y : U → Y {\displaystyle y:U\to Y} . The map p {\displaystyle p} is a plot in the product diffeology if and only if x {\displaystyle x} and y {\displaystyle y} are plots of X {\displaystyle X} and Y {\displaystyle Y} , respectively. This generalizes to products of arbitrary collections of spaces. The D-topology of X × Y {\displaystyle X\times Y} is the coarsest delta-generated topology containing the product topology of the D-topologies of X {\displaystyle X} and Y {\displaystyle Y} ; it is equal to the product topology when X {\displaystyle X} or Y {\displaystyle Y} is locally compact, but may be finer in general. === Pullbacks === Given a map f : X → Y {\displaystyle f:X\to Y} from a set X {\displaystyle X} to a diffeological space Y {\displaystyle Y} , the pullback diffeology on X {\displaystyle X} consists of those maps p : U → X {\displaystyle p:U\to X} such that the composition f ∘ p {\displaystyle f\circ p} is a plot of Y {\displaystyle Y} . In other words, the pullback diffeology is the smallest diffeology on X {\displaystyle X} making f {\displaystyle f} smooth. If X {\displaystyle X} is a subset of the diffeological space Y {\displaystyle Y} , then the subspace diffeology on X {\displaystyle X} is the pullback diffeology induced by the inclusion X ↪ Y {\displaystyle X\hookrightarrow Y} . 
In this case, the D-topology of X {\displaystyle X} is equal to the subspace topology of the D-topology of Y {\displaystyle Y} if X {\displaystyle X} is D-open in Y {\displaystyle Y} , but may be finer in general. === Pushforwards === Given a map f : X → Y {\displaystyle f:X\to Y} from a diffeological space X {\displaystyle X} to a set Y {\displaystyle Y} , the pushforward diffeology on Y {\displaystyle Y} is the diffeology generated by the compositions f ∘ p {\displaystyle f\circ p} , for plots p : U → X {\displaystyle p:U\to X} of X {\displaystyle X} . In other words, the pushforward diffeology is the smallest diffeology on Y {\displaystyle Y} making f {\displaystyle f} smooth. If X {\displaystyle X} is a diffeological space and ∼ {\displaystyle \sim } is an equivalence relation on X {\displaystyle X} , then the quotient diffeology on the quotient set X / ∼ {\displaystyle X/{\sim }} is the pushforward diffeology induced by the quotient map X → X / ∼ {\displaystyle X\to X/{\sim }} . The D-topology on X / ∼ {\displaystyle X/{\sim }} is the quotient topology of the D-topology of X {\displaystyle X} . Note that this topology may be trivial without the diffeology being trivial. Quotients often give rise to non-manifold diffeologies. For example, the set of real numbers R {\displaystyle \mathbb {R} } is a smooth manifold. The quotient R / ( Z + α Z ) {\displaystyle \mathbb {R} /(\mathbb {Z} +\alpha \mathbb {Z} )} , for some irrational α {\displaystyle \alpha } , called the irrational torus, is a diffeological space diffeomorphic to the quotient of the regular 2-torus R 2 / Z 2 {\displaystyle \mathbb {R} ^{2}/\mathbb {Z} ^{2}} by a line of slope α {\displaystyle \alpha } . It has a non-trivial diffeology, although its D-topology is the trivial topology.
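The triviality of the D-topology of the irrational torus reflects the density of the subgroup Z + αZ in R: the fractional parts of the multiples of α fill out the unit interval, so no nonempty proper subset of the quotient has an open preimage. A quick numerical illustration (not from the source), measuring the largest gap left by the points kα mod 1:

```python
import math

def orbit_gaps(alpha, n):
    # largest gap on the circle left by the fractional parts of
    # 0, alpha, 2*alpha, ..., (n-1)*alpha -- the orbit of 0 under
    # the irrational rotation underlying the irrational torus
    pts = sorted((k * alpha) % 1.0 for k in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1.0 - pts[-1] + pts[0])  # wrap-around gap
    return max(gaps)

alpha = math.sqrt(2)          # any irrational slope works
g_coarse = orbit_gaps(alpha, 10)
g_fine = orbit_gaps(alpha, 10_000)
# the orbit equidistributes: the largest gap shrinks toward zero
assert g_fine < g_coarse
assert g_fine < 0.01
```

For rational α the orbit is finite and the gaps stop shrinking, which is consistent with the quotient then being an ordinary circle rather than an irrational torus.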
=== Functional diffeologies === The functional diffeology on the set C ∞ ( X , Y ) {\displaystyle {\mathcal {C}}^{\infty }(X,Y)} of smooth maps between two diffeological spaces X {\displaystyle X} and Y {\displaystyle Y} is the diffeology whose plots are the maps ϕ : U → C ∞ ( X , Y ) {\displaystyle \phi :U\to {\mathcal {C}}^{\infty }(X,Y)} such that U × X → Y , ( u , x ) ↦ ϕ ( u ) ( x ) {\displaystyle U\times X\to Y,\quad (u,x)\mapsto \phi (u)(x)} is smooth with respect to the product diffeology of U × X {\displaystyle U\times X} . When X {\displaystyle X} and Y {\displaystyle Y} are manifolds, the D-topology of C ∞ ( X , Y ) {\displaystyle {\mathcal {C}}^{\infty }(X,Y)} is the smallest locally path-connected topology containing the Whitney C ∞ {\displaystyle C^{\infty }} topology. Taking the subspace diffeology of a functional diffeology, one can define diffeologies on the space of sections of a fibre bundle, or the space of bisections of a Lie groupoid, etc. If M {\displaystyle M} is a compact smooth manifold, and F → M {\displaystyle F\to M} is a smooth fiber bundle over M {\displaystyle M} , then the space of smooth sections Γ ( F ) {\displaystyle \Gamma (F)} of the bundle is frequently equipped with the structure of a Fréchet manifold. Upon embedding this Fréchet manifold into the category of diffeological spaces, the resulting diffeology coincides with the subspace diffeology that Γ ( F ) {\displaystyle \Gamma (F)} inherits from the functional diffeology on C ∞ ( M , F ) {\displaystyle {\mathcal {C}}^{\infty }(M,F)} . == Distinguished maps between diffeological spaces == Analogous to the notions of submersions and immersions between manifolds, there are two special classes of morphisms between diffeological spaces. A subduction is a surjective function f : X → Y {\displaystyle f:X\to Y} between diffeological spaces such that the diffeology of Y {\displaystyle Y} is the pushforward of the diffeology of X {\displaystyle X} . 
Similarly, an induction is an injective function f : X → Y {\displaystyle f:X\to Y} between diffeological spaces such that the diffeology of X {\displaystyle X} is the pullback of the diffeology of Y {\displaystyle Y} . Subductions and inductions are automatically smooth. It is instructive to consider the case where X {\displaystyle X} and Y {\displaystyle Y} are smooth manifolds. Every surjective submersion f : X → Y {\displaystyle f:X\to Y} is a subduction. A subduction need not be a surjective submersion. One example is f : R 2 → R , f ( x , y ) := x y . {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ,\quad f(x,y):=xy.} An injective immersion need not be an induction. One example is the parametrization of the "figure-eight," f : ( − π 2 , 3 π 2 ) → R 2 , f ( t ) := ( 2 cos ( t ) , sin ( 2 t ) ) . {\displaystyle f:\left(-{\frac {\pi }{2}},{\frac {3\pi }{2}}\right)\to \mathbb {R^{2}} ,\quad f(t):=(2\cos(t),\sin(2t)).} An induction need not be an injective immersion. One example is the "semi-cubic," f : R → R 2 , f ( t ) := ( t 2 , t 3 ) . {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{2},\quad f(t):=(t^{2},t^{3}).} In the category of diffeological spaces, subductions are precisely the strong epimorphisms, and inductions are precisely the strong monomorphisms. A map that is both a subduction and induction is a diffeomorphism. == References == == External links == Patrick Iglesias-Zemmour: Diffeology (many documents) diffeology.net Global hub on diffeology and related topics
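The two counterexamples above can be probed directly; a sketch (not from the source) checking that the gradient of f(x, y) = xy vanishes at the origin while f remains surjective, and that the figure-eight folds distant parameter values onto nearby image points, which is why its image cannot carry the interval's diffeology:

```python
import math

# A subduction that is not a submersion: f(x, y) = x*y
def f(x, y):
    return x * y

def grad_f(x, y):
    # exact gradient of f: (df/dx, df/dy) = (y, x)
    return (y, x)

assert grad_f(0, 0) == (0, 0)                 # differential vanishes at the origin
assert all(f(t, 1) == t for t in (-3.5, 0.0, 2.0, 7.25))  # yet f is onto R

# An injective immersion that is not an induction: the figure-eight
def eight(t):
    return (2 * math.cos(t), math.sin(2 * t))

# parameters near an end of the open interval land arbitrarily close to
# eight(pi/2) = (0, 0), even though they are far from pi/2 in the interval
p_mid = eight(math.pi / 2)
p_end = eight(-math.pi / 2 + 1e-4)
assert math.dist(p_mid, p_end) < 1e-3
```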
|
Wikipedia:Diffeomorphometry#0
|
Diffeomorphometry is the metric study of imagery, shape and form in the discipline of computational anatomy (CA) in medical imaging. The study of images in computational anatomy relies on high-dimensional diffeomorphism groups φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} which generate orbits of the form I ≐ { φ ⋅ I ∣ φ ∈ Diff V } {\displaystyle {\mathcal {I}}\doteq \{\varphi \cdot I\mid \varphi \in \operatorname {Diff} _{V}\}} , in which images I ∈ I {\displaystyle I\in {\mathcal {I}}} can be dense scalar magnetic resonance or computed axial tomography images. For deformable shapes these are the collection of manifolds M ≐ { φ ⋅ M ∣ φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot M\mid \varphi \in \operatorname {Diff} _{V}\}} : points, curves and surfaces. The diffeomorphisms move the images and shapes through the orbit according to ( φ , I ) ↦ φ ⋅ I {\displaystyle (\varphi ,I)\mapsto \varphi \cdot I} , which are defined as the group actions of computational anatomy. The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces have been an area of significant investigation. In Computational anatomy, the diffeomorphometry metric measures how close or far two shapes or images are from each other. Informally, the metric is constructed by defining a flow of diffeomorphisms ϕ ˙ t , t ∈ [ 0 , 1 ] , ϕ t ∈ Diff V {\displaystyle {\dot {\phi }}_{t},t\in [0,1],\phi _{t}\in \operatorname {Diff} _{V}} which connects the group elements one to another, so that for φ , ψ ∈ Diff V {\displaystyle \varphi ,\psi \in \operatorname {Diff} _{V}} , ϕ 0 = φ , ϕ 1 = ψ {\displaystyle \phi _{0}=\varphi ,\phi _{1}=\psi } . The metric between two coordinate systems or diffeomorphisms is then the shortest length or geodesic flow connecting them.
The metric on the space associated to the geodesics is given by ρ ( φ , ψ ) = inf ϕ : ϕ 0 = φ , ϕ 1 = ψ ∫ 0 1 ‖ ϕ ˙ t ‖ ϕ t d t {\displaystyle \rho (\varphi ,\psi )=\inf _{\phi :\phi _{0}=\varphi ,\phi _{1}=\psi }\int _{0}^{1}\|{\dot {\phi }}_{t}\|_{\phi _{t}}\,dt} . The metrics on the orbits I , M {\displaystyle {\mathcal {I}},{\mathcal {M}}} are inherited from the metric induced on the diffeomorphism group. The group φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} is thus made into a smooth Riemannian manifold with Riemannian metric ‖ ⋅ ‖ φ {\displaystyle \|\cdot \|_{\varphi }} associated to the tangent spaces at all φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} . The Riemannian metric has the property that at every point of the manifold ϕ ∈ Diff V {\displaystyle \phi \in \operatorname {Diff} _{V}} there is an inner product inducing the norm on the tangent space ‖ ϕ ˙ t ‖ ϕ t {\displaystyle \|{\dot {\phi }}_{t}\|_{\phi _{t}}} that varies smoothly across Diff V {\displaystyle \operatorname {Diff} _{V}} . Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images do not form a vector space. In the Riemannian orbit model of Computational anatomy, diffeomorphisms acting on the forms φ ⋅ I ∈ I , φ ∈ Diff V , M ∈ M {\displaystyle \varphi \cdot I\in {\mathcal {I}},\varphi \in \operatorname {Diff} _{V},M\in {\mathcal {M}}} do not act linearly. There are many ways to define metrics; for the sets associated to shapes, the Hausdorff metric is another. The method used here to induce the Riemannian metric is to define the metric on the orbit of shapes in terms of the metric length between diffeomorphic coordinate system transformations of the flows. Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry.
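The path-length formula for ρ can be discretized by a Riemann sum. The sketch below is an illustrative assumption, not the article's method: it uses a path of translations on ℝ³, whose Eulerian velocity is the constant field a, and takes the ordinary Euclidean norm as a stand-in for the V-norm, recovering the expected length ‖a‖.

```python
import math

# Discretized Riemannian path length for the family of translations
# phi_t(x) = x + t*a on R^3.  The Eulerian velocity
# v_t = (d/dt phi_t) o phi_t^{-1} equals the constant field a, so the
# geodesic length int_0^1 ||v_t|| dt should come out to ||a||.
a = (1.0, 2.0, 2.0)  # ||a|| = 3

def velocity_norm(t):
    # v_t = a for every t along this path of translations
    return math.sqrt(sum(c * c for c in a))

N = 1000  # midpoint Riemann sum over [0, 1]
length = sum(velocity_norm((i + 0.5) / N) for i in range(N)) / N
assert abs(length - 3.0) < 1e-9
```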
== The diffeomorphisms group generated as Lagrangian and Eulerian flows == The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields, φ t , t ∈ [ 0 , 1 ] {\displaystyle \varphi _{t},t\in [0,1]} , generated via the ordinary differential equation with the Eulerian vector fields v ≐ ( v 1 , v 2 , v 3 ) {\displaystyle v\doteq (v_{1},v_{2},v_{3})} in R 3 {\displaystyle {\mathbb {R} }^{3}} for v t = φ ˙ t ∘ φ t − 1 , t ∈ [ 0 , 1 ] {\displaystyle v_{t}={\dot {\varphi }}_{t}\circ \varphi _{t}^{-1},t\in [0,1]} . The inverse for the flow is given by d d t φ t − 1 = − ( D φ t − 1 ) v t , φ 0 − 1 = id , {\displaystyle {\frac {d}{dt}}\varphi _{t}^{-1}=-(D\varphi _{t}^{-1})v_{t},\ \varphi _{0}^{-1}=\operatorname {id} ,} and the 3 × 3 {\displaystyle 3\times 3} Jacobian matrix for flows in R 3 {\displaystyle \mathbb {R} ^{3}} given as D φ ≐ ( ∂ φ i ∂ x j ) . {\displaystyle \ D\varphi \doteq \left({\frac {\partial \varphi _{i}}{\partial x_{j}}}\right).} To ensure smooth flows of diffeomorphisms with inverse, the vector fields on R 3 {\displaystyle {\mathbb {R} }^{3}} must be at least 1-time continuously differentiable in space; they are modelled as elements of the Hilbert space ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} using the Sobolev embedding theorems, so that each element v i ∈ H 0 3 , i = 1 , 2 , 3 , {\displaystyle v_{i}\in H_{0}^{3},i=1,2,3,} has 3-square-integrable derivatives, which implies that ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} embeds smoothly in the 1-time continuously differentiable functions. The diffeomorphism group consists of flows with vector fields absolutely integrable in the Sobolev norm. == The Riemannian orbit model == Shapes in Computational Anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems.
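The Lagrangian flow equation encoded by v_t = φ̇_t ∘ φ_t⁻¹ can be integrated numerically. The following is a minimal one-dimensional sketch, assuming the stationary vector field v(x) = x (so that φ_t(x) = eᵗx exactly) and forward-Euler time stepping; it also checks that integrating the negated field recovers the inverse flow.

```python
import math

def flow(x0, field, T=1.0, steps=20000):
    """Forward-Euler integration of d/dt phi_t(x0) = field(phi_t(x0))."""
    x, dt = x0, T / steps
    for _ in range(steps):
        x += dt * field(x)
    return x

def v(x):
    # stationary velocity field v(x) = x, exact flow phi_t(x) = e^t * x
    return x

x0 = 0.7
phi1 = flow(x0, v)                       # numerical phi_1(x0)
assert abs(phi1 - x0 * math.e) < 1e-3    # matches e * x0

# The inverse flow is generated by the negated field: composing the two
# flows recovers the identity, phi_1^{-1}(phi_1(x0)) = x0.
back = flow(phi1, lambda x: -v(x))
assert abs(back - x0) < 1e-3
```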
In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template I t e m p {\displaystyle I_{temp}} , so that the observed images are elements of the random orbit model of CA. For images these are defined as I ∈ I ≐ { I = I t e m p ∘ φ , φ ∈ Diff V } {\displaystyle I\in {\mathcal {I}}\doteq \{I=I_{temp}\circ \varphi ,\varphi \in \operatorname {Diff} _{V}\}} , and for charts representing sub-manifolds as M ≐ { φ ⋅ M t e m p : φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot M_{temp}:\varphi \in \operatorname {Diff} _{V}\}} . === The Riemannian metric === The orbits of shapes and forms in Computational Anatomy are generated by the group action I ≐ { φ ⋅ I : φ ∈ Diff V } {\displaystyle {\mathcal {I}}\doteq \{\varphi \cdot I:\varphi \in \operatorname {Diff} _{V}\}} , M ≐ { φ ⋅ M : φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot M:\varphi \in \operatorname {Diff} _{V}\}} . These are made into Riemannian orbits by introducing a metric associated to each point and its tangent space. For this, a metric is defined on the group which induces the metric on the orbit. Take as the metric for Computational anatomy, at each point φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} of the group of diffeomorphisms, the norm on the tangent space ‖ φ ˙ ‖ φ ≐ ‖ φ ˙ ∘ φ − 1 ‖ V = ‖ v ‖ V , {\displaystyle \|{\dot {\varphi }}\|_{\varphi }\doteq \|{\dot {\varphi }}\circ \varphi ^{-1}\|_{V}=\|v\|_{V},} with the vector fields modelled as elements of the Hilbert space ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} . We model V {\displaystyle V} as a reproducing kernel Hilbert space (RKHS) defined by a one-to-one differential operator A : V → V ∗ {\displaystyle A:V\rightarrow V^{*}} , where V ∗ {\displaystyle V^{*}} is the dual space.
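For vector fields spanned by kernel sections, v = Σᵢ K(·, xᵢ)pᵢ, the RKHS norm reduces to a finite quadratic form in the momenta. The sketch below uses a scalar Gaussian kernel as an illustrative choice of K = A⁻¹; the article does not fix a particular kernel.

```python
import math

def K(x, y, sigma=1.0):
    # scalar Gaussian reproducing kernel (an assumed, illustrative choice)
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

xs = [0.0, 1.0]    # control points carrying the momenta
ps = [1.0, -1.0]   # scalar momenta, for a 1-D sketch

# ||v||_V^2 = sum_{i,j} p_i K(x_i, x_j) p_j for v = sum_i K(., x_i) p_i
norm_sq = sum(ps[i] * K(xs[i], xs[j]) * ps[j]
              for i in range(len(xs)) for j in range(len(xs)))

# Here ||v||^2 = 2 - 2*K(0,1) = 2 - 2*exp(-1/2) > 0.
assert abs(norm_sq - (2 - 2 * math.exp(-0.5))) < 1e-12
assert norm_sq > 0  # positive-definiteness of the kernel
```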
In general, σ ≐ A v ∈ V ∗ {\displaystyle \sigma \doteq Av\in V^{*}} is a generalized function or distribution; the linear form associated to the inner-product and norm for generalized functions is interpreted by integration by parts: for v , w ∈ V {\displaystyle v,w\in V} , ⟨ v , w ⟩ V ≐ ∫ X A v ⋅ w d x , ‖ v ‖ V 2 ≐ ∫ X A v ⋅ v d x , v , w ∈ V . {\displaystyle \langle v,w\rangle _{V}\doteq \int _{X}Av\cdot w\,dx,\ \|v\|_{V}^{2}\doteq \int _{X}Av\cdot v\,dx,\ v,w\in V\ .} When A v ≐ μ d x {\displaystyle Av\doteq \mu \,dx} , a vector density, ∫ A v ⋅ v d x ≐ ∫ μ ⋅ v d x = ∑ i = 1 3 μ i v i d x . {\displaystyle \int Av\cdot v\,dx\doteq \int \mu \cdot v\,dx=\sum _{i=1}^{3}\mu _{i}v_{i}\,dx.} The differential operator is selected so that the Green's kernel associated to the inverse is sufficiently smooth that the vector fields support 1 continuous derivative. The Sobolev embedding theorem arguments above demonstrate that 1 continuous derivative is required for smooth flows. The Green's operator generated from the Green's function (scalar case) associated to the differential operator smooths. For a proper choice of A {\displaystyle A} , ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} is an RKHS with the operator K = A − 1 : V ∗ → V {\displaystyle K=A^{-1}:V^{*}\rightarrow V} . The Green's kernel associated to the differential operator smooths since, for an operator controlling enough derivatives in the square-integrable sense, the kernel k ( ⋅ , ⋅ ) {\displaystyle k(\cdot ,\cdot )} is continuously differentiable in both variables, implying K A v ( x ) i ≐ ∑ j ∫ R 3 k i j ( x , y ) A v j ( y ) d y ∈ V .
{\displaystyle KAv(x)_{i}\doteq \sum _{j}\int _{{\mathbb {R} }^{3}}k_{ij}(x,y)Av_{j}(y)\,dy\in V\ .} == The diffeomorphometry of the space of shapes and forms == === The right-invariant metric on diffeomorphisms === The metric on the group of diffeomorphisms is defined by the distance on pairs of elements in the group of diffeomorphisms, given by the geodesic length above. This distance provides a right-invariant metric of diffeomorphometry, invariant to reparameterization of space, since for all ϕ ∈ Diff V {\displaystyle \phi \in \operatorname {Diff} _{V}} , d Diff V ( ψ , φ ) = d Diff V ( ψ ∘ ϕ , φ ∘ ϕ ) . {\displaystyle d_{\operatorname {Diff} _{V}}(\psi ,\varphi )=d_{\operatorname {Diff} _{V}}(\psi \circ \phi ,\varphi \circ \phi ).} === The metric on shapes and forms === The distance on images, d I : I × I → R + {\displaystyle d_{\mathcal {I}}:{\mathcal {I}}\times {\mathcal {I}}\rightarrow \mathbb {R} ^{+}} , and the distance on shapes and forms, d M : M × M → R + {\displaystyle d_{\mathcal {M}}:{\mathcal {M}}\times {\mathcal {M}}\rightarrow \mathbb {R} ^{+}} , are induced from the metric on the diffeomorphism group. == The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit == For calculating the metric, the geodesics are a dynamical system: the flow of coordinates t ↦ ϕ t ∈ Diff V {\displaystyle t\mapsto \phi _{t}\in \operatorname {Diff} _{V}} and the control, the vector field t ↦ v t ∈ V {\displaystyle t\mapsto v_{t}\in V} , related via ϕ ˙ t = v t ⋅ ϕ t , ϕ 0 = id . {\displaystyle {\dot {\phi }}_{t}=v_{t}\cdot \phi _{t},\phi _{0}=\operatorname {id} .} The Hamiltonian view reparameterizes the momentum distribution A v ∈ V ∗ {\displaystyle Av\in V^{*}} in terms of the Hamiltonian momentum, a Lagrange multiplier p : ϕ ˙ ↦ ( p ∣ ϕ ˙ ) {\displaystyle p:{\dot {\phi }}\mapsto (p\mid {\dot {\phi }})} constraining the Lagrangian velocity ϕ ˙ t = v t ∘ ϕ t {\displaystyle {\dot {\phi }}_{t}=v_{t}\circ \phi _{t}} , according to: H ( ϕ t , p t , v t ) = ∫ X p t ⋅ ( v t ∘ ϕ t ) d x − 1 2 ∫ X A v t ⋅ v t d x .
{\displaystyle H(\phi _{t},p_{t},v_{t})=\int _{X}p_{t}\cdot (v_{t}\circ \phi _{t})\,dx-{\frac {1}{2}}\int _{X}Av_{t}\cdot v_{t}\,dx.} The Pontryagin maximum principle gives the Hamiltonian H ( ϕ t , p t ) ≐ max v H ( ϕ t , p t , v ) . {\displaystyle H(\phi _{t},p_{t})\doteq \max _{v}H(\phi _{t},p_{t},v)\ .} The optimizing vector field v t ≐ argmax v H ( ϕ t , p t , v ) {\displaystyle v_{t}\doteq \operatorname {argmax} _{v}H(\phi _{t},p_{t},v)} with dynamics ϕ ˙ t = ∂ H ( ϕ t , p t ) ∂ p , p ˙ t = − ∂ H ( ϕ t , p t ) ∂ ϕ {\displaystyle {\dot {\phi }}_{t}={\frac {\partial H(\phi _{t},p_{t})}{\partial p}},{\dot {p}}_{t}=-{\frac {\partial H(\phi _{t},p_{t})}{\partial \phi }}} . Along the geodesic the Hamiltonian is constant: H ( ϕ t , p t ) = H ( id , p 0 ) = 1 2 ∫ X p 0 ⋅ v 0 d x {\displaystyle H(\phi _{t},p_{t})=H(\operatorname {id} ,p_{0})={\frac {1}{2}}\int _{X}p_{0}\cdot v_{0}\,dx} . The metric distance between coordinate systems connected via the geodesic determined by the induced distance between identity and group element: d D i f f V ( id , φ ) = ‖ v 0 ‖ V = 2 H ( id , p 0 ) {\displaystyle d_{\mathrm {Diff} _{V}}(\operatorname {id} ,\varphi )=\|v_{0}\|_{V}={\sqrt {2H(\operatorname {id} ,p_{0})}}} === Landmark or pointset geodesics === For landmarks, x i , i = 1 , … , n {\displaystyle x_{i},i=1,\dots ,n} , the Hamiltonian momentum p ( i ) , i = 1 , … , n {\displaystyle p(i),i=1,\dots ,n} with Hamiltonian dynamics taking the form H ( ϕ t , p t ) = 1 2 ∑ j ∑ i p t ( i ) ⋅ K ( ϕ t ( x i ) , ϕ t ( x j ) ) p t ( j ) {\displaystyle H(\phi _{t},p_{t})={\frac {1}{2}}\textstyle \sum _{j}\sum _{i}\displaystyle p_{t}(i)\cdot K(\phi _{t}(x_{i}),\phi _{t}(x_{j}))p_{t}(j)} with { v t = ∑ i K ( ⋅ , ϕ t ( x i ) ) p t ( i ) , p ˙ t ( i ) = − ( D v t ) | ϕ t ( x i ) T p t ( i ) , i = 1 , 2 , … , n {\displaystyle {\begin{cases}v_{t}=\textstyle \sum _{i}\displaystyle K(\cdot ,\phi _{t}(x_{i}))p_{t}(i),\ \\{\dot {p}}_{t}(i)=-(Dv_{t})_{|_{\phi _{t}(x_{i})}}^{T}p_{t}(i),i=1,2,\dots 
,n\\\end{cases}}} The metric between landmarks is d 2 = ∑ i p 0 ( i ) ⋅ ∑ j K ( x i , x j ) p 0 ( j ) . {\displaystyle d^{2}=\textstyle \sum _{i}p_{0}(i)\cdot \sum _{j}\displaystyle K(x_{i},x_{j})p_{0}(j).} The dynamics associated to these geodesics is shown in the accompanying figure. === Surface geodesics === For surfaces, the Hamiltonian momentum is defined across the surface, with Hamiltonian H ( ϕ t , p t ) = 1 2 ∫ U ∫ U p t ( u ) ⋅ K ( ϕ t ( m ( u ) ) , ϕ t ( m ( v ) ) ) p t ( v ) d u d v {\displaystyle H(\phi _{t},p_{t})={\frac {1}{2}}\int _{U}\int _{U}p_{t}(u)\cdot K(\phi _{t}(m(u)),\phi _{t}(m(v)))p_{t}(v)\,du\,dv} and dynamics { v t = ∫ U K ( ⋅ , ϕ t ( m ( u ) ) ) p t ( u ) d u , p ˙ t ( u ) = − ( D v t ) | ϕ t ( m ( u ) ) T p t ( u ) , u ∈ U {\displaystyle {\begin{cases}v_{t}=\textstyle \int _{U}\displaystyle K(\cdot ,\phi _{t}(m(u)))p_{t}(u)\,du\ ,\\{\dot {p}}_{t}(u)=-(Dv_{t})_{|_{\phi _{t}(m(u))}}^{T}p_{t}(u),u\in U\end{cases}}} The metric between surface coordinates is d 2 = ( p 0 ∣ v 0 ) = ∫ U p 0 ( u ) ⋅ ∫ U K ( m ( u ) , m ( u ′ ) ) p 0 ( u ′ ) d u d u ′ {\displaystyle d^{2}=(p_{0}\mid v_{0})=\int _{U}p_{0}(u)\cdot \int _{U}K(m(u),m(u^{\prime }))p_{0}(u^{\prime })\,du\,du^{\prime }} === Volume geodesics === For volumes, the Hamiltonian is H ( ϕ t , p t ) = 1 2 ∫ R 3 ∫ R 3 p t ( x ) ⋅ K ( ϕ t ( x ) , ϕ t ( y ) ) p t ( y ) d x d y {\displaystyle H(\phi _{t},p_{t})={\frac {1}{2}}\int _{{\mathbb {R} }^{3}}\int _{{\mathbb {R} }^{3}}p_{t}(x)\cdot K(\phi _{t}(x),\phi _{t}(y))p_{t}(y)\,dx\,dy\displaystyle } with dynamics { v t = ∫ X K ( ⋅ , ϕ t ( x ) ) p t ( x ) d x , p ˙ t ( x ) = − ( D v t ) | ϕ t ( x ) T p t ( x ) , x ∈ R 3 {\displaystyle {\begin{cases}v_{t}=\textstyle \int _{X}\displaystyle K(\cdot ,\phi _{t}(x))p_{t}(x)\,dx\ ,\\{\dot {p}}_{t}(x)=-(Dv_{t})_{|_{\phi _{t}(x)}}^{T}p_{t}(x),x\in {\mathbb {R} }^{3}\end{cases}}} The metric between volumes is d 2 = ( p 0 ∣ v 0 ) = ∫ R 3 p 0 ( x ) ⋅ ∫ R 3 K ( x , y ) p 0 ( y ) d y d x .
{\displaystyle \displaystyle d^{2}=(p_{0}\mid v_{0})=\int _{\mathbb {R} ^{3}}p_{0}(x)\cdot \int _{{\mathbb {R} }^{3}}K(x,y)p_{0}(y)\,dy\,dx.} == Software for diffeomorphic mapping == Software suites containing a variety of diffeomorphic mapping algorithms include the following: Deformetrica ANTS DARTEL Voxel-based morphometry (VBM) DEMONS LDDMM StationaryLDDMM === Cloud software === MRICloud == References ==
|
Wikipedia:Difference polynomials#0
|
In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases. == Definition == The general difference polynomial sequence is given by p n ( z ) = z n ( z − β n − 1 n − 1 ) {\displaystyle p_{n}(z)={\frac {z}{n}}{{z-\beta n-1} \choose {n-1}}} where ( z n ) {\displaystyle {z \choose n}} is the binomial coefficient. For β = 0 {\displaystyle \beta =0} , the generated polynomials p n ( z ) {\displaystyle p_{n}(z)} are the Newton polynomials p n ( z ) = ( z n ) = z ( z − 1 ) ⋯ ( z − n + 1 ) n ! . {\displaystyle p_{n}(z)={z \choose n}={\frac {z(z-1)\cdots (z-n+1)}{n!}}.} The case of β = 1 {\displaystyle \beta =1} generates Selberg's polynomials, and the case of β = − 1 / 2 {\displaystyle \beta =-1/2} generates Stirling's interpolation polynomials. == Moving differences == Given an analytic function f ( z ) {\displaystyle f(z)} , define the moving difference of f as L n ( f ) = Δ n f ( β n ) {\displaystyle {\mathcal {L}}_{n}(f)=\Delta ^{n}f(\beta n)} where Δ {\displaystyle \Delta } is the forward difference operator. Then, provided that f obeys certain summability conditions, it may be represented in terms of these polynomials as f ( z ) = ∑ n = 0 ∞ p n ( z ) L n ( f ) . {\displaystyle f(z)=\sum _{n=0}^{\infty }p_{n}(z){\mathcal {L}}_{n}(f).} The conditions for summability (that is, convergence) of this sequence are a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck. == Generating function == The generating function for the general difference polynomials is given by e z t = ∑ n = 0 ∞ p n ( z ) [ ( e t − 1 ) e β t ] n .
{\displaystyle e^{zt}=\sum _{n=0}^{\infty }p_{n}(z)\left[\left(e^{t}-1\right)e^{\beta t}\right]^{n}.} This generating function can be brought into the form of the generalized Appell representation K ( z , w ) = A ( w ) Ψ ( z g ( w ) ) = ∑ n = 0 ∞ p n ( z ) w n {\displaystyle K(z,w)=A(w)\Psi (zg(w))=\sum _{n=0}^{\infty }p_{n}(z)w^{n}} by setting A ( w ) = 1 {\displaystyle A(w)=1} , Ψ ( x ) = e x {\displaystyle \Psi (x)=e^{x}} , g ( w ) = t {\displaystyle g(w)=t} and w = ( e t − 1 ) e β t {\displaystyle w=(e^{t}-1)e^{\beta t}} . == See also == Carlson's theorem Bernoulli polynomials of the second kind == References == Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263.
|
Wikipedia:Differentiable vector-valued functions from Euclidean space#0
|
In the mathematical discipline of functional analysis, a differentiable vector-valued function from Euclidean space is a differentiable function valued in a topological vector space (TVS) whose domain is a subset of some finite-dimensional Euclidean space. It is possible to generalize the notion of derivative to functions whose domain and codomain are subsets of arbitrary topological vector spaces (TVSs) in multiple ways. But when the domain of a TVS-valued function is a subset of a finite-dimensional Euclidean space, many of these notions become logically equivalent, resulting in a much more limited number of generalizations of the derivative; additionally, differentiability is better behaved compared to the general case. This article presents the theory of k {\displaystyle k} -times continuously differentiable functions on an open subset Ω {\displaystyle \Omega } of Euclidean space R n {\displaystyle \mathbb {R} ^{n}} ( 1 ≤ n < ∞ {\displaystyle 1\leq n<\infty } ), which is an important special case of differentiation between arbitrary TVSs. This importance stems partially from the fact that every finite-dimensional vector subspace of a Hausdorff topological vector space is TVS isomorphic to Euclidean space R n {\displaystyle \mathbb {R} ^{n}} so that, for example, this special case can be applied to any function whose domain is an arbitrary Hausdorff TVS by restricting it to finite-dimensional vector subspaces. All vector spaces will be assumed to be over the field F , {\displaystyle \mathbb {F} ,} where F {\displaystyle \mathbb {F} } is either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .} == Continuously differentiable vector-valued functions == A map f , {\displaystyle f,} which may also be denoted by f ( 0 ) , {\displaystyle f^{(0)},} between two topological spaces is said to be 0 {\displaystyle 0} -times continuously differentiable or C 0 {\displaystyle C^{0}} if it is continuous.
A topological embedding may also be called a C 0 {\displaystyle C^{0}} -embedding. === Curves === Differentiable curves are an important special case of differentiable vector-valued (i.e. TVS-valued) functions which, in particular, are used in the definition of the Gateaux derivative. They are fundamental to the analysis of maps between two arbitrary topological vector spaces X → Y {\displaystyle X\to Y} and so also to the analysis of TVS-valued maps from Euclidean spaces, which is the focus of this article. A continuous map f : I → X {\displaystyle f:I\to X} from a subset I ⊆ R {\displaystyle I\subseteq \mathbb {R} } that is valued in a topological vector space X {\displaystyle X} is said to be (once or 1 {\displaystyle 1} -time) differentiable if for all t ∈ I , {\displaystyle t\in I,} it is differentiable at t , {\displaystyle t,} which by definition means the following limit in X {\displaystyle X} exists: f ′ ( t ) := f ( 1 ) ( t ) := lim t ≠ r ∈ I r → t f ( r ) − f ( t ) r − t = lim t ≠ t + h ∈ I h → 0 f ( t + h ) − f ( t ) h {\displaystyle f^{\prime }(t):=f^{(1)}(t):=\lim _{\stackrel {r\to t}{t\neq r\in I}}{\frac {f(r)-f(t)}{r-t}}=\lim _{\stackrel {h\to 0}{t\neq t+h\in I}}{\frac {f(t+h)-f(t)}{h}}} where in order for this limit to even be well-defined, t {\displaystyle t} must be an accumulation point of I . {\displaystyle I.} If f : I → X {\displaystyle f:I\to X} is differentiable then it is said to be continuously differentiable or C 1 {\displaystyle C^{1}} if its derivative, which is the induced map f ′ = f ( 1 ) : I → X , {\displaystyle f^{\prime }=f^{(1)}:I\to X,} is continuous. 
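When X is finite-dimensional, the limit defining f′(t) can be approximated componentwise by difference quotients. A small numerical sketch for the curve f(t) = (cos t, sin t) in X = ℝ²:

```python
import math

def f(t):
    # a smooth curve f : R -> R^2
    return (math.cos(t), math.sin(t))

# Componentwise difference quotient (f(t + h) - f(t)) / h approximating
# the limit that defines f'(t) in the finite-dimensional TVS R^2.
t, h = 0.3, 1e-6
quotient = tuple((a - b) / h for a, b in zip(f(t + h), f(t)))
exact = (-math.sin(t), math.cos(t))
assert all(abs(q - e) < 1e-5 for q, e in zip(quotient, exact))
```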
Using induction on 1 < k ∈ N , {\displaystyle 1<k\in \mathbb {N} ,} the map f : I → X {\displaystyle f:I\to X} is k {\displaystyle k} -times continuously differentiable or C k {\displaystyle C^{k}} if its k − 1 th {\displaystyle k-1^{\text{th}}} derivative f ( k − 1 ) : I → X {\displaystyle f^{(k-1)}:I\to X} is continuously differentiable, in which case the k th {\displaystyle k^{\text{th}}} -derivative of f {\displaystyle f} is the map f ( k ) := ( f ( k − 1 ) ) ′ : I → X . {\displaystyle f^{(k)}:=\left(f^{(k-1)}\right)^{\prime }:I\to X.} It is called smooth, C ∞ , {\displaystyle C^{\infty },} or infinitely differentiable if it is k {\displaystyle k} -times continuously differentiable for every integer k ∈ N . {\displaystyle k\in \mathbb {N} .} For k ∈ N , {\displaystyle k\in \mathbb {N} ,} it is called k {\displaystyle k} -times differentiable if it is k − 1 {\displaystyle k-1} -times continuous differentiable and f ( k − 1 ) : I → X {\displaystyle f^{(k-1)}:I\to X} is differentiable. A continuous function f : I → X {\displaystyle f:I\to X} from a non-empty and non-degenerate interval I ⊆ R {\displaystyle I\subseteq \mathbb {R} } into a topological space X {\displaystyle X} is called a curve or a C 0 {\displaystyle C^{0}} curve in X . {\displaystyle X.} A path in X {\displaystyle X} is a curve in X {\displaystyle X} whose domain is compact while an arc or C0-arc in X {\displaystyle X} is a path in X {\displaystyle X} that is also a topological embedding. 
For any k ∈ { 1 , 2 , … , ∞ } , {\displaystyle k\in \{1,2,\ldots ,\infty \},} a curve f : I → X {\displaystyle f:I\to X} valued in a topological vector space X {\displaystyle X} is called a C k {\displaystyle C^{k}} -embedding if it is a topological embedding and a C k {\displaystyle C^{k}} curve such that f ′ ( t ) ≠ 0 {\displaystyle f^{\prime }(t)\neq 0} for every t ∈ I , {\displaystyle t\in I,} where it is called a C k {\displaystyle C^{k}} -arc if it is also a path (or equivalently, also a C 0 {\displaystyle C^{0}} -arc) in addition to being a C k {\displaystyle C^{k}} -embedding. === Differentiability on Euclidean space === The definitions given above for curves are now extended from functions defined on subsets of R {\displaystyle \mathbb {R} } to functions defined on open subsets of finite-dimensional Euclidean spaces. Throughout, let Ω {\displaystyle \Omega } be an open subset of R n , {\displaystyle \mathbb {R} ^{n},} where n ≥ 1 {\displaystyle n\geq 1} is an integer. Suppose t = ( t 1 , … , t n ) ∈ Ω {\displaystyle t=\left(t_{1},\ldots ,t_{n}\right)\in \Omega } and f : domain f → Y {\displaystyle f:\operatorname {domain} f\to Y} is a function such that t ∈ domain f {\displaystyle t\in \operatorname {domain} f} with t {\displaystyle t} an accumulation point of domain f . {\displaystyle \operatorname {domain} f.} Then f {\displaystyle f} is differentiable at t {\displaystyle t} if there exist n {\displaystyle n} vectors e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} in Y , {\displaystyle Y,} called the partial derivatives of f {\displaystyle f} at t {\displaystyle t} , such that lim t ≠ p ∈ domain f p → t f ( p ) − f ( t ) − ∑ i = 1 n ( p i − t i ) e i ‖ p − t ‖ 2 = 0 in Y {\displaystyle \lim _{\stackrel {p\to t}{t\neq p\in \operatorname {domain} f}}{\frac {f(p)-f(t)-\sum _{i=1}^{n}\left(p_{i}-t_{i}\right)e_{i}}{\|p-t\|_{2}}}=0{\text{ in }}Y} where p = ( p 1 , … , p n ) .
{\displaystyle p=\left(p_{1},\ldots ,p_{n}\right).} If f {\displaystyle f} is differentiable at a point then it is continuous at that point. If f {\displaystyle f} is differentiable at every point in some subset S {\displaystyle S} of its domain then f {\displaystyle f} is said to be (once or 1 {\displaystyle 1} -time) differentiable in S {\displaystyle S} , where if the subset S {\displaystyle S} is not mentioned then this means that it is differentiable at every point in its domain. If f {\displaystyle f} is differentiable and if each of its partial derivatives is a continuous function then f {\displaystyle f} is said to be (once or 1 {\displaystyle 1} -time) continuously differentiable or C 1 . {\displaystyle C^{1}.} For k ∈ N , {\displaystyle k\in \mathbb {N} ,} having defined what it means for a function f {\displaystyle f} to be C k {\displaystyle C^{k}} (or k {\displaystyle k} times continuously differentiable), say that f {\displaystyle f} is k + 1 {\displaystyle k+1} times continuously differentiable or that f {\displaystyle f} is C k + 1 {\displaystyle C^{k+1}} if f {\displaystyle f} is continuously differentiable and each of its partial derivatives is C k . {\displaystyle C^{k}.} Say that f {\displaystyle f} is C ∞ , {\displaystyle C^{\infty },} smooth, C ∞ , {\displaystyle C^{\infty },} or infinitely differentiable if f {\displaystyle f} is C k {\displaystyle C^{k}} for all k = 0 , 1 , … . {\displaystyle k=0,1,\ldots .} The support of a function f {\displaystyle f} is the closure (taken in its domain domain f {\displaystyle \operatorname {domain} f} ) of the set { x ∈ domain f : f ( x ) ≠ 0 } . {\displaystyle \{x\in \operatorname {domain} f:f(x)\neq 0\}.} == Spaces of Ck vector-valued functions == In this section, the space of smooth test functions and its canonical LF-topology are generalized to functions valued in general complete Hausdorff locally convex topological vector spaces (TVSs). 
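The defining limit of differentiability can be tested numerically: the remainder after subtracting the linear part built from the partial derivatives must be o(‖p − t‖₂). A sketch for a hypothetical map f : ℝ² → Y = ℝ³, chosen only for illustration:

```python
import math

def f(t1, t2):
    # an illustrative Y = R^3 valued map
    return (t1 * t1, t1 * t2, t2)

t = (1.0, 2.0)
e1 = (2.0, 2.0, 0.0)  # partial derivative vector in the first coordinate
e2 = (0.0, 1.0, 1.0)  # partial derivative vector in the second coordinate

def remainder_ratio(h1, h2):
    """|| f(p) - f(t) - sum_i (p_i - t_i) e_i || / ||p - t||_2."""
    p = (t[0] + h1, t[1] + h2)
    lin = tuple(h1 * a + h2 * b for a, b in zip(e1, e2))
    rem = tuple(fp - ft - l for fp, ft, l in zip(f(*p), f(*t), lin))
    return math.sqrt(sum(r * r for r in rem)) / math.hypot(h1, h2)

# The ratio shrinks linearly with ||p - t||, hence tends to 0:
assert remainder_ratio(1e-3, 1e-3) < 1e-2
assert remainder_ratio(1e-6, 1e-6) < 1e-5
```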
After this task is completed, it is revealed that the topological vector space C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} that was constructed could (up to TVS-isomorphism) have instead been defined simply as the completed injective tensor product C k ( Ω ) ⊗ ^ ϵ Y {\displaystyle C^{k}(\Omega ){\widehat {\otimes }}_{\epsilon }Y} of the usual space of smooth test functions C k ( Ω ) {\displaystyle C^{k}(\Omega )} with Y . {\displaystyle Y.} Throughout, let Y {\displaystyle Y} be a Hausdorff topological vector space (TVS), let k ∈ { 0 , 1 , … , ∞ } , {\displaystyle k\in \{0,1,\ldots ,\infty \},} and let Ω {\displaystyle \Omega } be either: an open subset of R n , {\displaystyle \mathbb {R} ^{n},} where n ≥ 1 {\displaystyle n\geq 1} is an integer, or else a locally compact topological space, in which case k {\displaystyle k} can only be 0. {\displaystyle 0.} === Space of Ck functions === For any k = 0 , 1 , … , ∞ , {\displaystyle k=0,1,\ldots ,\infty ,} let C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} denote the vector space of all C k {\displaystyle C^{k}} Y {\displaystyle Y} -valued maps defined on Ω {\displaystyle \Omega } and let C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} denote the vector subspace of C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} consisting of all maps in C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} that have compact support. Let C k ( Ω ) {\displaystyle C^{k}(\Omega )} denote C k ( Ω ; F ) {\displaystyle C^{k}(\Omega ;\mathbb {F} )} and C c k ( Ω ) {\displaystyle C_{c}^{k}(\Omega )} denote C c k ( Ω ; F ) . {\displaystyle C_{c}^{k}(\Omega ;\mathbb {F} ).} Give C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} the topology of uniform convergence of the functions together with their derivatives of order < k + 1 {\displaystyle <k+1} on the compact subsets of Ω . 
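For scalar-valued functions (Y = ℝ with the absolute value as its seminorm), the seminorms generating this topology of uniform convergence take the concrete form of a supremum, over a compact set, of the summed absolute derivatives up to a fixed order. A numerical sketch, using a fine grid maximum as a stand-in for the supremum:

```python
import math

# Seminorm mu(f) = sup over the compact set [0, pi] of |f| + |f'|,
# i.e. the l = 1 case for f = sin with Y = R.
def f(x):  return math.sin(x)
def df(x): return math.cos(x)

grid = [math.pi * i / 10000 for i in range(10001)]
mu = max(abs(f(x)) + abs(df(x)) for x in grid)

# sup_{[0, pi]} (|sin| + |cos|) = sqrt(2), attained at pi/4 and 3*pi/4.
assert abs(mu - math.sqrt(2)) < 1e-6
```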
{\displaystyle \Omega .} Suppose Ω 1 ⊆ Ω 2 ⊆ ⋯ {\displaystyle \Omega _{1}\subseteq \Omega _{2}\subseteq \cdots } is a sequence of relatively compact open subsets of Ω {\displaystyle \Omega } whose union is Ω {\displaystyle \Omega } and that satisfy Ω i ¯ ⊆ Ω i + 1 {\displaystyle {\overline {\Omega _{i}}}\subseteq \Omega _{i+1}} for all i . {\displaystyle i.} Suppose that ( U α ) α ∈ A {\displaystyle \left(U_{\alpha }\right)_{\alpha \in A}} is a basis of neighborhoods of the origin in Y . {\displaystyle Y.} Then for any integer ℓ < k + 1 , {\displaystyle \ell <k+1,} the sets: U i , ℓ , α := { f ∈ C k ( Ω ; Y ) : ( ∂ / ∂ p ) q f ( p ) ∈ U α for all p ∈ Ω i and all q ∈ N n , | q | ≤ ℓ } {\displaystyle {\mathcal {U}}_{i,\ell ,\alpha }:=\left\{f\in C^{k}(\Omega ;Y):\left(\partial /\partial p\right)^{q}f(p)\in U_{\alpha }{\text{ for all }}p\in \Omega _{i}{\text{ and all }}q\in \mathbb {N} ^{n},|q|\leq \ell \right\}} form a basis of neighborhoods of the origin for C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} as i , {\displaystyle i,} ℓ , {\displaystyle \ell ,} and α ∈ A {\displaystyle \alpha \in A} vary in all possible ways. If Ω {\displaystyle \Omega } is a countable union of compact subsets and Y {\displaystyle Y} is a Fréchet space, then so is C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} Note that U i , l , α {\displaystyle {\mathcal {U}}_{i,l,\alpha }} is convex whenever U α {\displaystyle U_{\alpha }} is convex. If Y {\displaystyle Y} is metrizable (resp. complete, locally convex, Hausdorff) then so is C k ( Ω ; Y ) .
{\displaystyle C^{k}(\Omega ;Y).} If ( p α ) α ∈ A {\displaystyle (p_{\alpha })_{\alpha \in A}} is a basis of continuous seminorms for Y {\displaystyle Y} then a basis of continuous seminorms on C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} is: μ i , l , α ( f ) := sup y ∈ Ω i ( ∑ | q | ≤ l p α ( ( ∂ / ∂ p ) q f ( p ) ) ) {\displaystyle \mu _{i,l,\alpha }(f):=\sup _{y\in \Omega _{i}}\left(\sum _{|q|\leq l}p_{\alpha }\left(\left(\partial /\partial p\right)^{q}f(p)\right)\right)} as i , {\displaystyle i,} ℓ , {\displaystyle \ell ,} and α ∈ A {\displaystyle \alpha \in A} vary in all possible ways. === Space of Ck functions with support in a compact subset === The definition of the topology of the space of test functions is now duplicated and generalized. For any compact subset K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} denote the set of all f {\displaystyle f} in C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} whose support lies in K {\displaystyle K} (in particular, if f ∈ C k ( K ; Y ) {\displaystyle f\in C^{k}(K;Y)} then the domain of f {\displaystyle f} is Ω {\displaystyle \Omega } rather than K {\displaystyle K} ) and give it the subspace topology induced by C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} If K {\displaystyle K} is a compact space and Y {\displaystyle Y} is a Banach space, then C 0 ( K ; Y ) {\displaystyle C^{0}(K;Y)} becomes a Banach space normed by ‖ f ‖ := sup ω ∈ Ω ‖ f ( ω ) ‖ . {\displaystyle \|f\|:=\sup _{\omega \in \Omega }\|f(\omega )\|.} Let C k ( K ) {\displaystyle C^{k}(K)} denote C k ( K ; F ) . 
{\displaystyle C^{k}(K;\mathbb {F} ).} For any two compact subsets K ⊆ L ⊆ Ω , {\displaystyle K\subseteq L\subseteq \Omega ,} the inclusion In K L : C k ( K ; Y ) → C k ( L ; Y ) {\displaystyle \operatorname {In} _{K}^{L}:C^{k}(K;Y)\to C^{k}(L;Y)} is an embedding of TVSs, and the union of all C k ( K ; Y ) , {\displaystyle C^{k}(K;Y),} as K {\displaystyle K} varies over the compact subsets of Ω , {\displaystyle \Omega ,} is C c k ( Ω ; Y ) . {\displaystyle C_{c}^{k}(\Omega ;Y).} === Space of compactly supported Ck functions === For any compact subset K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} let In K : C k ( K ; Y ) → C c k ( Ω ; Y ) {\displaystyle \operatorname {In} _{K}:C^{k}(K;Y)\to C_{c}^{k}(\Omega ;Y)} denote the inclusion map and endow C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} with the strongest topology making all In K {\displaystyle \operatorname {In} _{K}} continuous, which is known as the final topology induced by these maps. The spaces C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} and maps In K 1 K 2 {\displaystyle \operatorname {In} _{K_{1}}^{K_{2}}} form a direct system (directed by the compact subsets of Ω {\displaystyle \Omega } ) whose limit in the category of TVSs is C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} together with the injections In K . {\displaystyle \operatorname {In} _{K}.} The spaces C k ( Ω i ¯ ; Y ) {\displaystyle C^{k}\left({\overline {\Omega _{i}}};Y\right)} and maps In Ω i ¯ Ω j ¯ {\displaystyle \operatorname {In} _{\overline {\Omega _{i}}}^{\overline {\Omega _{j}}}} also form a direct system (directed by the total order N {\displaystyle \mathbb {N} } ) whose limit in the category of TVSs is C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} together with the injections In Ω i ¯ . {\displaystyle \operatorname {In} _{\overline {\Omega _{i}}}.} Each embedding In K {\displaystyle \operatorname {In} _{K}} is an embedding of TVSs. 
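As an informal numerical sketch of the seminorms μ i , l , α defined above (not part of the formal development: the helper name mu, the function f = sin, and the exhaustion Ω i = (−i/(i+1), i/(i+1)) of Ω = (−1, 1) are all illustrative choices, with Y = R carrying the single seminorm p(y) = |y|):

```python
import numpy as np

# Numerical sketch of the seminorms mu_{i,l} on C^k(Omega) for Omega = (-1, 1),
# Y = R with the single seminorm p(y) = |y|, and the (illustrative) exhaustion
# Omega_i = (-i/(i+1), i/(i+1)).
def mu(derivs, i, l, grid=2001):
    """Estimate mu_{i,l}(f) = sup_{p in Omega_i} sum_{q <= l} |f^(q)(p)|."""
    a = i / (i + 1)
    ps = np.linspace(-a, a, grid)  # dense grid over the closure of Omega_i
    return float(np.max(sum(np.abs(d(ps)) for d in derivs[: l + 1])))

# f = sin, with its derivatives listed explicitly.
derivs = [np.sin, np.cos, lambda p: -np.sin(p)]

m = mu(derivs, i=3, l=1)  # sup over [-3/4, 3/4] of |sin p| + |cos p|
```

The seminorms grow as i and l grow, reflecting that the topology of C k ( Ω ; Y ) controls more derivatives on larger compact sets.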
A subset S {\displaystyle S} of C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} is a neighborhood of the origin in C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} if and only if S ∩ C k ( K ; Y ) {\displaystyle S\cap C^{k}(K;Y)} is a neighborhood of the origin in C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} for every compact K ⊆ Ω . {\displaystyle K\subseteq \Omega .} This direct limit topology (i.e. the final topology) on C c ∞ ( Ω ) {\displaystyle C_{c}^{\infty }(\Omega )} is known as the canonical LF topology. If Y {\displaystyle Y} is a Hausdorff locally convex space, T {\displaystyle T} is a TVS, and u : C c k ( Ω ; Y ) → T {\displaystyle u:C_{c}^{k}(\Omega ;Y)\to T} is a linear map, then u {\displaystyle u} is continuous if and only if for all compact K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} the restriction of u {\displaystyle u} to C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} is continuous. The statement remains true if "all compact K ⊆ Ω {\displaystyle K\subseteq \Omega } " is replaced with "all K := Ω ¯ i {\displaystyle K:={\overline {\Omega }}_{i}} ". === Properties === === Identification as a tensor product === Suppose henceforth that Y {\displaystyle Y} is Hausdorff. Given a function f ∈ C k ( Ω ) {\displaystyle f\in C^{k}(\Omega )} and a vector y ∈ Y , {\displaystyle y\in Y,} let f ⊗ y {\displaystyle f\otimes y} denote the map f ⊗ y : Ω → Y {\displaystyle f\otimes y:\Omega \to Y} defined by ( f ⊗ y ) ( p ) = f ( p ) y . {\displaystyle (f\otimes y)(p)=f(p)y.} This defines a bilinear map ⊗ : C k ( Ω ) × Y → C k ( Ω ; Y ) {\displaystyle \otimes :C^{k}(\Omega )\times Y\to C^{k}(\Omega ;Y)} into the space of functions whose image is contained in a finite-dimensional vector subspace of Y ; {\displaystyle Y;} this bilinear map turns this subspace into a tensor product of C k ( Ω ) {\displaystyle C^{k}(\Omega )} and Y , {\displaystyle Y,} which we will denote by C k ( Ω ) ⊗ Y . 
{\displaystyle C^{k}(\Omega )\otimes Y.} Furthermore, if C c k ( Ω ) ⊗ Y {\displaystyle C_{c}^{k}(\Omega )\otimes Y} denotes the vector subspace of C k ( Ω ) ⊗ Y {\displaystyle C^{k}(\Omega )\otimes Y} consisting of all functions with compact support, then C c k ( Ω ) ⊗ Y {\displaystyle C_{c}^{k}(\Omega )\otimes Y} is a tensor product of C c k ( Ω ) {\displaystyle C_{c}^{k}(\Omega )} and Y . {\displaystyle Y.} If Ω {\displaystyle \Omega } is locally compact then C c 0 ( Ω ) ⊗ Y {\displaystyle C_{c}^{0}(\Omega )\otimes Y} is dense in C 0 ( Ω ; Y ) {\displaystyle C^{0}(\Omega ;Y)} while if Ω {\displaystyle \Omega } is an open subset of R n {\displaystyle \mathbb {R} ^{n}} then C c ∞ ( Ω ) ⊗ Y {\displaystyle C_{c}^{\infty }(\Omega )\otimes Y} is dense in C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} == See also == Convenient vector space – locally convex vector spaces satisfying a very mild completeness condition Crinkled arc Differentiation in Fréchet spaces Fréchet derivative – Derivative defined on normed spaces Gateaux derivative – Generalization of the concept of directional derivative Infinite-dimensional vector function – function whose values lie in an infinite-dimensional vector space Injective tensor product == Notes == == Citations == == References == Diestel, Joe (2008). The Metric Theory of Tensor Products: Grothendieck's Résumé Revisited. Vol. 16. Providence, R.I.: American Mathematical Society. ISBN 9781470424831. OCLC 185095773. Dubinsky, Ed (1979). The Structure of Nuclear Fréchet Spaces. Lecture Notes in Mathematics. Vol. 720. Berlin New York: Springer-Verlag. ISBN 978-3-540-09504-0. OCLC 5126156. Grothendieck, Alexander (1955). "Produits Tensoriels Topologiques et Espaces Nucléaires" [Topological Tensor Products and Nuclear Spaces]. Memoirs of the American Mathematical Society Series (in French). 16. Providence: American Mathematical Society. MR 0075539. 
OCLC 9308061. Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Hogbe-Nlend, Henri; Moscatelli, V. B. (1981). Nuclear and Conuclear Spaces: Introductory Course on Nuclear and Conuclear Spaces in the Light of the Duality "topology-bornology". North-Holland Mathematics Studies. Vol. 52. Amsterdam New York New York: North Holland. ISBN 978-0-08-087163-9. OCLC 316564345. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Pietsch, Albrecht (1979). Nuclear Locally Convex Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 66 (Second ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-05644-9. OCLC 539541. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Ryan, Raymond A. (2002). Introduction to Tensor Products of Banach Spaces. Springer Monographs in Mathematics. London New York: Springer. ISBN 978-1-85233-437-6. OCLC 48092184. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wong, Yau-Chuen (1979). Schwartz Spaces, Nuclear Spaces, and Tensor Products. Lecture Notes in Mathematics. Vol. 726. Berlin New York: Springer-Verlag. ISBN 978-3-540-09513-2. OCLC 5126158.
Wikipedia:Differential coefficient#0
In physics and mathematics, the differential coefficient of a function f(x) is what is now called its derivative df(x)/dx, the (not necessarily constant) multiplicative factor or coefficient of the differential dx in the differential df(x). A coefficient is usually a constant quantity, but the differential coefficient of f is a constant function only if f is a linear function. When f is not linear, its differential coefficient is a function, call it f′, derived by the differentiation of f, hence, the modern term, derivative. The older usage is now rarely seen. Early editions of Silvanus P. Thompson's Calculus Made Easy use the older term. In his 1998 update of this text, Martin Gardner lets the first use of "differential coefficient" stand, along with Thompson's criticism of the term as a needlessly obscure phrase that should not intimidate students, and substitutes "derivative" for the remainder of the book. == References ==
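The distinction described above can be illustrated numerically: the differential coefficient of a linear function is constant, while that of a nonlinear function varies with x. The sketch below is an informal illustration (the helper name differential_coefficient is not standard; it is just a central-difference estimate of f′(x)):

```python
# Central-difference estimate of the differential coefficient f'(x),
# i.e. the factor relating the differential df to the differential dx.
def differential_coefficient(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# For the linear function f(x) = 3x + 1 the coefficient is the constant 3 ...
lin = [differential_coefficient(lambda t: 3 * t + 1, x) for x in (0.0, 1.0, 2.0)]

# ... while for the nonlinear f(x) = x^2 it is the function f'(x) = 2x.
quad = [differential_coefficient(lambda t: t * t, x) for x in (0.0, 1.0, 2.0)]
```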
Wikipedia:Differential graded module#0
In algebra, a differential graded module, or dg-module, is a Z {\displaystyle \mathbb {Z} } -graded module together with a differential; i.e., a square-zero graded endomorphism of the module of degree 1 or −1, depending on the convention. In other words, it is a chain complex having a structure of a module, while a differential graded algebra is a chain complex with a structure of an algebra. In view of the module-variant of the Dold–Kan correspondence, the notion of an N 0 {\displaystyle \mathbb {N} _{0}} -graded dg-module is equivalent to that of a simplicial module; "equivalent" in the categorical sense; see § The Dold–Kan correspondence below. == The Dold–Kan correspondence == Given a commutative ring R, the category of simplicial modules is, by definition, the category of simplicial objects in the category of R-modules; it is denoted by sModR. Then sModR can be identified with the category of differential graded modules which vanish in negative degrees via the Dold–Kan correspondence. == See also == Differential graded Lie algebra == Notes == == References == Iyengar, Srikanth; Buchweitz, Ragnar-Olaf; Avramov, Luchezar L. (2006-02-16). "Class and rank of differential modules". Inventiones Mathematicae. 169: 1–35. arXiv:math/0602344. doi:10.1007/s00222-007-0041-6. S2CID 16078533. Cartan, Henri; Eilenberg, Samuel. Homological Algebra. Fresse, Benoit (21 April 2017). Homotopy of Operads and Grothendieck–Teichmüller Groups. Mathematical Surveys and Monographs. Vol. 217. American Mathematical Soc. ISBN 978-1-4704-3481-6. Available online.
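The square-zero condition on the differential can be made concrete in a small example. In the sketch below (an informal illustration, assuming Python with NumPy; the particular matrices are arbitrary choices), a graded module concentrated in degrees 0, 1, 2 carries a degree −1 differential, and d ∘ d = 0 reduces to a matrix identity:

```python
import numpy as np

# A Z-graded module concentrated in degrees 0, 1, 2 (each graded piece is R^2;
# the matrices below are arbitrary illustrative choices), with a degree -1
# differential d given componentwise by d2 : M_2 -> M_1 and d1 : M_1 -> M_0.
d1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
d2 = np.array([[0.0, 0.0],
               [0.0, 1.0]])

# The square-zero condition d o d = 0 reduces here to d1 @ d2 = 0,
# i.e. M_2 -> M_1 -> M_0 is a chain complex.
square = d1 @ d2
```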
Wikipedia:Differentiation in Fréchet spaces#0
In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, namely the Gateaux derivative, is significantly weaker than the derivative in a Banach space, and it can be defined even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry. == Mathematical details == Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let X {\displaystyle X} and Y {\displaystyle Y} be Fréchet spaces, U ⊆ X {\displaystyle U\subseteq X} be an open set, and F : U → Y {\displaystyle F:U\to Y} be a function. The directional derivative of F {\displaystyle F} in the direction v ∈ X {\displaystyle v\in X} is defined by D F ( u ) v = lim τ → 0 F ( u + v τ ) − F ( u ) τ {\displaystyle DF(u)v=\lim _{\tau \to 0}{\frac {F(u+v\tau )-F(u)}{\tau }}} if the limit exists. One says that F {\displaystyle F} is continuously differentiable, or C 1 , {\displaystyle C^{1},} if the limit exists for all v ∈ X {\displaystyle v\in X} and the mapping D F : U × X → Y {\displaystyle DF:U\times X\to Y} is a continuous map. Higher order derivatives are defined inductively via D k + 1 F ( u ) { v 1 , v 2 , … , v k + 1 } = lim τ → 0 D k F ( u + τ v k + 1 ) { v 1 , … , v k } − D k F ( u ) { v 1 , … , v k } τ . 
{\displaystyle D^{k+1}F(u)\left\{v_{1},v_{2},\ldots ,v_{k+1}\right\}=\lim _{\tau \to 0}{\frac {D^{k}F(u+\tau v_{k+1})\left\{v_{1},\ldots ,v_{k}\right\}-D^{k}F(u)\left\{v_{1},\ldots ,v_{k}\right\}}{\tau }}.} A function is said to be C k {\displaystyle C^{k}} if D k F : U × X × X × ⋯ × X → Y {\displaystyle D^{k}F:U\times X\times X\times \cdots \times X\to Y} is continuous. It is C ∞ , {\displaystyle C^{\infty },} or smooth if it is C k {\displaystyle C^{k}} for every k . {\displaystyle k.} == Properties == Let X , Y , {\displaystyle X,Y,} and Z {\displaystyle Z} be Fréchet spaces. Suppose that U {\displaystyle U} is an open subset of X , {\displaystyle X,} V {\displaystyle V} is an open subset of Y , {\displaystyle Y,} and F : U → V , {\displaystyle F:U\to V,} G : V → Z {\displaystyle G:V\to Z} are a pair of C 1 {\displaystyle C^{1}} functions. Then the following properties hold: Fundamental theorem of calculus. If the line segment from a {\displaystyle a} to b {\displaystyle b} lies entirely within U , {\displaystyle U,} then F ( b ) − F ( a ) = ∫ 0 1 D F ( a + ( b − a ) t ) ⋅ ( b − a ) d t . {\displaystyle F(b)-F(a)=\int _{0}^{1}DF(a+(b-a)t)\cdot (b-a)dt.} The chain rule. For all u ∈ U {\displaystyle u\in U} and x ∈ X , {\displaystyle x\in X,} D ( G ∘ F ) ( u ) x = D G ( F ( u ) ) D F ( u ) x {\displaystyle D(G\circ F)(u)x=DG(F(u))DF(u)x} Linearity. D F ( u ) x {\displaystyle DF(u)x} is linear in x . {\displaystyle x.} More generally, if F {\displaystyle F} is C k , {\displaystyle C^{k},} then D F ( u ) { x 1 , … , x k } {\displaystyle DF(u)\left\{x_{1},\ldots ,x_{k}\right\}} is multilinear in the x {\displaystyle x} 's. Taylor's theorem with remainder. Suppose that the line segment between u ∈ U {\displaystyle u\in U} and u + h {\displaystyle u+h} lies entirely within U . {\displaystyle U.} If F {\displaystyle F} is C k {\displaystyle C^{k}} then F ( u + h ) = F ( u ) + D F ( u ) h + 1 2 ! D 2 F ( u ) { h , h } + ⋯ + 1 ( k − 1 ) ! 
D k − 1 F ( u ) { h , h , … , h } + R k {\displaystyle F(u+h)=F(u)+DF(u)h+{\frac {1}{2!}}D^{2}F(u)\{h,h\}+\cdots +{\frac {1}{(k-1)!}}D^{k-1}F(u)\{h,h,\ldots ,h\}+R_{k}} where the remainder term is given by R k ( u , h ) = 1 ( k − 1 ) ! ∫ 0 1 ( 1 − t ) k − 1 D k F ( u + t h ) { h , h , … , h } d t {\displaystyle R_{k}(u,h)={\frac {1}{(k-1)!}}\int _{0}^{1}(1-t)^{k-1}D^{k}F(u+th)\{h,h,\ldots ,h\}dt} Commutativity of directional derivatives. If F {\displaystyle F} is C k , {\displaystyle C^{k},} then D k F ( u ) { h 1 , … , h k } = D k F ( u ) { h σ ( 1 ) , … , h σ ( k ) } {\displaystyle D^{k}F(u)\left\{h_{1},\ldots ,h_{k}\right\}=D^{k}F(u)\left\{h_{\sigma (1)},\ldots ,h_{\sigma (k)}\right\}} for every permutation σ of { 1 , 2 , … , k } . {\displaystyle \{1,2,\ldots ,k\}.} The proofs of many of these properties rely fundamentally on the fact that it is possible to define the Riemann integral of continuous curves in a Fréchet space. == Smooth mappings == Surprisingly, a mapping between open subsets of Fréchet spaces is smooth (infinitely often differentiable) if it maps smooth curves to smooth curves; see Convenient analysis. Moreover, smooth curves in spaces of smooth functions are just smooth functions of one variable more. == Consequences in differential geometry == The existence of a chain rule allows for the definition of a manifold modeled on a Fréchet space: a Fréchet manifold. Furthermore, the linearity of the derivative implies that there is an analog of the tangent bundle for Fréchet manifolds. == Tame Fréchet spaces == Frequently the Fréchet spaces that arise in practical applications of the derivative enjoy an additional property: they are tame. Roughly speaking, a tame Fréchet space is one which is almost a Banach space. On tame spaces, it is possible to define a preferred class of mappings, known as tame maps. On the category of tame spaces under tame maps, the underlying topology is strong enough to support a fully fledged theory of differential topology. 
Within this context, many more techniques from calculus hold. In particular, there are versions of the inverse and implicit function theorems. == See also == Differentiable vector-valued functions from Euclidean space – Differentiable function in functional analysis Infinite-dimensional vector function – function whose values lie in an infinite-dimensional vector space == References == Hamilton, R. S. (1982). "The inverse function theorem of Nash and Moser". Bull. Amer. Math. Soc. 7 (1): 65–222. doi:10.1090/S0273-0979-1982-15004-2. MR 0656198.
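The directional derivative DF(u)v defined in this article can be approximated by its difference quotient when X and Y are finite-dimensional, which is a special case of the general definition. The sketch below is an informal illustration (assuming Python with NumPy; the helper name directional_derivative and the particular F, u, v are arbitrary choices), comparing a symmetric difference quotient with the exact Jacobian action and checking the linearity of DF(u)v in v:

```python
import numpy as np

def directional_derivative(F, u, v, tau=1e-6):
    """Symmetric-difference estimate of DF(u)v = lim (F(u + tau v) - F(u)) / tau."""
    return (F(u + tau * v) - F(u - tau * v)) / (2 * tau)

# A map F : R^2 -> R^2, a finite-dimensional stand-in for a map between
# Frechet spaces.
F = lambda u: np.array([u[0] ** 2, u[0] * u[1]])
u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

num = directional_derivative(F, u, v)
# The Jacobian of F at u is [[2*u0, 0], [u1, u0]], so DF(u)v = J v;
# this also illustrates that DF(u)v is linear in v.
J = np.array([[2 * u[0], 0.0], [u[1], u[0]]])
exact = J @ v
```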
Wikipedia:Differentiation of integrals#0
In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space X with a measure μ and a metric d, one asks for what functions f : X → R does lim r → 0 1 μ ( B r ( x ) ) ∫ B r ( x ) f ( y ) d μ ( y ) = f ( x ) {\displaystyle \lim _{r\to 0}{\frac {1}{\mu {\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \mu (y)=f(x)} for all (or at least μ-almost all) x ∈ X? (Here, as in the rest of the article, Br(x) denotes the open ball in X with d-radius r and centre x.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that f(x) is a "good representative" for the values of f near x. == Theorems on the differentiation of integrals == === Lebesgue measure === One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider n-dimensional Lebesgue measure λn on n-dimensional Euclidean space Rn. Then, for any locally integrable function f : Rn → R, one has lim r → 0 1 λ n ( B r ( x ) ) ∫ B r ( x ) f ( y ) d λ n ( y ) = f ( x ) {\displaystyle \lim _{r\to 0}{\frac {1}{\lambda ^{n}{\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \lambda ^{n}(y)=f(x)} for λn-almost all points x ∈ Rn. It is important to note, however, that the measure zero set of "bad" points depends on the function f. 
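The theorem can be illustrated numerically in one dimension, where B_r(x) is the interval (x − r, x + r). The sketch below is an informal illustration (assuming Python with NumPy; ball_average is an illustrative helper using crude midpoint sampling as quadrature), showing the ball averages approaching f(x) as r shrinks:

```python
import numpy as np

def ball_average(f, x, r, n=10_001):
    """Approximate mean value of f over B_r(x) = (x - r, x + r) in R^1."""
    ys = np.linspace(x - r, x + r, n)
    return float(np.mean(f(ys)))

# A locally integrable (here: continuous) function on R.
f = lambda y: np.sin(y) + y ** 2
x = 0.7

# As r -> 0 the averages over B_r(x) converge to f(x).
errs = [abs(ball_average(f, x, r) - f(x)) for r in (1.0, 0.1, 0.01)]
```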
=== Borel measures on Rn === The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if μ is any locally finite Borel measure on Rn and f : Rn → R is locally integrable with respect to μ, then lim r → 0 1 μ ( B r ( x ) ) ∫ B r ( x ) f ( y ) d μ ( y ) = f ( x ) {\displaystyle \lim _{r\to 0}{\frac {1}{\mu {\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \mu (y)=f(x)} for μ-almost all points x ∈ Rn. === Gaussian measures === The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (H, ⟨ , ⟩) equipped with a Gaussian measure γ. As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting: There is a Gaussian measure γ on a separable Hilbert space H and a Borel set M ⊆ H so that, for γ-almost all x ∈ H, lim r → 0 γ ( M ∩ B r ( x ) ) γ ( B r ( x ) ) = 1. {\displaystyle \lim _{r\to 0}{\frac {\gamma {\big (}M\cap B_{r}(x){\big )}}{\gamma {\big (}B_{r}(x){\big )}}}=1.} There is a Gaussian measure γ on a separable Hilbert space H and a function f ∈ L1(H, γ; R) such that lim r → 0 inf { 1 γ ( B s ( x ) ) ∫ B s ( x ) f ( y ) d γ ( y ) | x ∈ H , 0 < s < r } = + ∞ . {\displaystyle \lim _{r\to 0}\inf \left\{\left.{\frac {1}{\gamma {\big (}B_{s}(x){\big )}}}\int _{B_{s}(x)}f(y)\,\mathrm {d} \gamma (y)\right|x\in H,0<s<r\right\}=+\infty .} However, there is some hope if one has good control over the covariance of γ. 
Let the covariance operator of γ be S : H → H given by ⟨ S x , y ⟩ = ∫ H ⟨ x , z ⟩ ⟨ y , z ⟩ d γ ( z ) , {\displaystyle \langle Sx,y\rangle =\int _{H}\langle x,z\rangle \langle y,z\rangle \,\mathrm {d} \gamma (z),} or, for some countable orthonormal basis (ei)i∈N of H, S x = ∑ i ∈ N σ i 2 ⟨ x , e i ⟩ e i . {\displaystyle Sx=\sum _{i\in \mathbf {N} }\sigma _{i}^{2}\langle x,e_{i}\rangle e_{i}.} In 1981, Preiss and Jaroslav Tišer showed that if there exists a constant 0 < q < 1 such that σ i + 1 2 ≤ q σ i 2 , {\displaystyle \sigma _{i+1}^{2}\leq q\sigma _{i}^{2},} then, for all f ∈ L1(H, γ; R), 1 γ ( B r ( x ) ) ∫ B r ( x ) f ( y ) d γ ( y ) → r → 0 γ f ( x ) , {\displaystyle {\frac {1}{\gamma {\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \gamma (y){\xrightarrow[{r\to 0}]{\gamma }}f(x),} where the convergence is convergence in measure with respect to γ. In 1988, Tišer showed that if σ i + 1 2 ≤ σ i 2 i α {\displaystyle \sigma _{i+1}^{2}\leq {\frac {\sigma _{i}^{2}}{i^{\alpha }}}} for some α > 5 ⁄ 2, then 1 γ ( B r ( x ) ) ∫ B r ( x ) f ( y ) d γ ( y ) → r → 0 f ( x ) , {\displaystyle {\frac {1}{\gamma {\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \gamma (y){\xrightarrow[{r\to 0}]{}}f(x),} for γ-almost all x and all f ∈ Lp(H, γ; R), p > 1. As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure γ on a separable Hilbert space H so that, for all f ∈ L1(H, γ; R), lim r → 0 1 γ ( B r ( x ) ) ∫ B r ( x ) f ( y ) d γ ( y ) = f ( x ) {\displaystyle \lim _{r\to 0}{\frac {1}{\gamma {\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}f(y)\,\mathrm {d} \gamma (y)=f(x)} for γ-almost all x ∈ H. However, it is conjectured that no such measure exists, since the σi would have to decay very rapidly. 
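The defining identity of the covariance operator, ⟨Sx, y⟩ = E[⟨x, z⟩⟨y, z⟩], can be checked by Monte Carlo in a finite-dimensional truncation. The sketch below is an informal illustration only (assuming Python with NumPy; the dimension, the test vectors, and the ratio q = 1/2 are arbitrary choices): with σ_i² = qⁱ the Preiss–Tišer condition σ_{i+1}² ≤ q σ_i² holds, and S is diagonal in the basis (e_i).

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.5
sigma2 = q ** np.arange(6)          # sigma_i^2 = q^i, so sigma_{i+1}^2 <= q * sigma_i^2
z = rng.normal(scale=np.sqrt(sigma2), size=(200_000, 6))  # samples from gamma

x = np.array([1.0, 0.0, 2.0, 0.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 1.0, 0.0, 3.0, 0.0])

# Monte Carlo estimate of <Sx, y> = E[<x,z><y,z>] ...
est = float(np.mean((z @ x) * (z @ y)))
# ... versus the exact value: S is diagonal here, Sx = sum_i sigma_i^2 x_i e_i.
exact = float(np.sum(sigma2 * x * y))
```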
== See also == Differentiation rules – Rules for computing derivatives of functions Leibniz integral rule – Differentiation under the integral sign formula Reynolds transport theorem – 3D generalization of the Leibniz integral rule == References ==
Wikipedia:Differentiation of trigonometric functions#0
The differentiation of trigonometric functions is the mathematical process of finding the derivative of a trigonometric function, or its rate of change with respect to a variable. For example, the derivative of the sine function is written sin′(a) = cos(a), meaning that the rate of change of sin(x) at a particular angle x = a is given by the cosine of that angle. All derivatives of circular trigonometric functions can be found from those of sin(x) and cos(x) by means of the quotient rule applied to functions such as tan(x) = sin(x)/cos(x). Knowing these derivatives, the derivatives of the inverse trigonometric functions are found using implicit differentiation. == Proofs of derivatives of trigonometric functions == === Limit of sin(θ)/θ as θ tends to 0 === The diagram at right shows a circle with centre O and radius r = 1. Let two radii OA and OB make an arc of θ radians. Since we are considering the limit as θ tends to zero, we may assume θ is a small positive number, say 0 < θ < 1/2 π in the first quadrant. In the diagram, let R1 be the triangle OAB, R2 the circular sector OAB, and R3 the triangle OAC. The area of triangle OAB is: A r e a ( R 1 ) = 1 2 | O A | | O B | sin θ = 1 2 sin θ . {\displaystyle \mathrm {Area} (R_{1})={\tfrac {1}{2}}\ |OA|\ |OB|\sin \theta ={\tfrac {1}{2}}\sin \theta \,.} The area of the circular sector OAB is: A r e a ( R 2 ) = 1 2 θ . {\displaystyle \mathrm {Area} (R_{2})={\tfrac {1}{2}}\theta \,.} The area of the triangle OAC is given by: A r e a ( R 3 ) = 1 2 | O A | | A C | = 1 2 tan θ . {\displaystyle \mathrm {Area} (R_{3})={\tfrac {1}{2}}\ |OA|\ |AC|={\tfrac {1}{2}}\tan \theta \,.} Since each region is contained in the next, one has: Area ( R 1 ) < Area ( R 2 ) < Area ( R 3 ) ⟹ 1 2 sin θ < 1 2 θ < 1 2 tan θ . 
{\displaystyle {\text{Area}}(R_{1})<{\text{Area}}(R_{2})<{\text{Area}}(R_{3})\implies {\tfrac {1}{2}}\sin \theta <{\tfrac {1}{2}}\theta <{\tfrac {1}{2}}\tan \theta \,.} Moreover, since sin θ > 0 in the first quadrant, we may divide through by 1/2 sin θ, giving: 1 < θ sin θ < 1 cos θ ⟹ 1 > sin θ θ > cos θ . {\displaystyle 1<{\frac {\theta }{\sin \theta }}<{\frac {1}{\cos \theta }}\implies 1>{\frac {\sin \theta }{\theta }}>\cos \theta \,.} In the last step we took the reciprocals of the three positive terms, reversing the inequalities. We conclude that for 0 < θ < 1/2 π, the quantity sin(θ)/θ is always less than 1 and always greater than cos(θ). Thus, as θ gets closer to 0, sin(θ)/θ is "squeezed" between a ceiling at height 1 and a floor at height cos θ, which rises towards 1; hence sin(θ)/θ must tend to 1 as θ tends to 0 from the positive side: lim θ → 0 + sin θ θ = 1 . {\displaystyle \lim _{\theta \to 0^{+}}{\frac {\sin \theta }{\theta }}=1\,.} For the case where θ is a small negative number –1/2 π < θ < 0, we use the fact that sine is an odd function: lim θ → 0 − sin θ θ = lim θ → 0 + sin ( − θ ) − θ = lim θ → 0 + − sin θ − θ = lim θ → 0 + sin θ θ = 1 . {\displaystyle \lim _{\theta \to 0^{-}}\!{\frac {\sin \theta }{\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {\sin(-\theta )}{-\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {-\sin \theta }{-\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {\sin \theta }{\theta }}\ =\ 1\,.} === Limit of (cos(θ)-1)/θ as θ tends to 0 === The last section enables us to calculate this new limit relatively easily. This is done by employing a simple trick. In this calculation, the sign of θ is unimportant. lim θ → 0 cos θ − 1 θ = lim θ → 0 ( cos θ − 1 θ ) ( cos θ + 1 cos θ + 1 ) = lim θ → 0 cos 2 θ − 1 θ ( cos θ + 1 ) . 
{\displaystyle \lim _{\theta \to 0}\,{\frac {\cos \theta -1}{\theta }}\ =\ \lim _{\theta \to 0}\left({\frac {\cos \theta -1}{\theta }}\right)\!\!\left({\frac {\cos \theta +1}{\cos \theta +1}}\right)\ =\ \lim _{\theta \to 0}\,{\frac {\cos ^{2}\!\theta -1}{\theta \,(\cos \theta +1)}}.} Using cos2θ – 1 = –sin2θ, the fact that the limit of a product is the product of limits, and the limit result from the previous section, we find that: lim θ → 0 cos θ − 1 θ = lim θ → 0 − sin 2 θ θ ( cos θ + 1 ) = ( − lim θ → 0 sin θ θ ) ( lim θ → 0 sin θ cos θ + 1 ) = ( − 1 ) ( 0 2 ) = 0 . {\displaystyle \lim _{\theta \to 0}\,{\frac {\cos \theta -1}{\theta }}\ =\ \lim _{\theta \to 0}\,{\frac {-\sin ^{2}\theta }{\theta (\cos \theta +1)}}\ =\ \left(-\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\right)\!\left(\lim _{\theta \to 0}\,{\frac {\sin \theta }{\cos \theta +1}}\right)\ =\ (-1)\left({\frac {0}{2}}\right)=0\,.} === Limit of tan(θ)/θ as θ tends to 0 === Using the limit for the sine function, the fact that the tangent function is odd, and the fact that the limit of a product is the product of limits, we find: lim θ → 0 tan θ θ = ( lim θ → 0 sin θ θ ) ( lim θ → 0 1 cos θ ) = ( 1 ) ( 1 ) = 1 . {\displaystyle \lim _{\theta \to 0}{\frac {\tan \theta }{\theta }}\ =\ \left(\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\right)\!\left(\lim _{\theta \to 0}{\frac {1}{\cos \theta }}\right)\ =\ (1)(1)\ =\ 1\,.} === Derivative of the sine function === We calculate the derivative of the sine function from the limit definition: d d θ sin θ = lim δ → 0 sin ( θ + δ ) − sin θ δ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =\lim _{\delta \to 0}{\frac {\sin(\theta +\delta )-\sin \theta }{\delta }}.} Using the angle addition formula sin(α+β) = sin α cos β + sin β cos α, we have: d d θ sin θ = lim δ → 0 sin θ cos δ + sin δ cos θ − sin θ δ = lim δ → 0 ( sin δ δ cos θ + cos δ − 1 δ sin θ ) . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =\lim _{\delta \to 0}{\frac {\sin \theta \cos \delta +\sin \delta \cos \theta -\sin \theta }{\delta }}=\lim _{\delta \to 0}\left({\frac {\sin \delta }{\delta }}\cos \theta +{\frac {\cos \delta -1}{\delta }}\sin \theta \right).} Using the limits for the sine and cosine functions: d d θ sin θ = ( 1 ) cos θ + ( 0 ) sin θ = cos θ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =(1)\cos \theta +(0)\sin \theta =\cos \theta \,.} === Derivative of the cosine function === ==== From the definition of derivative ==== We again calculate the derivative of the cosine function from the limit definition: d d θ cos θ = lim δ → 0 cos ( θ + δ ) − cos θ δ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =\lim _{\delta \to 0}{\frac {\cos(\theta +\delta )-\cos \theta }{\delta }}.} Using the angle addition formula cos(α+β) = cos α cos β – sin α sin β, we have: d d θ cos θ = lim δ → 0 cos θ cos δ − sin θ sin δ − cos θ δ = lim δ → 0 ( cos δ − 1 δ cos θ − sin δ δ sin θ ) . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =\lim _{\delta \to 0}{\frac {\cos \theta \cos \delta -\sin \theta \sin \delta -\cos \theta }{\delta }}=\lim _{\delta \to 0}\left({\frac {\cos \delta -1}{\delta }}\cos \theta \,-\,{\frac {\sin \delta }{\delta }}\sin \theta \right).} Using the limits for the sine and cosine functions: d d θ cos θ = ( 0 ) cos θ − ( 1 ) sin θ = − sin θ . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =(0)\cos \theta -(1)\sin \theta =-\sin \theta \,.} ==== From the chain rule ==== To compute the derivative of the cosine function from the chain rule, first observe the following three facts: cos θ = sin ( π 2 − θ ) {\displaystyle \cos \theta =\sin \left({\tfrac {\pi }{2}}-\theta \right)} sin θ = cos ( π 2 − θ ) {\displaystyle \sin \theta =\cos \left({\tfrac {\pi }{2}}-\theta \right)} d d θ sin θ = cos θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\sin \theta =\cos \theta } The first and the second are trigonometric identities, and the third is proven above. Using these three facts, we can write the following, d d θ cos θ = d d θ sin ( π 2 − θ ) {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\cos \theta ={\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\sin \left({\tfrac {\pi }{2}}-\theta \right)} We can differentiate this using the chain rule. Letting f ( x ) = sin x , g ( θ ) = π 2 − θ {\displaystyle f(x)=\sin x,\ \ g(\theta )={\tfrac {\pi }{2}}-\theta } , we have: d d θ f ( g ( θ ) ) = f ′ ( g ( θ ) ) ⋅ g ′ ( θ ) = cos ( π 2 − θ ) ⋅ ( 0 − 1 ) = − sin θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}f\!\left(g\!\left(\theta \right)\right)=f^{\prime }\!\left(g\!\left(\theta \right)\right)\cdot g^{\prime }\!\left(\theta \right)=\cos \left({\tfrac {\pi }{2}}-\theta \right)\cdot (0-1)=-\sin \theta } . Therefore, we have proven that d d θ cos θ = − sin θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\cos \theta =-\sin \theta } . === Derivative of the tangent function === ==== From the definition of derivative ==== To calculate the derivative of the tangent function tan θ, we use first principles. By definition: d d θ tan θ = lim δ → 0 ( tan ( θ + δ ) − tan θ δ ) . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}\left({\frac {\tan(\theta +\delta )-\tan \theta }{\delta }}\right).} Using the well-known angle formula tan(α+β) = (tan α + tan β) / (1 - tan α tan β), we have: d d θ tan θ = lim δ → 0 [ tan θ + tan δ 1 − tan θ tan δ − tan θ δ ] = lim δ → 0 [ tan θ + tan δ − tan θ + tan 2 θ tan δ δ ( 1 − tan θ tan δ ) ] . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}\left[{\frac {{\frac {\tan \theta +\tan \delta }{1-\tan \theta \tan \delta }}-\tan \theta }{\delta }}\right]=\lim _{\delta \to 0}\left[{\frac {\tan \theta +\tan \delta -\tan \theta +\tan ^{2}\theta \tan \delta }{\delta \left(1-\tan \theta \tan \delta \right)}}\right].} Using the fact that the limit of a product is the product of the limits: d d θ tan θ = lim δ → 0 tan δ δ × lim δ → 0 ( 1 + tan 2 θ 1 − tan θ tan δ ) . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}{\frac {\tan \delta }{\delta }}\times \lim _{\delta \to 0}\left({\frac {1+\tan ^{2}\theta }{1-\tan \theta \tan \delta }}\right).} Using the limit for the tangent function, and the fact that tan δ tends to 0 as δ tends to 0: d d θ tan θ = 1 × 1 + tan 2 θ 1 − 0 = 1 + tan 2 θ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =1\times {\frac {1+\tan ^{2}\theta }{1-0}}=1+\tan ^{2}\theta .} We see immediately that: d d θ tan θ = 1 + sin 2 θ cos 2 θ = cos 2 θ + sin 2 θ cos 2 θ = 1 cos 2 θ = sec 2 θ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =1+{\frac {\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta \,.} ==== From the quotient rule ==== One can also compute the derivative of the tangent function using the quotient rule. 
d d θ tan θ = d d θ sin θ cos θ = ( sin θ ) ′ ⋅ cos θ − sin θ ⋅ ( cos θ ) ′ cos 2 θ = cos 2 θ + sin 2 θ cos 2 θ {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\tan \theta ={\frac {\operatorname {d} }{\operatorname {d} \!\theta }}{\frac {\sin \theta }{\cos \theta }}={\frac {\left(\sin \theta \right)^{\prime }\cdot \cos \theta -\sin \theta \cdot \left(\cos \theta \right)^{\prime }}{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}} The numerator can be simplified to 1 by the Pythagorean identity, giving us, 1 cos 2 θ = sec 2 θ {\displaystyle {\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta } Therefore, d d θ tan θ = sec 2 θ {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\tan \theta =\sec ^{2}\theta } == Proofs of derivatives of inverse trigonometric functions == The following derivatives are found by setting a variable y equal to the inverse trigonometric function that we wish to take the derivative of. Using implicit differentiation and then solving for dy/dx, the derivative of the inverse function is found in terms of y. To convert dy/dx back into being in terms of x, we can draw a reference triangle on the unit circle, letting θ be y. Using the Pythagorean theorem and the definition of the regular trigonometric functions, we can finally express dy/dx in terms of x. 
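Before turning to the inverse functions, the derivatives established above (sin′ = cos, cos′ = −sin, tan′ = sec²) can be spot-checked numerically with a symmetric difference quotient. This is an illustrative sketch, not part of the proofs; the test angle and step size are arbitrary choices:

```python
import math

def central_diff(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

theta = 0.7  # arbitrary test angle, in radians

# d/dθ sin θ = cos θ
assert abs(central_diff(math.sin, theta) - math.cos(theta)) < 1e-8
# d/dθ cos θ = -sin θ
assert abs(central_diff(math.cos, theta) + math.sin(theta)) < 1e-8
# d/dθ tan θ = 1 + tan²θ = sec²θ
assert abs(central_diff(math.tan, theta) - (1 + math.tan(theta)**2)) < 1e-8
```

The symmetric quotient has truncation error O(h^2), so agreement well below 1e-8 at h = 1e-6 is expected.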
=== Differentiating the inverse sine function === We let y = arcsin x {\displaystyle y=\arcsin x\,\!} Where − π 2 ≤ y ≤ π 2 {\displaystyle -{\frac {\pi }{2}}\leq y\leq {\frac {\pi }{2}}} Then sin y = x {\displaystyle \sin y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x sin y = d d x x {\displaystyle {d \over dx}\sin y={d \over dx}x} cos y ⋅ d y d x = 1 {\displaystyle \cos y\cdot {dy \over dx}=1\,\!} Substituting cos y = 1 − sin 2 y {\displaystyle \cos y={\sqrt {1-\sin ^{2}y}}} in from above, 1 − sin 2 y ⋅ d y d x = 1 {\displaystyle {\sqrt {1-\sin ^{2}y}}\cdot {dy \over dx}=1} Substituting x = sin y {\displaystyle x=\sin y} in from above, 1 − x 2 ⋅ d y d x = 1 {\displaystyle {\sqrt {1-x^{2}}}\cdot {dy \over dx}=1} d y d x = 1 1 − x 2 {\displaystyle {dy \over dx}={\frac {1}{\sqrt {1-x^{2}}}}} === Differentiating the inverse cosine function === We let y = arccos x {\displaystyle y=\arccos x\,\!} Where 0 ≤ y ≤ π {\displaystyle 0\leq y\leq \pi } Then cos y = x {\displaystyle \cos y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x cos y = d d x x {\displaystyle {d \over dx}\cos y={d \over dx}x} − sin y ⋅ d y d x = 1 {\displaystyle -\sin y\cdot {dy \over dx}=1} Substituting sin y = 1 − cos 2 y {\displaystyle \sin y={\sqrt {1-\cos ^{2}y}}\,\!} in from above, we get − 1 − cos 2 y ⋅ d y d x = 1 {\displaystyle -{\sqrt {1-\cos ^{2}y}}\cdot {dy \over dx}=1} Substituting x = cos y {\displaystyle x=\cos y\,\!} in from above, we get − 1 − x 2 ⋅ d y d x = 1 {\displaystyle -{\sqrt {1-x^{2}}}\cdot {dy \over dx}=1} d y d x = − 1 1 − x 2 {\displaystyle {dy \over dx}=-{\frac {1}{\sqrt {1-x^{2}}}}} Alternatively, once the derivative of arcsin x {\displaystyle \arcsin x} is established, the derivative of arccos x {\displaystyle \arccos x} follows immediately by differentiating the identity arcsin x + arccos x = π / 2 {\displaystyle \arcsin x+\arccos x=\pi /2} so that ( 
arccos x ) ′ = − ( arcsin x ) ′ {\displaystyle (\arccos x)'=-(\arcsin x)'} . === Differentiating the inverse tangent function === We let y = arctan x {\displaystyle y=\arctan x\,\!} Where − π 2 < y < π 2 {\displaystyle -{\frac {\pi }{2}}<y<{\frac {\pi }{2}}} Then tan y = x {\displaystyle \tan y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x tan y = d d x x {\displaystyle {d \over dx}\tan y={d \over dx}x} Left side: d d x tan y = sec 2 y ⋅ d y d x = ( 1 + tan 2 y ) d y d x {\displaystyle {d \over dx}\tan y=\sec ^{2}y\cdot {dy \over dx}=(1+\tan ^{2}y){dy \over dx}} using the Pythagorean identity Right side: d d x x = 1 {\displaystyle {d \over dx}x=1} Therefore, ( 1 + tan 2 y ) d y d x = 1 {\displaystyle (1+\tan ^{2}y){dy \over dx}=1} Substituting x = tan y {\displaystyle x=\tan y\,\!} in from above, we get ( 1 + x 2 ) d y d x = 1 {\displaystyle (1+x^{2}){dy \over dx}=1} d y d x = 1 1 + x 2 {\displaystyle {dy \over dx}={\frac {1}{1+x^{2}}}} === Differentiating the inverse cotangent function === We let y = arccot x {\displaystyle y=\operatorname {arccot} x} where 0 < y < π {\displaystyle 0<y<\pi } . 
Then cot y = x {\displaystyle \cot y=x} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x cot y = d d x x {\displaystyle {\frac {d}{dx}}\cot y={\frac {d}{dx}}x} Left side: d d x cot y = − csc 2 y ⋅ d y d x = − ( 1 + cot 2 y ) d y d x {\displaystyle {d \over dx}\cot y=-\csc ^{2}y\cdot {dy \over dx}=-(1+\cot ^{2}y){dy \over dx}} using the Pythagorean identity Right side: d d x x = 1 {\displaystyle {d \over dx}x=1} Therefore, − ( 1 + cot 2 y ) d y d x = 1 {\displaystyle -(1+\cot ^{2}y){\frac {dy}{dx}}=1} Substituting x = cot y {\displaystyle x=\cot y} , − ( 1 + x 2 ) d y d x = 1 {\displaystyle -(1+x^{2}){\frac {dy}{dx}}=1} d y d x = − 1 1 + x 2 {\displaystyle {\frac {dy}{dx}}=-{\frac {1}{1+x^{2}}}} Alternatively, since the derivative of arctan x {\displaystyle \arctan x} was derived above, the identity arctan x + arccot x = π 2 {\displaystyle \arctan x+\operatorname {arccot} x={\dfrac {\pi }{2}}} immediately gives d d x arccot x = d d x ( π 2 − arctan x ) = − 1 1 + x 2 {\displaystyle {\begin{aligned}{\dfrac {d}{dx}}\operatorname {arccot} x&={\dfrac {d}{dx}}\left({\dfrac {\pi }{2}}-\arctan x\right)\\&=-{\dfrac {1}{1+x^{2}}}\end{aligned}}} === Differentiating the inverse secant function === ==== Using implicit differentiation ==== Let y = arcsec x ∣ | x | ≥ 1 {\displaystyle y=\operatorname {arcsec} x\ \mid |x|\geq 1} Then x = sec y ∣ y ∈ [ 0 , π 2 ) ∪ ( π 2 , π ] {\displaystyle x=\sec y\mid \ y\in \left[0,{\frac {\pi }{2}}\right)\cup \left({\frac {\pi }{2}},\pi \right]} d x d y = sec y tan y = | x | x 2 − 1 {\displaystyle {\frac {dx}{dy}}=\sec y\tan y=|x|{\sqrt {x^{2}-1}}} (The absolute value in the expression is necessary as the product of secant and tangent in the interval of y is always nonnegative, while the radical x 2 − 1 {\displaystyle {\sqrt {x^{2}-1}}} is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by
using the absolute value of x.) d y d x = 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}} ==== Using the chain rule ==== Alternatively, the derivative of arcsecant may be derived from the derivative of arccosine using the chain rule. Let y = arcsec x = arccos ( 1 x ) {\displaystyle y=\operatorname {arcsec} x=\arccos \left({\frac {1}{x}}\right)} Where | x | ≥ 1 {\displaystyle |x|\geq 1} and y ∈ [ 0 , π 2 ) ∪ ( π 2 , π ] {\displaystyle y\in \left[0,{\frac {\pi }{2}}\right)\cup \left({\frac {\pi }{2}},\pi \right]} Then, applying the chain rule to arccos ( 1 x ) {\displaystyle \arccos \left({\frac {1}{x}}\right)} : d y d x = − 1 1 − ( 1 x ) 2 ⋅ ( − 1 x 2 ) = 1 x 2 1 − 1 x 2 = 1 x 2 x 2 − 1 x 2 = 1 x 2 x 2 − 1 = 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}=-{\frac {1}{\sqrt {1-({\frac {1}{x}})^{2}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)={\frac {1}{x^{2}{\sqrt {1-{\frac {1}{x^{2}}}}}}}={\frac {1}{x^{2}{\frac {\sqrt {x^{2}-1}}{\sqrt {x^{2}}}}}}={\frac {1}{{\sqrt {x^{2}}}{\sqrt {x^{2}-1}}}}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}} === Differentiating the inverse cosecant function === ==== Using implicit differentiation ==== Let y = arccsc x ∣ | x | ≥ 1 {\displaystyle y=\operatorname {arccsc} x\ \mid |x|\geq 1} Then x = csc y ∣ y ∈ [ − π 2 , 0 ) ∪ ( 0 , π 2 ] {\displaystyle x=\csc y\ \mid \ y\in \left[-{\frac {\pi }{2}},0\right)\cup \left(0,{\frac {\pi }{2}}\right]} d x d y = − csc y cot y = − | x | x 2 − 1 {\displaystyle {\frac {dx}{dy}}=-\csc y\cot y=-|x|{\sqrt {x^{2}-1}}} (The absolute value in the expression is necessary as the product of cosecant and cotangent in the interval of y is always nonnegative, while the radical x 2 − 1 {\displaystyle {\sqrt {x^{2}-1}}} is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) 
d y d x = − 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {-1}{|x|{\sqrt {x^{2}-1}}}}} ==== Using the chain rule ==== Alternatively, the derivative of arccosecant may be derived from the derivative of arcsine using the chain rule. Let y = arccsc x = arcsin ( 1 x ) {\displaystyle y=\operatorname {arccsc} x=\arcsin \left({\frac {1}{x}}\right)} Where | x | ≥ 1 {\displaystyle |x|\geq 1} and y ∈ [ − π 2 , 0 ) ∪ ( 0 , π 2 ] {\displaystyle y\in \left[-{\frac {\pi }{2}},0\right)\cup \left(0,{\frac {\pi }{2}}\right]} Then, applying the chain rule to arcsin ( 1 x ) {\displaystyle \arcsin \left({\frac {1}{x}}\right)} : d y d x = 1 1 − ( 1 x ) 2 ⋅ ( − 1 x 2 ) = − 1 x 2 1 − 1 x 2 = − 1 x 2 x 2 − 1 x 2 = − 1 x 2 x 2 − 1 = − 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {1}{\sqrt {1-({\frac {1}{x}})^{2}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)=-{\frac {1}{x^{2}{\sqrt {1-{\frac {1}{x^{2}}}}}}}=-{\frac {1}{x^{2}{\frac {\sqrt {x^{2}-1}}{\sqrt {x^{2}}}}}}=-{\frac {1}{{\sqrt {x^{2}}}{\sqrt {x^{2}-1}}}}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}} == See also == Calculus – Branch of mathematics Derivative – Instantaneous rate of change (mathematics) Differentiation rules – Rules for computing derivatives of functions General Leibniz rule – Generalization of the product rule in calculus Inverse functions and differentiation – Formula for the derivative of an inverse function Linearity of differentiation – Calculus property List of integrals of inverse trigonometric functions List of trigonometric identities Trigonometry – Area of geometry, about angles and lengths == References == == Bibliography == Handbook of Mathematical Functions, Edited by Abramowitz and Stegun, National Bureau of Standards, Applied Mathematics Series, 55 (1964)
|
Wikipedia:Differentiation rules#0
|
This article is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus. == Elementary rules of differentiation == Unless otherwise stated, all functions are functions of real numbers ( R {\textstyle \mathbb {R} } ) that return real values, although, more generally, the formulas below apply wherever they are well defined, including the case of complex numbers ( C {\textstyle \mathbb {C} } ). === Constant term rule === For any value of c {\textstyle c} , where c ∈ R {\textstyle c\in \mathbb {R} } , if f ( x ) {\textstyle f(x)} is the constant function given by f ( x ) = c {\textstyle f(x)=c} , then d f d x = 0 {\textstyle {\frac {df}{dx}}=0} . ==== Proof ==== Let c ∈ R {\textstyle c\in \mathbb {R} } and f ( x ) = c {\textstyle f(x)=c} . By the definition of the derivative: f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h = lim h → 0 ( c ) − ( c ) h = lim h → 0 0 h = lim h → 0 0 = 0. {\displaystyle {\begin{aligned}f'(x)&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\\&=\lim _{h\to 0}{\frac {(c)-(c)}{h}}\\&=\lim _{h\to 0}{\frac {0}{h}}\\&=\lim _{h\to 0}0\\&=0.\end{aligned}}} This computation shows that the derivative of any constant function is 0. ==== Intuitive (geometric) explanation ==== The derivative of the function at a point is the slope of the line tangent to the curve at the point. The slope of the constant function is 0, because the tangent line to the constant function is horizontal and its angle is 0. In other words, the value of the constant function, y {\textstyle y} , will not change as the value of x {\textstyle x} increases or decreases. === Differentiation is linear === For any functions f {\textstyle f} and g {\textstyle g} and any real numbers a {\textstyle a} and b {\textstyle b} , the derivative of the function h ( x ) = a f ( x ) + b g ( x ) {\textstyle h(x)=af(x)+bg(x)} with respect to x {\textstyle x} is h ′ ( x ) = a f ′ ( x ) + b g ′ ( x ) {\textstyle h'(x)=af'(x)+bg'(x)} . 
In Leibniz's notation, this formula is written as: d ( a f + b g ) d x = a d f d x + b d g d x . {\displaystyle {\frac {d(af+bg)}{dx}}=a{\frac {df}{dx}}+b{\frac {dg}{dx}}.} Special cases include: The constant factor rule: ( a f ) ′ = a f ′ , {\displaystyle (af)'=af',} The sum rule: ( f + g ) ′ = f ′ + g ′ , {\displaystyle (f+g)'=f'+g',} The difference rule: ( f − g ) ′ = f ′ − g ′ . {\displaystyle (f-g)'=f'-g'.} === Product rule === For the functions f {\textstyle f} and g {\textstyle g} , the derivative of the function h ( x ) = f ( x ) g ( x ) {\textstyle h(x)=f(x)g(x)} with respect to x {\textstyle x} is: h ′ ( x ) = ( f g ) ′ ( x ) = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . {\displaystyle h'(x)=(fg)'(x)=f'(x)g(x)+f(x)g'(x).} In Leibniz's notation, this formula is written: d ( f g ) d x = g d f d x + f d g d x . {\displaystyle {\frac {d(fg)}{dx}}=g{\frac {df}{dx}}+f{\frac {dg}{dx}}.} === Chain rule === The derivative of the function h ( x ) = f ( g ( x ) ) {\textstyle h(x)=f(g(x))} is: h ′ ( x ) = f ′ ( g ( x ) ) ⋅ g ′ ( x ) . {\displaystyle h'(x)=f'(g(x))\cdot g'(x).} In Leibniz's notation, this formula is written as: d d x h ( x ) = d d z f ( z ) | z = g ( x ) ⋅ d d x g ( x ) , {\displaystyle {\frac {d}{dx}}h(x)=\left.{\frac {d}{dz}}f(z)\right|_{z=g(x)}\cdot {\frac {d}{dx}}g(x),} often abridged to: d h ( x ) d x = d f ( g ( x ) ) d g ( x ) ⋅ d g ( x ) d x . {\displaystyle {\frac {dh(x)}{dx}}={\frac {df(g(x))}{dg(x)}}\cdot {\frac {dg(x)}{dx}}.} Focusing on the notion of maps, and the differential being a map D {\textstyle {\text{D}}} , this formula is written in a more concise way as: [ D ( f ∘ g ) ] x = [ D f ] g ( x ) ⋅ [ D g ] x . {\displaystyle [{\text{D}}(f\circ g)]_{x}=[{\text{D}}f]_{g(x)}\cdot [{\text{D}}g]_{x}.} === Inverse function rule === If the function f {\textstyle f} has an inverse function g {\textstyle g} , meaning that g ( f ( x ) ) = x {\textstyle g(f(x))=x} and f ( g ( y ) ) = y {\textstyle f(g(y))=y} , then: g ′ = 1 f ′ ∘ g . 
{\displaystyle g'={\frac {1}{f'\circ g}}.} In Leibniz notation, this formula is written as: d x d y = 1 d y d x . {\displaystyle {\frac {dx}{dy}}={\frac {1}{\frac {dy}{dx}}}.} == Power laws, polynomials, quotients, and reciprocals == === Polynomial or elementary power rule === If f ( x ) = x r {\textstyle f(x)=x^{r}} , for any real number r ≠ 0 {\textstyle r\neq 0} , then: f ′ ( x ) = r x r − 1 . {\displaystyle f'(x)=rx^{r-1}.} When r = 1 {\textstyle r=1} , this formula becomes the special case that, if f ( x ) = x {\textstyle f(x)=x} , then f ′ ( x ) = 1 {\textstyle f'(x)=1} . Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial. === Reciprocal rule === The derivative of h ( x ) = 1 f ( x ) {\textstyle h(x)={\frac {1}{f(x)}}} for any (nonvanishing) function f {\textstyle f} is: h ′ ( x ) = − f ′ ( x ) ( f ( x ) ) 2 , {\displaystyle h'(x)=-{\frac {f'(x)}{(f(x))^{2}}},} wherever f {\textstyle f} is nonzero. In Leibniz's notation, this formula is written: d ( 1 f ) d x = − 1 f 2 d f d x . {\displaystyle {\frac {d\left({\frac {1}{f}}\right)}{dx}}=-{\frac {1}{f^{2}}}{\frac {df}{dx}}.} The reciprocal rule can be derived either from the quotient rule or from the combination of power rule and chain rule. === Quotient rule === If f {\textstyle f} and g {\textstyle g} are functions, then: ( f g ) ′ = f ′ g − g ′ f g 2 , {\displaystyle \left({\frac {f}{g}}\right)'={\frac {f'g-g'f}{g^{2}}},} wherever g {\textstyle g} is nonzero. This can be derived from the product rule and the reciprocal rule. === Generalized power rule === The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions f {\textstyle f} and g {\textstyle g} , ( f g ) ′ = ( e g ln f ) ′ = f g ( f ′ g f + g ′ ln f ) , {\displaystyle (f^{g})'=\left(e^{g\ln f}\right)'=f^{g}\left(f'{g \over f}+g'\ln f\right),\quad } wherever both sides are well defined. 
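The product, quotient, chain, and functional power rules stated above lend themselves to a quick numerical spot-check against a difference quotient. A sketch in Python, using f(x) = e^x and g(x) = sin x as arbitrary sample functions (chosen so that f > 0, as the functional power rule requires):

```python
import math

def d(fun, x, h=1e-6):
    """Central-difference approximation to fun'(x)."""
    return (fun(x + h) - fun(x - h)) / (2 * h)

f, df = math.exp, math.exp   # f(x) = e^x,   f'(x) = e^x
g, dg = math.sin, math.cos   # g(x) = sin x, g'(x) = cos x
x = 0.8                      # arbitrary test point with g(x) != 0

# Product rule: (fg)' = f'g + fg'
assert abs(d(lambda t: f(t) * g(t), x) - (df(x)*g(x) + f(x)*dg(x))) < 1e-6
# Quotient rule: (f/g)' = (f'g - g'f) / g^2
assert abs(d(lambda t: f(t) / g(t), x) - (df(x)*g(x) - dg(x)*f(x)) / g(x)**2) < 1e-6
# Chain rule: (f o g)' = f'(g(x)) * g'(x)
assert abs(d(lambda t: f(g(t)), x) - df(g(x)) * dg(x)) < 1e-6
# Functional power rule: (f^g)' = f^g * (f'*g/f + g'*ln f), valid here since f > 0
lhs = d(lambda t: f(t) ** g(t), x)
rhs = f(x)**g(x) * (df(x)*g(x)/f(x) + dg(x)*math.log(f(x)))
assert abs(lhs - rhs) < 1e-6
```

Swapping in other differentiable f and g (keeping f positive for the last check) exercises the same identities.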
Special cases: If f ( x ) = x a {\textstyle f(x)=x^{a}} , then f ′ ( x ) = a x a − 1 {\textstyle f'(x)=ax^{a-1}} when a {\textstyle a} is any nonzero real number and x {\textstyle x} is positive. The reciprocal rule may be derived as the special case where g ( x ) = − 1 {\textstyle g(x)=-1\!} . == Derivatives of exponential and logarithmic functions == d d x ( c a x ) = a c a x ln c , c > 0. {\displaystyle {\frac {d}{dx}}\left(c^{ax}\right)={ac^{ax}\ln c},\qquad c>0.} The equation above is true for all c {\displaystyle c} , but the derivative for c < 0 {\displaystyle c<0} yields a complex number. d d x ( e a x ) = a e a x . {\displaystyle {\frac {d}{dx}}\left(e^{ax}\right)=ae^{ax}.} d d x ( log c x ) = 1 x ln c , c > 1. {\displaystyle {\frac {d}{dx}}\left(\log _{c}x\right)={1 \over x\ln c},\qquad c>1.} The equation above is also true for all c {\textstyle c} but yields a complex number if c < 0 {\textstyle c<0} . d d x ( ln x ) = 1 x , x > 0. {\displaystyle {\frac {d}{dx}}\left(\ln x\right)={1 \over x},\qquad x>0.} d d x ( ln | x | ) = 1 x , x ≠ 0. {\displaystyle {\frac {d}{dx}}\left(\ln |x|\right)={1 \over x},\qquad x\neq 0.} d d x ( W ( x ) ) = 1 x + e W ( x ) , x > − 1 e , {\displaystyle {\frac {d}{dx}}\left(W(x)\right)={1 \over {x+e^{W(x)}}},\qquad x>-{1 \over e},} where W ( x ) {\textstyle W(x)} is the Lambert W function. d d x ( x x ) = x x ( 1 + ln x ) . {\displaystyle {\frac {d}{dx}}\left(x^{x}\right)=x^{x}(1+\ln x).} d d x ( f ( x ) g ( x ) ) = g ( x ) f ( x ) g ( x ) − 1 d f d x + f ( x ) g ( x ) ln ( f ( x ) ) d g d x , if f ( x ) > 0 and d f d x and d g d x exist. {\displaystyle {\frac {d}{dx}}\left(f(x)^{g(x)}\right)=g(x)f(x)^{g(x)-1}{\frac {df}{dx}}+f(x)^{g(x)}\ln {(f(x))}{\frac {dg}{dx}},\qquad {\text{if }}f(x)>0{\text{ and }}{\frac {df}{dx}}{\text{ and }}{\frac {dg}{dx}}{\text{ exist.}}} d d x ( f 1 ( x ) f 2 ( x ) ( . . . ) f n ( x ) ) = [ ∑ k = 1 n ∂ ∂ x k ( f 1 ( x 1 ) f 2 ( x 2 ) ( . . . ) f n ( x n ) ) ] | x 1 = x 2 = . . . 
= x n = x , if f i < n ( x ) > 0 and d f i d x exists. {\displaystyle {\frac {d}{dx}}\left(f_{1}(x)^{f_{2}(x)^{\left(...\right)^{f_{n}(x)}}}\right)=\left[\sum \limits _{k=1}^{n}{\frac {\partial }{\partial x_{k}}}\left(f_{1}(x_{1})^{f_{2}(x_{2})^{\left(...\right)^{f_{n}(x_{n})}}}\right)\right]{\biggr \vert }_{x_{1}=x_{2}=...=x_{n}=x},\qquad {\text{ if }}f_{i<n}(x)>0{\text{ and }}{\frac {df_{i}}{dx}}{\text{ exists.}}} === Logarithmic derivatives === The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule): ( ln f ) ′ = f ′ f , {\displaystyle (\ln f)'={\frac {f'}{f}},} wherever f {\textstyle f} is positive. Logarithmic differentiation is a technique which uses logarithms and its differentiation rules to simplify certain expressions before actually applying the derivative. Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction—each of which may lead to a simplified expression for taking derivatives. == Derivatives of trigonometric functions == The derivatives in the table above are for when the range of the inverse secant is [ 0 , π ] {\textstyle [0,\pi ]} and when the range of the inverse cosecant is [ − π 2 , π 2 ] {\textstyle \left[-{\frac {\pi }{2}},{\frac {\pi }{2}}\right]} . It is common to additionally define an inverse tangent function with two arguments, arctan ( y , x ) {\textstyle \arctan(y,x)} . Its value lies in the range [ − π , π ] {\textstyle [-\pi ,\pi ]} and reflects the quadrant of the point ( x , y ) {\textstyle (x,y)} . For the first and fourth quadrant (i.e., x > 0 {\displaystyle x>0} ), one has arctan ( y , x > 0 ) = arctan ( y x ) {\textstyle \arctan(y,x>0)=\arctan({\frac {y}{x}})} . Its partial derivatives are: ∂ arctan ( y , x ) ∂ y = x x 2 + y 2 and ∂ arctan ( y , x ) ∂ x = − y x 2 + y 2 . 
{\displaystyle {\frac {\partial \arctan(y,x)}{\partial y}}={\frac {x}{x^{2}+y^{2}}}\qquad {\text{and}}\qquad {\frac {\partial \arctan(y,x)}{\partial x}}={\frac {-y}{x^{2}+y^{2}}}.} == Derivatives of hyperbolic functions == == Derivatives of special functions == === Gamma function === Γ ( x ) = ∫ 0 ∞ t x − 1 e − t d t {\displaystyle \Gamma (x)=\int _{0}^{\infty }t^{x-1}e^{-t}\,dt} Γ ′ ( x ) = ∫ 0 ∞ t x − 1 e − t ln t d t = Γ ( x ) ( ∑ n = 1 ∞ ( ln ( 1 + 1 n ) − 1 x + n ) − 1 x ) = Γ ( x ) ψ ( x ) , {\displaystyle {\begin{aligned}\Gamma '(x)&=\int _{0}^{\infty }t^{x-1}e^{-t}\ln t\,dt\\&=\Gamma (x)\left(\sum _{n=1}^{\infty }\left(\ln \left(1+{\dfrac {1}{n}}\right)-{\dfrac {1}{x+n}}\right)-{\dfrac {1}{x}}\right)\\&=\Gamma (x)\psi (x),\end{aligned}}} with ψ ( x ) {\textstyle \psi (x)} being the digamma function, expressed by the parenthesized expression to the right of Γ ( x ) {\textstyle \Gamma (x)} in the line above. === Riemann zeta function === ζ ( x ) = ∑ n = 1 ∞ 1 n x {\displaystyle \zeta (x)=\sum _{n=1}^{\infty }{\frac {1}{n^{x}}}} ζ ′ ( x ) = − ∑ n = 1 ∞ ln n n x = − ln 2 2 x − ln 3 3 x − ln 4 4 x − ⋯ = − ∑ p prime p − x ln p ( 1 − p − x ) 2 ∏ q prime , q ≠ p 1 1 − q − x {\displaystyle {\begin{aligned}\zeta '(x)&=-\sum _{n=1}^{\infty }{\frac {\ln n}{n^{x}}}=-{\frac {\ln 2}{2^{x}}}-{\frac {\ln 3}{3^{x}}}-{\frac {\ln 4}{4^{x}}}-\cdots \\&=-\sum _{p{\text{ prime}}}{\frac {p^{-x}\ln p}{(1-p^{-x})^{2}}}\prod _{q{\text{ prime}},q\neq p}{\frac {1}{1-q^{-x}}}\end{aligned}}} == Derivatives of integrals == Suppose that it is required to differentiate with respect to x {\textstyle x} the function: F ( x ) = ∫ a ( x ) b ( x ) f ( x , t ) d t , {\displaystyle F(x)=\int _{a(x)}^{b(x)}f(x,t)\,dt,} where the functions f ( x , t ) {\textstyle f(x,t)} and ∂ ∂ x f ( x , t ) {\textstyle {\frac {\partial }{\partial x}}\,f(x,t)} are both continuous in both t {\textstyle t} and x {\textstyle x} in some region of the ( t , x ) {\textstyle (t,x)} plane, including a ( x ) ≤ t ≤ b ( x ) 
{\textstyle a(x)\leq t\leq b(x)} , where x 0 ≤ x ≤ x 1 {\textstyle x_{0}\leq x\leq x_{1}} , and the functions a ( x ) {\textstyle a(x)} and b ( x ) {\textstyle b(x)} are both continuous and both have continuous derivatives for x 0 ≤ x ≤ x 1 {\textstyle x_{0}\leq x\leq x_{1}} . Then, for x 0 ≤ x ≤ x 1 {\textstyle \,x_{0}\leq x\leq x_{1}} : F ′ ( x ) = f ( x , b ( x ) ) b ′ ( x ) − f ( x , a ( x ) ) a ′ ( x ) + ∫ a ( x ) b ( x ) ∂ ∂ x f ( x , t ) d t . {\displaystyle F'(x)=f(x,b(x))\,b'(x)-f(x,a(x))\,a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}\,f(x,t)\;dt\,.} This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus. == Derivatives to nth order == Some rules exist for computing the n {\textstyle n} th derivative of functions, where n {\textstyle n} is a positive integer, including: === Faà di Bruno's formula === If f {\textstyle f} and g {\textstyle g} are n {\textstyle n} -times differentiable, then: d n d x n [ f ( g ( x ) ) ] = n ! ∑ { k m } f ( r ) ( g ( x ) ) ∏ m = 1 n 1 k m ! ( g ( m ) ( x ) ) k m , {\displaystyle {\frac {d^{n}}{dx^{n}}}[f(g(x))]=n!\sum _{\{k_{m}\}}f^{(r)}(g(x))\prod _{m=1}^{n}{\frac {1}{k_{m}!}}\left(g^{(m)}(x)\right)^{k_{m}},} where r = ∑ m = 1 n − 1 k m {\textstyle r=\sum _{m=1}^{n-1}k_{m}} and the set { k m } {\textstyle \{k_{m}\}} consists of all non-negative integer solutions of the Diophantine equation ∑ m = 1 n m k m = n {\textstyle \sum _{m=1}^{n}mk_{m}=n} . === General Leibniz rule === If f {\textstyle f} and g {\textstyle g} are n {\textstyle n} -times differentiable, then: d n d x n [ f ( x ) g ( x ) ] = ∑ k = 0 n ( n k ) d n − k d x n − k f ( x ) d k d x k g ( x ) . 
{\displaystyle {\frac {d^{n}}{dx^{n}}}[f(x)g(x)]=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {d^{n-k}}{dx^{n-k}}}f(x){\frac {d^{k}}{dx^{k}}}g(x).} == See also == Differentiable function – Mathematical function whose derivative exists Differential of a function – Notion in calculus Differentiation of integrals – Problem in mathematics Differentiation under the integral sign – Differentiation under the integral sign formula Hyperbolic functions – Collective name of 6 mathematical functions Inverse hyperbolic functions – Mathematical functions Inverse trigonometric functions – Inverse functions of sin, cos, tan, etc. Lists of integrals List of mathematical functions Matrix calculus – Specialized notation for multivariable calculus Trigonometric functions – Functions of an angle Vector calculus identities – Mathematical identities == References == == Sources and further reading == These rules are given in many books, both on elementary and advanced calculus, in pure and applied mathematics. Those in this article (in addition to the above references) can be found in: Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M.R. Spiegel, J. Liu, Schaum's Outline Series, 2009, ISBN 978-0-07-154855-7. The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2. Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3 NIST Handbook of Mathematical Functions, F. W. J. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark, Cambridge University Press, 2010, ISBN 978-0-521-19225-5. == External links == Derivative calculator with formula simplification The table of derivatives with animated proofs
|
Wikipedia:Difunctional#0
|
In mathematics, a binary relation associates some elements of one set called the domain with some elements of another set called the codomain. Precisely, a binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} is a set of ordered pairs ( x , y ) {\displaystyle (x,y)} , where x {\displaystyle x} is an element of X {\displaystyle X} and y {\displaystyle y} is an element of Y {\displaystyle Y} . It encodes the common concept of relation: an element x {\displaystyle x} is related to an element y {\displaystyle y} , if and only if the pair ( x , y ) {\displaystyle (x,y)} belongs to the set of ordered pairs that defines the binary relation. An example of a binary relation is the "divides" relation over the set of prime numbers P {\displaystyle \mathbb {P} } and the set of integers Z {\displaystyle \mathbb {Z} } , in which each prime p {\displaystyle p} is related to each integer z {\displaystyle z} that is a multiple of p {\displaystyle p} , but not to an integer that is not a multiple of p {\displaystyle p} . In this relation, for instance, the prime number 2 {\displaystyle 2} is related to numbers such as − 4 {\displaystyle -4} , 0 {\displaystyle 0} , 6 {\displaystyle 6} , 10 {\displaystyle 10} , but not to 1 {\displaystyle 1} or 9 {\displaystyle 9} , just as the prime number 3 {\displaystyle 3} is related to 0 {\displaystyle 0} , 6 {\displaystyle 6} , and 9 {\displaystyle 9} , but not to 4 {\displaystyle 4} or 13 {\displaystyle 13} . Binary relations, and especially homogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. These include, among others: the "is greater than", "is equal to", and "divides" relations in arithmetic; the "is congruent to" relation in geometry; the "is adjacent to" relation in graph theory; the "is orthogonal to" relation in linear algebra. A function may be defined as a binary relation that meets additional constraints. Binary relations are also heavily used in computer science. 
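The "divides" relation described above is easy to materialize as an explicit set of ordered pairs once P and Z are cut down to finite slices. A small Python sketch (the bounds chosen here are arbitrary):

```python
primes = [2, 3, 5, 7]        # a finite slice of the primes P
integers = range(-4, 14)     # a finite slice of the integers Z

# The relation as a set of ordered pairs (p, z) with p dividing z
divides = {(p, z) for p in primes for z in integers if z % p == 0}

# The pairs named in the text above
assert all(pair in divides for pair in [(2, -4), (2, 0), (2, 6), (2, 10)])
assert (2, 1) not in divides and (2, 9) not in divides
assert (3, 0) in divides and (3, 9) in divides
assert (3, 4) not in divides and (3, 13) not in divides
```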
A binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} is an element of the power set of X × Y . {\displaystyle X\times Y.} Since the latter set is ordered by inclusion ( ⊆ {\displaystyle \subseteq } ), each relation has a place in the lattice of subsets of X × Y . {\displaystyle X\times Y.} A binary relation is called a homogeneous relation when X = Y {\displaystyle X=Y} . A binary relation is also called a heterogeneous relation when it is not necessary that X = Y {\displaystyle X=Y} . Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations, for which there are textbooks by Ernst Schröder, Clarence Lewis, and Gunther Schmidt. A deeper analysis of relations involves decomposing them into subsets called concepts, and placing them in a complete lattice. In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox. A binary relation is the most studied special case n = 2 {\displaystyle n=2} of an n {\displaystyle n} -ary relation over sets X 1 , … , X n {\displaystyle X_{1},\dots ,X_{n}} , which is a subset of the Cartesian product X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} == Definition == Given sets X {\displaystyle X} and Y {\displaystyle Y} , the Cartesian product X × Y {\displaystyle X\times Y} is defined as { ( x , y ) ∣ x ∈ X and y ∈ Y } , {\displaystyle \{(x,y)\mid x\in X{\text{ and }}y\in Y\},} and its elements are called ordered pairs. 
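For small finite sets the Cartesian product, and hence the space of all binary relations over it, can be enumerated directly. A Python sketch with arbitrary example sets X and Y:

```python
from itertools import product

X = {1, 2}
Y = {"a", "b"}

# The Cartesian product X × Y: the set of all ordered pairs (x, y)
cartesian = set(product(X, Y))
assert cartesian == {(1, "a"), (1, "b"), (2, "a"), (2, "b")}

# A binary relation over X and Y is any subset of X × Y, so there are
# 2^(|X|·|Y|) = 16 possible relations over these two sets.
R = {(1, "a"), (2, "b")}
assert R <= cartesian  # R is an element of the power set of X × Y
```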
A binary relation R {\displaystyle R} over sets X {\displaystyle X} and Y {\displaystyle Y} is a subset of X × Y . {\displaystyle X\times Y.} The set X {\displaystyle X} is called the domain or set of departure of R {\displaystyle R} , and the set Y {\displaystyle Y} the codomain or set of destination of R {\displaystyle R} . In order to specify the choices of the sets X {\displaystyle X} and Y {\displaystyle Y} , some authors define a binary relation or correspondence as an ordered triple ( X , Y , G ) {\displaystyle (X,Y,G)} , where G {\displaystyle G} is a subset of X × Y {\displaystyle X\times Y} called the graph of the binary relation. The statement ( x , y ) ∈ R {\displaystyle (x,y)\in R} reads " x {\displaystyle x} is R {\displaystyle R} -related to y {\displaystyle y} " and is denoted by x R y {\displaystyle xRy} . The domain of definition or active domain of R {\displaystyle R} is the set of all x {\displaystyle x} such that x R y {\displaystyle xRy} for at least one y {\displaystyle y} . The codomain of definition, active codomain, image or range of R {\displaystyle R} is the set of all y {\displaystyle y} such that x R y {\displaystyle xRy} for at least one x {\displaystyle x} . The field of R {\displaystyle R} is the union of its domain of definition and its codomain of definition. When X = Y , {\displaystyle X=Y,} a binary relation is called a homogeneous relation (or endorelation). To emphasize the fact that X {\displaystyle X} and Y {\displaystyle Y} are allowed to be different, a binary relation is also called a heterogeneous relation. The prefix hetero is from the Greek ἕτερος (heteros, "other, another, different"). A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-like symmetry of a homogeneous relation on a set where A = B . {\displaystyle A=B.} Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... 
a variant of the theory has evolved that treats relations from the very beginning as heterogeneous or rectangular, i.e. as relations where the normal case is that they are relations between different sets." The terms correspondence, dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product X × Y {\displaystyle X\times Y} without reference to X {\displaystyle X} and Y {\displaystyle Y} , and reserve the term "correspondence" for a binary relation with reference to X {\displaystyle X} and Y {\displaystyle Y} . In a binary relation, the order of the elements is important; if x ≠ y {\displaystyle x\neq y} then y R x {\displaystyle yRx} can be true or false independently of x R y {\displaystyle xRy} . For example, 3 {\displaystyle 3} divides 9 {\displaystyle 9} , but 9 {\displaystyle 9} does not divide 3 {\displaystyle 3} . == Operations == === Union === If R {\displaystyle R} and S {\displaystyle S} are binary relations over sets X {\displaystyle X} and Y {\displaystyle Y} then R ∪ S = { ( x , y ) ∣ x R y or x S y } {\displaystyle R\cup S=\{(x,y)\mid xRy{\text{ or }}xSy\}} is the union relation of R {\displaystyle R} and S {\displaystyle S} over X {\displaystyle X} and Y {\displaystyle Y} . The identity element is the empty relation. For example, ≤ {\displaystyle \leq } is the union of < and =, and ≥ {\displaystyle \geq } is the union of > and =. === Intersection === If R {\displaystyle R} and S {\displaystyle S} are binary relations over sets X {\displaystyle X} and Y {\displaystyle Y} then R ∩ S = { ( x , y ) ∣ x R y and x S y } {\displaystyle R\cap S=\{(x,y)\mid xRy{\text{ and }}xSy\}} is the intersection relation of R {\displaystyle R} and S {\displaystyle S} over X {\displaystyle X} and Y {\displaystyle Y} . The identity element is the universal relation. 
For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2". === Composition === If R {\displaystyle R} is a binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} , and S {\displaystyle S} is a binary relation over sets Y {\displaystyle Y} and Z {\displaystyle Z} then S ∘ R = { ( x , z ) ∣ there exists y ∈ Y such that x R y and y S z } {\displaystyle S\circ R=\{(x,z)\mid {\text{ there exists }}y\in Y{\text{ such that }}xRy{\text{ and }}ySz\}} (also denoted by R ; S {\displaystyle R;S} ) is the composition relation of R {\displaystyle R} and S {\displaystyle S} over X {\displaystyle X} and Z {\displaystyle Z} . The identity element is the identity relation. The order of R {\displaystyle R} and S {\displaystyle S} in the notation S ∘ R , {\displaystyle S\circ R,} used here agrees with the standard notational order for composition of functions. For example, the composition (is mother of) ∘ {\displaystyle \circ } (is parent of) yields (is maternal grandparent of), while the composition (is parent of) ∘ {\displaystyle \circ } (is mother of) yields (is grandmother of). For the former case, if x {\displaystyle x} is the parent of y {\displaystyle y} and y {\displaystyle y} is the mother of z {\displaystyle z} , then x {\displaystyle x} is the maternal grandparent of z {\displaystyle z} . === Converse === If R {\displaystyle R} is a binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} then R T = { ( y , x ) ∣ x R y } {\displaystyle R^{\textsf {T}}=\{(y,x)\mid xRy\}} is the converse relation, also called inverse relation, of R {\displaystyle R} over Y {\displaystyle Y} and X {\displaystyle X} . For example, = {\displaystyle =} is the converse of itself, as is ≠ {\displaystyle \neq } , and < {\displaystyle <} and > {\displaystyle >} are each other's converse, as are ≤ {\displaystyle \leq } and ≥ .
{\displaystyle \geq .} A binary relation is equal to its converse if and only if it is symmetric. === Complement === If R {\displaystyle R} is a binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} then R ¯ = { ( x , y ) ∣ ¬ x R y } {\displaystyle {\bar {R}}=\{(x,y)\mid \neg xRy\}} (also denoted by ¬ R {\displaystyle \neg R} ) is the complementary relation of R {\displaystyle R} over X {\displaystyle X} and Y {\displaystyle Y} . For example, = {\displaystyle =} and ≠ {\displaystyle \neq } are each other's complement, as are ⊆ {\displaystyle \subseteq } and ⊈ {\displaystyle \not \subseteq } , ⊇ {\displaystyle \supseteq } and ⊉ {\displaystyle \not \supseteq } , ∈ {\displaystyle \in } and ∉ {\displaystyle \not \in } , and for total orders also < {\displaystyle <} and ≥ {\displaystyle \geq } , and > {\displaystyle >} and ≤ {\displaystyle \leq } . The complement of the converse relation R T {\displaystyle R^{\textsf {T}}} is the converse of the complement: R T ¯ = R ¯ T . {\displaystyle {\overline {R^{\mathsf {T}}}}={\bar {R}}^{\mathsf {T}}.} If X = Y , {\displaystyle X=Y,} the complement has the following properties: If a relation is symmetric, then so is the complement. The complement of a reflexive relation is irreflexive—and vice versa. The complement of a strict weak order is a total preorder—and vice versa. === Restriction === If R {\displaystyle R} is a binary homogeneous relation over a set X {\displaystyle X} and S {\displaystyle S} is a subset of X {\displaystyle X} then R | S = { ( x , y ) ∣ x R y and x ∈ S and y ∈ S } {\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S{\text{ and }}y\in S\}} is the restriction relation of R {\displaystyle R} to S {\displaystyle S} over X {\displaystyle X} . 
If R {\displaystyle R} is a binary relation over sets X {\displaystyle X} and Y {\displaystyle Y} and if S {\displaystyle S} is a subset of X {\displaystyle X} then R | S = { ( x , y ) ∣ x R y and x ∈ S } {\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S\}} is the left-restriction relation of R {\displaystyle R} to S {\displaystyle S} over X {\displaystyle X} and Y {\displaystyle Y} . If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation " x {\displaystyle x} is parent of y {\displaystyle y} " to females yields the relation " x {\displaystyle x} is mother of the woman y {\displaystyle y} "; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother. Also, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation ≤ {\displaystyle \leq } is that every non-empty subset S ⊆ R {\displaystyle S\subseteq \mathbb {R} } with an upper bound in R {\displaystyle \mathbb {R} } has a least upper bound (also called supremum) in R . {\displaystyle \mathbb {R} .} However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation ≤ {\displaystyle \leq } to the rational numbers. 
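The operations of this section, and the containment just noted between the transitive closure of a restriction and the restriction of the transitive closure, can be sketched on finite relations as follows (Python; the family relation and all function names are illustrative, not from the article):

```python
# Relations as finite sets of ordered pairs; all names are illustrative.

def compose(S, R):
    """S o R = {(x, z) : there exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y) in R for (u, z) in S if y == u}

def converse(R):
    """Converse relation: swap every pair."""
    return {(y, x) for (x, y) in R}

def restrict(R, S):
    """Restriction of a homogeneous relation R to the subset S."""
    return {(x, y) for (x, y) in R if x in S and y in S}

def transitive_closure(R):
    """Smallest transitive relation containing R (finite fixed point)."""
    T = set(R)
    while True:
        new = compose(T, T) - T
        if not new:
            return T
        T |= new

# Hypothetical family: eve is a parent of dad, dad is a parent of kid.
parent = {("eve", "dad"), ("dad", "kid")}
females = {"eve", "kid"}

# Composing "is parent of" with itself gives "is grandparent of":
assert compose(parent, parent) == {("eve", "kid")}

# Closure of the restriction vs. restriction of the closure:
a = transitive_closure(restrict(parent, females))   # empty here
b = restrict(transitive_closure(parent), females)   # {("eve", "kid")}
assert a < b   # strict containment, as the text notes
```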
A binary relation R {\displaystyle R} over sets X {\displaystyle X} and Y {\displaystyle Y} is said to be contained in a relation S {\displaystyle S} over X {\displaystyle X} and Y {\displaystyle Y} , written R ⊆ S , {\displaystyle R\subseteq S,} if R {\displaystyle R} is a subset of S {\displaystyle S} , that is, for all x ∈ X {\displaystyle x\in X} and y ∈ Y , {\displaystyle y\in Y,} if x R y {\displaystyle xRy} , then x S y {\displaystyle xSy} . If R {\displaystyle R} is contained in S {\displaystyle S} and S {\displaystyle S} is contained in R {\displaystyle R} , then R {\displaystyle R} and S {\displaystyle S} are called equal, written R = S {\displaystyle R=S} . If R {\displaystyle R} is contained in S {\displaystyle S} but S {\displaystyle S} is not contained in R {\displaystyle R} , then R {\displaystyle R} is said to be smaller than S {\displaystyle S} , written R ⊊ S . {\displaystyle R\subsetneq S.} For example, on the rational numbers, the relation > {\displaystyle >} is smaller than ≥ {\displaystyle \geq } , and equal to the composition > ∘ > {\displaystyle >\circ >} . === Matrix representation === Binary relations over sets X {\displaystyle X} and Y {\displaystyle Y} can be represented algebraically by logical matrices indexed by X {\displaystyle X} and Y {\displaystyle Y} with entries in the Boolean semiring (addition corresponds to OR and multiplication to AND) where matrix addition corresponds to union of relations, matrix multiplication corresponds to composition of relations (of a relation over X {\displaystyle X} and Y {\displaystyle Y} and a relation over Y {\displaystyle Y} and Z {\displaystyle Z} ), the Hadamard product corresponds to intersection of relations, the zero matrix corresponds to the empty relation, and the matrix of ones corresponds to the universal relation.
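The logical-matrix correspondence can be sketched directly (Python; the row/column orderings and helper names are choices made here, with Boolean matrix product standing in for composition):

```python
# Logical matrices over the Boolean semiring: OR is addition, AND is
# multiplication. Rows are indexed by X, columns by Y (ordering illustrative).

def to_matrix(R, X, Y):
    return [[(x, y) in R for y in Y] for x in X]

def bool_matmul(A, B):
    # (AB)[i][k] = OR over j of (A[i][j] AND B[j][k])
    return [[any(a and b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

X = [1, 2]; Y = ["a", "b"]; Z = ["p"]
R = {(1, "a"), (2, "b")}          # relation over X and Y
S = {("a", "p")}                  # relation over Y and Z

# The Boolean matrix product matches the composition S o R = {(1, "p")}:
SR = {(x, z) for (x, y) in R for (u, z) in S if y == u}
assert bool_matmul(to_matrix(R, X, Y), to_matrix(S, Y, Z)) == to_matrix(SR, X, Z)
```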
Homogeneous relations (when X = Y {\displaystyle X=Y} ) form a matrix semiring (indeed, a matrix semialgebra over the Boolean semiring) where the identity matrix corresponds to the identity relation. == Examples == == Types of binary relations == Some important types of binary relations R {\displaystyle R} over sets X {\displaystyle X} and Y {\displaystyle Y} are listed below. Uniqueness properties: Injective (also called left-unique): for all x , y ∈ X {\displaystyle x,y\in X} and all z ∈ Y , {\displaystyle z\in Y,} if x R z {\displaystyle xRz} and y R z {\displaystyle yRz} then x = y {\displaystyle x=y} . In other words, every element of the codomain has at most one preimage element. For such a relation, Y {\displaystyle Y} is called a primary key of R {\displaystyle R} . For example, the green and blue binary relations in the diagram are injective, but the red one is not (as it relates both − 1 {\displaystyle -1} and 1 {\displaystyle 1} to 1 {\displaystyle 1} ), nor the black one (as it relates both − 1 {\displaystyle -1} and 1 {\displaystyle 1} to 0 {\displaystyle 0} ). Functional (also called right-unique or univalent): for all x ∈ X {\displaystyle x\in X} and all y , z ∈ Y , {\displaystyle y,z\in Y,} if x R y {\displaystyle xRy} and x R z {\displaystyle xRz} then y = z {\displaystyle y=z} . In other words, every element of the domain has at most one image element. Such a binary relation is called a partial function or partial mapping. For such a relation, { X } {\displaystyle \{X\}} is called a primary key of R {\displaystyle R} . For example, the red and green binary relations in the diagram are functional, but the blue one is not (as it relates 1 {\displaystyle 1} to both 1 {\displaystyle 1} and − 1 {\displaystyle -1} ), nor the black one (as it relates 0 {\displaystyle 0} to both − 1 {\displaystyle -1} and 1 {\displaystyle 1} ). One-to-one: injective and functional. 
For example, the green binary relation in the diagram is one-to-one, but the red, blue and black ones are not. One-to-many: injective and not functional. For example, the blue binary relation in the diagram is one-to-many, but the red, green and black ones are not. Many-to-one: functional and not injective. For example, the red binary relation in the diagram is many-to-one, but the green, blue and black ones are not. Many-to-many: neither injective nor functional. For example, the black binary relation in the diagram is many-to-many, but the red, green and blue ones are not. Totality properties (only definable if the domain X {\displaystyle X} and codomain Y {\displaystyle Y} are specified): Total (also called left-total): for all x ∈ X {\displaystyle x\in X} there exists a y ∈ Y {\displaystyle y\in Y} such that x R y {\displaystyle xRy} . In other words, every element of the domain has at least one image element. In other words, the domain of definition of R {\displaystyle R} is equal to X {\displaystyle X} . This property is different from the definition of connected (also called total by some authors) in Properties. Such a binary relation is called a multivalued function. For example, the red and green binary relations in the diagram are total, but the blue one is not (as it does not relate − 1 {\displaystyle -1} to any real number), nor the black one (as it does not relate 2 {\displaystyle 2} to any real number). As another example, > {\displaystyle >} is a total relation over the integers. But it is not a total relation over the positive integers, because there is no y {\displaystyle y} in the positive integers such that 1 > y {\displaystyle 1>y} . However, < {\displaystyle <} is a total relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is total: for a given x {\displaystyle x} , choose y = x {\displaystyle y=x} .
Surjective (also called right-total): for all y ∈ Y {\displaystyle y\in Y} , there exists an x ∈ X {\displaystyle x\in X} such that x R y {\displaystyle xRy} . In other words, every element of the codomain has at least one preimage element. In other words, the codomain of definition of R {\displaystyle R} is equal to Y {\displaystyle Y} . For example, the green and blue binary relations in the diagram are surjective, but the red one is not (as it does not relate any real number to − 1 {\displaystyle -1} ), nor the black one (as it does not relate any real number to 2 {\displaystyle 2} ). Uniqueness and totality properties (only definable if the domain X {\displaystyle X} and codomain Y {\displaystyle Y} are specified): A function (also called mapping): a binary relation that is functional and total. In other words, every element of the domain has exactly one image element. For example, the red and green binary relations in the diagram are functions, but the blue and black ones are not. An injection: a function that is injective. For example, the green relation in the diagram is an injection, but the red one is not; the black and the blue relations are not even functions. A surjection: a function that is surjective. For example, the green relation in the diagram is a surjection, but the red one is not. A bijection: a function that is injective and surjective. In other words, every element of the domain has exactly one image element and every element of the codomain has exactly one preimage element. For example, the green binary relation in the diagram is a bijection, but the red one is not. If relations over proper classes are allowed: Set-like (also called local): for all x ∈ X {\displaystyle x\in X} , the class of all y ∈ Y {\displaystyle y\in Y} such that y R x {\displaystyle yRx} , i.e. { y ∈ Y , y R x } {\displaystyle \{y\in Y,yRx\}} , is a set. For example, the relation ∈ {\displaystyle \in } is set-like, and every relation on two sets is set-like.
The usual ordering < over the class of ordinal numbers is a set-like relation, while its inverse > is not. == Sets versus classes == Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of axiomatic set theory. For example, to model the general concept of "equality" as a binary relation = {\displaystyle =} , take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory. In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" set A {\displaystyle A} , that contains all the objects of interest, and work with the restriction = A {\displaystyle =_{A}} instead of = {\displaystyle =} . Similarly, the "subset of" relation ⊆ {\displaystyle \subseteq } needs to be restricted to have domain and codomain P ( A ) {\displaystyle P(A)} (the power set of a specific set A {\displaystyle A} ): the resulting set relation can be denoted by ⊆ A . {\displaystyle \subseteq _{A}.} Also, the "member of" relation needs to be restricted to have domain A {\displaystyle A} and codomain P ( A ) {\displaystyle P(A)} to obtain a binary relation ∈ A {\displaystyle \in _{A}} that is a set. Bertrand Russell has shown that assuming ∈ {\displaystyle \in } to be defined over all sets leads to a contradiction in naive set theory, see Russell's paradox. Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. 
(A minor modification needs to be made to the concept of the ordered triple ( X , Y , G ) {\displaystyle (X,Y,G)} , as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.) With this definition one can for instance define a binary relation over every set and its power set. == Homogeneous relation == A homogeneous relation over a set X {\displaystyle X} is a binary relation over X {\displaystyle X} and itself, i.e. it is a subset of the Cartesian product X × X . {\displaystyle X\times X.} It is also simply called a (binary) relation over X {\displaystyle X} . A homogeneous relation R {\displaystyle R} over a set X {\displaystyle X} may be identified with a directed simple graph permitting loops, where X {\displaystyle X} is the vertex set and R {\displaystyle R} is the edge set (there is an edge from a vertex x {\displaystyle x} to a vertex y {\displaystyle y} if and only if x R y {\displaystyle xRy} ). The set of all homogeneous relations B ( X ) {\displaystyle {\mathcal {B}}(X)} over a set X {\displaystyle X} is the power set 2 X × X {\displaystyle 2^{X\times X}} which is a Boolean algebra augmented with the involution of mapping of a relation to its converse relation. Considering composition of relations as a binary operation on B ( X ) {\displaystyle {\mathcal {B}}(X)} , it forms a semigroup with involution. Some important properties that a homogeneous relation R {\displaystyle R} over a set X {\displaystyle X} may have are: Reflexive: for all x ∈ X , {\displaystyle x\in X,} x R x {\displaystyle xRx} . For example, ≥ {\displaystyle \geq } is a reflexive relation but > is not. Irreflexive: for all x ∈ X , {\displaystyle x\in X,} not x R x {\displaystyle xRx} . For example, > {\displaystyle >} is an irreflexive relation, but ≥ {\displaystyle \geq } is not. Symmetric: for all x , y ∈ X , {\displaystyle x,y\in X,} if x R y {\displaystyle xRy} then y R x {\displaystyle yRx} . 
For example, "is a blood relative of" is a symmetric relation. Antisymmetric: for all x , y ∈ X , {\displaystyle x,y\in X,} if x R y {\displaystyle xRy} and y R x {\displaystyle yRx} then x = y . {\displaystyle x=y.} For example, ≥ {\displaystyle \geq } is an antisymmetric relation. Asymmetric: for all x , y ∈ X , {\displaystyle x,y\in X,} if x R y {\displaystyle xRy} then not y R x {\displaystyle yRx} . A relation is asymmetric if and only if it is both antisymmetric and irreflexive. For example, > is an asymmetric relation, but ≥ {\displaystyle \geq } is not. Transitive: for all x , y , z ∈ X , {\displaystyle x,y,z\in X,} if x R y {\displaystyle xRy} and y R z {\displaystyle yRz} then x R z {\displaystyle xRz} . A transitive relation is irreflexive if and only if it is asymmetric. For example, "is ancestor of" is a transitive relation, while "is parent of" is not. Connected: for all x , y ∈ X , {\displaystyle x,y\in X,} if x ≠ y {\displaystyle x\neq y} then x R y {\displaystyle xRy} or y R x {\displaystyle yRx} . Strongly connected: for all x , y ∈ X , {\displaystyle x,y\in X,} x R y {\displaystyle xRy} or y R x {\displaystyle yRx} . Dense: for all x , y ∈ X , {\displaystyle x,y\in X,} if x R y , {\displaystyle xRy,} then some z ∈ X {\displaystyle z\in X} exists such that x R z {\displaystyle xRz} and z R y {\displaystyle zRy} . A partial order is a relation that is reflexive, antisymmetric, and transitive. A strict partial order is a relation that is irreflexive, asymmetric, and transitive. A total order is a relation that is reflexive, antisymmetric, transitive and connected. A strict total order is a relation that is irreflexive, asymmetric, transitive and connected. An equivalence relation is a relation that is reflexive, symmetric, and transitive. 
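These properties translate directly into finite checks (Python sketch; the carrier set and the divisibility relation are illustrative choices):

```python
# Property checks transcribed from the definitions, for a finite set X.

def reflexive(R, X):    return all((x, x) in R for x in X)
def irreflexive(R, X):  return all((x, x) not in R for x in X)
def symmetric(R):       return all((y, x) in R for (x, y) in R)
def antisymmetric(R):   return all(x == y for (x, y) in R if (y, x) in R)
def transitive(R):      return all((x, z) in R
                                   for (x, y) in R for (u, z) in R if y == u)

X = {1, 2, 3, 4, 6, 12}
divides = {(x, y) for x in X for y in X if y % x == 0}

# "x divides y" is reflexive, antisymmetric and transitive: a partial order.
assert reflexive(divides, X) and antisymmetric(divides) and transitive(divides)
# 2 and 3 are incomparable, so it is not connected (hence not a total order).
assert (2, 3) not in divides and (3, 2) not in divides
```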
For example, " x {\displaystyle x} divides y {\displaystyle y} " is a partial, but not a total order on natural numbers N , {\displaystyle \mathbb {N} ,} " x < y {\displaystyle x<y} " is a strict total order on N , {\displaystyle \mathbb {N} ,} and " x {\displaystyle x} is parallel to y {\displaystyle y} " is an equivalence relation on the set of all lines in the Euclidean plane. All operations defined in section § Operations also apply to homogeneous relations. Beyond that, a homogeneous relation over a set X {\displaystyle X} may be subjected to closure operations like: Reflexive closure: the smallest reflexive relation over X {\displaystyle X} containing R {\displaystyle R} , Transitive closure: the smallest transitive relation over X {\displaystyle X} containing R {\displaystyle R} , Equivalence closure: the smallest equivalence relation over X {\displaystyle X} containing R {\displaystyle R} . == Calculus of relations == Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion R ⊆ S , {\displaystyle R\subseteq S,} meaning that a R b {\displaystyle aRb} implies a S b {\displaystyle aSb} , sets the scene in a lattice of relations. But since P ⊆ Q ≡ ( P ∩ Q ¯ = ∅ ) ≡ ( P ∩ Q = P ) , {\displaystyle P\subseteq Q\equiv (P\cap {\bar {Q}}=\varnothing )\equiv (P\cap Q=P),} the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according to Schröder rules provides a calculus to work in the power set of A × B . {\displaystyle A\times B.} In contrast to homogeneous relations, the composition of relations operation is only a partial function.
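The closure operations named above can be sketched as follows (Python; on finite sets the equivalence closure can be computed as the reflexive-transitive closure of the symmetric closure, matching the "smallest containing" characterization — the decomposition is a standard construction, not taken from the article):

```python
def reflexive_closure(R, X):
    """Smallest reflexive relation over X containing R."""
    return R | {(x, x) for x in X}

def symmetric_closure(R):
    """Smallest symmetric relation containing R."""
    return R | {(y, x) for (x, y) in R}

def transitive_closure(R):
    """Smallest transitive relation containing R (finite fixed point)."""
    T = set(R)
    while True:
        new = {(x, z) for (x, y) in T for (u, z) in T if y == u} - T
        if not new:
            return T
        T |= new

def equivalence_closure(R, X):
    return transitive_closure(reflexive_closure(symmetric_closure(R), X))

assert transitive_closure({(1, 2), (2, 3)}) == {(1, 2), (2, 3), (1, 3)}
assert equivalence_closure({(1, 2)}, {1, 2, 3}) == {
    (1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
```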
The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The objects of the category Rel are sets, and the relation-morphisms compose as required in a category. == Induced concept lattice == Binary relations have been described through their induced concept lattices: A concept C ⊂ R {\displaystyle C\subset R} satisfies two properties: The logical matrix of C {\displaystyle C} is the outer product of logical vectors C i j = u i v j , u , v {\displaystyle C_{ij}=u_{i}v_{j},\quad u,v} logical vectors. C {\displaystyle C} is maximal, not contained in any other outer product. Thus C {\displaystyle C} is described as a non-enlargeable rectangle. For a given relation R ⊆ X × Y , {\displaystyle R\subseteq X\times Y,} the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion ⊑ {\displaystyle \sqsubseteq } forming a preorder. The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices". The decomposition is R = f E g T {\displaystyle R=fEg^{\textsf {T}}} , where f {\displaystyle f} and g {\displaystyle g} are functions, called mappings or left-total, functional relations in this context. The "induced concept lattice is isomorphic to the cut completion of the partial order E {\displaystyle E} that belongs to the minimal decomposition ( f , g , E ) {\displaystyle (f,g,E)} of the relation R {\displaystyle R} ." Particular cases are considered below: E {\displaystyle E} total order corresponds to Ferrers type, and E {\displaystyle E} identity corresponds to difunctional, a generalization of equivalence relation on a set. 
Relations may be ranked by the Schein rank which counts the number of concepts necessary to cover a relation. Structural analysis of relations with concepts provides an approach for data mining. == Particular relations == Proposition: If R {\displaystyle R} is a surjective relation and R T {\displaystyle R^{\mathsf {T}}} is its transpose, then I ⊆ R T R {\displaystyle I\subseteq R^{\textsf {T}}R} where I {\displaystyle I} is the m × m {\displaystyle m\times m} identity relation. Proposition: If R {\displaystyle R} is a serial relation, then I ⊆ R R T {\displaystyle I\subseteq RR^{\textsf {T}}} where I {\displaystyle I} is the n × n {\displaystyle n\times n} identity relation. === Difunctional === The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set Z = { x , y , z , … } {\displaystyle Z=\{x,y,z,\ldots \}} of indicators. The partitioning relation R = F G T {\displaystyle R=FG^{\textsf {T}}} is a composition of relations using functional relations F ⊆ A × Z and G ⊆ B × Z . {\displaystyle F\subseteq A\times Z{\text{ and }}G\subseteq B\times Z.} Jacques Riguet named these relations difunctional since the composition F G T {\displaystyle FG^{\mathsf {T}}} involves functional relations, commonly called partial functions. In 1950 Riguet showed that such relations satisfy the inclusion: R R T R ⊆ R {\displaystyle RR^{\textsf {T}}R\subseteq R} In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal. 
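Riguet's inclusion R Rᵀ R ⊆ R can be tested directly on finite relations (Python sketch; both example relations are illustrative):

```python
def compose(S, R):
    """S o R = {(x, z) : there exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y) in R for (u, z) in S if y == u}

def converse(R):
    return {(y, x) for (x, y) in R}

def is_difunctional(R):
    # R;Rt;R read left to right: if x R y, z R y and z R w, then x R w.
    return compose(R, compose(converse(R), R)) <= R

# Two rectangular blocks {1,2}x{"a"} and {3}x{"b","c"}: difunctional.
blocks = {(1, "a"), (2, "a"), (3, "b"), (3, "c")}
assert is_difunctional(blocks)

# Images overlap without coinciding (1R = {a,b}, 2R = {b}): not difunctional.
assert not is_difunctional({(1, "a"), (1, "b"), (2, "b")})
```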
More formally, a relation R {\displaystyle R} on X × Y {\displaystyle X\times Y} is difunctional if and only if it can be written as the union of Cartesian products A i × B i {\displaystyle A_{i}\times B_{i}} , where the A i {\displaystyle A_{i}} are a partition of a subset of X {\displaystyle X} and the B i {\displaystyle B_{i}} likewise a partition of a subset of Y {\displaystyle Y} . Using the notation { y ∣ x R y } = x R {\displaystyle \{y\mid xRy\}=xR} , a difunctional relation can also be characterized as a relation R {\displaystyle R} such that wherever x 1 R {\displaystyle x_{1}R} and x 2 R {\displaystyle x_{2}R} have a non-empty intersection, then these two sets coincide; formally x 1 R ∩ x 2 R ≠ ∅ {\displaystyle x_{1}R\cap x_{2}R\neq \varnothing } implies x 1 R = x 2 R . {\displaystyle x_{1}R=x_{2}R.} In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management." Furthermore, difunctional relations are fundamental in the study of bisimulations. In the context of homogeneous relations, a partial equivalence relation is difunctional. === Ferrers type === A strict order on a set is a homogeneous relation arising in order theory. In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general. The corresponding logical matrix of a general binary relation has rows which finish with a sequence of ones. Thus the dots of a Ferrers diagram are changed to ones and aligned on the right in the matrix. An algebraic statement required for a Ferrers type relation R is R R ¯ T R ⊆ R . {\displaystyle R{\bar {R}}^{\textsf {T}}R\subseteq R.} If any one of the relations R , R ¯ , R T {\displaystyle R,{\bar {R}},R^{\textsf {T}}} is of Ferrers type, then all of them are. === Contact === Suppose B {\displaystyle B} is the power set of A {\displaystyle A} , the set of all subsets of A {\displaystyle A} .
Then a relation g {\displaystyle g} is a contact relation if it satisfies three properties: for all x ∈ A , Y = { x } implies x g Y . {\displaystyle {\text{for all }}x\in A,Y=\{x\}{\text{ implies }}xgY.} Y ⊆ Z and x g Y implies x g Z . {\displaystyle Y\subseteq Z{\text{ and }}xgY{\text{ implies }}xgZ.} for all y ∈ Y , y g Z and x g Y implies x g Z . {\displaystyle {\text{for all }}y\in Y,ygZ{\text{ and }}xgY{\text{ implies }}xgZ.} The set membership relation, ϵ = {\displaystyle \epsilon =} "is an element of", satisfies these properties so ϵ {\displaystyle \epsilon } is a contact relation. The notion of a general contact relation was introduced by Georg Aumann in 1970. In terms of the calculus of relations, sufficient conditions for a contact relation include C T C ¯ ⊆∋ C ¯ ≡ C ∋ C ¯ ¯ ⊆ C , {\displaystyle C^{\textsf {T}}{\bar {C}}\subseteq \ni {\bar {C}}\equiv C{\overline {\ni {\bar {C}}}}\subseteq C,} where ∋ {\displaystyle \ni } is the converse of set membership ( ∈ {\displaystyle \in } ).: 280 == Preorder R\R == Every relation R {\displaystyle R} generates a preorder R ∖ R {\displaystyle R\backslash R} which is the left residual. In terms of converse and complements, R ∖ R ≡ R T R ¯ ¯ . {\displaystyle R\backslash R\equiv {\overline {R^{\textsf {T}}{\bar {R}}}}.} Forming the diagonal of R T R ¯ {\displaystyle R^{\textsf {T}}{\bar {R}}} , the corresponding row of R T {\displaystyle R^{\textsf {T}}} and column of R ¯ {\displaystyle {\bar {R}}} will be of opposite logical values, so the diagonal is all zeros. Then R T R ¯ ⊆ I ¯ ⟹ I ⊆ R T R ¯ ¯ = R ∖ R {\displaystyle R^{\textsf {T}}{\bar {R}}\subseteq {\bar {I}}\implies I\subseteq {\overline {R^{\textsf {T}}{\bar {R}}}}=R\backslash R} , so that R ∖ R {\displaystyle R\backslash R} is a reflexive relation. To show transitivity, one requires that ( R ∖ R ) ( R ∖ R ) ⊆ R ∖ R . 
{\displaystyle (R\backslash R)(R\backslash R)\subseteq R\backslash R.} Recall that X = R ∖ R {\displaystyle X=R\backslash R} is the largest relation such that R X ⊆ R . {\displaystyle RX\subseteq R.} Then R ( R ∖ R ) ⊆ R {\displaystyle R(R\backslash R)\subseteq R} R ( R ∖ R ) ( R ∖ R ) ⊆ R {\displaystyle R(R\backslash R)(R\backslash R)\subseteq R} (repeat) ≡ R T R ¯ ⊆ ( R ∖ R ) ( R ∖ R ) ¯ {\displaystyle \equiv R^{\textsf {T}}{\bar {R}}\subseteq {\overline {(R\backslash R)(R\backslash R)}}} (Schröder's rule) ≡ ( R ∖ R ) ( R ∖ R ) ⊆ R T R ¯ ¯ {\displaystyle \equiv (R\backslash R)(R\backslash R)\subseteq {\overline {R^{\textsf {T}}{\bar {R}}}}} (complementation) ≡ ( R ∖ R ) ( R ∖ R ) ⊆ R ∖ R . {\displaystyle \equiv (R\backslash R)(R\backslash R)\subseteq R\backslash R.} (definition) The inclusion relation Ω on the power set of U {\displaystyle U} can be obtained in this way from the membership relation ∈ {\displaystyle \in } on subsets of U {\displaystyle U} : Ω = ∋ ∈ ¯ ¯ =∈ ∖ ∈ . {\displaystyle \Omega ={\overline {\ni {\bar {\in }}}}=\in \backslash \in .} : 283 == Fringe of a relation == Given a relation R {\displaystyle R} , its fringe is the sub-relation defined as fringe ( R ) = R ∩ R R ¯ T R ¯ . {\displaystyle \operatorname {fringe} (R)=R\cap {\overline {R{\bar {R}}^{\textsf {T}}R}}.} When R {\displaystyle R} is a partial identity relation, difunctional, or a block diagonal relation, then fringe ( R ) = R {\displaystyle \operatorname {fringe} (R)=R} . Otherwise the fringe {\displaystyle \operatorname {fringe} } operator selects a boundary sub-relation described in terms of its logical matrix: fringe ( R ) {\displaystyle \operatorname {fringe} (R)} is the side diagonal if R {\displaystyle R} is an upper right triangular linear order or strict order. fringe ( R ) {\displaystyle \operatorname {fringe} (R)} is the block fringe if R {\displaystyle R} is irreflexive ( R ⊆ I ¯ {\displaystyle R\subseteq {\bar {I}}} ) or upper right block triangular. 
fringe ( R ) {\displaystyle \operatorname {fringe} (R)} is a sequence of boundary rectangles when R {\displaystyle R} is of Ferrers type. On the other hand, fringe ( R ) = ∅ {\displaystyle \operatorname {fringe} (R)=\emptyset } when R {\displaystyle R} is a dense, linear, strict order. == Mathematical heaps == Given two sets A {\displaystyle A} and B {\displaystyle B} , the set of binary relations between them B ( A , B ) {\displaystyle {\mathcal {B}}(A,B)} can be equipped with a ternary operation [ a , b , c ] = a b T c {\displaystyle [a,b,c]=ab^{\textsf {T}}c} where b T {\displaystyle b^{\mathsf {T}}} denotes the converse relation of b {\displaystyle b} . In 1953 Viktor Wagner used properties of this ternary operation to define semiheaps, heaps, and generalized heaps. The contrast of heterogeneous and homogeneous relations is highlighted by these definitions: There is a pleasant symmetry in Wagner's work between heaps, semiheaps, and generalised heaps on the one hand, and groups, semigroups, and generalised groups on the other. Essentially, the various types of semiheaps appear whenever we consider binary relations (and partial one-one mappings) between different sets A {\displaystyle A} and B {\displaystyle B} , while the various types of semigroups appear in the case where A = B {\displaystyle A=B} . == See also == == Notes == == References == == Bibliography == Schmidt, Gunther (2010). Relational Mathematics. Berlin: Cambridge University Press. ISBN 9780511778810. Schmidt, Gunther; Ströhlein, Thomas (2012). "Chapter 3: Heterogeneous relations". Relations and Graphs: Discrete Mathematics for Computer Scientists. Springer Science & Business Media. ISBN 978-3-642-77968-8. Ernst Schröder (1895) Algebra der Logik, Band III, via Internet Archive Codd, Edgar Frank (1990). The Relational Model for Database Management: Version 2 (PDF). Boston: Addison-Wesley. ISBN 978-0201141924. Archived (PDF) from the original on 2022-10-09. Enderton, Herbert (1977). 
Elements of Set Theory. Boston: Academic Press. ISBN 978-0-12-238440-0. Kilp, Mati; Knauer, Ulrich; Mikhalev, Alexander (2000). Monoids, Acts and Categories: with Applications to Wreath Products and Graphs. Berlin: De Gruyter. ISBN 978-3-11-015248-7. Van Gasteren, Antonetta (1990). On the Shape of Mathematical Arguments. Berlin: Springer. ISBN 9783540528494. Peirce, Charles Sanders (1873). "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic". Memoirs of the American Academy of Arts and Sciences. 9 (2): 317–378. Bibcode:1873MAAAS...9..317P. doi:10.2307/25058006. hdl:2027/hvd.32044019561034. JSTOR 25058006. Retrieved 2020-05-05. Schmidt, Gunther (2010). Relational Mathematics. Cambridge: Cambridge University Press. ISBN 978-0-521-76268-7. == External links == "Binary relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
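The residual and fringe operations described above can be checked mechanically on small relations represented as 0/1 matrices. The following Python sketch (helper names are illustrative, not from the article) verifies that the left residual R∖R is transitive, matching the derivation (R∖R)(R∖R) ⊆ R∖R, and that the fringe of a partial identity relation is the relation itself:

```python
# Finite relations on one set, represented as 0/1 matrices (lists of lists).

def compose(R, S):
    """Relational composition R;S."""
    n = len(R)
    return [[int(any(R[i][k] and S[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def transpose(R):
    n = len(R)
    return [[R[j][i] for j in range(n)] for i in range(n)]

def complement(R):
    return [[1 - x for x in row] for row in R]

def subset(R, S):
    """Containment R ⊆ S, checked entrywise."""
    return all(r <= s for row_r, row_s in zip(R, S)
               for r, s in zip(row_r, row_s))

def left_residual(R, S):
    """R\\S = complement(R^T ; complement(S)): the largest X with R;X ⊆ S."""
    return complement(compose(transpose(R), complement(S)))

def fringe(R):
    """fringe(R) = R ∩ complement(R ; complement(R)^T ; R)."""
    m = compose(compose(R, transpose(complement(R))), R)
    n = len(R)
    return [[int(R[i][j] and not m[i][j]) for j in range(n)] for i in range(n)]

# Sample relations: the order ≤ on {0,1,2}, and a partial identity.
LEQ = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
PID = [[1, 0, 0], [0, 0, 0], [0, 0, 1]]
```

For example, `subset(compose(X, X), X)` holds for `X = left_residual(LEQ, LEQ)`, and `fringe(PID) == PID`, in line with the partial-identity case stated above.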
|
Wikipedia:Digital root#0
|
The digital root (also repeated digital sum) of a natural number in a given radix is the (single digit) value obtained by an iterative process of summing digits, on each iteration using the result from the previous iteration to compute a digit sum. The process continues until a single-digit number is reached. For example, in base 10, the digital root of the number 12345 is 6 because the sum of the digits in the number is 1 + 2 + 3 + 4 + 5 = 15, then the addition process is repeated for the resulting number 15, so that the sum of 1 + 5 equals 6, which is the digital root of that number. In base 10, this is equivalent to taking the remainder upon division by 9 (except when the digital root is 9, where the remainder upon division by 9 will be 0), which allows it to be used as a divisibility rule. == Formal definition == Let n {\displaystyle n} be a natural number. For base b > 1 {\displaystyle b>1} , we define the digit sum F b : N → N {\displaystyle F_{b}:\mathbb {N} \rightarrow \mathbb {N} } to be the following: F b ( n ) = ∑ i = 0 k − 1 d i {\displaystyle F_{b}(n)=\sum _{i=0}^{k-1}d_{i}} where k = ⌊ log b n ⌋ + 1 {\displaystyle k=\lfloor \log _{b}{n}\rfloor +1} is the number of digits in the number in base b {\displaystyle b} , and d i = ( n mod b i + 1 − n mod b i ) / b i {\displaystyle d_{i}={\frac {n{\bmod {b^{i+1}}}-n{\bmod {b^{i}}}}{b^{i}}}} is the value of each digit of the number. A natural number n {\displaystyle n} is a digital root if it is a fixed point for F b {\displaystyle F_{b}} , which occurs if F b ( n ) = n {\displaystyle F_{b}(n)=n} . All natural numbers n {\displaystyle n} are preperiodic points for F b {\displaystyle F_{b}} , regardless of the base. 
This is because if n ≥ b {\displaystyle n\geq b} , then n = ∑ i = 0 k − 1 d i b i {\displaystyle n=\sum _{i=0}^{k-1}d_{i}b^{i}} and therefore F b ( n ) = ∑ i = 0 k − 1 d i < ∑ i = 0 k − 1 d i b i = n {\displaystyle F_{b}(n)=\sum _{i=0}^{k-1}d_{i}<\sum _{i=0}^{k-1}d_{i}b^{i}=n} because b > 1 {\displaystyle b>1} . If n < b {\displaystyle n<b} , then trivially F b ( n ) = n {\displaystyle F_{b}(n)=n} . Therefore, the only possible digital roots are the natural numbers 0 ≤ n < b {\displaystyle 0\leq n<b} , and there are no cycles other than the fixed points of 0 ≤ n < b {\displaystyle 0\leq n<b} . === Example === In base 12, 8 is the additive digital root of the base 10 number 3110, as for n = 3110 {\displaystyle n=3110} d 0 = ( 3110 mod 12 0 + 1 − 3110 mod 12 0 ) / 12 0 = ( 3110 mod 12 − 3110 mod 1 ) / 1 = ( 2 − 0 ) / 1 = 2 {\displaystyle d_{0}={\frac {3110{\bmod {12^{0+1}}}-3110{\bmod {12^{0}}}}{12^{0}}}={\frac {3110{\bmod {12}}-3110{\bmod {1}}}{1}}={\frac {2-0}{1}}=2} d 1 = ( 3110 mod 12 1 + 1 − 3110 mod 12 1 ) / 12 1 = ( 3110 mod 144 − 3110 mod 12 ) / 12 = ( 86 − 2 ) / 12 = 7 {\displaystyle d_{1}={\frac {3110{\bmod {12^{1+1}}}-3110{\bmod {12^{1}}}}{12^{1}}}={\frac {3110{\bmod {144}}-3110{\bmod {12}}}{12}}={\frac {86-2}{12}}=7} d 2 = ( 3110 mod 12 2 + 1 − 3110 mod 12 2 ) / 12 2 = ( 3110 mod 1728 − 3110 mod 144 ) / 144 = ( 1382 − 86 ) / 144 = 9 {\displaystyle d_{2}={\frac {3110{\bmod {12^{2+1}}}-3110{\bmod {12^{2}}}}{12^{2}}}={\frac {3110{\bmod {1728}}-3110{\bmod {144}}}{144}}={\frac {1382-86}{144}}=9} d 3 = ( 3110 mod 12 3 + 1 − 3110 mod 12 3 ) / 12 3 = ( 3110 mod 20736 − 3110 mod 1728 ) / 1728 = ( 3110 − 1382 ) / 1728 = 1 {\displaystyle d_{3}={\frac {3110{\bmod {12^{3+1}}}-3110{\bmod {12^{3}}}}{12^{3}}}={\frac {3110{\bmod {20736}}-3110{\bmod {1728}}}{1728}}={\frac {3110-1382}{1728}}=1} F 12 ( 3110 ) = ∑ i = 0 4 − 1 d i = 2 + 7 + 9 + 1 = 19 {\displaystyle F_{12}(3110)=\sum _{i=0}^{4-1}d_{i}=2+7+9+1=19} This process shows 
that 3110 is 1972 in base 12. Now for F 12 ( 3110 ) = 19 {\displaystyle F_{12}(3110)=19} d 0 = ( 19 mod 12 0 + 1 − 19 mod 12 0 ) / 12 0 = ( 19 mod 12 − 19 mod 1 ) / 1 = ( 7 − 0 ) / 1 = 7 {\displaystyle d_{0}={\frac {19{\bmod {12^{0+1}}}-19{\bmod {12^{0}}}}{12^{0}}}={\frac {19{\bmod {12}}-19{\bmod {1}}}{1}}={\frac {7-0}{1}}=7} d 1 = ( 19 mod 12 1 + 1 − 19 mod 12 1 ) / 12 1 = ( 19 mod 144 − 19 mod 12 ) / 12 = ( 19 − 7 ) / 12 = 1 {\displaystyle d_{1}={\frac {19{\bmod {12^{1+1}}}-19{\bmod {12^{1}}}}{12^{1}}}={\frac {19{\bmod {144}}-19{\bmod {12}}}{12}}={\frac {19-7}{12}}=1} F 12 ( 19 ) = ∑ i = 0 2 − 1 d i = 1 + 7 = 8 {\displaystyle F_{12}(19)=\sum _{i=0}^{2-1}d_{i}=1+7=8} shows that 19 is 17 in base 12. And as 8 is a 1-digit number in base 12, F 12 ( 8 ) = 8 {\displaystyle F_{12}(8)=8} . == Direct formulas == We can define the digital root directly for base b > 1 {\displaystyle b>1} , dr b : N → N {\displaystyle \operatorname {dr} _{b}:\mathbb {N} \rightarrow \mathbb {N} } , in the following ways: === Congruence formula === The formula in base b {\displaystyle b} is: dr b ( n ) = { 0 if n = 0 , b − 1 if n ≠ 0 , n ≡ 0 ( mod ( b − 1 ) ) , n mod ( b − 1 ) if n ≢ 0 ( mod ( b − 1 ) ) {\displaystyle \operatorname {dr} _{b}(n)={\begin{cases}0&{\mbox{if}}\ n=0,\\b-1&{\mbox{if}}\ n\neq 0,\ n\ \equiv 0{\pmod {(b-1)}},\\n{\bmod {(b-1)}}&{\mbox{if}}\ n\not \equiv 0{\pmod {(b-1)}}\end{cases}}} or, dr b ( n ) = { 0 if n = 0 , 1 + ( ( n − 1 ) mod ( b − 1 ) ) if n ≠ 0. {\displaystyle \operatorname {dr} _{b}(n)={\begin{cases}0&{\mbox{if}}\ n=0,\\1\ +\ ((n-1){\bmod {(b-1)}})&{\mbox{if}}\ n\neq 0.\end{cases}}} In base 10, the corresponding sequence is (sequence A010888 in the OEIS). The digital root is the value modulo ( b − 1 ) {\displaystyle (b-1)} because b ≡ 1 ( mod ( b − 1 ) ) , {\displaystyle b\equiv 1{\pmod {(b-1)}},} and thus b i ≡ 1 i ≡ 1 ( mod ( b − 1 ) ) . 
{\displaystyle b^{i}\equiv 1^{i}\equiv 1{\pmod {(b-1)}}.} So regardless of the position i {\displaystyle i} of digit d i {\displaystyle d_{i}} , d i b i ≡ d i ( mod ( b − 1 ) ) {\displaystyle d_{i}b^{i}\equiv d_{i}{\pmod {(b-1)}}} , which explains why digits can be meaningfully added. Concretely, for a three-digit number n = d 2 b 2 + d 1 b 1 + d 0 b 0 {\displaystyle n=d_{2}b^{2}+d_{1}b^{1}+d_{0}b^{0}} , dr b ( n ) ≡ d 2 b 2 + d 1 b 1 + d 0 b 0 ≡ d 2 ( 1 ) + d 1 ( 1 ) + d 0 ( 1 ) ≡ d 2 + d 1 + d 0 ( mod ( b − 1 ) ) . {\displaystyle \operatorname {dr} _{b}(n)\equiv d_{2}b^{2}+d_{1}b^{1}+d_{0}b^{0}\equiv d_{2}(1)+d_{1}(1)+d_{0}(1)\equiv d_{2}+d_{1}+d_{0}{\pmod {(b-1)}}.} To obtain the modular value with respect to other numbers m {\displaystyle m} , one can take weighted sums, where the weight on the i {\displaystyle i} -th digit corresponds to the value of b i mod m {\displaystyle b^{i}{\bmod {m}}} . In base 10, this is simplest for m = 2 , 5 , and 10 {\displaystyle m=2,5,{\text{ and }}10} , where higher digits except for the unit digit vanish (since 2 and 5 divide powers of 10), which corresponds to the familiar fact that the divisibility of a decimal number with respect to 2, 5, and 10 can be checked by the last digit. Also of note is the modulus m = b + 1 {\displaystyle m=b+1} . Since b ≡ − 1 ( mod ( b + 1 ) ) , {\displaystyle b\equiv -1{\pmod {(b+1)}},} and thus b 2 ≡ ( − 1 ) 2 ≡ 1 ( mod ( b + 1 ) ) , {\displaystyle b^{2}\equiv (-1)^{2}\equiv 1{\pmod {(b+1)}},} taking the alternating sum of digits yields the value modulo ( b + 1 ) {\displaystyle (b+1)} . === Using the floor function === It helps to see the digital root of a positive integer as the position it holds with respect to the largest multiple of b − 1 {\displaystyle b-1} less than the number itself. For example, in base 6 the digital root of 11 is 2, which means that 11 is the second number after 6 − 1 = 5 {\displaystyle 6-1=5} . 
Likewise, in base 10 the digital root of 2035 is 1, which means that 2035 − 1 = 2034 is a multiple of 9. If a number produces a digital root of exactly b − 1 {\displaystyle b-1} , then the number is a multiple of b − 1 {\displaystyle b-1} . With this in mind the digital root of a positive integer n {\displaystyle n} may be defined using the floor function ⌊ x ⌋ {\displaystyle \lfloor x\rfloor } , as dr b ( n ) = n − ( b − 1 ) ⌊ n − 1 b − 1 ⌋ . {\displaystyle \operatorname {dr} _{b}(n)=n-(b-1)\left\lfloor {\frac {n-1}{b-1}}\right\rfloor .} == Properties == The digital root of a 1 + a 2 {\displaystyle a_{1}+a_{2}} in base b {\displaystyle b} is the digital root of the sum of the digital root of a 1 {\displaystyle a_{1}} and the digital root of a 2 {\displaystyle a_{2}} : dr b ( a 1 + a 2 ) = dr b ( dr b ( a 1 ) + dr b ( a 2 ) ) . {\displaystyle \operatorname {dr} _{b}(a_{1}+a_{2})=\operatorname {dr} _{b}(\operatorname {dr} _{b}(a_{1})+\operatorname {dr} _{b}(a_{2})).} This property can be used as a sort of checksum, to check that a sum has been performed correctly. The digital root of a 1 − a 2 {\displaystyle a_{1}-a_{2}} in base b {\displaystyle b} is congruent to the difference of the digital root of a 1 {\displaystyle a_{1}} and the digital root of a 2 {\displaystyle a_{2}} modulo ( b − 1 ) {\displaystyle (b-1)} : dr b ( a 1 − a 2 ) ≡ ( dr b ( a 1 ) − dr b ( a 2 ) ) ( mod ( b − 1 ) ) . {\displaystyle \operatorname {dr} _{b}(a_{1}-a_{2})\equiv (\operatorname {dr} _{b}(a_{1})-\operatorname {dr} _{b}(a_{2})){\pmod {(b-1)}}.} The digital root of − n {\displaystyle -n} in base b {\displaystyle b} is dr b ( − n ) ≡ − dr b ( n ) mod b − 1 . {\displaystyle \operatorname {dr} _{b}(-n)\equiv -\operatorname {dr} _{b}(n){\bmod {b-1}}.} The digital root of the product of nonzero single digit numbers a 1 ⋅ a 2 {\displaystyle a_{1}\cdot a_{2}} in base b {\displaystyle b} is given by the Vedic Square in base b {\displaystyle b} . 
The digital root of a 1 ⋅ a 2 {\displaystyle a_{1}\cdot a_{2}} in base b {\displaystyle b} is the digital root of the product of the digital root of a 1 {\displaystyle a_{1}} and the digital root of a 2 {\displaystyle a_{2}} : dr b ( a 1 a 2 ) = dr b ( dr b ( a 1 ) ⋅ dr b ( a 2 ) ) . {\displaystyle \operatorname {dr} _{b}(a_{1}a_{2})=\operatorname {dr} _{b}(\operatorname {dr} _{b}(a_{1})\cdot \operatorname {dr} _{b}(a_{2})).} == Additive persistence == The additive persistence of a number counts how many times we must sum its digits to arrive at its digital root. For example, the additive persistence of 2718 in base 10 is 2: first we find that 2 + 7 + 1 + 8 = 18, then that 1 + 8 = 9. There is no limit to the additive persistence of a number in a number base b {\displaystyle b} . Proof: For a given number n {\displaystyle n} , the persistence of the number consisting of n {\displaystyle n} repetitions of the digit 1 is 1 higher than that of n {\displaystyle n} . The smallest numbers of additive persistence 0, 1, ... in base 10 are: 0, 10, 19, 199, 19 999 999 999 999 999 999 999, ... (sequence A006050 in the OEIS) The next number in the sequence (the smallest number of additive persistence 5) is 2 × 10^(2 × (10^22 − 1)/9) − 1 (that is, 1 followed by 2 222 222 222 222 222 222 222 nines). For any fixed base, the sum of the digits of a number is proportional to its logarithm; therefore, the additive persistence is proportional to the iterated logarithm. == Programming example == The example below implements the digit sum described in the definition above to search for digital roots and additive persistences in Python. == In popular culture == Digital roots are used in Western numerology, but certain numbers deemed to have occult significance (such as 11 and 22) are not always completely reduced to a single digit. Digital roots form an important mechanic in the visual novel adventure game Nine Hours, Nine Persons, Nine Doors. 
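The Programming example section above refers to a Python implementation that is not reproduced in this text. A minimal sketch consistent with the definitions (function names are illustrative):

```python
def digit_sum(n, b=10):
    """F_b(n): the sum of the base-b digits of n."""
    s = 0
    while n > 0:
        s += n % b
        n //= b
    return s

def digital_root(n, b=10):
    """Iterate the digit sum until a single base-b digit remains."""
    while n >= b:
        n = digit_sum(n, b)
    return n

def digital_root_direct(n, b=10):
    """Closed form from the congruence formula: 0 if n == 0, else 1 + (n-1) mod (b-1)."""
    return 0 if n == 0 else 1 + (n - 1) % (b - 1)

def additive_persistence(n, b=10):
    """Number of digit-sum steps needed to reach the digital root."""
    steps = 0
    while n >= b:
        n = digit_sum(n, b)
        steps += 1
    return steps
```

For instance, `digital_root(12345)` returns 6 and `digital_root(3110, 12)` returns 8, matching the worked examples above, while `additive_persistence(2718)` returns 2 and the iterative and closed-form digital roots agree for every n.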
== See also == == References == Averbach, Bonnie; Chein, Orin (27 May 1999), Problem Solving Through Recreational Mathematics, Dover Books on Mathematics (reprinted ed.), Mineola, NY: Courier Dover Publications, pp. 125–127, ISBN 0-486-40917-1 (online copy, p. 125, at Google Books) Ghannam, Talal (4 January 2011), The Mystery of Numbers: Revealed Through Their Digital Root, CreateSpace Publications, pp. 68–73, ISBN 978-1-4776-7841-1, archived from the original on 29 March 2016, retrieved 11 February 2016 (online copy, p. 68, at Google Books) Hall, F. M. (1980), An Introduction into Abstract Algebra, vol. 1 (2nd ed.), Cambridge, U.K.: CUP Archive, p. 101, ISBN 978-0-521-29861-2 (online copy, p. 101, at Google Books) O'Beirne, T. H. (13 March 1961), "Puzzles and Paradoxes", New Scientist, 10 (230), Reed Business Information: 53–54, ISSN 0262-4079 (online copy, p. 53, at Google Books) Rouse Ball, W. W.; Coxeter, H. S. M. (6 May 2010), Mathematical Recreations and Essays, Dover Recreational Mathematics (13th ed.), NY: Dover Publications, ISBN 978-0-486-25357-2 (online copy at Google Books) == External links == Patterns of digital roots using MS Excel Weisstein, Eric W. "Digital Root". MathWorld.
|
Wikipedia:Digroup#0
|
In the mathematical area of algebra, a digroup is a generalization of a group that has two one-sided product operations, ⊢ {\displaystyle \vdash } and ⊣ {\displaystyle \dashv } , instead of the single operation in a group. Digroups were introduced independently by Liu (2004), Felipe (2006), and Kinyon (2007), inspired by a question about Leibniz algebras. To explain digroups, consider a group. In a group there is one operation, such as addition in the set of integers; there is a single "unit" element, like 0 in the integers, and there are inverses, like − x {\displaystyle -x} in the integers, for which both the following equations hold: ( − x ) + x = 0 {\displaystyle (-x)+x=0} and x + ( − x ) = 0 {\displaystyle x+(-x)=0} . A digroup replaces the one operation by two operations that interact in a complicated way, as stated below. A digroup may also have more than one "unit", and an element x {\displaystyle x} may have different inverses for each "unit". This makes a digroup vastly more complicated than a group. Despite that complexity, there are reasons to consider digroups, for which see the references. == Definition == A digroup is a set D with two binary operations, ⊢ {\displaystyle \vdash } and ⊣ {\displaystyle \dashv } , that satisfy the following laws (e.g., Ongay 2010): Associativity: ⊢ {\displaystyle \vdash } and ⊣ {\displaystyle \dashv } are associative, ( x ⊢ y ) ⊢ z = ( x ⊣ y ) ⊢ z , {\displaystyle (x\vdash y)\vdash z=(x\dashv y)\vdash z,} x ⊣ ( y ⊣ z ) = x ⊣ ( y ⊢ z ) , {\displaystyle x\dashv (y\dashv z)=x\dashv (y\vdash z),} ( x ⊢ y ) ⊣ z = x ⊢ ( y ⊣ z ) . {\displaystyle (x\vdash y)\dashv z=x\vdash (y\dashv z).} Bar units: There is at least one bar unit, an e ∈ D {\displaystyle e\in D} , such that for every x ∈ D , {\displaystyle x\in D,} e ⊢ x = x ⊣ e = x . {\displaystyle e\vdash x=x\dashv e=x.} The set of bar units is called the halo of D. 
Inverse: For each bar unit e, each x ∈ D {\displaystyle x\in D} has a unique e-inverse, x e − 1 ∈ D {\displaystyle x_{e}^{-1}\in D} , such that x ⊢ x e − 1 = x e − 1 ⊣ x = e . {\displaystyle x\vdash x_{e}^{-1}=x_{e}^{-1}\dashv x=e.} == Generalized digroup == In a generalized digroup or g-digroup, a generalization due to Salazar-Díaz, Velásquez, and Wills-Toro (2016), each element has a left inverse and a right inverse instead of one two-sided inverse. One reason for this generalization is that it permits analogs of the isomorphism theorems of group theory that cannot be formulated within digroups. == References == Raúl Felipe (2006), Digroups and their linear representations, East-West Journal of Mathematics Vol. 8, No. 1, 27–48. Michael K. Kinyon (2007), Leibniz algebras, Lie racks, and digroups, Journal of Lie Theory, Vol. 17, No. 4, 99–114. Keqin Liu (2004), Transformation digroups, unpublished manuscript, arXiv:GR/0409256. Fausto Ongay (2010), On the notion of digroup, Comunicación del CIMAT, No. I-10-04/17-05-2010. O.P. Salazar-Díaz, R. Velásquez, and L. A. Wills-Toro (2016), Generalized digroups, Communications in Algebra, Vol. 44, 2760–2785.
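The digroup laws above are finitely many equations, so they can be verified by brute force on small finite carriers. The following Python sketch (illustrative, not drawn from the cited papers) checks the laws and confirms two examples: a group with both operations equal to the group product, and the "trivial" digroup x ⊢ y = y, x ⊣ y = x, in which every element is a bar unit:

```python
from itertools import product

def is_digroup(D, lv, rv):
    """Check the digroup laws for operations lv (⊢) and rv (⊣) on the finite set D."""
    D = list(D)
    # Associativity of each operation, plus the three mixed laws from the definition.
    for x, y, z in product(D, repeat=3):
        if lv(lv(x, y), z) != lv(x, lv(y, z)): return False
        if rv(rv(x, y), z) != rv(x, rv(y, z)): return False
        if lv(lv(x, y), z) != lv(rv(x, y), z): return False
        if rv(x, rv(y, z)) != rv(x, lv(y, z)): return False
        if rv(lv(x, y), z) != lv(x, rv(y, z)): return False
    # Bar units: e ⊢ x = x ⊣ e = x for every x; the set of them is the halo.
    halo = [e for e in D if all(lv(e, x) == x and rv(x, e) == x for x in D)]
    if not halo:
        return False
    # Each x must have a unique e-inverse for every bar unit e.
    for e in halo:
        for x in D:
            invs = [y for y in D if lv(x, y) == e and rv(y, x) == e]
            if len(invs) != 1:
                return False
    return True

# A group is a digroup with ⊢ = ⊣ (integers mod 3 under addition):
group_ok = is_digroup(range(3), lambda x, y: (x + y) % 3, lambda x, y: (x + y) % 3)

# The "trivial" digroup: x ⊢ y = y and x ⊣ y = x on any finite set.
trivial_ok = is_digroup(range(4), lambda x, y: y, lambda x, y: x)
```

In the trivial example every element is a bar unit, and the e-inverse of every x is e itself, illustrating how inverses depend on the chosen bar unit.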
|
Wikipedia:Dilip Madan#0
|
Dilip B. Madan is an American financial economist, mathematician, academic, and author. He is professor emeritus of finance at the University of Maryland. Madan is best known for his work on the variance gamma model, the fast Fourier transform method for option pricing, and the development of Conic Finance. Madan is a recipient of the 2006 Humboldt Research Award. He has authored several books, including Applied Conic Finance and Nonlinear Valuation and Non-Gaussian Risks in Finance. == Education == Madan completed his Bachelor of Commerce in Accounting from the University of Bombay in 1967. In 1972, he obtained a Ph.D. in Economics from the University of Maryland, followed by a second Ph.D., in Mathematics, in 1975 from the same university. == Career == Madan began his academic career in 1972 as an assistant professor of economics at the University of Maryland. In 1976, he joined the University of Sydney and held various positions, including lecturer in economic statistics from 1976 to 1979 and senior lecturer in econometrics from 1980 to 1988. Subsequently, he rejoined the University of Maryland, where he was appointed as assistant professor of finance between 1989 and 1992, served as an associate professor of finance between 1992 and 1997, and held an appointment as a professor of finance between 1997 and 2019. He has been professor emeritus of finance at the University of Maryland since 2019. Madan has been a director and treasurer of the Scientific Association of Mathematical Finance since 2021. == Research == Madan's quantitative finance research has won him the 2021 Northfield Financial Engineer of the Year Award from the International Association for Quantitative Finance. He has authored numerous publications spanning the areas of financial markets, general equilibrium theory, and mathematical finance, including books and articles in peer-reviewed journals. 
=== Valuation model === Madan's valuation model research has contributed to the improvement and development of valuation models in various fields including business and finance. In his analysis of the impact of model risk on the valuation of barrier options, he highlighted the divergent pricing outcomes of up-and-out call options resulting from the use of different stochastic processes to calibrate the underlying vanilla options surface. He conducted pricing comparisons between Sato processes and conventional models, revealing that Sato processes exhibit relatively higher pricing for cliquets, while effectively preserving the value of long-dated out-of-the-money realized variance options. Focusing his research efforts on credit value adjustments, his study proposed a theory of capital requirements to address the problem of cross-default exposures. Furthermore, he presented a Markov chain-based method for valuing structured financial products, offering financial institutions a tool to assess locally capped and floored cliquets, as well as unhedged and hedged variance swap contracts. He also introduced a conic finance-based nonlinear equity valuation model, which integrated risk charges contingent upon measure distortions. More recently in 2016 and 2022, he co-authored with Wim Schoutens two books titled Applied Conic Finance and Nonlinear valuation and non-Gaussian risks in finance, which provided an overview of the newly established conic finance theory, including its theoretical framework and various applications. === Options pricing === Madan's options pricing research has focused on conducting empirical studies to test the performance of various option pricing models using real-world data. While exploring the valuation of European call options employing the Vasicek-Gaussian stochastic process, his research proposed an approach to approximate and determine the equilibrium change of measure in incomplete markets, using log return mean, variance, and kurtosis. 
In a collaborative study with Robert A. Jarrow, he demonstrated the application of term-structure-associated financial instruments in formulating dynamic portfolio management tactics, specifically aimed at mitigating distinct systematic jump hazards inherent in asset returns. In his early works, he introduced the variance gamma process, a stochastic model for log stock price dynamics, highlighting its symmetric statistical density with some kurtosis and negatively skewed risk-neutral density with higher kurtosis. His study further proposed using Markov Chains and homogeneous Levy processes, specifically the variance gamma process, as a robust modeling approach for financial asset prices, thereby facilitating the computation of option and series prices. His research work on pricing European options involved exploring self-similar risk-neutral processes and proposing two parameter stability-based models, highlighting their usefulness in studying the time variation of option prices. Concentrating his research on risk premia in options markets, he used the variance gamma model for density synthesis, revealing mean reversion and predictability in premia, with particular emphasis on short-term market crashes and long-term market rallies. His recent work in 2021 has contributed to the understanding of risk-neutral densities and jump arrival tails by introducing theoretical examples and practical models based on quasi-infinitely divisible distributions. === Asset pricing === Madan's contributions to asset pricing research have resulted in the development of asset pricing models. His early research examined the minimum variance estimator to achieve a singular optimal power and provided approximations for estimating the scalar diffusion coefficient through the application of Ito calculus and Milstein methods. 
He also addressed the paradox posed by Artzner and Heath, offering a solution determining that completeness pertains to the topology of the cash flow space and is associated with the singular nature of the price functional in the topological dual space. In 2001, he proposed a modeling approach for asset price processes and illustrated that asset price dynamics are more suitably represented by pure jump processes, devoid of any continuous martingale component. In his analysis of equilibrium asset pricing, his work established that factor prices are influenced by exponentially tilted prices due to non-Gaussian factor risk exposures, which can be determined from the univariate probability distribution of the factor exposure. Moreover, with Wim Schoutens, he co-developed a technique that uses historical data to establish upper and lower valuations, leading to enhanced risk evaluation in the stock market through the integration of risk attributes into required returns. == Awards and honors == 2006 – Humboldt Research Award, Alexander von Humboldt Foundation 2021 – Northfield Financial Engineer of the Year Award, International Association for Quantitative Finance == Bibliography == === Books === Mathematical Finance – Bachelier Congress 2000 (2002) ISBN 9783540677819 Structured Products (2008) ISBN 9781904339618 Stochastic Processes, Finance and Control: A Festschrift in Honor of Robert J Elliott (2012) ISBN 9789814383301 Applied Conic Finance (2016) ISBN 9781107151697 Nonlinear Valuation and Non-Gaussian Risks in Finance (2022) ISBN 9781316518090 === Selected articles === Madan, D. B., & Seneta, E. (1990). The variance gamma (VG) model for share market returns. Journal of business, 511–524. Madan, D. B., Carr, P. P., & Chang, E. C. (1998). The variance gamma process and option pricing. Review of Finance, 2(1), 79–105. Carr, P., & Madan, D. (1999). Option valuation using the fast Fourier transform. Journal of computational finance, 2(4), 61–73. 
Carr, P., Geman, H., Madan, D. B., & Yor, M. (2002). The fine structure of asset returns: An empirical investigation. The Journal of Business, 75(2), 305–332. Bakshi, G., Kapadia, N., & Madan, D. (2003). Stock return characteristics, skew laws, and the differential pricing of individual equity options. The Review of Financial Studies, 16(1), 101–143. Carr, P., Geman, H., Madan, D. B., and Yor, M. (2007). Self-Decomposability and Option Pricing. Mathematical Finance 17, 31–57. Cherny, A., and Madan, D. B. (2009). New Measures of Performance Evaluation. Review of Financial Studies, 12, 213–230. Eberlein, E., Madan, D. B., Pistorius, M., and Yor, M. (2014). Bid and Ask Prices as Non-Linear Continuous Time G-Expectations Based on Distortions. Mathematics and Financial Economics 8, 265–289. Elliott, R. J., Madan, D. B., and Wang, K. (2022). High Dimensional Markovian Trading of a Single Stock. Frontiers of Mathematical Finance 1, 375–396. == References ==
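As an illustration of the variance gamma model discussed above: the VG process of Madan and Seneta can be simulated as Brownian motion with drift run on a gamma time change. The Python sketch below is an illustrative simulation aid, not code from the cited papers; parameter names follow the common convention X_t = θ·G_t + σ·W(G_t), where G is a gamma subordinator with unit mean rate and variance rate ν:

```python
import math
import random

def variance_gamma_path(t, n_steps, sigma, nu, theta, rng):
    """Simulate one variance gamma path on [0, t].

    Each step draws a gamma time increment g with mean dt and variance nu*dt,
    then advances by theta*g + sigma*sqrt(g)*Z, i.e. drifted Brownian motion
    evaluated on the gamma clock.
    """
    dt = t / n_steps
    x, path = 0.0, []
    for _ in range(n_steps):
        g = rng.gammavariate(dt / nu, nu)          # gamma subordinator increment
        x += theta * g + sigma * math.sqrt(g) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Since the gamma clock has mean dt per step, the terminal value has mean θ·t; as ν → 0 the time change concentrates at dt and the path approaches ordinary drifted Brownian motion.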
|
Wikipedia:Dima Grigoriev#0
|
Dima Grigoriev (Dmitry Grigoryev) (born 10 May 1954) is a Russian mathematician. His research interests include algebraic geometry, symbolic computation and computational complexity theory in computer algebra, with over 130 published articles. Dima Grigoriev was born in Leningrad, Russia, and graduated with an Honours Diploma from the Department of Mathematics and Mechanics of Leningrad State University in 1976. During 1976–1992 he was with LOMI, the Leningrad Department of the Steklov Mathematical Institute of the USSR Academy of Sciences. In 1979 he earned a PhD (Candidate of Sciences) in Physics and Mathematics with the thesis "Multiplicative Complexity of a Family of Bilinear Forms" (from LOMI, under the direction of Anatol Slissenko). In 1985 he earned a Doctor of Science (higher doctorate) with the thesis "Computational Complexity in Polynomial Algebra". From 1988 until 1992 he was the head of the Laboratory of Algorithmic Methods at the Leningrad Department of the Steklov Mathematical Institute. During 1992–1998 Grigoriev held the position of full professor at Penn State University. From 1998 he held the position of Research Director at CNRS, University of Rennes 1, and since 2008 he has been Research Director at CNRS, Laboratoire Paul Painlevé, University Lille 1, in France. He is a member of the editorial boards of the journals Computational Complexity, Applicable Algebra in Engineering, Communication and Computing, and Groups, Complexity, Cryptology. He is a recipient of the Prize of the Leningrad Mathematical Society (1984), the Max Planck Research Award of the Max Planck Society, Germany (1994), and the Humboldt Prize of the Humboldt Foundation, Germany (2002), and was an Invited Speaker at the International Congress of Mathematicians, Berkeley, California, 1986. He has Erdős number 2 due to his collaborations with Andrew Odlyzko. == References == == External links == Dima Grigoriev at the Mathematics Genealogy Project
|
Wikipedia:Dima Von-Der-Flaass#0
|
D. G. Von Der Flaass (September 8, 1962 – June 10, 2010) was a Russian mathematician and educator, Candidate of Physical and Mathematical Sciences, senior researcher at the Sobolev Institute of Mathematics. He was a specialist in combinatorics, a popularizer of mathematics, and an author of International Mathematical Olympiad problems. He was also a jury member for numerous mathematical olympiads. He had an Erdős number of 1. == Biography == === Early years === D. G. von der Flaass was born in the city of Krasnokamsk, Perm Krai on September 8, 1962, into the family of Herman Sergeevich von der Flaass, a Doctor of Geological and Mineralogical Sciences and a professor. The von der Flaass family lineage traces back to an officer in Napoleon's army, a Dutchman by origin, who was taken prisoner and remained in Russia. In 1975, at the age of 13 (two years earlier than usual), Von Der Flaass was admitted to the Lavrentiev Physics and Mathematics School. He actively participated in school mathematics Olympiads, consistently winning prizes in the Soviet Student Olympiads. He was a member of the USSR school team at the XIX International Mathematical Olympiad in Belgrade, where he won a bronze medal, despite being 3–4 years younger than his competitors. After graduating from school, Von-der-Flaass remained in Novosibirsk, where he studied, lived, and worked for almost his entire life. At the age of 15, he enrolled in the Faculty of Mechanics and Mathematics at Novosibirsk State University (NSU). He was an excellent student and actively participated in and won Olympiads held within the framework of the All-Union Conferences "Student and Scientific-Technical Progress." He specialized in the Department of Algebra and Mathematical Logic, where, under the supervision of Professor V. D. Mazurov, he researched finite groups. 
He defended his diploma on this topic, entered the postgraduate program at NSU, and in 1986 (at the age of 23), defended his Candidate's dissertation on maximal subgroups of finite simple groups. The results of his dissertation attracted great interest from specialists and significantly contributed to the classification of finite simple groups at that time. According to his scientific advisor, even while writing his Candidate’s dissertation, Von-der-Flaass showed a clear inclination toward elegant and ingenious combinatorial constructions. Von der Flaass taught for several years in the United States and the United Kingdom but later returned to Russia, stating that the only place where he could feel comfortable was Akademgorodok. == Scientific work == Von der Flaass specialized in combinatorics as a research fellow at the Sobolev Institute of Mathematics. His main interests lay in graph theory and coding theory. Over 25 years of work, he published a significant number of research papers, and in the last 10 years, his results were recognized four times as among the most important in the institute’s annual reports. As a result, Von der Flaass became a well-known specialist in his field. However, the diversity and versatility of his creative nature prevented him from formalizing a doctoral dissertation based on his many published results. Only under years of pressure from his superiors and with technical support from colleagues at the institute did he prepare his doctoral dissertation, "The Algebraic Method in Combinatorial Problems." It successfully passed all levels of evaluation and was even listed in the Higher Attestation Commission bulletin. However, it was ultimately never defended, as the candidate was unwilling to spend a few more days on it. Even during his postgraduate studies, Von der Flaass repeatedly demonstrated his ability to quickly and deeply grasp almost any issue across various fields of mathematics. 
He was a walking encyclopedia on all matters of algebraic combinatorics and graph theory, possessed a sharp "Olympiad-style" mind, and had the ability to skim through any mathematical paper while still absorbing its key ideas. He dedicated considerable effort to popularizing mathematics among mathematicians and students, frequently giving lectures on various topics in a highly engaging manner. Even while terminally ill, Von der Flaass remained deeply engaged in science. In his last three months, he wrote three papers and conceived another, continued solving and discussing Olympiad problems, corresponded with colleagues, and searched the Internet for old but important works in group theory, algebra, and combinatorics, attempting to uncover their underlying philosophical depth. Dmitry Germanovich von der Flaass died from esophageal cancer on June 10, 2010. In 2012, a collection of memoirs about him was published. A posthumous publication of his doctoral dissertation was also planned. == Olympiad and teaching activities == Alongside successful professional work in "pure mathematics," activity in the field of mathematical olympiads for school and university students was a significant and inseparable part of the life of D. G. von der Flaass. From the mid-1980s to 2009, with some interruptions, von der Flaass was a member of the Central Subject Methodological Commission, a jury member for the Soviet Student Olympiads, and later the All-Russian Mathematical Olympiad for school students, as well as a coach of the Russian national school team at the International Mathematical Olympiad. For several years, he was also a coach of the national teams of the United Kingdom, Kazakhstan, and Yakutia, achieving notable success everywhere. The teaching talents of von der Flaass manifested in his work with gifted children, which he carried out with high quality and liveliness, without immersing himself in routine. 
He presented mathematics to students as a collection of beautiful and highly general ideas, realized in various ways, and then taught them to recognize and use these ideas without offering ready-made methods for solving problems. In the All-Russian Olympiad jury, his specialty, as in professional mathematics, was combinatorics. In the evaluation of high-level combinatorial problems, where the absence of standard formulas shifts the focus to intricate reasoning, the unique talent of von der Flaass was most vividly displayed. Receiving such a participant’s work, he always immersed himself in it with keen interest, before either happily exclaiming: "Well done, look at this solution!" or silently pointing out a flaw in the reasoning. He always rejoiced at good solutions to difficult problems as if they were his own and frequently discussed them with colleagues. Often, after his comments such as: "Well, this is clear! Let’s swap these two sections, skip this part, correct two letters here, and it’s done!", even the most convoluted and unreadable text would become clear and well-structured. Von der Flaass was typically assigned the most difficult part of the evaluation, and his judgment on any given work was never questioned. It was always a joy for any jury to hear that von der Flaass would be attending the Olympiad. Von der Flaass also participated in the work of the Methodological Commission on problem creation for mathematical olympiads. Many olympiad problems by von der Flaass stemmed from his professional activity or were related to it, but they were always of very high quality and interest, typically among the most challenging at the olympiads. Bringing scientific results to a form understandable and accessible even to school students attracted von der Flaass the most. 
On this note, he concluded his activity, achieving a new mathematical result and transforming it into a beautiful problem, which became the most difficult problem of the final round of the 2010 All-Russian Mathematical Olympiad for school students. == Selected publications == P. Erdős, D. G. von der Flaass, A. V. Kostochka, Zs. Tuza (1992). "Small transversals in uniform hypergraphs". Siberian Adv. Math. 2: 82–88. M. Alekseev, D. Barsky, A. Vorobey, G. Merzon, Yu. Prokopchuk, D. von der Flaass (2004). "On a problem of sequential decoding" (PDF). Proceedings of the XV International School-Seminar "Synthesis and Complexity of Control Systems": 5–8. D. G. von der Flaass (2010). "Extending pairings to Hamiltonian cycles" (PDF). Siberian Electronic Mathematical Reports. 7: 115–118. D. G. von der Flaass (2010). "The Sophist Gorgias' Theorems and Modern Mathematics". Kvant (5). Publications of D. G. von der Flaass on Mathnet.ru Olympiad problems by D. G. von der Flaass on Problems.ru == Notes == == External links == Blog of D. G. von der Flaass on LiveJournal M. Shkolnik (2010-06-18). "Gone to Olympus". Navigator (23). Archived from the original on 2011-07-31. Photographs with Dima von der Flaass from the home archive Alexandre Borovik, Dima von der Flaass, a child of lost time The first user of Kronos, D. G. von der Flaass, presenting the game “Labyrinth” written by him to Marina von der Flaass (Filippova). The photograph was taken by a correspondent of the magazine “Young Technician.”
|
Wikipedia:Dimension#0
|
In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. A two-dimensional Euclidean space is a two-dimensional space on the plane. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces. In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space. The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space. 
== In mathematics == In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc. The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space. The dimension of Euclidean n-space En is n. When trying to generalize to other types of spaces, one is faced with the question "what makes En n-dimensional?" One answer is that to cover a fixed ball in En by small balls of radius ε, one needs on the order of ε−n such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in En looks locally like En-1 and this leads to the notion of the inductive dimension. While these notions agree on En, they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object. 
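As an illustrative aside (not part of the original article), the idea that a point in n-dimensional space is specified by n coordinates can be made concrete for the tesseract: the vertices of the unit tesseract are exactly the 4-tuples with entries 0 or 1.

```python
from itertools import product

def hypercube_vertices(n):
    """Vertices of the unit n-cube: all n-tuples with entries 0 or 1."""
    return list(product((0, 1), repeat=n))

tesseract = hypercube_vertices(4)  # the 4-dimensional hypercube
print(len(tesseract))       # 2**4 = 16 vertices
print(len(tesseract[0]))    # each vertex requires 4 coordinates
```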
Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4". Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension. === Vector spaces === The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module. === Manifolds === The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean n-space, in which the number n is the manifold's dimension. For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point. In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary; the high-dimensional cases n > 4 are simplified by having extra space in which to "work"; and the cases n = 3 and 4 are in some senses the most difficult.
This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied. ==== Complex dimension ==== The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension. Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension. === Varieties === The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety. An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains V 0 ⊊ V 1 ⊊ ⋯ ⊊ V d {\displaystyle V_{0}\subsetneq V_{1}\subsetneq \cdots \subsetneq V_{d}} of sub-varieties of the given algebraic set (the length of such a chain is the number of " ⊊ {\displaystyle \subsetneq } ").
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n. === Krull dimension === The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence P 0 ⊊ P 1 ⊊ ⋯ ⊊ P n {\displaystyle {\mathcal {P}}_{0}\subsetneq {\mathcal {P}}_{1}\subsetneq \cdots \subsetneq {\mathcal {P}}_{n}} of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0. === Topological spaces === For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer n for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim X = n. For X a manifold, this coincides with the dimension mentioned above. If no such integer n exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1 if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open". An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. 
By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1. Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles. === Hausdorff dimension === The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values. === Hilbert spaces === Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide. 
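To make the ε-covering idea behind the Minkowski (box) dimension concrete, here is a small numerical sketch (an illustration added here, not part of the article). It estimates the box dimension of the middle-thirds Cantor set, for which the Hausdorff and box dimensions both equal log 2 / log 3 ≈ 0.631.

```python
import math

def cantor_endpoints(level):
    """Integer left endpoints a of the 2**level intervals [a, a+1]/3**level
    that make up stage `level` of the middle-thirds Cantor set."""
    pts = [0]
    for _ in range(level):
        pts = [3 * p for p in pts] + [3 * p + 2 for p in pts]
    return pts

def box_count(endpoints, level, k):
    """Number of boxes of side 3**-k needed to cover the endpoints
    (exact integer arithmetic: a // 3**(level-k) is the box index)."""
    scale = 3 ** (level - k)
    return len({a // scale for a in endpoints})

# The box dimension is the limiting slope of log N(eps) / log(1/eps).
endpoints = cantor_endpoints(10)
for k in (4, 6, 8):
    n_boxes = box_count(endpoints, 10, k)
    print(k, n_boxes, math.log(n_boxes) / (k * math.log(3)))
# Each estimate equals log 2 / log 3 ≈ 0.6309, the Cantor set's dimension.
```

Integer arithmetic is used so the box counts are exact; with floating-point endpoints, values such as 2/3 can land on the wrong side of a box boundary and distort the counts.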
== In physics == === Spatial dimensions === Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.) === Time === A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction. The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy). The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. Time is different from other spatial dimensions as time operates in all spatial dimensions. 
Time operates in the first, second and third as well as theoretical spatial dimensions such as a fourth spatial dimension. Time, however, is not present at a single point of absolute infinite singularity, defined as a geometric point, since an infinitely small point can have no change and therefore no time. Just as an object moves through positions in space, it also moves through positions in time. In this sense, the force moving any object to change is time. === Additional dimensions === In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" (compactified) at such tiny scales as to be effectively invisible to current experiments. In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity.
Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza-Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building. In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a (3 + 1)-dimensional subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume. Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be since three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. 
But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration. Extra dimensions are said to be universal if all fields are equally free to propagate within them. == In computer graphics and spatial data == Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, Computer-aided design, and Geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions: Point (0-dimensional), a single coordinate in a Cartesian coordinate system. Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments. Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior. Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior. Frequently in these systems, especially GIS and Cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. 
For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines). == More dimensions == == List of topics by dimension == == See also == == References == == Further reading == Murty, Katta G. (2014). "1. Systems of Simultaneous Linear Equations" (PDF). Computational and Algorithmic Linear Algebra and n-Dimensional Geometry. World Scientific Publishing. doi:10.1142/8261. ISBN 978-981-4366-62-5. Abbott, Edwin A. (1884). Flatland: A Romance of Many Dimensions. London: Seely & Co. —. Flatland: ... Project Gutenberg. —; Stewart, Ian (2008). The Annotated Flatland: A Romance of Many Dimensions. Basic Books. ISBN 978-0-7867-2183-2. Banchoff, Thomas F. (1996). Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions. Scientific American Library. ISBN 978-0-7167-6015-3. Pickover, Clifford A. (2001). Surfing through Hyperspace: Understanding Higher Universes in Six Easy Lessons. Oxford University Press. ISBN 978-0-19-992381-6. Rucker, Rudy (2014) [1984]. The Fourth Dimension: Toward a Geometry of Higher Reality. Courier Corporation. ISBN 978-0-486-77978-2. Kaku, Michio (1994). Hyperspace, a Scientific Odyssey Through the 10th Dimension. Oxford University Press. ISBN 978-0-19-286189-4. Krauss, Lawrence M. (2005). Hiding in the Mirror. Viking Press. ISBN 978-0-670-03395-9. == External links == Copeland, Ed (2009). "Extra Dimensions". Sixty Symbols. Brady Haran for the University of Nottingham.
|
Wikipedia:Dimension (vector space)#0
|
In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say V {\displaystyle V} is finite-dimensional if the dimension of V {\displaystyle V} is finite, and infinite-dimensional if its dimension is infinite. The dimension of the vector space V {\displaystyle V} over the field F {\displaystyle F} can be written as dim F ( V ) {\displaystyle \dim _{F}(V)} or as [ V : F ] , {\displaystyle [V:F],} read "dimension of V {\displaystyle V} over F {\displaystyle F} ". When F {\displaystyle F} can be inferred from context, dim ( V ) {\displaystyle \dim(V)} is typically written. == Examples == The vector space R 3 {\displaystyle \mathbb {R} ^{3}} has { ( 1 0 0 ) , ( 0 1 0 ) , ( 0 0 1 ) } {\displaystyle \left\{{\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\0\end{pmatrix}},{\begin{pmatrix}0\\0\\1\end{pmatrix}}\right\}} as a standard basis, and therefore dim R ( R 3 ) = 3. {\displaystyle \dim _{\mathbb {R} }(\mathbb {R} ^{3})=3.} More generally, dim R ( R n ) = n , {\displaystyle \dim _{\mathbb {R} }(\mathbb {R} ^{n})=n,} and even more generally, dim F ( F n ) = n {\displaystyle \dim _{F}(F^{n})=n} for any field F . {\displaystyle F.} The complex numbers C {\displaystyle \mathbb {C} } are both a real and complex vector space; we have dim R ( C ) = 2 {\displaystyle \dim _{\mathbb {R} }(\mathbb {C} )=2} and dim C ( C ) = 1. {\displaystyle \dim _{\mathbb {C} }(\mathbb {C} )=1.} So the dimension depends on the base field. The only vector space with dimension 0 {\displaystyle 0} is { 0 } , {\displaystyle \{0\},} the vector space consisting only of its zero element. 
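The definition above (the dimension is the number of vectors in any basis) can be sketched computationally. The helper below is a hypothetical illustration, not anything from the article: it counts the basis vectors of a span by Gaussian elimination over the reals.

```python
def span_dimension(vectors, tol=1e-9):
    """Dimension of the span of the given vectors, i.e. the size of any
    basis, computed by Gaussian elimination (counting pivot rows)."""
    rows = [list(map(float, v)) for v in vectors]
    dim = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        # find a pivot row for this column among the unused rows
        pivot = next((r for r in range(dim, len(rows)) if abs(rows[r][col]) > tol), None)
        if pivot is None:
            continue
        rows[dim], rows[pivot] = rows[pivot], rows[dim]
        # eliminate this column from the remaining rows
        for r in range(dim + 1, len(rows)):
            factor = rows[r][col] / rows[dim][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[dim])]
        dim += 1
    return dim

# The standard basis of R^3 spans a space of dimension 3 ...
print(span_dimension([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # 3
# ... while adding a linearly dependent vector does not raise the dimension.
print(span_dimension([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # 2
```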
== Properties == If W {\displaystyle W} is a linear subspace of V {\displaystyle V} then dim ( W ) ≤ dim ( V ) . {\displaystyle \dim(W)\leq \dim(V).} To show that two finite-dimensional vector spaces are equal, the following criterion can be used: if V {\displaystyle V} is a finite-dimensional vector space and W {\displaystyle W} is a linear subspace of V {\displaystyle V} with dim ( W ) = dim ( V ) , {\displaystyle \dim(W)=\dim(V),} then W = V . {\displaystyle W=V.} The space R n {\displaystyle \mathbb {R} ^{n}} has the standard basis { e 1 , … , e n } , {\displaystyle \left\{e_{1},\ldots ,e_{n}\right\},} where e i {\displaystyle e_{i}} is the i {\displaystyle i} -th column of the corresponding identity matrix. Therefore, R n {\displaystyle \mathbb {R} ^{n}} has dimension n . {\displaystyle n.} Any two finite dimensional vector spaces over F {\displaystyle F} with the same dimension are isomorphic. Any bijective map between their bases can be uniquely extended to a bijective linear map between the vector spaces. If B {\displaystyle B} is some set, a vector space with dimension | B | {\displaystyle |B|} over F {\displaystyle F} can be constructed as follows: take the set F ( B ) {\displaystyle F(B)} of all functions f : B → F {\displaystyle f:B\to F} such that f ( b ) = 0 {\displaystyle f(b)=0} for all but finitely many b {\displaystyle b} in B . {\displaystyle B.} These functions can be added and multiplied with elements of F {\displaystyle F} to obtain the desired F {\displaystyle F} -vector space. An important result about dimensions is given by the rank–nullity theorem for linear maps. If F / K {\displaystyle F/K} is a field extension, then F {\displaystyle F} is in particular a vector space over K . {\displaystyle K.} Furthermore, every F {\displaystyle F} -vector space V {\displaystyle V} is also a K {\displaystyle K} -vector space. The dimensions are related by the formula dim K ( V ) = dim K ( F ) dim F ( V ) . 
{\displaystyle \dim _{K}(V)=\dim _{K}(F)\dim _{F}(V).} In particular, every complex vector space of dimension n {\displaystyle n} is a real vector space of dimension 2 n . {\displaystyle 2n.} Some formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality of the space itself. If V {\displaystyle V} is a vector space over a field F {\displaystyle F} and if the dimension of V {\displaystyle V} is denoted by dim V , {\displaystyle \dim V,} then: If dim V {\displaystyle V} is finite then | V | = | F | dim V . {\displaystyle |V|=|F|^{\dim V}.} If dim V {\displaystyle V} is infinite then | V | = max ( | F | , dim V ) . {\displaystyle |V|=\max(|F|,\dim V).} == Generalizations == A vector space can be seen as a particular case of a matroid, and in the latter there is a well-defined notion of dimension. The length of a module and the rank of an abelian group both have several properties similar to the dimension of vector spaces. The Krull dimension of a commutative ring, named after Wolfgang Krull (1899–1971), is defined to be the maximal number of strict inclusions in an increasing chain of prime ideals in the ring. === Trace === The dimension of a vector space may alternatively be characterized as the trace of the identity operator. For instance, tr id R 2 = tr ( 1 0 0 1 ) = 1 + 1 = 2. {\displaystyle \operatorname {tr} \ \operatorname {id} _{\mathbb {R} ^{2}}=\operatorname {tr} \left({\begin{smallmatrix}1&0\\0&1\end{smallmatrix}}\right)=1+1=2.} This appears to be a circular definition, but it allows useful generalizations. Firstly, it allows for a definition of a notion of dimension when one has a trace but no natural sense of basis. For example, one may have an algebra A {\displaystyle A} with maps η : K → A {\displaystyle \eta :K\to A} (the inclusion of scalars, called the unit) and a map ϵ : A → K {\displaystyle \epsilon :A\to K} (corresponding to trace, called the counit). 
The composition ϵ ∘ η : K → K {\displaystyle \epsilon \circ \eta :K\to K} is a scalar (being a linear operator on a 1-dimensional space) that corresponds to "trace of identity", and gives a notion of dimension for an abstract algebra. In practice, in bialgebras, this map is required to be the identity, which can be obtained by normalizing the counit by dividing by dimension ( ϵ := 1 n tr {\displaystyle \epsilon :=\textstyle {\frac {1}{n}}\operatorname {tr} } ), so in these cases the normalizing constant corresponds to dimension. Alternatively, it may be possible to take the trace of operators on an infinite-dimensional space; in this case a (finite) trace is defined, even though no (finite) dimension exists, and gives a notion of "dimension of the operator". These fall under the rubric of "trace class operators" on a Hilbert space, or more generally nuclear operators on a Banach space. A subtler generalization is to consider the trace of a family of operators as a kind of "twisted" dimension. This occurs significantly in representation theory, where the character of a representation is the trace of the representation, hence a scalar-valued function on a group χ : G → K , {\displaystyle \chi :G\to K,} whose value on the identity 1 ∈ G {\displaystyle 1\in G} is the dimension of the representation, as a representation sends the identity in the group to the identity matrix: χ ( 1 G ) = tr I V = dim V . {\displaystyle \chi (1_{G})=\operatorname {tr} \ I_{V}=\dim V.} The other values χ ( g ) {\displaystyle \chi (g)} of the character can be viewed as "twisted" dimensions, and find analogs or generalizations of statements about dimensions to statements about characters or representations.
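As a concrete sketch of characters as "twisted" dimensions (an added illustration, not part of the article): for the permutation representation of the symmetric group S3 on 3 points, the character value at a permutation is its number of fixed points, and the value at the identity is the trace of the identity matrix, i.e. the dimension of the representation.

```python
from itertools import permutations

def character(perm):
    """Character of the permutation representation at `perm`: the trace of
    the corresponding permutation matrix, i.e. its number of fixed points."""
    return sum(1 for i, j in enumerate(perm) if i == j)

group = list(permutations(range(3)))   # the six elements of S3
identity = (0, 1, 2)

# The value at the identity is the dimension of the representation.
print(character(identity))                   # 3
# The other values are the "twisted" dimensions discussed above.
print(sorted(character(g) for g in group))   # [0, 0, 1, 1, 1, 3]
```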
A sophisticated example of this occurs in the theory of monstrous moonshine: the j {\displaystyle j} -invariant is the graded dimension of an infinite-dimensional graded representation of the monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group. == See also == Fractal dimension – Ratio providing a statistical index of complexity variation with scale Krull dimension – In mathematics, dimension of a ring Matroid rank – Maximum size of an independent set of the matroid Rank (linear algebra) – Dimension of the column space of a matrix Topological dimension – Topologically invariant definition of the dimension of a space, also called Lebesgue covering dimension == Notes == == References == == Sources == Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0. == External links == MIT Linear Algebra Lecture on Independence, Basis, and Dimension by Gilbert Strang at MIT OpenCourseWare
|
Wikipedia:Dimension function#0
|
In mathematics, the notion of an (exact) dimension function (also known as a gauge function) is a tool in the study of fractals and other subsets of metric spaces. Dimension functions are a generalisation of the simple "diameter to the dimension" power law used in the construction of s-dimensional Hausdorff measure. == Motivation: s-dimensional Hausdorff measure == Consider a metric space (X, d) and a subset E of X. Given a number s ≥ 0, the s-dimensional Hausdorff measure of E, denoted μs(E), is defined by μ s ( E ) = lim δ → 0 μ δ s ( E ) , {\displaystyle \mu ^{s}(E)=\lim _{\delta \to 0}\mu _{\delta }^{s}(E),} where μ δ s ( E ) = inf { ∑ i = 1 ∞ d i a m ( C i ) s | d i a m ( C i ) ≤ δ , ⋃ i = 1 ∞ C i ⊇ E } . {\displaystyle \mu _{\delta }^{s}(E)=\inf \left\{\left.\sum _{i=1}^{\infty }\mathrm {diam} (C_{i})^{s}\right|\mathrm {diam} (C_{i})\leq \delta ,\bigcup _{i=1}^{\infty }C_{i}\supseteq E\right\}.} μδs(E) can be thought of as an approximation to the "true" s-dimensional area/volume of E given by calculating the minimal s-dimensional area/volume of a covering of E by sets of diameter at most δ. As a function of increasing s, μs(E) is non-increasing. In fact, for all values of s, except possibly one, μs(E) is either 0 or +∞; this exceptional value is called the Hausdorff dimension of E, here denoted dimH(E). Intuitively speaking, μs(E) = +∞ for s < dimH(E) for the same reason as the 1-dimensional linear length of a 2-dimensional disc in the Euclidean plane is +∞; likewise, μs(E) = 0 for s > dimH(E) for the same reason as the 3-dimensional volume of a disc in the Euclidean plane is zero. The idea of a dimension function is to use different functions of diameter than just diam(C)s for some s, and to look for the same property of the Hausdorff measure being finite and non-zero. == Definition == Let (X, d) be a metric space and E ⊆ X. Let h : [0, +∞) → [0, +∞] be a function.
Define μh(E) by μ h ( E ) = lim δ → 0 μ δ h ( E ) , {\displaystyle \mu ^{h}(E)=\lim _{\delta \to 0}\mu _{\delta }^{h}(E),} where μ δ h ( E ) = inf { ∑ i = 1 ∞ h ( d i a m ( C i ) ) | d i a m ( C i ) ≤ δ , ⋃ i = 1 ∞ C i ⊇ E } . {\displaystyle \mu _{\delta }^{h}(E)=\inf \left\{\left.\sum _{i=1}^{\infty }h\left(\mathrm {diam} (C_{i})\right)\right|\mathrm {diam} (C_{i})\leq \delta ,\bigcup _{i=1}^{\infty }C_{i}\supseteq E\right\}.} Then h is called an (exact) dimension function (or gauge function) for E if μh(E) is finite and strictly positive. There are many conventions as to the properties that h should have: Rogers (1998), for example, requires that h should be monotonically increasing for t ≥ 0, strictly positive for t > 0, and continuous on the right for all t ≥ 0. === Packing dimension === Packing dimension is constructed in a very similar way to Hausdorff dimension, except that one "packs" E from inside with pairwise disjoint balls of diameter at most δ. Just as before, one can consider functions h : [0, +∞) → [0, +∞] more general than h(δ) = δs and call h an exact dimension function for E if the h-packing measure of E is finite and strictly positive. == Example == Almost surely, a sample path X of Brownian motion in the Euclidean plane has Hausdorff dimension equal to 2, but the 2-dimensional Hausdorff measure μ2(X) is zero. The exact dimension function h is given by the logarithmic correction h ( r ) = r 2 ⋅ log 1 r ⋅ log log log 1 r . {\displaystyle h(r)=r^{2}\cdot \log {\frac {1}{r}}\cdot \log \log \log {\frac {1}{r}}.} I.e., with probability one, 0 < μh(X) < +∞ for a Brownian path X in R2. For Brownian motion in Euclidean n-space Rn with n ≥ 3, the exact dimension function is h ( r ) = r 2 ⋅ log log 1 r . {\displaystyle h(r)=r^{2}\cdot \log \log {\frac {1}{r}}.} == References == Olsen, L. (2003). "The exact Hausdorff dimension functions of some Cantor sets". Nonlinearity. 16 (3): 963–970. doi:10.1088/0951-7715/16/3/309. Rogers, C. A. (1998). 
Hausdorff measures. Cambridge Mathematical Library (Third ed.). Cambridge: Cambridge University Press. pp. xxx+195. ISBN 0-521-62491-6.
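The covering sums from the motivation section can be made concrete for the middle-thirds Cantor set, which is covered at stage k by 2^k intervals of diameter 3^(-k); this minimal numerical sketch (function and variable names are illustrative) shows the sums diverging below s = log 2/log 3, vanishing above it, and staying constant at it:

```python
import math

def cover_sum(s, k):
    # sum of diam(C_i)**s over the natural stage-k cover of the
    # middle-thirds Cantor set: 2**k intervals of diameter 3**(-k)
    return (2 ** k) * (3.0 ** (-k)) ** s

s_star = math.log(2) / math.log(3)  # the Hausdorff dimension of the Cantor set

# Below s_star the sums grow without bound, above s_star they tend to 0,
# and at s_star every stage gives exactly 1, so mu^h is finite and positive
# for h(t) = t**s_star.
assert cover_sum(0.5, 30) > 10       # s < s_star: sums blow up
assert cover_sum(0.7, 30) < 0.2      # s > s_star: sums vanish
assert math.isclose(cover_sum(s_star, 30), 1.0)
```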
|
Wikipedia:Dimension of an algebraic variety#0
|
In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways. Some of these definitions are of geometric nature, while some others are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, being independent of any embedding of the variety into an affine or projective space, while others are related to such an embedding. == Dimension of an affine algebraic set == Let K be a field, and L ⊇ K be an algebraically closed extension. An affine algebraic set V is the set of the common zeros in Ln of the elements of an ideal I in a polynomial ring R = K [ x 1 , … , x n ] . {\displaystyle R=K[x_{1},\ldots ,x_{n}].} Let A = R / I {\displaystyle A=R/I} be the K-algebra of the polynomial functions over V. The dimension of V is any of the following integers. It does not change if K is enlarged, if L is replaced by another algebraically closed extension of K, or if I is replaced by another ideal having the same zeros (that is, having the same radical). The dimension is also independent of the choice of coordinates; in other words, it does not change if the xi are replaced by linearly independent linear combinations of them. The dimension of V is The maximal length d {\displaystyle d} of the chains V 0 ⊂ V 1 ⊂ … ⊂ V d {\displaystyle V_{0}\subset V_{1}\subset \ldots \subset V_{d}} of distinct nonempty (irreducible) subvarieties of V. This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the definition that gives the easiest intuitive description of the notion. The Krull dimension of the coordinate ring A.
This is the transcription of the preceding definition in the language of commutative algebra, the Krull dimension being the maximal length of the chains p 0 ⊂ p 1 ⊂ … ⊂ p d {\displaystyle p_{0}\subset p_{1}\subset \ldots \subset p_{d}} of prime ideals of A. The maximal Krull dimension of the local rings at the points of V. This definition shows that the dimension is a local property if V {\displaystyle V} is irreducible. If V {\displaystyle V} is irreducible, it turns out that all the local rings at points of V have the same Krull dimension; thus: If V is a variety, the Krull dimension of the local ring at any point of V This rephrases the previous definition into a more geometric language. The maximal dimension of the tangent vector spaces at the non-singular points of V. This relates the dimension of a variety to that of a differentiable manifold. More precisely, if V is defined over the reals, then the set of its real regular points, if it is not empty, is a differentiable manifold that has the same dimension as a variety and as a manifold. If V is a variety, the dimension of the tangent vector space at any non-singular point of V. This is the algebraic analogue of the fact that a connected manifold has a constant dimension. This can also be deduced from the result stated below the third definition, and the fact that the dimension of the tangent space is equal to the Krull dimension at any non-singular point (see Zariski tangent space). The number of hyperplanes or hypersurfaces in general position which are needed to have an intersection with V which is reduced to a nonzero finite number of points. This definition is not intrinsic, as it applies only to algebraic sets that are explicitly embedded in an affine or projective space. The maximal length of a regular sequence in the coordinate ring A. This is the algebraic translation of the preceding definition. The difference between n and the maximal length of the regular sequences contained in I.
This is the algebraic translation of the fact that the intersection of n – d general hypersurfaces is an algebraic set of dimension d. The degree of the Hilbert polynomial of A. The degree of the denominator of the Hilbert series of A. This allows one, through a Gröbner basis computation, to compute the dimension of the algebraic set defined by a given system of polynomial equations. Moreover, the dimension is not changed if the polynomials of the Gröbner basis are replaced with their leading monomials, and if these leading monomials are replaced with their radical (monomials obtained by removing exponents). So: The Krull dimension of the Stanley–Reisner ring R / J {\displaystyle R/J} where J {\displaystyle J} is the radical of the initial ideal of I {\displaystyle I} for any admissible monomial ordering (the initial ideal of I {\displaystyle I} is the set of all leading monomials of elements of I {\displaystyle I} ). The dimension of the simplicial complex defined by this Stanley–Reisner ring. If I is a prime ideal (i.e. V is an algebraic variety), the transcendence degree over K of the field of fractions of A. This allows one to prove easily that the dimension is invariant under birational equivalence. == Dimension of a projective algebraic set == Let V be a projective algebraic set defined as the set of the common zeros of a homogeneous ideal I in a polynomial ring R = K [ x 0 , x 1 , … , x n ] {\displaystyle R=K[x_{0},x_{1},\ldots ,x_{n}]} over a field K, and let A = R/I be the graded algebra of the polynomials over V. All the definitions of the previous section apply, with the change that, when A or I appear explicitly in the definition, the value of the dimension must be reduced by one. For example, the dimension of V is one less than the Krull dimension of A. == Computation of the dimension == Given a system of polynomial equations over an algebraically closed field K {\displaystyle K} , it may be difficult to compute the dimension of the algebraic set that it defines.
Without further information on the system, there is only one practical method, which consists of computing a Gröbner basis and deducing the degree of the denominator of the Hilbert series of the ideal generated by the equations. The second step, which is usually the fastest, may be accelerated in the following way: Firstly, the Gröbner basis is replaced by the list of its leading monomials (this is already done for the computation of the Hilbert series). Then each monomial like x 1 e 1 ⋯ x n e n {\displaystyle {x_{1}}^{e_{1}}\cdots {x_{n}}^{e_{n}}} is replaced by the product of the variables in it: x 1 min ( e 1 , 1 ) ⋯ x n min ( e n , 1 ) . {\displaystyle x_{1}^{\min(e_{1},1)}\cdots x_{n}^{\min(e_{n},1)}.} Then the dimension is the maximal size of a subset S of the variables, such that none of these products of variables depends only on the variables in S. This algorithm is implemented in several computer algebra systems. For example, in Maple, this is the function Groebner[HilbertDimension], and in Macaulay2, this is the function dim. == Real dimension == The real dimension of a set of real points, typically a semialgebraic set, is the dimension of its Zariski closure. For a semialgebraic set S, the real dimension is one of the following equal integers: The real dimension of S {\displaystyle S} is the dimension of its Zariski closure. The real dimension of S {\displaystyle S} is the maximal integer d {\displaystyle d} such that there is a homeomorphism of [ 0 , 1 ] d {\displaystyle [0,1]^{d}} into S {\displaystyle S} . The real dimension of S {\displaystyle S} is the maximal integer d {\displaystyle d} such that there is a projection of S {\displaystyle S} onto a d {\displaystyle d} -dimensional subspace with a non-empty interior. For an algebraic set defined over the reals (that is, defined by polynomials with real coefficients), it may occur that the real dimension of the set of its real points is smaller than its dimension as an algebraic set.
For example, the algebraic surface of equation x 2 + y 2 + z 2 = 0 {\displaystyle x^{2}+y^{2}+z^{2}=0} is an algebraic variety of dimension two, which has only one real point (0, 0, 0), and thus has real dimension zero. The real dimension is more difficult to compute than the algebraic dimension. For the case of a real hypersurface (that is, the set of real solutions of a single polynomial equation), there exists a probabilistic algorithm to compute its real dimension. == See also == Dimension (vector space) Dimension theory (algebra) Dimension of a scheme == References ==
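The combinatorial step described in the computation section (take the variable supports of the leading monomials, then find the largest subset S of variables that contains no whole support) can be sketched in Python; the function name and the set-of-variables encoding of monomials are illustrative assumptions:

```python
from itertools import combinations

def dimension_from_leading_monomials(monomials, variables):
    """Dimension of an algebraic set, read off from the leading monomials
    of a Groebner basis.  Each monomial is given as the set of variables
    occurring in it (its radical, i.e. exponents removed); the dimension
    is the maximal size of a subset S of the variables such that none of
    these variable sets is contained in S."""
    supports = [set(m) for m in monomials]
    for size in range(len(variables), -1, -1):
        for subset in combinations(variables, size):
            S = set(subset)
            if not any(support <= S for support in supports):
                return size
    return -1  # reached only if 1 is a leading monomial (empty algebraic set)

# The monomial ideal (x*y, y*z) in K[x, y, z] defines the union of the
# plane y = 0 and the line x = z = 0, which has dimension 2.
print(dimension_from_leading_monomials([{"x", "y"}, {"y", "z"}], ["x", "y", "z"]))  # → 2
```

This brute-force search is exponential in the number of variables; computer algebra systems instead derive the dimension from the Hilbert series, as described above.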
|
Wikipedia:Dimitri Leemans#0
|
Dimitri Leemans is a Belgian mathematician born in Uccle in 1972. == Biography == Leemans obtained his Licence en Sciences Mathématiques at the Université libre de Bruxelles in 1994, and his doctorate in 1998 under the supervision of Francis Buekenhout and Michel Dehon. He is currently a professor in the mathematics department of the Faculty of Sciences of the Université libre de Bruxelles. He lived in New Zealand from 2011 until 2016, where he worked for the University of Auckland as an associate professor. He returned to Belgium after experiencing problems with Immigration New Zealand because of his stepson's autism. == Prizes and awards == Agathon De Potter prize for the period 2000-2002 from the Academie Royale des Sciences de Belgique 2007 Annual Prize in Mathematics from the Academie Royale des Sciences de Belgique 2013-2016 Marsden grant from the Royal Society of New Zealand New Zealand Mathematical Society 2014 Research Award 2018 Francois Deruyts Prize in Geometry from the Academie Royale des Sciences de Belgique == Research == Leemans's principal area of research is at the interface of algebra, computational mathematics, combinatorics and geometry. He has made major contributions to the study of regular and chiral polytopes whose automorphism groups are finite almost simple groups. He has published more than 90 papers in peer-reviewed scientific journals and two memoirs of the Academie Royale des Sciences de Belgique. Since 1999 he has been the developer of the Magma (computer algebra system) packages on incidence geometry and coset geometry.
== Major scientific publications == Fernandes, Maria Elisa; Leemans, Dimitri (2011), "Polytopes of high rank for the symmetric groups", Advances in Mathematics, 228 (6): 3207–3222, doi:10.1016/j.aim.2011.08.006 Cameron, Peter J.; Fernandes, Maria Elisa; Leemans, Dimitri; Mixer, Mark (2017), "Highest rank of a polytope for A n {\displaystyle A_{n}} ", Proceedings of the London Mathematical Society, 115 (1): 135–176, doi:10.1112/plms.12039, hdl:10023/10678, S2CID 55208000 Leemans, Dimitri; Liebeck, Martin W. (2017), "Chiral polyhedra and finite simple groups", Bulletin of the London Mathematical Society, 49 (4): 581–592, arXiv:1603.07713, doi:10.1112/blms.12041, S2CID 119156295 Fernandes, Maria Elisa; Leemans, Dimitri; Mixer, Mark (2018), "An extension of the classification of high rank regular polytopes", Transactions of the American Mathematical Society, 370 (12): 8833–8857, arXiv:1509.01032, doi:10.1090/tran/7425, S2CID 119141460 Brooksbank, Peter A.; Leemans, Dimitri (2019), "Rank reduction of string C-group representations", Proceedings of the American Mathematical Society, 147 (12): 5421–5426, arXiv:1812.01055, doi:10.1090/proc/14666, S2CID 119623602 == References == == External links == Personal website of Dimitri Leemans
|
Wikipedia:Dimitrie Pompeiu#0
|
Dimitrie D. Pompeiu (Romanian: [diˈmitri.e pomˈpeju]; 4 October [O.S. 22 September] 1873 – 8 October 1954) was a Romanian mathematician, professor at the University of Bucharest, titular member of the Romanian Academy, and President of the Chamber of Deputies. == Biography == He was born in 1873 in Broscăuți, Botoșani County, in a family of well-to-do peasants. After completing high school in nearby Dorohoi, he went to study at the Normal Teachers School in Bucharest, where he had Alexandru Odobescu as a teacher. After obtaining his diploma in 1893, he taught for five years at schools in Galați and Ploiești. In 1898 he went to France, where he studied mathematics at the University of Paris (the Sorbonne). He obtained his Ph.D. degree in mathematics in 1905, with thesis On the continuity of complex variable functions written under the direction of Henri Poincaré. After returning to Romania, Pompeiu was named Professor of Mechanics at the University of Iași. In 1912, he assumed a chair at the University of Bucharest. In the early 1930s he was elected to the Chamber of Deputies as a member of Nicolae Iorga's Democratic Nationalist Party, and served as President of the Chamber of Deputies for a year. In 1934, Pompeiu was elected titular member of the Romanian Academy, while in 1943 he was elected to the Romanian Academy of Sciences. In 1945, he became the founding director of the Institute of Mathematics of the Romanian Academy. He died in Bucharest in 1954. A boulevard in the Pipera neighborhood of the city is named after him, and so is a school in his hometown of Broscăuți. == Research == Pompeiu's contributions were mainly in the field of mathematical analysis, complex functions theory, and rational mechanics. In an article published in 1929, he posed a challenging conjecture in integral geometry, now widely known as the Pompeiu problem. 
Among his contributions to real analysis there is the construction, dated 1906, of non-constant, everywhere differentiable functions, with derivative vanishing on a dense set. Such derivatives are now called Pompeiu derivatives. == Selected works == Pompeiu, Dimitrie (1905), "Sur la continuité des fonctions de variables complexes", Annales de la Faculté des Sciences de Toulouse, Série 2 (in French), 7 (3): 265–315, JFM 36.0454.04. Pompeiu, Dimitrie (1912), "Sur une classe de fonctions d'une variable complexe", Rendiconti del Circolo Matematico di Palermo (in French), 33 (1): 108–113, doi:10.1007/BF03015292, JFM 43.0481.01, S2CID 120717465. Pompeiu, Dimitrie (1913), "Sur une classe de fonctions d'une variable complexe et sur certaines équations intégrales", Rendiconti del Circolo Matematico di Palermo (in French), 35 (1): 277–281, doi:10.1007/BF03015607, S2CID 121616964. == See also == Cauchy–Pompeiu formula Pompeiu–Hausdorff metric Pompeiu's theorem Pompeiu derivative Wirtinger derivatives == Biographical references == Bîrsan, Temistocle; Tiba, Dan (2006), "One hundred years since the introduction of the set distance by Dimitrie Pompeiu", in Ceragioli, Francesca; Dontchev, Asen; Futura, Hitoshi; Marti, Kurt; Pandolfi, Luciano (eds.), System Modeling and Optimization, vol. 199, Boston: Kluwer Academic Publishers, pp. 35–39, doi:10.1007/0-387-33006-2_4, ISBN 978-0-387-32774-7, MR 2249320 Mocanu, Petru. "Dimitrie Pompeiu (1873-1954)" (in Romanian). Babeș-Bolyai University. Retrieved October 9, 2020. == References == Fichera, Gaetano (1969), "Derivata areolare e funzioni a variazione limitata", Revue Roumaine de Mathématiques Pures et Appliquées (in Italian), XIV (1): 27–37, MR 0265616, Zbl 0201.10002. ("Areolar derivative and functions of bounded variation" is an important reference paper in the theory of areolar derivatives.) 
Henrici, Peter (1993) [1986], Applied and Computational Complex Analysis Volume 3, Wiley Classics Library (Reprint ed.), New York–Chichester–Brisbane–Toronto–Singapore: John Wiley & Sons, pp. X+637, ISBN 0-471-58986-1, MR 0822470, Zbl 1107.30300.
|
Wikipedia:Dimitrije Nešić#0
|
Dimitrije Nešić (20 October 1836 – 9 May 1904) was a Serbian mathematician, professor at the Lyceum of the Principality of Serbia and president of the Serbian Royal Academy. == Biography == Nešić was born to Savka and Stojan Nešić in Belgrade, Principality of Serbia. The Nešić family had left their hometown of Novi Pazar because of Ottoman oppression of the Serbian population in the area during the First Serbian Uprising. His father Stojan was a craftsman and trader, while his mother was a housekeeper. His grandfather, a merchant in Novi Pazar, came to Belgrade in 1808 because of Ottoman anti-Serb actions during the First Serbian Uprising. Dimitrije Nešić completed his elementary and six-grade secondary education in Belgrade, and enrolled at the Lyceum in 1853. After receiving a scholarship, he continued his studies at the Technical College in Vienna and the Polytechnical School at Karlsruhe. He returned to Belgrade in 1862 to become a professor of mathematics. Nešić authored most of the modern mathematics textbooks in the Kingdom of Serbia and overall improved the quality of studies in the field. He was sent by the government to travel across Europe and study various metric systems, and he later implemented the information gathered on his travels, thus creating the first official and modern Serbian metric system. He served two terms as rector of today's University of Belgrade and was also president of the academy. Dimitrije Nešić was made a corresponding member of JAZU and received the Order of St Sava and the Order of the White Eagle. == Selected works == Metarske mere, 1874. Trigonometrija, 1875. Nauka o kombinacijama, 1883. Algebarska analiza I, 1883. Algebarska analiza II, 1883. == See also == Jovan Sterija Popović Đura Daničić Josif Pančić Matija Ban Konstantin Branković == References == == External links == Biography on the website of SANU
|
Wikipedia:Dinesh Singh (academic)#0
|
Dinesh Singh, chancellor of K.R. Mangalam University, is an Indian professor of mathematics. He served as the 21st Vice-Chancellor of the University of Delhi, is a distinguished fellow of Hackspace at Imperial College London, and has been an adjunct professor of Mathematics at the University of Houston. For his services to the nation he was conferred with the Padma Shri, which is the fourth highest civilian award of the Republic of India. == Early life and background == Dinesh Singh earned his B.Sc. (Hons. – Maths) in 1975 and M.A. (Maths) in 1977 from St. Stephen's College, followed by an M.Phil (Maths) in 1978 from the University of Delhi. He earned a PhD in mathematics from Imperial College London in 1981. He holds numerous honorary doctorates, some of them awarded by the University of Edinburgh, the National Institute of Technology, Kurukshetra, University College Cork, Ireland, and the University of Houston. == Career == Singh started his career as a lecturer at St. Stephen's College, University of Delhi, in 1981. Thereafter he joined the Department of Mathematics, University of Delhi, in 1987. He was head of the Department of Mathematics at the University of Delhi from December 2004 to September 2005. He served the University of Delhi as director, South Campus, from 2005 to 2010. He officiated briefly as pro vice chancellor, University of Delhi, before being appointed vice chancellor on 29 October 2010. His areas of specialization include functional analysis, operator theory, and harmonic analysis. He is an adjunct professor at the University of Houston and has also taught at the Indian Institute of Technology Delhi and the Indian Statistical Institute, Delhi. He is noted for being instrumental in the setting up of the Cluster Innovation Centre at the University of Delhi, an inter-disciplinary, first-of-its-kind research center particularly promoting undergraduate research.
He also popularized the concept of innovation as credit. == Awards and distinctions == Padma Shri – India's fourth highest civilian award, conferred by the President of India "in recognition of distinguished service in the field of Literature and Education", 2014 Career Award in Mathematics of the University Grants Commission, 1994. The AMU Prize of the Indian Mathematical Society, 1989. The Inlaks Scholarship to pursue the Ph.D. degree at Imperial College, 1978. Mukarji-Ram Behari Mathematics Prize of St. Stephen’s College for the Best Pass in M.A., 1977. Best Undergraduate in Mathematics prize of St. Stephen’s College, 1974. Member, Scientific Advisory Committee to the Union Cabinet, Govt. of India Member, Jnanpith Award Jury Selection Board – one of the highest literary prizes in India Elected President, Mathematical Sciences Section, Indian Science Congress Association, 2012-13 Elected Vice President, Ramanujan Mathematical Society, 2013-15 == Controversies == During Singh's tenure as vice chancellor of Delhi University, the Delhi University Teachers Association criticised his leadership as being authoritarian. In 2013, the president of the association claimed that dissenting voices were silenced, and called Singh's style "feudal and autocratic"; two years later, Bhaskaracharya College of Applied Sciences principal Manoj Khanna resigned from his position, also referring to Singh's "autocratic" attitude. Khanna claimed that colleges were required to submit false affidavits to the All-India Council for Technical Education (AICTE) in order to get approval to run BTech courses, something that was denied by the university's spokesperson. Singh was also accused of financial and administrative irregularities. A ‘White Paper’ released by the Delhi University Teachers’ Association (DUTA) alleged financial and administrative irregularities in the functioning of Delhi University, such as the diversion of OBC funds for the purchase of laptops and the flagging off of the ‘Gyanodaya Express’.
The Ministry of Human Resource Development, Government of India, after approval of the President of India (who is also the Visitor of Delhi University), issued a show cause notice to Singh regarding these allegations. Singh responded to the show cause notice denying every charge, but the HRD ministry turned down his request for withdrawing the notice. Noted academics, including ex-president of the Indian National Science Academy Krishan Lal, former director general of the Council for Scientific and Industrial Research SK Brahmachari, Keki N. Daruwalla, ex-vice president of the Indian Academy of Sciences, and a host of academics from JNU, Jamia and Banaras Hindu University, as well as an ex-DU vice-chancellor, "backed" Prof. Dinesh Singh, raising concern over the manner in which "the autonomy of the university was being compromised". The government decided not to process his reply to the show cause notice as he had barely a few months left in his tenure, and a few senior Cabinet ministers pleaded for no action against him in order for him to finish his full term. The HRD ministry, on 7 October 2015, requested the Visitor to send Dinesh Singh on 'forced leave' on the charge that he had tried to derail the process of appointment of his successor. However, the HRD ministry had to withdraw the request as the Visitor, Pranab Mukherjee, was not convinced that the DU VC should be punished with barely 20 days left of his term. The Visitor asked Dinesh Singh not to continue for a single day beyond the last day of his term. However, Dinesh Singh denied the charge of intentionally derailing the process of appointment of his successor and asserted that he would not continue a day beyond his tenure, as was being speculated. In spite of just 20-odd days left for the completion of his term, a large number of teachers and students came out to protest in front of the Vice Regal Lodge demanding his ouster.
Not only did the activists of the National Democratic Teachers' Front and Academics for Action and Development (Rathi) protest against Dinesh Singh, but the demand for his removal also saw the archrivals Democratic Teachers' Front (DTF) and Academics for Action and Development (AAD) join the anti-VC protests by the Delhi University Teachers Association. J Khuntiya, chairman of AAD (Rathi), alleged that the delay had been intentional, to ensure that Umesh Rai, a member of the team of Dinesh Singh, became eligible for selection. One of the names included by Dinesh Singh in the search committee for the selection of the new vice-chancellor was Vinod Rai. DUTA also charged Singh with plagiarism and sought an investigation into these charges against him. Singh was instrumental in initiating the Four Year Undergraduate Program (FYUP), which met with a lot of resistance from different quarters, after which the programme was scrapped as per the directives of the University Grants Commission (UGC). Thereafter, Singh announced his resignation but later retracted it. == References ==
|
Wikipedia:Dini continuity#0
|
In mathematical analysis, Dini continuity is a refinement of continuity. Every Dini continuous function is continuous. Every Lipschitz continuous function is Dini continuous. == Definition == Let X {\displaystyle X} be a compact subset of a metric space (such as R n {\displaystyle \mathbb {R} ^{n}} ), and let f : X → X {\displaystyle f:X\rightarrow X} be a function from X {\displaystyle X} into itself. The modulus of continuity of f {\displaystyle f} is ω f ( t ) = sup d ( x , y ) ≤ t d ( f ( x ) , f ( y ) ) . {\displaystyle \omega _{f}(t)=\sup _{d(x,y)\leq t}d(f(x),f(y)).} The function f {\displaystyle f} is called Dini-continuous if ∫ 0 1 ω f ( t ) t d t < ∞ . {\displaystyle \int _{0}^{1}{\frac {\omega _{f}(t)}{t}}\,dt<\infty .} An equivalent condition is that, for any θ ∈ ( 0 , 1 ) {\displaystyle \theta \in (0,1)} , ∑ i = 1 ∞ ω f ( θ i a ) < ∞ {\displaystyle \sum _{i=1}^{\infty }\omega _{f}(\theta ^{i}a)<\infty } where a {\displaystyle a} is the diameter of X {\displaystyle X} . == See also == Dini test — a condition similar to local Dini continuity implies convergence of a Fourier series. == References == Stenflo, Örjan (2001). "A note on a theorem of Karlin". Statistics & Probability Letters. 54 (2): 183–187. doi:10.1016/S0167-7152(01)00045-1.
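The equivalent geometric-sum condition can be explored numerically by estimating the modulus of continuity on a grid; a minimal sketch for a Lipschitz (hence Dini continuous) function, with illustrative grid and truncation sizes:

```python
import numpy as np

def modulus(f, t, grid):
    # grid estimate of omega_f(t): sup of |f(x) - f(y)| over grid pairs
    # with |x - y| <= t (a lower bound for the true modulus)
    vals = f(grid)
    return max(
        abs(vals[i] - vals[j])
        for i in range(len(grid))
        for j in range(i, len(grid))
        if abs(grid[i] - grid[j]) <= t
    )

f = np.sin                      # Lipschitz with constant L = 1 on [0, 1]
grid = np.linspace(0.0, 1.0, 200)
theta, a = 0.5, 1.0             # a = diameter of X = [0, 1]

# Truncation of sum_i omega_f(theta**i * a).  For a Lipschitz function,
# omega_f(t) <= L*t, so the full sum is bounded by L*a*theta/(1 - theta) = 1,
# consistent with Dini continuity.
partial_sum = sum(modulus(f, theta ** i * a, grid) for i in range(1, 15))
assert 0.0 < partial_sum <= 1.0
```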
|
Wikipedia:Dion O'Neale#0
|
Dion O'Neale is a New Zealand applied mathematician who specialises in the area of complex systems and network science. His work involves the analysis of empirical data to inform computer simulations to predict how interacting parts and structures of networks can affect the dynamics and properties of systems. During the COVID-19 pandemic, O'Neale created mathematical models to build understanding of how the virus spread through networks of interactions, and during this period he was a frequent commentator in the New Zealand media about the country's response to the pandemic. He is a senior lecturer in physics at Auckland University, principal investigator at Te Pūnaha Matatini and Project Lead of COVID-19 Modelling Aotearoa. == Education and career == Born in New Zealand, O'Neale studied at the University of Auckland between 1999 and 2003, graduating with a BSc in physics, a BA in mathematics and a BSc with honours in applied mathematics. He completed his MSc at Heinrich Heine University Düsseldorf in 2005, and his PhD at Massey University in 2009. O'Neale was a postdoctoral research fellow at La Trobe University in Australia from August 2009 until April 2010, when he returned to New Zealand and joined the Applied Mathematics team at Industrial Research Limited, later known as Callaghan Innovation, in Lower Hutt, where he worked as a research scientist until 2013, when he became a research fellow and later lecturer with the department of physics at Auckland University. As of 2022 O'Neale continues in that role and since 2015 has been a principal investigator at Te Pūnaha Matatini. O'Neale has taken lead roles in several New Zealand government-funded research projects and in 2021 became the project lead for a programme called COVID-19 Modelling Aotearoa, which initially arose under the leadership of Te Pūnaha Matatini but is now a standalone project hosted by the University of Auckland.
== Response to COVID-19 in New Zealand == === Modelling and research === O'Neale was part of a team led by Michael Baker and funded in 2020 by the Health Research Council of New Zealand (HRC) for a 3-year research project: COVID-19 Pandemic in Aotearoa NZ: Impact, Inequalities & Improving our response. The goal of the project was to provide insights to the New Zealand Ministry of Health about how the virus was likely to severely affect people with existing health conditions and less able to afford health care. The application noted that Māori and Pasifika were disproportionately represented in this group, so the response in New Zealand needed to be "effective and fair....[and the researchers undertook to]... communicate these insights to decision-makers at the Ministry of Health, service providers, communities, other Pacific nations, and the public in the form of practical recommendations to guide current and future pandemic responses". Another funded programme led by O'Neale, Te matatini o te horapa: a population-based contagion network for Aotearoa NZ, undertook to build a model to simulate how COVID-19 could spread on contact networks, [by] "explicitly including individual demographic and economic attributes in the model..[and]...provide policy advice about vulnerability, what factors lead to increased risk, and what effective and equitable interventions would be. These ranged from behavioural changes and social support measures, to mitigate factors which [increased] transmission risk and inequities". A paper co-authored by O'Neale and provided initially to officials on 16 November 2020 looked at modelling which possible non-pharmaceutical interventions would lead to elimination of COVID-19 if a case – not connected to the border – was found in the community.
The study suggested that effective behaviours – other than testing and contact tracing – to improve the chances of possible elimination included increased levels of control in workplaces and the closing of schools. These findings were based on simulations of combinations of multiple control measures and allowed "transmission routes via both 'close' and 'casual' contacts within each infection context, each with specific intervention-dependent reductions in transmission rate". By 2021, the COVID-19 Modelling Aotearoa programme was established with funding initially from the Ministry of Business, Innovation and Employment, later the Department of the Prime Minister and Cabinet and, as of 2022, from the Manatū Hauora Ministry of Health. O'Neale is a Project Lead for Contagion Network Modelling within this programme. COVID-19 Modelling Aotearoa aimed at helping policy makers in developing responses to COVID-19 during a possible major outbreak in New Zealand, by "bringing together the multiple realms of public data collected for the census and from other public sources, and overlaying that with a COVID contagion model developed to represent both disease progression and interventions such as contact tracing and testing and the announcement and timing of Alert Levels". O'Neale explained that this mathematical approach originated in materials science and physics but had previously been applied to the epidemiology of disease when models were developed during the spread of Ebola in Africa. He said that the model had been useful during the lockdown of Auckland in August 2020 because it modelled the risk of a spread outside of the city and predicted the outcome if there was a change in the response. O'Neale's conclusion was that "the model proved spot on, offering politicians a degree of confidence on which to base their decisions".
In February 2021, the team at Te Pūnaha Matatini developed an "individual-based network contagion model" representing the population of New Zealand and the contexts in which they interacted, to "address the question of whether Alert Level 2.5 (AL2.5) [was] enough to eliminate a community outbreak with no clear epidemiological link to the border – like that seen in the 2020 Auckland August outbreak". The paper noted that the initial size of an outbreak would affect the probability of elimination, as would the impact of contact tracing, but concluded from the results of the simulation that a move to Alert Level 2.5 was unlikely to lead to elimination in a scenario similar to the 2020 outbreak in Auckland. A report compiled in September 2021 and delivered on 17 August 2021 used the individual-based network contagion model to simulate the spread of COVID-19 in the community, considering a "case of community transmission with no link to the border....[on the assumption that]...each simulation was seeded by setting the state to infected (specifically to 'Exposed') for a single, randomly selected, individual in Auckland...[and]...around 15% of individuals over the age of 15 have been vaccinated". As Auckland responded to the outbreak of COVID-19, O'Neale contributed to a report delivered on 10 September 2021 that considered the consequences of reconnecting during transitions between phases and changes in Alert Level restrictions. The researchers used a network representing New Zealand, the Populated Aotearoa Interaction Network (PAIN), to illustrate the number of interactions between people.
The report found that only a "small increase in the number of connections between individuals from different dwellings (an increase from around 10% to around 20% of the number expected at Alert Level 1) was sufficient to increase the size of the largest connected component of the population who could be reached through transmission by a factor of 15; from around 90,000 to over 1.4 million...[highlighting]...the fact that New Zealand [was] a complex and highly connected system, where individuals [were] typically not too far removed from each other...[suggesting that]... Alert Levels, and specifically lock downs, work because they reduce the vast majority of interactions within the community and limit chains of potential transmission". By October 2021, the research team had used the model to estimate the effects of a proposed change in the response of the New Zealand Government to COVID-19 by developing a simulation of a community outbreak of the Delta variant detected in Auckland on 17 August 2021. The results indicated that without making any changes to the alert levels, there should be a "zero case day around the beginning of October, but that a zero case day [was] unlikely in the near term at lower levels of intervention". === Commentary === When the New Zealand Government announced in October 2021 that senior students would be able to return to high schools within Level 3 of the response to COVID-19, O'Neale agreed with some members of the education sector that it would pose a risk of increasing case numbers, noting that the modelling suggested "most of the extra infections from schools reopening will actually show up in non-school contexts as a result of students subsequently infecting other people in their households or in other community interactions". He acknowledged that there was a strategy by the government to improve ventilation in schools and have a mask mandate, and recommended supporting these measures by bringing in rapid antigen testing.
As Omicron cases were detected at the border in late 2021, O'Neale noted that most future infections would be this variant and likely to leak out into the community, later warning that New Zealand should get prepared for a 'skyrocketing' in case numbers as happened in New South Wales, which had similar rates of vaccination to New Zealand. He said in the interview that summer had meant schools were closed, people were on holiday and there were more outdoor activities which reduced transmissibility, but "the number of cases were likely to creep up once the immunity provided by the vaccine started to wane a little and people returned to work and schools". O'Neale said at this time that the numbers of COVID-19 cases could double every three days and modellers were making their predictions based on most of the cases being the Omicron variant, which had a fast incubation period and would grow faster than Delta. The New Zealand media reported on 14 February 2022 that the government was about to make a change in its approach to managing Omicron due to growing numbers of cases. O'Neale suggested this change was an acknowledgement that systems to manage the virus needed to change, and that the plan to shorten home isolation periods and maintain good contact tracing for high risk cases was less restrictive than under previous levels. On 16 February 2022, after a slight drop in numbers of cases of COVID-19 in New Zealand, O'Neale was one of the modellers at Te Pūnaha Matatini who said that reported case numbers were doubling roughly every three days, with a possible tally in the community of around 4000 before the end of February. O'Neale explained the lag effect, which meant a daily case number often included cases reported more than a week previously.
The modellers predicted a possible wave of three-to-four months, and with low transmission and a high rate of uptake of the booster vaccine, there could be "1.5m infections of which 386,400 were reported as cases", with numbers increasing if booster rates were low and transmission high. He later confirmed that the case numbers most likely reflected a backlog that had built up over the past week, a time he described as having "very noisy data due to data processing and testing systems being a bit up and down". O'Neale maintained that a peak in numbers was likely in mid to late March 2022. The issue of COVID-19 spreading widely amongst young people in New Zealand was explained by O'Neale on 27 February 2022 as being expected in the "early stages of an outbreak...[because]...younger people, much more mobile and tending to go out a lot more and also tending to get symptoms at lower rates and so less likely to be tested, less likely to know they're infected, less likely to be isolating". O'Neale stressed the importance of New Zealanders getting the booster vaccine shot to reduce the impact of the Omicron variant early in 2022, and said data from the UK had shown that after a booster, effectiveness against infection increased from 20 percent to 60 percent. He later agreed with Helen Petousis-Harris and Michael Baker that David Seymour was not following the evidence by suggesting at the time that vaccine mandates could be removed. O'Neale said that having a booster dose remained the most effective way to protect against hospitalisations, and "asking people with high exposure risk due to their work to be vaccinated still [benefitted] the community".
After the New Zealand Government announced a roadmap to ease restrictions in the country on 21 February 2022, O'Neale predicted high case numbers as more people became exposed to Omicron, but said the outcome ultimately depended on how people behaved and on high levels of booster vaccine uptake, and that the government was being pragmatic in waiting before easing restrictions. When a journalist claimed at the time that the New Zealand Government had accelerated the rate of the outbreak, O'Neale worried that the Government was "burying its head in the sand a little bit". According to O'Neale, the Government, being apparently comfortable in believing the country could weather the outbreak, was "misguided from an outbreak control perspective...[although]...it may well be that Omicron peaks and subsides without unduly burdening the health system". While another intended change at the time to shorten isolation periods was seen as necessary by some businesses, O'Neale explained that a shorter isolation period would increase the risk of the virus spreading more quickly, but was unlikely to have a "huge benefit in returning essential workers to the workforce, given there's been such incredibly wide uptake of businesses opting in to this essential worker classification". O'Neale agreed with updating the advice to encourage the wearing of higher quality masks to reduce airborne transmission, but was cautious about people eating and drinking inside and not wearing masks. Retaining COVID-19 pre-departure tests for overseas visitors was also seen by O'Neale as the correct approach at the time for New Zealand, despite some countries having done away with them. Reasons he gave for retaining these tests included preventing a rise in infection numbers due to travellers entering the country and possibly stalling the arrival of variants.
By March 2022, when Omicron had caused a major spike in daily cases, O'Neale urged New Zealanders to continue declaring test results so modellers could have confidence in the numbers they were putting out, as they needed data other than hospitalisation rates, which came late in the "disease progression pipeline". With numbers of infections continuing to rise, predictions were made early in April 2022 by O'Neale that Auckland was likely to be in the "tail of infections, while other regions were still closer to their peak", and with an expected drop in case numbers after this, there could be "about 5000 new cases per day for the whole country if all the regions...managed to reach that plateau at the same time". When New Zealand recorded more than one million cases of COVID-19 on 10 May 2022, O'Neale and other modellers said this was an underestimation of infections, which were more likely to have been around three million. O'Neale noted this presented "challenges for New Zealand down the line – making it more difficult to predict subsequent waves, reinfection rates, or the burden of long Covid". As New Zealand entered the Christmas holiday period in 2022 and it looked likely that COVID-19 numbers would increase, O'Neale, as Co-lead of the Network Contagion Modelling programme at Auckland University, said that even though the pathogen might not be as serious as earlier ones, there could be bigger impacts on the health system as the modelling was suggesting one in 20 people could get the infection. He noted as many as 30 to 40 percent of infections were asymptomatic, and recommended people take rapid antigen tests as a precaution before going to social events or visiting people who were vulnerable. == Selected publications and further research == Transitivity and degree assortativity explained: The bipartite structure of social networks (2020). This research paper, co-authored by O'Neale, showed how different processes in networks are related.
The concluding claim of the research was that every social network could be expressed as a bipartite network, possibly through affiliations, memberships, or "accepting a friendship request on social media", and understanding the levels of transitivity and degree would be "useful to improve studies and models of spreading phenomena on social networks, especially if group-based (bipartite) structures are considered". Social network analysis of obsidian artefacts and Māori interaction in northern Aotearoa New Zealand (2019). O'Neale collaborated on research that examined the evidence from social network analysis of obsidian recovered from sites in New Zealand, concluding that it documented how Māori society transformed over a period of 700 years from what historical accounts had described as "relatively autonomous village-based groups into larger territorial lineages, which later formed even larger geo-political tribal associations...[with]...subsequent changes in levels of interaction and social affiliation". In a press release prior to the research beginning, O'Neale said the aim was to look for patterns in the relationship between "archaeological sites, artefacts and obsidian sources" and hypothesize about how geography or social groupings produced the current distribution of obsidian. Shaun Hendy said the project showed how research was becoming more interdisciplinary and would make good use of the diverse range of networks in Te Pūnaha Matatini. Structure dynamics of evolving scientific networks (2020). This paper critically examined how co-authorship networks were affecting the evolution of scientific networks. Taking the position that a co-authored network was a one-mode projection of an original bipartite network where authors were connected to the papers they have written, the paper held that the understanding of the formation and structures of co-authored networks should take the properties of the original network into account.
Power Law Distributions of Patents as Indicators of Innovation (2012). This paper acknowledged that while per capita production of patents was an important indicator of a country's innovation, there was also evidence suggesting that power laws could be a complementary measure of studying innovation and explaining variations between countries. Using simulations based on rules that generated power laws, an explanation was found for some of the variations across countries. Endogenous theories of growth, including the roles and inter-relationships of firms, were evaluated as measures of innovation and the researchers hypothesized that if the distribution of firm sizes followed a power law, it would be valid to approach the consideration of innovation within the context of "distribution of patents within an economy rather than just the total number of patents itself". Structure of the Region-Technology Network as a Driver for Technological Innovation (2021). This is a paper co-authored by O'Neale that recorded the research findings of an international investigation into agglomeration and spillovers as phenomena of technological innovation and drivers of regional economic growth. The research was based on the premise [that] "agglomeration effects occur when firms or people accrue benefit from being located near to one another, while knowledge spillovers are one process by which firms and individuals can derive such benefits, by taking advantage of new knowledge that has been created by others". Using network science to quantify economic disruptions in regional input-output networks (2019). O'Neale worked in a team that presented this paper on developing models which identified possible flow-on effects on economic systems because of natural hazards, specifically how to identify industries that had a large impact on an economic system when they were disrupted.
The research considered how information, structures and connections of an industries-based business network could take a network science approach to developing a model that predicted and limited spillover effects. The results indicated to the researchers [it is] "foreseeable that increased data collection may make it possible to create networks at individual business level". Bourdieu, networks, and movements: Using the concepts of habitus, field and capital to understand a network analysis of gender differences in undergraduate physics (2019). This is published research into the underrepresentation of women in science, combining network analysis of student enrolment data with the sociological theory of Pierre Bourdieu. The study found that female students enrolled more in life science fields than male students, who were more likely to enrol in the Physics-Maths and Computer Science fields, possibly contributing to a perception that women were "unwelcome" in the field of science. The head of physics at the University of Auckland, Professor Richard Easther, said he was excited that his department had hosted this work [because] "it [helped] us to make evidence-based changes to our own practice, and the ways we present our subject to students". The findings of that study were confirmed in a 2021 project in which O'Neale participated, that used a direct network analysis approach to examine the choices students in New Zealand high schools made for their final year in Science, Technology, Engineering and Mathematics (STEM), and concluded from the data that females were well represented in the life sciences, but less so in physics, calculus, and vocational standards. Investigating the transmission risk of infectious disease outbreaks through the Aotearoa Co-incidence Network (ACN): a population-based study (2022).
This paper showed that early in 2022, O'Neale was instrumental in establishing The Aotearoa Co-incidence Network (ACN), which used nationwide data in New Zealand to identify areas that had both high potential transmission risk and high vulnerability to infectious diseases. The study accepted that COVID-19 had restricted some immunisation programmes and it was important to get a better understanding of potential infectious disease transmission to help future policy and research respond to new and re-emerging infectious diseases. == Awards == O'Neale was part of the team at Te Pūnaha Matatini that was awarded the 2020 Prime Minister's Science Prize in recognition of their work in developing data-based mathematical models to inform New Zealand's response to COVID-19. Diane Abad-Vergara from the World Health Organization said that the work done by Te Pūnaha Matatini "had significant health and social impacts for New Zealand...[and was]...part of the reason why New Zealand [was] one of the few countries to eliminate the virus." == References ==
|
Wikipedia:Dionisio Gallarati#0
|
Dionisio Gallarati (May 8, 1923 – May 13, 2019) was an Italian mathematician, who specialised in algebraic geometry. He was a major influence on the development of algebra and geometry at the University of Genova. == Life == Born 8 May 1923 in Savona, Italy, Gallarati joined the University of Pisa in 1941. His studies being interrupted by the war, he received his first degree from Genova. He started his research career at l'Istituto Nazionale di Alta Matematica in Rome, where he was taught by Giacomo Albanese, Leonard Roth, Leonida Tonelli, E. G. Togliatti, Beniamino Segre and Francesco Severi. He took a post at Genova in 1947, where he stayed until he retired in 1987. == Research == Gallarati published 64 papers between 1951 and 1996. Important amongst his research was the study of surfaces in P3 with multiple isolated singularities. His lower bounds for the maximal number of nodes of surfaces of degree n stood for a long time, and exact solutions for large n were still unknown in 2001. In Grassmannian geometry he extended Segre's bound "for the number of linearly independent complexes containing the curve in the Grassmannian corresponding to the tangent lines of a nondegenerate projective curve." He extended the results to tangent spaces of varieties of arbitrary dimension, to higher degree complexes, and to arbitrary curves in Grassmannians corresponding to degenerate scrolls. == Works == Gallarati wrote three books and 64 papers, in algebraic geometry, differential geometry, functional analysis, group theory, and biography. His co-authors include Giulio Aruffo, Mauro C. Beltrametti, Maria Teresa Bonardi, Gabriella Canonero, Ettore Carletti, Enrica Casazza, Mario G. Galli, Aldo Monti Iandelli, Giacomo Bragadin, Giorgio Luigi Olcese, Giulio Passatore, Luigi Robert, Aldo Rollero, Michele Sarà, Giulio Scarsi and Maria Ezia Serpico. 33 of his papers are collected in Dionisio Gallarati: Collected Papers of Dionisio Gallarati, Kingston, Ontario, 2000, ed. A. V. Geramita.
== References ==
|
Wikipedia:Dionys Burger#0
|
Dionys Burger (10 July 1892, Amsterdam – 19 April 1987) was a Dutch secondary school physics teacher and author of the novel Sphereland. == References ==
|
Wikipedia:Diophantus#0
|
Diophantus of Alexandria (Ancient Greek: Διόφαντος, romanized: Diophantos; fl. 250 CE) was a Greek mathematician who was the author of the Arithmetica in thirteen books, ten of which are still extant, made up of arithmetical problems that are solved through algebraic equations. Although Joseph-Louis Lagrange called Diophantus "the inventor of algebra", he did not invent it; however, his exposition became the standard within the Neoplatonic schools of Late antiquity, and its translation into Arabic in the 9th century AD influenced the development of later algebra: Diophantus' method of solution matches medieval Arabic algebra in its concepts and overall procedure. The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous "Last Theorem" in the margins of his copy. In modern use, Diophantine equations are algebraic equations with integer coefficients for which integer solutions are sought. Diophantine geometry and Diophantine approximations are two other subareas of number theory that are named after him. Some problems from the Arithmetica have inspired modern work in both abstract algebra and number theory. == Biography == The exact details of Diophantus' life are obscure. Although he probably flourished in the third century CE, he may have lived anywhere between 170 BCE, roughly contemporaneous with Hypsicles, the latest author he quotes from, and 350 CE, when Theon of Alexandria quotes from him. Paul Tannery suggested that a reference to an "Anatolius" as a student of Diophantus in the works of Michael Psellos may refer to the early Christian bishop Anatolius of Alexandria, who may possibly be the same Anatolius mentioned by Eunapius as a teacher of the pagan Neopythagorean philosopher Iamblichus, either of which would place him in the 3rd century CE.
The only definitive piece of information about his life is derived from a set of mathematical puzzles attributed to the 5th or 6th century CE grammarian Metrodorus, preserved in book 14 of the Greek Anthology. One of the problems (sometimes called Diophantus' epitaph) states: Here lies Diophantus, the wonder behold. Through art algebraic, the stone tells how old: 'God gave him his boyhood one-sixth of his life, One twelfth more as youth while whiskers grew rife; And then yet one-seventh ere marriage begun; In five years there came a bouncing new son. Alas, the dear child of master and sage, After attaining half the measure of his father's life, chill fate took him. After consoling his fate by the science of numbers for four years, he ended his life.' This puzzle implies that Diophantus' age x can be expressed as x = x/6 + x/12 + x/7 + 5 + x/2 + 4, which gives x a value of 84 years. However, the accuracy of the information cannot be confirmed. == Arithmetica == Arithmetica is the major work of Diophantus and the most prominent work on premodern algebra in Greek mathematics. It is a collection of 290 algebraic problems giving numerical solutions of determinate equations (those with a unique solution) and indeterminate equations. Arithmetica was originally written in thirteen books, but only six of them survive in Greek, while another four books survive in Arabic, which were discovered in 1968. The books in Arabic correspond to books 4 to 7 of the original treatise, while the Greek books correspond to books 1 to 3 and 8 to 10. Arithmetica is the earliest extant work that solves arithmetic problems by algebra. Diophantus, however, did not invent the method of algebra, which existed before him. Algebra was practiced and diffused orally by practitioners, with Diophantus picking up techniques to solve problems in arithmetic. Equations in the book are presently called Diophantine equations. The method for solving these equations is known as Diophantine analysis.
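The epitaph's arithmetic can be checked directly. A minimal sketch (illustrative only) using Python's exact rational arithmetic:

```python
from fractions import Fraction

# Diophantus' epitaph: x = x/6 + x/12 + x/7 + 5 + x/2 + 4
# Collect the x terms on one side: x * (1 - 1/6 - 1/12 - 1/7 - 1/2) = 9
coeff = 1 - Fraction(1, 6) - Fraction(1, 12) - Fraction(1, 7) - Fraction(1, 2)
x = Fraction(9) / coeff
print(x)  # → 84, the age the puzzle assigns to Diophantus

# Sanity check: the life stages sum back to the whole life
assert x / 6 + x / 12 + x / 7 + 5 + x / 2 + 4 == x
```

Using Fraction rather than floats keeps the computation exact: the coefficient works out to 9/84 = 3/28, so x = 9 · 28/3 = 84.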
Most of the Arithmetica problems lead to quadratic equations. === Notation === Diophantus introduced an algebraic symbolism that used an abridged notation for frequently occurring operations, and an abbreviation for the unknown and for the powers of the unknown. Similar to medieval Arabic algebra, Diophantus uses three stages in the solution of a problem by algebra: an unknown is named and an equation is set up; the equation is simplified to a standard form (al-jabr and al-muqābala in Arabic); the simplified equation is solved. In the extant parts of Arithmetica, Diophantus does not give a classification of equations into six types as Al-Khwarizmi later did. He does say that he would give solutions to three-term equations later, so this part of the work is possibly just lost. The main difference between Diophantine notation and modern algebraic notation is that the former lacked special symbols for operations, relations, and exponentials. So, for example, what would be written in modern notation as x³ − 2x² + 10x − 1 = 5, which can be rewritten as (x³·1 + x·10) − (x²·2 + x⁰·1) = x⁰·5, would be written in Diophantus's notation as Κυ ᾱ ζ ῑ ⋔ Δυ β̄ Μ ᾱ ἴσ Μ ε̄. Unlike in modern notation, the coefficients come after the variables and addition is represented by the juxtaposition of terms.
A literal symbol-for-symbol translation of Diophantus's equation into a modern equation would be the following: x³1 x10 − x²2 x⁰1 = x⁰5, where, to clarify, if the modern parentheses and plus signs are used then the above equation can be rewritten as: (x³·1 + x·10) − (x²·2 + x⁰·1) = x⁰·5. === Contents === In Book 3, Diophantus solves problems of finding values which make two linear expressions simultaneously into squares or cubes. In Book 4, he finds rational powers between given numbers. He also noticed that numbers of the form 4n + 3 cannot be the sum of two squares. Diophantus also appears to know that every number can be written as the sum of four squares. If he did know this result (in the sense of having proved it as opposed to merely conjectured it), his doing so would be truly remarkable: even Fermat, who stated the result, failed to provide a proof of it and it was not settled until Joseph-Louis Lagrange proved it using results due to Leonhard Euler. == Other works == Another work by Diophantus, On Polygonal Numbers, is transmitted in an incomplete form in four Byzantine manuscripts along with the Arithmetica. Two other lost works by Diophantus are known: Porisms and On Parts. Recently, Wilbur Knorr has suggested that another book, Preliminaries to the Geometric Elements, traditionally attributed to Hero of Alexandria, may actually be by Diophantus. === On polygonal numbers === This work on polygonal numbers, a topic that was of great interest to the Pythagoreans, consists of a preface and five propositions in its extant form. The treatise breaks off in the middle of a proposition about how many ways a number can be a polygonal number. === The Porisms === The Porisms was a collection of lemmas along with accompanying proofs.
Although The Porisms is lost, we know three lemmas contained there, since Diophantus quotes them in the Arithmetica and refers the reader to the Porisms for the proof. One lemma states that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers, i.e. given any a and b, with a > b, there exist c and d, all positive and rational, such that a³ − b³ = c³ + d³. === On Parts === This work, on fractions, is known by a single reference, a Neoplatonic scholium to Iamblichus' treatise on Nicomachus' Introduction to Arithmetic. Next to a line where Iamblichus writes "Some of the Pythagoreans said that the unit is the borderline between number and parts", the scholiast writes "So Diophantus writes in On Parts, for parts involve progress in diminution carried to infinity." == Influence == Diophantus' work has had a large influence in history. Although Joseph-Louis Lagrange called Diophantus "the inventor of algebra", he did not invent it; however, his work Arithmetica created a foundation for work on algebra, and in fact much of advanced mathematics is based on algebra. Diophantus and his works influenced mathematics in the medieval Islamic world, and editions of Arithmetica exerted a profound influence on the development of algebra in Europe in the late sixteenth and through the 17th and 18th centuries. === Later antiquity === After its publication, Diophantus' work continued to be read in the Greek-speaking Mediterranean from the 4th through the 7th centuries. The earliest known reference to Diophantus, in the 4th century, is the Commentary on the Almagest by Theon of Alexandria, which quotes from the introduction to the Arithmetica. According to the Suda, Hypatia, who was Theon's daughter and frequent collaborator, wrote a now lost commentary on Diophantus' Arithmetica, which suggests that this work may have been closely studied by Neoplatonic mathematicians in Alexandria during Late antiquity.
References to Diophantus also survive in a number of Neoplatonic scholia to the works of Iamblichus. A 6th-century Neoplatonic commentary on Porphyry's Isagoge by Pseudo-Elias also mentions Diophantus; after outlining the quadrivium of arithmetic, geometry, music, and astronomy and four other disciplines adjacent to them ("logistic", "geodesy", "music in matter" and "spherics"), it mentions that Nicomachus (author of the Introduction to Arithmetic) occupies the first place in arithmetic but Diophantus occupies the first place in "logistic", showing that, despite the title of Arithmetica, the more algebraic work of Diophantus was already seen as distinct from arithmetic prior to the medieval era. === Medieval era === Like many other Greek mathematical treatises, Diophantus was forgotten in Western Europe during the Dark Ages, since the study of ancient Greek, and literacy in general, had greatly declined. The portion of the Greek Arithmetica that survived, however, was, like all ancient Greek texts transmitted to the early modern world, copied by, and thus known to, medieval Byzantine scholars. Scholia on Diophantus by the Byzantine Greek scholar John Chortasmenos (1370–1437) are preserved together with a comprehensive commentary written by the earlier Greek scholar Maximos Planudes (1260–1305), who produced an edition of Diophantus within the library of the Chora Monastery in Byzantine Constantinople. Arithmetica became known to mathematicians in the Islamic world in the ninth century, when Qusta ibn Luqa translated it into Arabic. In 1463, the German mathematician Regiomontanus wrote: "No one has yet translated from the Greek into Latin the thirteen books of Diophantus, in which the very flower of the whole of arithmetic lies hidden." Arithmetica was first translated from Greek into Latin by Bombelli in 1570, but the translation was never published. However, Bombelli borrowed many of the problems for his own book Algebra.
The editio princeps of Arithmetica was published in 1575 by Xylander. === Fermat === The Latin translation of Arithmetica by Bachet in 1621 became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it and made notes in the margins. The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous "Last Theorem" in the margins of his copy: If an integer n is greater than 2, then an + bn = cn has no solutions in non-zero integers a, b, and c. I have a truly marvelous proof of this proposition which this margin is too narrow to contain. Fermat's proof was never found, and the problem of finding a proof for the theorem went unsolved for centuries. A proof was finally found in 1994 by Andrew Wiles, who had worked on it for seven years. It is believed that Fermat did not actually have the proof he claimed to have. Although the original copy in which Fermat wrote this is lost today, Fermat's son edited the next edition of Diophantus, published in 1670. Even though the text is otherwise inferior to the 1621 edition, Fermat's annotations—including the "Last Theorem"—were printed in this version. Fermat was not the first mathematician so moved to write in his own marginal notes to Diophantus; the Byzantine scholar John Chortasmenos (1370–1437) had written "Thy soul, Diophantus, be with Satan because of the difficulty of your other theorems and particularly of the present theorem" next to the same problem. Diophantus was among the first to recognise positive rational numbers as numbers, by allowing fractions for coefficients and solutions. He coined the term παρισότης (parisotēs) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves.
=== Diophantine analysis === Today, Diophantine analysis is the area of study where integer (whole-number) solutions are sought for equations, and Diophantine equations are polynomial equations with integer coefficients to which only integer solutions are sought. It is usually rather difficult to tell whether a given Diophantine equation is solvable. Most of the problems in Arithmetica lead to quadratic equations. Diophantus looked at three different types of quadratic equations: ax2 + bx = c, ax2 = bx + c, and ax2 + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. Diophantus was always satisfied with a rational solution and did not require a whole number, which means he accepted fractions as solutions to his problems. Diophantus considered negative or irrational square root solutions "useless", "meaningless", and even "absurd". To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a negative value for x. One solution was all he looked for in a quadratic equation. There is no evidence that suggests Diophantus even realized that there could be two solutions to a quadratic equation. He also considered simultaneous quadratic equations. === Rediscovery of Books IV–VII === In 1968, Fuat Sezgin found four previously unknown books of Arithmetica at the shrine of Imam Rezā in the holy Islamic city of Mashhad in northeastern Iran. The four books are thought to have been translated from Greek to Arabic by Qusta ibn Luqa (820–912). Norbert Schappacher has written: [The four missing books] resurfaced around 1971 in the Astan Quds Library in Meshed (Iran) in a copy from 1198.
It was not catalogued under the name of Diophantus (but under that of Qusta ibn Luqa) because the librarian was apparently not able to read the main line of the cover page where Diophantus’s name appears in geometric Kufi calligraphy. == Notes == == Editions and translations == Bachet de Méziriac, C.G. Diophanti Alexandrini Arithmeticorum libri sex et De numeris multangulis liber unus. Paris: Lutetiae, 1621. Diophantus Alexandrinus, Pierre de Fermat, Claude Gaspard Bachet de Meziriac, Diophanti Alexandrini Arithmeticorum libri 6, et De numeris multangulis liber unus. Cum comm. C(laude) G(aspar) Bacheti et observationibus P(ierre) de Fermat. Acc. doctrinae analyticae inventum novum, coll. ex variis eiu. Tolosae 1670, doi:10.3931/e-rara-9423. Tannery, P. L. Diophanti Alexandrini Opera omnia: cum Graecis commentariis, Lipsiae: In aedibus B.G. Teubneri, 1893-1895 (online: vol. 1, vol. 2) Sesiano, Jacques. The Arabic text of Books IV to VII of Diophantus’ translation and commentary. Thesis. Providence: Brown University, 1975. Sesiano, Jacques (6 December 2012). Books IV to VII of Diophantus’ Arithmetica: in the Arabic Translation Attributed to Qustā ibn Lūqā. Springer Science & Business Media. ISBN 978-1-4613-8174-7. Retrieved 3 May 2025. Christianidis, Jean; Oaks, Jeffrey A. (2023). The Arithmetica of Diophantus: a complete translation and commentary. Abingdon, Oxon New York, NY: Routledge. ISBN 1138046353. == References == Christianidis, Jean; Oaks, Jeffrey (2013). "Practicing algebra in late antiquity: The problem-solving of Diophantus of Alexandria". Historia Mathematica. 40 (2): 158–160. doi:10.1016/j.hm.2012.09.001. Christianidis, Jean; Megremi, Athanasia (2019). "Tracing the early history of algebra: Testimonies on Diophantus in the Greek-speaking world (4th–7th century CE)". Historia Mathematica. 47: 16–38. doi:10.1016/j.hm.2019.02.002. Cooke, Roger (1997). The History of Mathematics: A Brief Course. Wiley-Interscience. ISBN 0-471-18082-3. 
Derbyshire, John (2006). Unknown Quantity: A Real And Imaginary History of Algebra. Joseph Henry Press. ISBN 0-309-09657-X. == Further reading == Allard, A. "Les scolies aux arithmétiques de Diophante d'Alexandrie dans le Matritensis Bibl.Nat.4678 et les Vatican Gr.191 et 304" Byzantion 53. Brussels, 1983: 682–710. Christianidis, J. "Maxime Planude sur le sens du terme diophantien "plasmatikon"", Historia Scientiarum, 6 (1996)37-41. Christianidis, J. "Une interpretation byzantine de Diophante", Historia Mathematica, 25 (1998) 22–28. Katz, Victor J.; Parshall, Karen Hunger (2014). Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century. Princeton University Press. ISBN 978-0-691-14905-9. Rashed, Roshdi, Houzel, Christian. Les Arithmétiques de Diophante : Lecture historique et mathématique, Berlin, New York : Walter de Gruyter, 2013. Rashed, Roshdi, Histoire de l’analyse diophantienne classique : D’Abū Kāmil à Fermat, Berlin, New York : Walter de Gruyter. Rashed, Roshdi. L’Art de l’Algèbre de Diophante. éd. arabe. Le Caire : Bibliothèque Nationale, 1975. Rashed, Roshdi. Diophante. Les Arithmétiques. Volume III: Book IV; Volume IV: Books V–VII, app., index. Collection des Universités de France. Paris (Société d’Édition "Les Belles Lettres"), 1984. == External links == O'Connor, John J.; Robertson, Edmund F., "Diophantus", MacTutor History of Mathematics Archive, University of St Andrews Diophantus's Riddle Diophantus' epitaph, by E. Weisstein Norbert Schappacher (2005). Diophantus of Alexandria : a Text and its History. Review of Sesiano's Diophantus Review of J. Sesiano, Books IV to VII of Diophantus' Arithmetica, by Jan P. Hogendijk Latin translation from 1575 by Wilhelm Xylander Scans of Tannery's edition of Diophantus at wilbourhall.org
|
Wikipedia:Diophantus II.VIII#0
|
The eighth problem of the second book of Arithmetica by Diophantus (c. 200/214 AD – c. 284/298 AD) is to divide a square into a sum of two squares. == The solution given by Diophantus == Diophantus takes the square to be 16 and solves the problem as follows: To divide a given square into a sum of two squares. To divide 16 into a sum of two squares. Let the first summand be x 2 {\displaystyle x^{2}} , and thus the second 16 − x 2 {\displaystyle 16-x^{2}} . The latter is to be a square. I form the square of the difference of an arbitrary multiple of x diminished by the root [of] 16, that is, diminished by 4. I form, for example, the square of 2x − 4. It is 4 x 2 + 16 − 16 x {\displaystyle 4x^{2}+16-16x} . I put this expression equal to 16 − x 2 {\displaystyle 16-x^{2}} . I add to both sides x 2 + 16 x {\displaystyle x^{2}+16x} and subtract 16. In this way I obtain 5 x 2 = 16 x {\displaystyle 5x^{2}=16x} , hence x = 16 / 5 {\displaystyle x=16/5} . Thus one number is 256/25 and the other 144/25. The sum of these numbers is 16 and each summand is a square. == Geometrical interpretation == Geometrically, we may illustrate this method by drawing the circle x2 + y2 = 42 and the line y = 2x − 4. The pair of squares sought are then x02 and y02, where (x0, y0) is the point not on the y-axis where the line and circle intersect. This is shown in the adjacent diagram. == Generalization of Diophantus's solution == We may generalize Diophantus's solution to solve the problem for any given square, which we will represent algebraically as a2. Also, since Diophantus refers to an arbitrary multiple of x, we will take the arbitrary multiple to be tx. Then: ( t x − a ) 2 = a 2 − x 2 ⇒ t 2 x 2 − 2 a t x + a 2 = a 2 − x 2 ⇒ x 2 ( t 2 + 1 ) = 2 a t x ⇒ x = 2 a t t 2 + 1 or x = 0. 
{\displaystyle {\begin{aligned}&(tx-a)^{2}=a^{2}-x^{2}\\\Rightarrow \ \ &t^{2}x^{2}-2atx+a^{2}=a^{2}-x^{2}\\\Rightarrow \ \ &x^{2}(t^{2}+1)=2atx\\\Rightarrow \ \ &x={\frac {2at}{t^{2}+1}}{\text{ or }}x=0.\\\end{aligned}}} Therefore, we find that one of the summands is x 2 = ( 2 a t t 2 + 1 ) 2 {\displaystyle x^{2}=\left({\tfrac {2at}{t^{2}+1}}\right)^{2}} and the other is ( t x − a ) 2 = ( a ( t 2 − 1 ) t 2 + 1 ) 2 {\displaystyle (tx-a)^{2}=\left({\tfrac {a(t^{2}-1)}{t^{2}+1}}\right)^{2}} . The sum of these numbers is a 2 {\displaystyle a^{2}} and each summand is a square. Geometrically, we have intersected the circle x2 + y2 = a2 with the line y = tx − a, as shown in the adjacent diagram. Writing the lengths, OB, OA, and AB, of the sides of triangle OAB as an ordered tuple, we obtain the triple [ a ; 2 a t t 2 + 1 ; a ( t 2 − 1 ) t 2 + 1 ] {\displaystyle \left[a;{\frac {2at}{t^{2}+1}};{\frac {a(t^{2}-1)}{t^{2}+1}}\right]} . The specific result obtained by Diophantus may be obtained by taking a = 4 and t = 2: [ a ; 2 a t t 2 + 1 ; a ( t 2 − 1 ) t 2 + 1 ] = [ 20 5 ; 16 5 ; 12 5 ] = 4 5 [ 5 ; 4 ; 3 ] . {\displaystyle \left[a;{\frac {2at}{t^{2}+1}};{\frac {a(t^{2}-1)}{t^{2}+1}}\right]=\left[{\frac {20}{5}};{\frac {16}{5}};{\frac {12}{5}}\right]={\frac {4}{5}}\left[5;4;3\right].} We see that Diophantus' particular solution is in fact a subtly disguised (3, 4, 5) triple. However, as the triple will always be rational as long as a and t are rational, we can obtain an infinity of rational triples by changing the value of t, and hence changing the value of the arbitrary multiple of x. This algebraic solution needs only one additional step to arrive at the Platonic sequence [ t 2 + 1 2 ; t ; t 2 − 1 2 ] {\displaystyle [{\tfrac {t^{2}+1}{2}};t;{\tfrac {t^{2}-1}{2}}]} and that is to multiply all sides of the above triple by a factor t 2 + 1 2 a {\displaystyle \quad {\tfrac {t^{2}+1}{2a}}} . 
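The generalized construction is easy to verify with exact rational arithmetic. The sketch below (the helper name `two_squares` is illustrative, not from any source) intersects the circle x2 + y2 = a2 with the line y = tx − a, returns the two square summands, and recovers Diophantus's own case a = 4, t = 2:

```python
from fractions import Fraction

def two_squares(a, t):
    """Split a^2 into two rational squares by Diophantus's method:
    intersect x^2 + y^2 = a^2 with the line y = t*x - a."""
    a, t = Fraction(a), Fraction(t)
    x = 2 * a * t / (t**2 + 1)   # nonzero root of x^2 (t^2 + 1) = 2 a t x
    y = t * x - a                # equals a (t^2 - 1) / (t^2 + 1), up to sign
    return x**2, y**2

# Diophantus's case: divide 16 into two squares with a = 4, t = 2.
s1, s2 = two_squares(4, 2)
print(s1, s2)       # 256/25 144/25
print(s1 + s2)      # 16

# Any rational t gives another decomposition, hence infinitely many triples.
for t in (3, Fraction(5, 2)):
    u, v = two_squares(4, t)
    assert u + v == 16
```

Changing t sweeps out the infinitude of rational decompositions described above; clearing denominators in each result yields a Pythagorean triple.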
Notice also that if a = 1, the sides [OB, OA, AB] reduce to [ 1 ; 2 t t 2 + 1 ; t 2 − 1 t 2 + 1 ] . {\displaystyle \left[1;{\frac {2t}{t^{2}+1}};{\frac {t^{2}-1}{t^{2}+1}}\right].} In modern notation this is just ( 1 , sin θ , cos θ ) , {\displaystyle (1,\sin \theta ,\cos \theta ),} for θ shown in the above graph, written in terms of the cotangent t of θ/2. In the particular example given by Diophantus, t has a value of 2, the arbitrary multiplier of x. Upon clearing denominators, this expression will generate Pythagorean triples. Intriguingly, the arbitrary multiplier of x has become the cornerstone of the generator expression(s). Diophantus II.IX reaches the same solution by an even quicker route which is very similar to the 'generalized solution' above. Once again the problem is to divide 16 into two squares. Let the first number be N and the second an arbitrary multiple of N diminished by the root (of) 16. For example 2N − 4. Then: N 2 + ( 2 N − 4 ) 2 = 16 ⇒ 5 N 2 + 16 − 16 N = 16 ⇒ 5 N 2 = 16 N ⇒ N = 16 5 {\displaystyle {\begin{aligned}&N^{2}+(2N-4)^{2}=16\\\Rightarrow \ \ &5N^{2}+16-16N=16\\\Rightarrow \ \ &5N^{2}=16N\\\Rightarrow \ \ &N={\frac {16}{5}}\\\end{aligned}}} Fermat's famous comment which later became Fermat's Last Theorem appears sandwiched between 'Quaestio VIII' and 'Quaestio IX' on page 61 of a 1670 edition of Arithmetica. == See also == Fermat's Last Theorem and Diophantus II.VIII == References ==
|
Wikipedia:Dirac operator#0
|
In mathematics and in quantum mechanics, a Dirac operator is a first-order differential operator that is a formal square root, or half-iterate, of a second-order differential operator such as a Laplacian. It was introduced in 1847 by William Hamilton and in 1928 by Paul Dirac. The question which concerned Dirac was to factorise formally the Laplace operator of the Minkowski space, to get an equation for the wave function which would be compatible with special relativity. == Formal definition == In general, let D be a first-order differential operator acting on a vector bundle V over a Riemannian manifold M. If D 2 = Δ , {\displaystyle D^{2}=\Delta ,\,} where ∆ is the (positive, or geometric) Laplacian of V, then D is called a Dirac operator. Note that there are two different conventions as to how the Laplace operator is defined: the "analytic" Laplacian, which could be characterized in R n {\displaystyle \mathbb {R} ^{n}} as Δ = ∇ 2 = ∑ j = 1 n ( ∂ ∂ x j ) 2 {\displaystyle \Delta =\nabla ^{2}=\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2}} (which is negative-definite, in the sense that ∫ R n φ ( x ) ¯ Δ φ ( x ) d x = − ∫ R n | ∇ φ ( x ) | 2 d x < 0 {\displaystyle \int _{\mathbb {R} ^{n}}{\overline {\varphi (x)}}\Delta \varphi (x)\,dx=-\int _{\mathbb {R} ^{n}}|\nabla \varphi (x)|^{2}\,dx<0} for any smooth compactly supported function φ ( x ) {\displaystyle \varphi (x)} which is not identically zero), and the "geometric", positive-definite Laplacian defined by Δ = − ∇ 2 = − ∑ j = 1 n ( ∂ ∂ x j ) 2 {\displaystyle \Delta =-\nabla ^{2}=-\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2}} . == History == W.R. 
Hamilton defined "the square root of the Laplacian" in 1847 in his series of articles about quaternions: <...> if we introduce a new characteristic of operation, ◃ {\displaystyle \triangleleft } , defined with relation to these three symbols i j k , {\displaystyle ijk,} and to the known operation of partial differentiation, performed with respect to three independent but real variables x y z , {\displaystyle xyz,} as follows: ◃ = i d d x + j d d y + k d d z ; {\displaystyle \triangleleft ={\frac {i\mathrm {d} }{\mathrm {d} x}}+{\frac {j\mathrm {d} }{\mathrm {d} y}}+{\frac {k\mathrm {d} }{\mathrm {d} z}};} this new characteristic ◃ {\displaystyle \triangleleft } will have the negative of its symbolic square expressed by the following formula : − ◃ 2 = ( d d x ) 2 + ( d d y ) 2 + ( d d z ) 2 ; {\displaystyle -\triangleleft ^{2}={\Big (}{\frac {\mathrm {d} }{\mathrm {d} x}}{\Big )}^{2}+{\Big (}{\frac {\mathrm {d} }{\mathrm {d} y}}{\Big )}^{2}+{\Big (}{\frac {\mathrm {d} }{\mathrm {d} z}}{\Big )}^{2};} of which it is clear that the applications to analytical physics must be extensive in a high degree. == Examples == === Example 1 === D = −i ∂x is a Dirac operator on the tangent bundle over a line. === Example 2 === Consider a simple bundle of notable importance in physics: the configuration space of a particle with spin 1/2 confined to a plane, which is also the base manifold. It is represented by a wavefunction ψ : R2 → C2 ψ ( x , y ) = [ χ ( x , y ) η ( x , y ) ] {\displaystyle \psi (x,y)={\begin{bmatrix}\chi (x,y)\\\eta (x,y)\end{bmatrix}}} where x and y are the usual coordinate functions on R2. χ specifies the probability amplitude for the particle to be in the spin-up state, and similarly for η. The so-called spin-Dirac operator can then be written D = − i σ x ∂ x − i σ y ∂ y , {\displaystyle D=-i\sigma _{x}\partial _{x}-i\sigma _{y}\partial _{y},} where σi are the Pauli matrices. 
Note that the anticommutation relations for the Pauli matrices make the proof of the above defining property trivial. Those relations define the notion of a Clifford algebra. Solutions to the Dirac equation for spinor fields are often called harmonic spinors. === Example 3 === Feynman's Dirac operator describes the propagation of a free fermion in three dimensions and is elegantly written D = γ μ ∂ μ ≡ ∂ / , {\displaystyle D=\gamma ^{\mu }\partial _{\mu }\ \equiv \partial \!\!\!/,} using the Feynman slash notation. In introductory textbooks to quantum field theory, this will appear in the form D = c α → ⋅ ( − i ℏ ∇ x ) + m c 2 β {\displaystyle D=c{\vec {\alpha }}\cdot (-i\hbar \nabla _{x})+mc^{2}\beta } where α → = ( α 1 , α 2 , α 3 ) {\displaystyle {\vec {\alpha }}=(\alpha _{1},\alpha _{2},\alpha _{3})} are the off-diagonal Dirac matrices α i = β γ i {\displaystyle \alpha _{i}=\beta \gamma _{i}} , with β = γ 0 {\displaystyle \beta =\gamma _{0}} and the remaining constants are c {\displaystyle c} the speed of light, ℏ {\displaystyle \hbar } being the Planck constant, and m {\displaystyle m} the mass of a fermion (for example, an electron). It acts on a four-component wave function ψ ( x ) ∈ L 2 ( R 3 , C 4 ) {\displaystyle \psi (x)\in L^{2}(\mathbb {R} ^{3},\mathbb {C} ^{4})} , the Hilbert space of square-integrable functions. It can be extended to a self-adjoint operator on that space, with the Sobolev space H 1 ( R 3 , C 4 ) {\displaystyle H^{1}(\mathbb {R} ^{3},\mathbb {C} ^{4})} as its natural domain. The square, in this case, is not the Laplacian, but instead D 2 = Δ + m 2 {\displaystyle D^{2}=\Delta +m^{2}} (after setting ℏ = c = 1. {\displaystyle \hbar =c=1.} ) === Example 4 === Another Dirac operator arises in Clifford analysis. In euclidean n-space this is D = ∑ j = 1 n e j ∂ ∂ x j {\displaystyle D=\sum _{j=1}^{n}e_{j}{\frac {\partial }{\partial x_{j}}}} where {ej: j = 1, ..., n} is an orthonormal basis for euclidean n-space, and Rn is considered to be embedded in a Clifford algebra.
This is a special case of the Atiyah–Singer–Dirac operator acting on sections of a spinor bundle. === Example 5 === For a spin manifold, M, the Atiyah–Singer–Dirac operator is locally defined as follows: For x ∈ M and e1(x), ..., ej(x) a local orthonormal basis for the tangent space of M at x, the Atiyah–Singer–Dirac operator is D = ∑ j = 1 n e j ( x ) Γ ~ e j ( x ) , {\displaystyle D=\sum _{j=1}^{n}e_{j}(x){\tilde {\Gamma }}_{e_{j}(x)},} where Γ ~ {\displaystyle {\tilde {\Gamma }}} is the spin connection, a lifting of the Levi-Civita connection on M to the spinor bundle over M. The square in this case is not the Laplacian, but instead D 2 = Δ + R / 4 {\displaystyle D^{2}=\Delta +R/4} where R {\displaystyle R} is the scalar curvature of M (the Lichnerowicz formula). === Example 6 === On a Riemannian manifold ( M , g ) {\displaystyle (M,g)} of dimension n = dim ( M ) {\displaystyle n=\dim(M)} with Levi-Civita connection ∇ {\displaystyle \nabla } and an orthonormal basis { e a } a = 1 n {\displaystyle \{e_{a}\}_{a=1}^{n}} , we can define the exterior derivative d {\displaystyle d} and the coderivative δ {\displaystyle \delta } as d = e a ∧ ∇ e a , δ = e a ⌟ ∇ e a {\displaystyle d=e^{a}\wedge \nabla _{e_{a}},\quad \delta =e^{a}\lrcorner \nabla _{e_{a}}} . Then we can define the Dirac–Kähler operator D {\displaystyle D} as follows: D = e a ∇ e a = d − δ {\displaystyle D=e^{a}\nabla _{e_{a}}=d-\delta } . The operator acts on sections of the Clifford bundle in general, and it can be restricted to the spinor bundle, an ideal of the Clifford bundle, only if the projection operator on the ideal is parallel.
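The defining property D2 = Δ can be checked concretely for the spin-Dirac operator of Example 2: on a plane wave ei(kxx + kyy), D acts as the symbol matrix σxkx + σyky, and the anticommutation relations of the Pauli matrices force its square to be (kx2 + ky2)I, the symbol of the geometric Laplacian. A minimal numerical sketch (not from any cited source):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Clifford relations: sx sy + sy sx = 0, sx^2 = sy^2 = I
assert np.allclose(sx @ sy + sy @ sx, np.zeros((2, 2)))
assert np.allclose(sx @ sx, np.eye(2))
assert np.allclose(sy @ sy, np.eye(2))

# On a plane wave with wave vector (kx, ky), D = -i sx d/dx - i sy d/dy
# becomes D(k) = sx*kx + sy*ky; the relations above give D(k)^2 = |k|^2 I.
kx, ky = 0.7, -1.3
Dk = sx * kx + sy * ky
assert np.allclose(Dk @ Dk, (kx**2 + ky**2) * np.eye(2))
print("D(k)^2 = |k|^2 I verified")
```

The cross terms kxky(σxσy + σyσx) vanish by anticommutation, which is exactly why the proof of the defining property is trivial once the Clifford relations hold.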
== Generalisations == In Clifford analysis, the operator D : C∞(Rk ⊗ Rn, S) → C∞(Rk ⊗ Rn, Ck ⊗ S) acting on spinor valued functions defined by f ( x 1 , … , x k ) ↦ ( ∂ x 1 _ f ∂ x 2 _ f … ∂ x k _ f ) {\displaystyle f(x_{1},\ldots ,x_{k})\mapsto {\begin{pmatrix}\partial _{\underline {x_{1}}}f\\\partial _{\underline {x_{2}}}f\\\ldots \\\partial _{\underline {x_{k}}}f\\\end{pmatrix}}} is sometimes called Dirac operator in k Clifford variables. In the notation, S is the space of spinors, x i = ( x i 1 , x i 2 , … , x i n ) {\displaystyle x_{i}=(x_{i1},x_{i2},\ldots ,x_{in})} are n-dimensional variables and ∂ x i _ = ∑ j e j ⋅ ∂ x i j {\displaystyle \partial _{\underline {x_{i}}}=\sum _{j}e_{j}\cdot \partial _{x_{ij}}} is the Dirac operator in the i-th variable. This is a common generalization of the Dirac operator (k = 1) and the Dolbeault operator (n = 2, k arbitrary). It is an invariant differential operator, invariant under the action of the group SL(k) × Spin(n). The resolution of D is known only in some special cases. == See also == == References ==
|
Wikipedia:Direct limit#0
|
In mathematics, a direct limit is a way to construct a (typically large) object from many (typically smaller) objects that are put together in a specific way. These objects may be groups, rings, vector spaces or in general objects from any category. The way they are put together is specified by a system of homomorphisms (group homomorphism, ring homomorphism, or in general morphisms in the category) between those smaller objects. The direct limit of the objects A i {\displaystyle A_{i}} , where i {\displaystyle i} ranges over some directed set I {\displaystyle I} , is denoted by lim → A i {\displaystyle \varinjlim A_{i}} . This notation suppresses the system of homomorphisms; however, the limit depends on the system of homomorphisms. Direct limits are a special case of the concept of colimit in category theory. Direct limits are dual to inverse limits, which are a special case of limits in category theory. == Formal definition == We will first give the definition for algebraic structures like groups and modules, and then the general definition, which can be used in any category. === Direct limits of algebraic objects === In this section objects are understood to consist of underlying sets equipped with a given algebraic structure, such as groups, rings, modules (over a fixed ring), algebras (over a fixed field), etc. With this in mind, homomorphisms are understood in the corresponding setting (group homomorphisms, etc.). Let ⟨ I , ≤ ⟩ {\displaystyle \langle I,\leq \rangle } be a directed set. Let { A i : i ∈ I } {\displaystyle \{A_{i}:i\in I\}} be a family of objects indexed by I {\displaystyle I\,} and f i j : A i → A j {\displaystyle f_{ij}\colon A_{i}\rightarrow A_{j}} be a homomorphism for all i ≤ j {\displaystyle i\leq j} with the following properties: f i i {\displaystyle f_{ii}\,} is the identity on A i {\displaystyle A_{i}\,} , and f i k = f j k ∘ f i j {\displaystyle f_{ik}=f_{jk}\circ f_{ij}} for all i ≤ j ≤ k {\displaystyle i\leq j\leq k} . 
Then the pair ⟨ A i , f i j ⟩ {\displaystyle \langle A_{i},f_{ij}\rangle } is called a direct system over I {\displaystyle I} . The direct limit of the direct system ⟨ A i , f i j ⟩ {\displaystyle \langle A_{i},f_{ij}\rangle } is denoted by lim → A i {\displaystyle \varinjlim A_{i}} and is defined as follows. Its underlying set is the disjoint union of the A i {\displaystyle A_{i}} 's modulo a certain equivalence relation ∼ {\displaystyle \sim \,} : lim → A i = ⨆ i A i / ∼ . {\displaystyle \varinjlim A_{i}=\bigsqcup _{i}A_{i}{\bigg /}\sim .} Here, if x i ∈ A i {\displaystyle x_{i}\in A_{i}} and x j ∈ A j {\displaystyle x_{j}\in A_{j}} , then x i ∼ x j {\displaystyle x_{i}\sim \,x_{j}} if and only if there is some k ∈ I {\displaystyle k\in I} with i ≤ k {\displaystyle i\leq k} and j ≤ k {\displaystyle j\leq k} such that f i k ( x i ) = f j k ( x j ) {\displaystyle f_{ik}(x_{i})=f_{jk}(x_{j})\,} . Intuitively, two elements in the disjoint union are equivalent if and only if they "eventually become equal" in the direct system. An equivalent formulation that highlights the duality to the inverse limit is that an element is equivalent to all its images under the maps of the direct system, i.e. x i ∼ f i j ( x i ) {\displaystyle x_{i}\sim \,f_{ij}(x_{i})} whenever i ≤ j {\displaystyle i\leq j} . One obtains from this definition canonical functions ϕ j : A j → lim → A i {\displaystyle \phi _{j}\colon A_{j}\rightarrow \varinjlim A_{i}} sending each element to its equivalence class. The algebraic operations on lim → A i {\displaystyle \varinjlim A_{i}\,} are defined such that these maps become homomorphisms. Formally, the direct limit of the direct system ⟨ A i , f i j ⟩ {\displaystyle \langle A_{i},f_{ij}\rangle } consists of the object lim → A i {\displaystyle \varinjlim A_{i}} together with the canonical homomorphisms ϕ j : A j → lim → A i {\displaystyle \phi _{j}\colon A_{j}\rightarrow \varinjlim A_{i}} . 
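To make the equivalence relation concrete, the sketch below (illustrative code, not standard library functionality) realizes the direct system of cyclic groups Z/pnZ with transition maps induced by multiplication by p (an example discussed further below): two elements are identified in the direct limit exactly when some later transition maps send them to the same element.

```python
P = 3  # the direct system Z/3Z -> Z/9Z -> Z/27Z -> ...

def f(n, m, x):
    """Transition map f_{n,m}: Z/P^n Z -> Z/P^m Z for n <= m,
    the (m - n)-fold composite of the multiplication-by-P maps."""
    assert n <= m
    return (P ** (m - n) * x) % P ** m

def equivalent(n, x, m, y):
    """x in Z/P^n Z and y in Z/P^m Z represent the same element of the
    direct limit iff their images agree at some common later stage."""
    k = max(n, m)  # the index set here is totally ordered, so k = max works
    return f(n, k, x) == f(m, k, y)

# Each element is equivalent to all of its images under the transition maps:
assert equivalent(1, 2, 3, f(1, 3, 2))

# Distinct classes stay distinct:
assert not equivalent(1, 1, 1, 2)
```

The canonical map ϕn sends x ∈ Z/PnZ to its equivalence class; the compatibility fik = fjk ∘ fij is what makes "eventually equal" a well-defined equivalence relation.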
=== Direct limits in an arbitrary category === The direct limit can be defined in an arbitrary category C {\displaystyle {\mathcal {C}}} by means of a universal property. Let ⟨ X i , f i j ⟩ {\displaystyle \langle X_{i},f_{ij}\rangle } be a direct system of objects and morphisms in C {\displaystyle {\mathcal {C}}} (as defined above). A target is a pair ⟨ X , ϕ i ⟩ {\displaystyle \langle X,\phi _{i}\rangle } where X {\displaystyle X\,} is an object in C {\displaystyle {\mathcal {C}}} and ϕ i : X i → X {\displaystyle \phi _{i}\colon X_{i}\rightarrow X} are morphisms for each i ∈ I {\displaystyle i\in I} such that ϕ i = ϕ j ∘ f i j {\displaystyle \phi _{i}=\phi _{j}\circ f_{ij}} whenever i ≤ j {\displaystyle i\leq j} . A direct limit of the direct system ⟨ X i , f i j ⟩ {\displaystyle \langle X_{i},f_{ij}\rangle } is a universally repelling target ⟨ X , ϕ i ⟩ {\displaystyle \langle X,\phi _{i}\rangle } in the sense that ⟨ X , ϕ i ⟩ {\displaystyle \langle X,\phi _{i}\rangle } is a target and for each target ⟨ Y , ψ i ⟩ {\displaystyle \langle Y,\psi _{i}\rangle } , there is a unique morphism u : X → Y {\displaystyle u\colon X\rightarrow Y} such that u ∘ ϕ i = ψ i {\displaystyle u\circ \phi _{i}=\psi _{i}} for each i. The following diagram will then commute for all i, j. The direct limit is often denoted X = lim → X i {\displaystyle X=\varinjlim X_{i}} with the direct system ⟨ X i , f i j ⟩ {\displaystyle \langle X_{i},f_{ij}\rangle } and the canonical morphisms ϕ i {\displaystyle \phi _{i}} (or, more precisely, canonical injections ι i {\displaystyle \iota _{i}} ) being understood. Unlike for algebraic objects, not every direct system in an arbitrary category has a direct limit. If it does, however, the direct limit is unique in a strong sense: given another direct limit X′ there exists a unique isomorphism X′ → X that commutes with the canonical morphisms. 
== Examples == A collection of subsets M i {\displaystyle M_{i}} of a set M {\displaystyle M} can be partially ordered by inclusion. If the collection is directed, its direct limit is the union ⋃ M i {\displaystyle \bigcup M_{i}} . The same is true for a directed collection of subgroups of a given group, or a directed collection of subrings of a given ring, etc. The weak topology of a CW complex is defined as a direct limit. Let I {\displaystyle I} be any directed set with a greatest element m {\displaystyle m} . The direct limit of any corresponding direct system is isomorphic to X m {\displaystyle X_{m}} and the canonical morphism ϕ m : X m → X {\displaystyle \phi _{m}:X_{m}\rightarrow X} is an isomorphism. Let K be a field. For a positive integer n, consider the general linear group GL(n;K) consisting of invertible n × n matrices with entries from K. We have a group homomorphism GL(n;K) → GL(n+1;K) that enlarges matrices by putting a 1 in the lower right corner and zeros elsewhere in the last row and column. The direct limit of this system is the general linear group of K, written as GL(K). An element of GL(K) can be thought of as an infinite invertible matrix that differs from the infinite identity matrix in only finitely many entries. The group GL(K) is of vital importance in algebraic K-theory. Let p be a prime number. Consider the direct system composed of the factor groups Z / p n Z {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} } and the homomorphisms Z / p n Z → Z / p n + 1 Z {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} \rightarrow \mathbb {Z} /p^{n+1}\mathbb {Z} } induced by multiplication by p {\displaystyle p} . The direct limit of this system consists of all the roots of unity of order some power of p {\displaystyle p} , and is called the Prüfer group Z ( p ∞ ) {\displaystyle \mathbb {Z} (p^{\infty })} .
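The transition maps of the GL(K) example above can be sketched numerically: the embedding enlarges a matrix by a trailing 1 on the diagonal, preserves invertibility, and is a group homomorphism. A minimal illustration (hypothetical helper name, with K taken to be the reals in floating point):

```python
import numpy as np

def embed(A):
    """GL(n;K) -> GL(n+1;K): put a 1 in the lower-right corner and
    zeros elsewhere in the new last row and column."""
    n = A.shape[0]
    B = np.eye(n + 1)
    B[:n, :n] = A
    return B

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # invertible: det = -2
C = np.array([[0.0, 1.0], [1.0, 1.0]])   # invertible: det = -1

# Group homomorphism: the embedding of a product is the product of embeddings.
assert np.allclose(embed(A @ C), embed(A) @ embed(C))

# Invertibility is preserved: the determinant is unchanged by the embedding.
assert np.isclose(np.linalg.det(embed(A)), np.linalg.det(A))
```

Iterating `embed` realizes an element of GL(K) as an infinite matrix differing from the identity in only finitely many entries, as described above.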
There is a (non-obvious) injective ring homomorphism from the ring of symmetric polynomials in n {\displaystyle n} variables to the ring of symmetric polynomials in n + 1 {\displaystyle n+1} variables. Forming the direct limit of this direct system yields the ring of symmetric functions. Let F be a C-valued sheaf on a topological space X. Fix a point x in X. The open neighborhoods of x form a directed set ordered by inclusion (U ≤ V if and only if U contains V). The corresponding direct system is (F(U), rU,V) where r is the restriction map. The direct limit of this system is called the stalk of F at x, denoted Fx. For each neighborhood U of x, the canonical morphism F(U) → Fx associates to a section s of F over U an element sx of the stalk Fx called the germ of s at x. Direct limits in the category of topological spaces are given by placing the final topology on the underlying set-theoretic direct limit. An ind-scheme is an inductive limit of schemes. == Properties == Direct limits are linked to inverse limits via H o m ( lim → X i , Y ) = lim ← H o m ( X i , Y ) . {\displaystyle \mathrm {Hom} (\varinjlim X_{i},Y)=\varprojlim \mathrm {Hom} (X_{i},Y).} An important property is that taking direct limits in the category of modules is an exact functor. This means that for any directed system of short exact sequences 0 → A i → B i → C i → 0 {\displaystyle 0\to A_{i}\to B_{i}\to C_{i}\to 0} , the sequence 0 → lim → A i → lim → B i → lim → C i → 0 {\displaystyle 0\to \varinjlim A_{i}\to \varinjlim B_{i}\to \varinjlim C_{i}\to 0} of direct limits is also exact. == Related constructions and generalizations == We note that a direct system in a category C {\displaystyle {\mathcal {C}}} admits an alternative description in terms of functors. 
Any directed set ⟨ I , ≤ ⟩ {\displaystyle \langle I,\leq \rangle } can be considered as a small category I {\displaystyle {\mathcal {I}}} whose objects are the elements of I {\displaystyle I} and in which there is a morphism i → j {\displaystyle i\rightarrow j} if and only if i ≤ j {\displaystyle i\leq j} . A direct system over I {\displaystyle I} is then the same as a covariant functor I → C {\displaystyle {\mathcal {I}}\rightarrow {\mathcal {C}}} . The colimit of this functor is the same as the direct limit of the original direct system. A notion closely related to direct limits is that of the filtered colimit. Here we start with a covariant functor J → C {\displaystyle {\mathcal {J}}\to {\mathcal {C}}} from a filtered category J {\displaystyle {\mathcal {J}}} to some category C {\displaystyle {\mathcal {C}}} and form the colimit of this functor. One can show that a category has all direct limits if and only if it has all filtered colimits, and a functor defined on such a category commutes with all direct limits if and only if it commutes with all filtered colimits. Given an arbitrary category C {\displaystyle {\mathcal {C}}} , there may be direct systems in C {\displaystyle {\mathcal {C}}} that don't have a direct limit in C {\displaystyle {\mathcal {C}}} (consider for example the category of finite sets, or the category of finitely generated abelian groups). In this case, we can always embed C {\displaystyle {\mathcal {C}}} into a category Ind ( C ) {\displaystyle {\text{Ind}}({\mathcal {C}})} in which all direct limits exist; the objects of Ind ( C ) {\displaystyle {\text{Ind}}({\mathcal {C}})} are called ind-objects of C {\displaystyle {\mathcal {C}}} . The categorical dual of the direct limit is called the inverse limit. As above, inverse limits can be viewed as limits of certain functors and are closely related to limits over cofiltered categories. 
== Terminology == In the literature, one finds the terms "directed limit", "direct inductive limit", "directed colimit", "direct colimit" and "inductive limit" for the concept of direct limit defined above. The term "inductive limit" is ambiguous, however, as some authors use it for the general concept of colimit. == See also == Direct limits of groups == Notes == == References == Bourbaki, Nicolas (1968), Elements of mathematics. Theory of sets, Translated from French, Paris: Hermann, MR 0237342 Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics, vol. 5 (2nd ed.), Springer-Verlag
|
Wikipedia:Direct product#0
|
In mathematics, a direct product of already-known objects can often be defined, giving a new object. The direct product induces a structure on the Cartesian product of the underlying sets from the structures of the contributing objects. More abstractly, these notions are formalized by the product in category theory. Examples are the product of sets, groups (described below), rings, and other algebraic structures. The product of topological spaces is another instance. There is also the direct sum, which in some areas is used interchangeably with the direct product, but in others is a different concept. == Examples == If R {\displaystyle \mathbb {R} } is thought of as the set of real numbers without further structure, the direct product R × R {\displaystyle \mathbb {R} \times \mathbb {R} } is just the Cartesian product { ( x , y ) : x , y ∈ R } . {\displaystyle \{(x,y):x,y\in \mathbb {R} \}.} If R {\displaystyle \mathbb {R} } is thought of as the group of real numbers under addition, the direct product R × R {\displaystyle \mathbb {R} \times \mathbb {R} } still has { ( x , y ) : x , y ∈ R } {\displaystyle \{(x,y):x,y\in \mathbb {R} \}} as its underlying set. The difference between this and the preceding example is that R × R {\displaystyle \mathbb {R} \times \mathbb {R} } is now a group, and so how to add its elements must also be stated. That is done by defining ( a , b ) + ( c , d ) = ( a + c , b + d ) . {\displaystyle (a,b)+(c,d)=(a+c,b+d).} If R {\displaystyle \mathbb {R} } is thought of as the ring of real numbers, the direct product R × R {\displaystyle \mathbb {R} \times \mathbb {R} } again has { ( x , y ) : x , y ∈ R } {\displaystyle \{(x,y):x,y\in \mathbb {R} \}} as its underlying set. The ring structure consists of addition defined by ( a , b ) + ( c , d ) = ( a + c , b + d ) {\displaystyle (a,b)+(c,d)=(a+c,b+d)} and multiplication defined by ( a , b ) ( c , d ) = ( a c , b d ) . 
{\displaystyle (a,b)(c,d)=(ac,bd).} Although the ring R {\displaystyle \mathbb {R} } is a field, R × R {\displaystyle \mathbb {R} \times \mathbb {R} } is not, because the nonzero element ( 1 , 0 ) {\displaystyle (1,0)} does not have a multiplicative inverse. In a similar manner, one can speak of the direct product of finitely many algebraic structures; for example, R × R × R × R . {\displaystyle \mathbb {R} \times \mathbb {R} \times \mathbb {R} \times \mathbb {R} .} That relies on the direct product being associative up to isomorphism. That is, ( A × B ) × C ≅ A × ( B × C ) {\displaystyle (A\times B)\times C\cong A\times (B\times C)} for any algebraic structures A , {\displaystyle A,} B , {\displaystyle B,} and C {\displaystyle C} of the same kind. The direct product is also commutative up to isomorphism; that is, A × B ≅ B × A {\displaystyle A\times B\cong B\times A} for any algebraic structures A {\displaystyle A} and B {\displaystyle B} of the same kind. One can even speak of the direct product of infinitely many algebraic structures; for example, the direct product of countably many copies of R , {\displaystyle \mathbb {R} ,} is written as R × R × R × ⋯ . {\displaystyle \mathbb {R} \times \mathbb {R} \times \mathbb {R} \times \dotsb .} == Direct product of groups == In group theory, the direct product of two groups ( G , ∘ ) {\displaystyle (G,\circ )} and ( H , ⋅ ) {\displaystyle (H,\cdot )} is denoted by G × H . {\displaystyle G\times H.} For abelian groups that are written additively, it may also be called the direct sum of two groups, denoted by G ⊕ H . 
{\displaystyle G\oplus H.} It is defined as follows: the set of the elements of the new group is the Cartesian product of the sets of elements of G and H , {\displaystyle G{\text{ and }}H,} that is { ( g , h ) : g ∈ G , h ∈ H } ; {\displaystyle \{(g,h):g\in G,h\in H\};} on these elements put an operation, defined element-wise: ( g , h ) × ( g ′ , h ′ ) = ( g ∘ g ′ , h ⋅ h ′ ) {\displaystyle (g,h)\times \left(g',h'\right)=\left(g\circ g',h\cdot h'\right)} Note that ( G , ∘ ) {\displaystyle (G,\circ )} may be the same as ( H , ⋅ ) . {\displaystyle (H,\cdot ).} The construction gives a new group, which has a normal subgroup that is isomorphic to G {\displaystyle G} (given by the elements of the form ( g , 1 ) {\displaystyle (g,1)} ) and one that is isomorphic to H {\displaystyle H} (comprising the elements ( 1 , h ) {\displaystyle (1,h)} ). The converse also holds; this is the recognition theorem: if a group K {\displaystyle K} contains two normal subgroups G and H , {\displaystyle G{\text{ and }}H,} such that K = G H {\displaystyle K=GH} and the intersection of G and H {\displaystyle G{\text{ and }}H} contains only the identity, then K {\displaystyle K} is isomorphic to G × H . {\displaystyle G\times H.} Relaxing these conditions by requiring only one subgroup to be normal gives the semidirect product. For example, take G and H {\displaystyle G{\text{ and }}H} to be two copies of the unique (up to isomorphism) group of order 2, C 2 : {\displaystyle C_{2}:} say { 1 , a } and { 1 , b } . {\displaystyle \{1,a\}{\text{ and }}\{1,b\}.} Then, C 2 × C 2 = { ( 1 , 1 ) , ( 1 , b ) , ( a , 1 ) , ( a , b ) } , {\displaystyle C_{2}\times C_{2}=\{(1,1),(1,b),(a,1),(a,b)\},} with the operation element by element. For instance, ( 1 , b ) ∗ ( a , 1 ) = ( 1 ∗ a , b ∗ 1 ) = ( a , b ) , {\displaystyle (1,b)*(a,1)=\left(1*a,b*1\right)=(a,b),} and ( 1 , b ) ∗ ( 1 , b ) = ( 1 , b 2 ) = ( 1 , 1 ) . 
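The C2 × C2 example can be checked directly. A minimal Python sketch, modelling each copy of C2 as {0, 1} under addition mod 2 (a hypothetical encoding of the text's {1, a} and {1, b}):

```python
from itertools import product

def op(x, y):
    """Direct-product operation: apply each factor's operation coordinatewise."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

# The four elements correspond to (1,1), (1,b), (a,1), (a,b) in the text.
elements = list(product([0, 1], repeat=2))

# Every element is its own inverse, so C2 x C2 is the Klein four-group,
# not the cyclic group of order 4.
assert all(op(g, g) == (0, 0) for g in elements)
```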
{\displaystyle (1,b)*(1,b)=\left(1,b^{2}\right)=(1,1).} With a direct product, some natural group homomorphisms are obtained for free: the projection maps defined by π 1 : G × H → G , π 1 ( g , h ) = g π 2 : G × H → H , π 2 ( g , h ) = h {\displaystyle {\begin{aligned}\pi _{1}:G\times H\to G,\ \ \pi _{1}(g,h)&=g\\\pi _{2}:G\times H\to H,\ \ \pi _{2}(g,h)&=h\end{aligned}}} are called the coordinate functions. Also, every homomorphism f {\displaystyle f} to the direct product is totally determined by its component functions f i = π i ∘ f . {\displaystyle f_{i}=\pi _{i}\circ f.} For any group ( G , ∘ ) {\displaystyle (G,\circ )} and any integer n ≥ 0 , {\displaystyle n\geq 0,} repeated application of the direct product gives the group of all n {\displaystyle n} -tuples G n {\displaystyle G^{n}} (for n = 0 , {\displaystyle n=0,} that is the trivial group); for example, Z n {\displaystyle \mathbb {Z} ^{n}} and R n . {\displaystyle \mathbb {R} ^{n}.} == Direct product of modules == The direct product for modules (not to be confused with the tensor product) is very similar to the one that is defined for groups above by using the Cartesian product with the operation of addition being componentwise, and the scalar multiplication just distributing over all the components. Starting from R {\displaystyle \mathbb {R} } , one obtains Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , the prototypical example of a real n {\displaystyle n} -dimensional vector space. The direct product of R m {\displaystyle \mathbb {R} ^{m}} and R n {\displaystyle \mathbb {R} ^{n}} is R m + n . {\displaystyle \mathbb {R} ^{m+n}.} A direct product for a finite index ∏ i = 1 n X i {\textstyle \prod _{i=1}^{n}X_{i}} is canonically isomorphic to the direct sum ⨁ i = 1 n X i . {\textstyle \bigoplus _{i=1}^{n}X_{i}.} For an infinite index set, however, the direct sum and the direct product are not isomorphic: the elements of a direct sum are zero for all but finitely many entries. 
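The coordinate functions can be illustrated with a toy example. Below, two hypothetical homomorphisms f1 and f2 from Z into the factors (the choices of doubling and negation are illustrative, not from the article) are paired into the map f into Z × Z with πi ∘ f = fi:

```python
def f1(n):
    """First component function: doubling, a homomorphism Z -> Z."""
    return 2 * n

def f2(n):
    """Second component function: negation, a homomorphism Z -> Z."""
    return -n

def f(n):
    """The map into Z x Z determined by pi1 . f = f1 and pi2 . f = f2."""
    return (f1(n), f2(n))

def pi1(pair):
    return pair[0]

def pi2(pair):
    return pair[1]

# f is a homomorphism because each component function is, and it is
# completely recovered from its coordinate functions.
assert all(f(m + n) == (f(m)[0] + f(n)[0], f(m)[1] + f(n)[1])
           for m in range(-3, 4) for n in range(-3, 4))
assert all((pi1(f(n)), pi2(f(n))) == (f1(n), f2(n)) for n in range(-5, 6))
```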
They are dual in the sense of category theory: the direct sum is the coproduct, and the direct product is the product. For example, let X = ∏ i = 1 ∞ R {\textstyle X=\prod _{i=1}^{\infty }\mathbb {R} } and Y = ⨁ i = 1 ∞ R , {\textstyle Y=\bigoplus _{i=1}^{\infty }\mathbb {R} ,} be the infinite direct product and direct sum of the real numbers. Only sequences with a finite number of non-zero elements are in Y . {\displaystyle Y.} For example, ( 1 , 0 , 0 , 0 , … ) {\displaystyle (1,0,0,0,\ldots )} is in Y {\displaystyle Y} but ( 1 , 1 , 1 , 1 , … ) {\displaystyle (1,1,1,1,\ldots )} is not. Both sequences are in the direct product X ; {\displaystyle X;} in fact, Y {\displaystyle Y} is a proper subset of X {\displaystyle X} (that is, Y ⊂ X {\displaystyle Y\subset X} ). == Topological space direct product == The direct product for a collection of topological spaces X i {\displaystyle X_{i}} for i {\displaystyle i} in I , {\displaystyle I,} some index set, once again makes use of the Cartesian product ∏ i ∈ I X i . {\displaystyle \prod _{i\in I}X_{i}.} Defining the topology is a little tricky. For finitely many factors, it is the obvious and natural thing to do: simply take as a basis of open sets the collection of all Cartesian products of open subsets from each factor: B = { U 1 × ⋯ × U n : U i o p e n i n X i } . {\displaystyle {\mathcal {B}}=\left\{U_{1}\times \cdots \times U_{n}\ :\ U_{i}\ \mathrm {open\ in} \ X_{i}\right\}.} That topology is called the product topology. For example, by directly defining the product topology on R 2 {\displaystyle \mathbb {R} ^{2}} by the open sets of R {\displaystyle \mathbb {R} } (disjoint unions of open intervals), the basis for that topology would consist of all disjoint unions of open rectangles in the plane (as it turns out, it coincides with the usual metric topology). 
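For finitely many factors, the key property of this basis can be verified exhaustively on small examples: the intersection of two basic boxes U1 × V1 and U2 × V2 is again a basic box, (U1 ∩ U2) × (V1 ∩ V2). A sketch with two toy finite topologies (the spaces and their open sets are illustrative choices):

```python
from itertools import product

# Two small topologies: one on {1, 2}, one on {'a', 'b'}.
X_opens = [frozenset(), frozenset({1}), frozenset({1, 2})]
Y_opens = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]

def box(U, V):
    """The basic open set U x V of the product topology."""
    return frozenset(product(U, V))

basis = {box(U, V) for U in X_opens for V in Y_opens}

# The basis is closed under pairwise intersection:
# (U1 x V1) & (U2 x V2) == (U1 & U2) x (V1 & V2).
for U1, V1 in product(X_opens, Y_opens):
    for U2, V2 in product(X_opens, Y_opens):
        assert box(U1, V1) & box(U2, V2) == box(U1 & U2, V1 & V2)
```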
The product topology for infinite products has a twist, which has to do with being able to make all the projection maps continuous and to make all functions into the product continuous if and only if all its component functions are continuous (that is, to satisfy the categorical definition of product: the morphisms here are continuous functions). The basis of open sets is taken to be the collection of all Cartesian products of open subsets from each factor, as before, with the proviso that all but finitely many of the open subsets are the entire factor: B = { ∏ i ∈ I U i : ( ∃ j 1 , … , j n ) ( U j i o p e n i n X j i ) a n d ( ∀ i ≠ j 1 , … , j n ) ( U i = X i ) } . {\displaystyle {\mathcal {B}}=\left\{\prod _{i\in I}U_{i}\ :\ (\exists j_{1},\ldots ,j_{n})(U_{j_{i}}\ \mathrm {open\ in} \ X_{j_{i}})\ \mathrm {and} \ (\forall i\neq j_{1},\ldots ,j_{n})(U_{i}=X_{i})\right\}.} The more natural-sounding topology would be, in this case, to take products of infinitely many open subsets as before, which yields another valid topology, the box topology. However, it is not too difficult to find an example of a collection of continuous component functions whose product function is not continuous (see the separate entry box topology for an example and more). The problem that makes the twist necessary is ultimately rooted in the fact that the intersection of open sets is guaranteed to be open only for finitely many sets in the definition of topology. Products (with the product topology) are nice with respect to preserving properties of their factors; for example, the product of Hausdorff spaces is Hausdorff, the product of connected spaces is connected, and the product of compact spaces is compact. That last statement, Tychonoff's theorem, is equivalent to the axiom of choice. For more properties and equivalent formulations, see product topology. 
== Direct product of binary relations == On the Cartesian product of two sets with binary relations R and S , {\displaystyle R{\text{ and }}S,} define ( a , b ) T ( c , d ) {\displaystyle (a,b)T(c,d)} as a R c and b S d . {\displaystyle aRc{\text{ and }}bSd.} If R and S {\displaystyle R{\text{ and }}S} are both reflexive, irreflexive, transitive, symmetric, or antisymmetric, then T {\displaystyle T} will be also. Similarly, totality of T {\displaystyle T} is inherited from R and S . {\displaystyle R{\text{ and }}S.} If the properties are combined, that also applies for being a preorder and being an equivalence relation. However, if R and S {\displaystyle R{\text{ and }}S} are connected relations, T {\displaystyle T} need not be connected; for example, the direct product of ≤ {\displaystyle \,\leq \,} on N {\displaystyle \mathbb {N} } with itself does not relate ( 1 , 2 ) and ( 2 , 1 ) . {\displaystyle (1,2){\text{ and }}(2,1).} == Direct product in universal algebra == If Σ {\displaystyle \Sigma } is a fixed signature, I {\displaystyle I} is an arbitrary (possibly infinite) index set, and ( A i ) i ∈ I {\displaystyle \left(\mathbf {A} _{i}\right)_{i\in I}} is an indexed family of Σ {\displaystyle \Sigma } algebras, the direct product A = ∏ i ∈ I A i {\textstyle \mathbf {A} =\prod _{i\in I}\mathbf {A} _{i}} is a Σ {\displaystyle \Sigma } algebra defined as follows: The universe set A {\displaystyle A} of A {\displaystyle \mathbf {A} } is the Cartesian product of the universe sets A i {\displaystyle A_{i}} of A i , {\displaystyle \mathbf {A} _{i},} formally: A = ∏ i ∈ I A i . {\textstyle A=\prod _{i\in I}A_{i}.} For each n {\displaystyle n} and each n {\displaystyle n} -ary operation symbol f ∈ Σ , {\displaystyle f\in \Sigma ,} its interpretation f A {\displaystyle f^{\mathbf {A} }} in A {\displaystyle \mathbf {A} } is defined componentwise, formally. 
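These inheritance claims are easy to check on a finite fragment of N × N. A sketch of the product of ≤ with itself, including the failure of connectedness at (1, 2) and (2, 1):

```python
def T(x, y):
    """Direct product of <= with itself: (a, b) T (c, d) iff a <= c and b <= d."""
    return x[0] <= y[0] and x[1] <= y[1]

pts = [(a, b) for a in range(4) for b in range(4)]

# T inherits reflexivity, antisymmetry, and transitivity from <= ...
assert all(T(x, x) for x in pts)
assert all(not (T(x, y) and T(y, x)) or x == y for x in pts for y in pts)
assert all(not (T(x, y) and T(y, z)) or T(x, z)
           for x in pts for y in pts for z in pts)

# ... but not connectedness: (1, 2) and (2, 1) are incomparable under T.
assert not T((1, 2), (2, 1)) and not T((2, 1), (1, 2))
```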
For all a 1 , … , a n ∈ A {\displaystyle a_{1},\dotsc ,a_{n}\in A} and each i ∈ I , {\displaystyle i\in I,} the i {\displaystyle i} th component of f A ( a 1 , … , a n ) {\displaystyle f^{\mathbf {A} }\!\left(a_{1},\dotsc ,a_{n}\right)} is defined as f A i ( a 1 ( i ) , … , a n ( i ) ) . {\displaystyle f^{\mathbf {A} _{i}}\!\left(a_{1}(i),\dotsc ,a_{n}(i)\right).} For each i ∈ I , {\displaystyle i\in I,} the i {\displaystyle i} th projection π i : A → A i {\displaystyle \pi _{i}:A\to A_{i}} is defined by π i ( a ) = a ( i ) . {\displaystyle \pi _{i}(a)=a(i).} It is a surjective homomorphism between the Σ {\displaystyle \Sigma } algebras A and A i . {\displaystyle \mathbf {A} {\text{ and }}\mathbf {A} _{i}.} As a special case, if the index set I = { 1 , 2 } , {\displaystyle I=\{1,2\},} the direct product of two Σ {\displaystyle \Sigma } algebras A 1 and A 2 {\displaystyle \mathbf {A} _{1}{\text{ and }}\mathbf {A} _{2}} is obtained, written as A = A 1 × A 2 . {\displaystyle \mathbf {A} =\mathbf {A} _{1}\times \mathbf {A} _{2}.} If Σ {\displaystyle \Sigma } contains only one binary operation f , {\displaystyle f,} the above definition of the direct product of groups is obtained by using the notation A 1 = G , A 2 = H , {\displaystyle A_{1}=G,A_{2}=H,} f A 1 = ∘ , f A 2 = ⋅ , and f A = × . {\displaystyle f^{A_{1}}=\circ ,\ f^{A_{2}}=\cdot ,\ {\text{ and }}f^{A}=\times .} Similarly, the definition of the direct product of modules is subsumed here. == Categorical product == The direct product can be abstracted to an arbitrary category. 
In a category, given a collection of objects ( A i ) i ∈ I {\displaystyle (A_{i})_{i\in I}} indexed by a set I {\displaystyle I} , a product of those objects is an object A {\displaystyle A} together with morphisms p i : A → A i {\displaystyle p_{i}\colon A\to A_{i}} for all i ∈ I {\displaystyle i\in I} , such that if B {\displaystyle B} is any other object with morphisms f i : B → A i {\displaystyle f_{i}\colon B\to A_{i}} for all i ∈ I {\displaystyle i\in I} , there is a unique morphism B → A {\displaystyle B\to A} whose composition with p i {\displaystyle p_{i}} equals f i {\displaystyle f_{i}} for every i {\displaystyle i} . Such A {\displaystyle A} and ( p i ) i ∈ I {\displaystyle (p_{i})_{i\in I}} do not always exist. If they exist, then ( A , ( p i ) i ∈ I ) {\displaystyle (A,(p_{i})_{i\in I})} is unique up to isomorphism, and A {\displaystyle A} is denoted ∏ i ∈ I A i {\displaystyle \prod _{i\in I}A_{i}} . In the special case of the category of groups, a product always exists. The underlying set of ∏ i ∈ I A i {\displaystyle \prod _{i\in I}A_{i}} is the Cartesian product of the underlying sets of the A i {\displaystyle A_{i}} , the group operation is componentwise multiplication, and the (homo)morphism p i : A → A i {\displaystyle p_{i}\colon A\to A_{i}} is the projection sending each tuple to its i {\displaystyle i} th coordinate. == Internal and external direct product == Some authors draw a distinction between an internal direct product and an external direct product. For example, if A {\displaystyle A} and B {\displaystyle B} are subgroups of an additive abelian group G {\displaystyle G} such that A + B = G {\displaystyle A+B=G} and A ∩ B = { 0 } {\displaystyle A\cap B=\{0\}} , then A × B ≅ G , {\displaystyle A\times B\cong G,} and G {\displaystyle G} is said to be the internal direct product of A {\displaystyle A} and B {\displaystyle B} . 
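For finite sets, the universal property can even be verified by brute force: among all functions B → A1 × A2, exactly one commutes with the projections. A sketch (the sets and the maps f1, f2 are illustrative choices, not from the article):

```python
from itertools import product

A1, A2, B = [0, 1], ['x', 'y'], [10, 20, 30]
A = list(product(A1, A2))  # the categorical product in the category of sets

f1 = {10: 0, 20: 1, 30: 0}        # an arbitrary map f1: B -> A1
f2 = {10: 'y', 20: 'y', 30: 'x'}  # an arbitrary map f2: B -> A2

# Enumerate every function u: B -> A as a dict of values.
candidates = [dict(zip(B, vals)) for vals in product(A, repeat=len(B))]

# Keep those with p1 . u = f1 and p2 . u = f2.
mediating = [u for u in candidates
             if all(u[b][0] == f1[b] and u[b][1] == f2[b] for b in B)]

assert len(mediating) == 1  # existence and uniqueness of the mediating map
assert mediating[0] == {b: (f1[b], f2[b]) for b in B}
```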
To avoid ambiguity, the set { ( a , b ) ∣ a ∈ A , b ∈ B } {\displaystyle \{\,(a,b)\mid a\in A,\,b\in B\,\}} can be referred to as the external direct product of A {\displaystyle A} and B {\displaystyle B} . == See also == Direct sum – Operation in abstract algebra composing objects into "more complicated" objects Cartesian product – Mathematical set formed from two given sets Coproduct – Category-theoretic construction Free product – Operation that combines groups Semidirect product – Operation in group theory Zappa–Szép product – Mathematics concept Tensor product of graphs – Operation in graph theory Orders on the Cartesian product of totally ordered sets – Order whose elements are all comparable == Notes == == References == Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
|
Wikipedia:Direct sum of modules#0
|
In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion. The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces. See the article decomposition of a module for a way to write a module as a direct sum of submodules. == Construction for vector spaces and abelian groups == We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth. === Construction for two vector spaces === Suppose V and W are vector spaces over the field K. The Cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise: (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2) α (v, w) = (α v, α w) for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K. The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle: V ⊕ W {\displaystyle V\oplus W} It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w. The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W. 
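The componentwise operations can be sketched in a few lines. Below, V = R^2 and W = R^3 are modelled as pairs of tuples (an illustrative encoding); the sketch checks that an element decomposes into its V-part plus its W-part, matching the identifications described above:

```python
def add(p, q):
    """Componentwise addition on V (+) W."""
    (v1, w1), (v2, w2) = p, q
    return (tuple(a + b for a, b in zip(v1, v2)),
            tuple(a + b for a, b in zip(w1, w2)))

def smul(c, p):
    """Scalar multiplication acts on both components."""
    v, w = p
    return (tuple(c * a for a in v), tuple(c * a for a in w))

x = ((1.0, 2.0), (3.0, 4.0, 5.0))  # an element of R^2 (+) R^3, dimension 2 + 3 = 5

# x is the sum of an element of V x {0} and an element of {0} x W.
v_part = (x[0], (0.0, 0.0, 0.0))
w_part = ((0.0, 0.0), x[1])
assert add(v_part, w_part) == x
assert smul(2.0, x) == ((2.0, 4.0), (6.0, 8.0, 10.0))
```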
One elementary use is the reconstruction of a finite-dimensional vector space from any subspace W and its orthogonal complement: R n = W ⊕ W ⊥ {\displaystyle \mathbb {R} ^{n}=W\oplus W^{\perp }} This construction readily generalizes to any finite number of vector spaces. === Construction for two abelian groups === For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum (Mac Lane & Birkhoff 1999, §V.6). Thus the Cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise: (g1, h1) + (g2, h2) = (g1 + g2, h1 + h2) for g1, g2 in G, and h1, h2 in H. Integral multiples are similarly defined componentwise by n(g, h) = (ng, nh) for g in G, h in H, and n an integer. This parallels the extension of the scalar multiplication of vector spaces to the direct sum above. The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle: G ⊕ H {\displaystyle G\oplus H} It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h. The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H. This construction readily generalizes to any finite number of abelian groups. == Construction for an arbitrary family of modules == One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows (Bourbaki 1989, §II.1.6). 
Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences ( α i ) {\displaystyle (\alpha _{i})} where α i ∈ M i {\displaystyle \alpha _{i}\in M_{i}} and α i = 0 {\displaystyle \alpha _{i}=0} for cofinitely many indices i. (The direct product is analogous but the indices do not need to cofinitely vanish.) It can also be defined as functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over i ∈ I {\displaystyle i\in I} being M i {\displaystyle M_{i}} . This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing ( α + β ) i = α i + β i {\displaystyle (\alpha +\beta )_{i}=\alpha _{i}+\beta _{i}} for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining ( r α ) i = r α i {\displaystyle (r\alpha )_{i}=r\alpha _{i}} for all i. In this way, the direct sum becomes a left R-module, and it is denoted ⨁ i ∈ I M i . {\displaystyle \bigoplus _{i\in I}M_{i}.} It is customary to write the sequence ( α i ) {\displaystyle (\alpha _{i})} as a sum ∑ α i {\displaystyle \sum \alpha _{i}} . Sometimes a primed summation ∑ ′ α i {\displaystyle \sum '\alpha _{i}} is used to indicate that cofinitely many of the terms are zero. 
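The finitely supported functions of this definition have a natural computational model: dictionaries that store only the nonzero entries. A sketch, with every Mi = Z (an illustrative choice):

```python
def add(alpha, beta):
    """Componentwise addition; the support of the sum is again finite."""
    out = {i: alpha.get(i, 0) + beta.get(i, 0) for i in alpha.keys() | beta.keys()}
    return {i: v for i, v in out.items() if v != 0}  # keep only the support

def smul(r, alpha):
    """Scalar multiplication by r in R = Z, applied on each component."""
    return {i: r * v for i, v in alpha.items() if r * v != 0}

a = {0: 1, 5: 2}          # nonzero only at indices 0 and 5
b = {5: -2, 'omega': 7}   # the index set I can be arbitrary, even infinite

assert add(a, b) == {0: 1, 'omega': 7}  # the entries at index 5 cancel
assert smul(3, a) == {0: 3, 5: 6}
```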
If the index set I is finite, then the direct sum and the direct product are equal. Each of the modules Mi may be identified with the submodule of the direct sum consisting of those functions which vanish on all indices different from i. With these identifications, every element x of the direct sum can be written in one and only one way as a sum of finitely many elements from the modules Mi. If the Mi are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the Mi. The same is true for the rank of abelian groups and the length of modules. Every vector space over the field K is isomorphic to a direct sum of sufficiently many copies of K, so in a sense only these direct sums have to be considered. This is not true for modules over arbitrary rings. The tensor product distributes over direct sums in the following sense: if N is some right R-module, then the direct sum of the tensor products of N with Mi (which are abelian groups) is naturally isomorphic to the tensor product of N with the direct sum of the Mi. Direct sums are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum. The abelian group of R-linear homomorphisms from the direct sum to some left R-module L is naturally isomorphic to the direct product of the abelian groups of R-linear homomorphisms from Mi to L: Hom R ( ⨁ i ∈ I M i , L ) ≅ ∏ i ∈ I Hom R ( M i , L ) . {\displaystyle \operatorname {Hom} _{R}{\biggl (}\bigoplus _{i\in I}M_{i},L{\biggr )}\cong \prod _{i\in I}\operatorname {Hom} _{R}\left(M_{i},L\right).} Indeed, there is clearly a homomorphism τ from the left hand side to the right hand side, where τ(θ)(i) is the R-linear homomorphism sending x∈Mi to θ(x) (using the natural inclusion of Mi into the direct sum). 
The inverse of the homomorphism τ is defined by τ − 1 ( β ) ( α ) = ∑ i ∈ I β ( i ) ( α ( i ) ) {\displaystyle \tau ^{-1}(\beta )(\alpha )=\sum _{i\in I}\beta (i)(\alpha (i))} for any α in the direct sum of the modules Mi. The key point is that the definition of τ−1 makes sense because α(i) is zero for all but finitely many i, and so the sum is finite. In particular, the dual vector space of a direct sum of vector spaces is isomorphic to the direct product of the duals of those spaces. The finite direct sum of modules is a biproduct: If p k : A 1 ⊕ ⋯ ⊕ A n → A k {\displaystyle p_{k}:A_{1}\oplus \cdots \oplus A_{n}\to A_{k}} are the canonical projection mappings and i k : A k → A 1 ⊕ ⋯ ⊕ A n {\displaystyle i_{k}:A_{k}\to A_{1}\oplus \cdots \oplus A_{n}} are the inclusion mappings, then i 1 ∘ p 1 + ⋯ + i n ∘ p n {\displaystyle i_{1}\circ p_{1}+\cdots +i_{n}\circ p_{n}} equals the identity morphism of A1 ⊕ ⋯ ⊕ An, and p k ∘ i l {\displaystyle p_{k}\circ i_{l}} is the identity morphism of Ak in the case l = k, and is the zero map otherwise. == Internal direct sum == Suppose M is an R-module and Mi is a submodule of M for each i in I. If every x in M can be written in exactly one way as a sum of finitely many elements of the Mi, then we say that M is the internal direct sum of the submodules Mi (Halmos 1974, §18). In this case, M is naturally isomorphic to the (external) direct sum of the Mi as defined above (Adamson 1972, p.61). A submodule N of M is a direct summand of M if there exists some other submodule N′ of M such that M is the internal direct sum of N and N′. In this case, N and N′ are called complementary submodules. == Universal property == In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left R-modules, which means that it is characterized by the following universal property. 
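The biproduct identities are easy to verify for A1 ⊕ A2 with A1 = A2 = Z, representing elements of the direct sum as pairs. A minimal sketch:

```python
def p(k, x):
    """Canonical projection onto the k-th summand (k = 0 or 1)."""
    return x[k]

def i(k, a):
    """Canonical inclusion of the k-th summand."""
    return (a, 0) if k == 0 else (0, a)

def plus(x, y):
    """Addition of morphism values in the direct sum."""
    return (x[0] + y[0], x[1] + y[1])

x = (7, -3)
# i1 . p1 + i2 . p2 is the identity on the direct sum:
assert plus(i(0, p(0, x)), i(1, p(1, x))) == x
# p_k . i_l is the identity when k == l and the zero map otherwise:
assert p(0, i(0, 5)) == 5 and p(1, i(0, 5)) == 0
```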
For every i in I, consider the natural embedding j i : M i → ⨁ i ∈ I M i {\displaystyle j_{i}:M_{i}\rightarrow \bigoplus _{i\in I}M_{i}} which sends the elements of Mi to those functions which are zero for all arguments but i. Now let M be an arbitrary R-module and fi : Mi → M arbitrary R-linear maps for every i. Then there exists precisely one R-linear map f : ⨁ i ∈ I M i → M {\displaystyle f:\bigoplus _{i\in I}M_{i}\rightarrow M} such that f ∘ ji = fi for all i. == Grothendieck group == The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group. == Direct sum of modules with additional structure == If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces. In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations. 
This construction, however, does not provide a coproduct in the category of algebras, but a direct product (see note below and the remark on direct sums of rings). === Direct sum of algebras === A direct sum of algebras X {\displaystyle X} and Y {\displaystyle Y} is the direct sum as vector spaces, with product ( x 1 + y 1 ) ( x 2 + y 2 ) = ( x 1 x 2 + y 1 y 2 ) . {\displaystyle (x_{1}+y_{1})(x_{2}+y_{2})=(x_{1}x_{2}+y_{1}y_{2}).} Consider these classical examples: R ⊕ R {\displaystyle \mathbf {R} \oplus \mathbf {R} } is ring isomorphic to split-complex numbers, also used in interval analysis. C ⊕ C {\displaystyle \mathbf {C} \oplus \mathbf {C} } is the algebra of tessarines introduced by James Cockle in 1848. H ⊕ H , {\displaystyle \mathbf {H} \oplus \mathbf {H} ,} called the split-biquaternions, was introduced by William Kingdon Clifford in 1873. Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his Lectures on Matrices (1934), page 151. Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: For the direct sum the field of scalars acts jointly on both parts: λ ( x ⊕ y ) = λ x ⊕ λ y {\displaystyle \lambda (x\oplus y)=\lambda x\oplus \lambda y} while for the direct product a scalar factor may be collected alternately with the parts, but not both: λ ( x , y ) = ( λ x , y ) = ( x , λ y ) . {\displaystyle \lambda (x,y)=(\lambda x,y)=(x,\lambda y).} Ian R. Porteous uses the three direct sums above, denoting them 2 R , 2 C , 2 H , {\displaystyle ^{2}R,\ ^{2}C,\ ^{2}H,} as rings of scalars in his analysis of Clifford Algebras and the Classical Groups (1995). The construction described above, as well as Wedderburn's use of the terms direct sum and direct product follow a different convention than the one in category theory. 
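The first of the classical examples above can be sketched concretely: R ⊕ R with the componentwise product is, as a ring, the split-complex numbers. The encoding of x ⊕ y as a pair (x, y) is an illustrative choice:

```python
def add(p, q):
    """Componentwise addition in R (+) R."""
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    """Componentwise product (x1 + y1)(x2 + y2) = x1*x2 + y1*y2."""
    return (p[0] * q[0], p[1] * q[1])

one = (1.0, 1.0)   # the multiplicative identity
e = (1.0, -1.0)    # corresponds to the split-complex unit j with j*j = 1

assert mul(e, e) == one
assert add((1.0, 2.0), (3.0, 4.0)) == (4.0, 6.0)
# Not a field: (1, 0) and (0, 1) are nonzero but multiply to zero.
assert mul((1.0, 0.0), (0.0, 1.0)) == (0.0, 0.0)
```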
In categorical terms, Wedderburn's direct sum is a categorical product, whilst Wedderburn's direct product is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras. === Direct sum of Banach spaces === The direct sum of two Banach spaces X {\displaystyle X} and Y {\displaystyle Y} is the direct sum of X {\displaystyle X} and Y {\displaystyle Y} considered as vector spaces, with the norm ‖ ( x , y ) ‖ = ‖ x ‖ X + ‖ y ‖ Y {\displaystyle \|(x,y)\|=\|x\|_{X}+\|y\|_{Y}} for all x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} Generally, if X i {\displaystyle X_{i}} is a collection of Banach spaces, where i {\displaystyle i} traverses the index set I , {\displaystyle I,} then the direct sum ⨁ i ∈ I X i {\displaystyle \bigoplus _{i\in I}X_{i}} is a module consisting of all functions x {\displaystyle x} defined over I {\displaystyle I} such that x ( i ) ∈ X i {\displaystyle x(i)\in X_{i}} for all i ∈ I {\displaystyle i\in I} and ∑ i ∈ I ‖ x ( i ) ‖ X i < ∞ . {\displaystyle \sum _{i\in I}\|x(i)\|_{X_{i}}<\infty .} The norm is given by the sum above. The direct sum with this norm is again a Banach space. For example, if we take the index set I = N {\displaystyle I=\mathbb {N} } and X i = R , {\displaystyle X_{i}=\mathbb {R} ,} then the direct sum ⨁ i ∈ N X i {\displaystyle \bigoplus _{i\in \mathbb {N} }X_{i}} is the space ℓ 1 , {\displaystyle \ell _{1},} which consists of all the sequences ( a i ) {\displaystyle \left(a_{i}\right)} of reals with finite norm ‖ a ‖ = ∑ i | a i | . {\textstyle \|a\|=\sum _{i}\left|a_{i}\right|.} A closed subspace A {\displaystyle A} of a Banach space X {\displaystyle X} is complemented if there is another closed subspace B {\displaystyle B} of X {\displaystyle X} such that X {\displaystyle X} is equal to the internal direct sum A ⊕ B . {\displaystyle A\oplus B.} Note that not every closed subspace is complemented; e.g. 
c 0 {\displaystyle c_{0}} is not complemented in ℓ ∞ . {\displaystyle \ell ^{\infty }.} === Direct sum of modules with bilinear forms === Let { ( M i , b i ) : i ∈ I } {\displaystyle \left\{\left(M_{i},b_{i}\right):i\in I\right\}} be a family indexed by I {\displaystyle I} of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form B {\displaystyle B} defined by B ( ( x i ) , ( y i ) ) = ∑ i ∈ I b i ( x i , y i ) {\displaystyle B\left({\left({x_{i}}\right),\left({y_{i}}\right)}\right)=\sum _{i\in I}b_{i}\left({x_{i},y_{i}}\right)} in which the summation makes sense even for infinite index sets I {\displaystyle I} because only finitely many of the terms are non-zero. === Direct sum of Hilbert spaces === If finitely many Hilbert spaces H 1 , … , H n {\displaystyle H_{1},\ldots ,H_{n}} are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as: ⟨ ( x 1 , … , x n ) , ( y 1 , … , y n ) ⟩ = ⟨ x 1 , y 1 ⟩ + ⋯ + ⟨ x n , y n ⟩ . {\displaystyle \left\langle \left(x_{1},\ldots ,x_{n}\right),\left(y_{1},\ldots ,y_{n}\right)\right\rangle =\langle x_{1},y_{1}\rangle +\cdots +\langle x_{n},y_{n}\rangle .} The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces. If infinitely many Hilbert spaces H i {\displaystyle H_{i}} for i ∈ I {\displaystyle i\in I} are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces H i {\displaystyle H_{i}} to be the completion of this inner product space. 
Alternatively and equivalently, one can define the direct sum of the Hilbert spaces H i {\displaystyle H_{i}} as the space of all functions α with domain I , {\displaystyle I,} such that α ( i ) {\displaystyle \alpha (i)} is an element of H i {\displaystyle H_{i}} for every i ∈ I {\displaystyle i\in I} and: ∑ i ‖ α ( i ) ‖ 2 < ∞ . {\displaystyle \sum _{i}\left\|\alpha (i)\right\|^{2}<\infty .} The inner product of two such functions α and β is then defined as: ⟨ α , β ⟩ = ∑ i ⟨ α ( i ) , β ( i ) ⟩ . {\displaystyle \langle \alpha ,\beta \rangle =\sum _{i}\langle \alpha (i),\beta (i)\rangle .} This space is complete and we get a Hilbert space. For example, if we take the index set I = N {\displaystyle I=\mathbb {N} } and X i = R , {\displaystyle X_{i}=\mathbb {R} ,} then the direct sum ⊕ i ∈ N X i {\displaystyle \oplus _{i\in \mathbb {N} }X_{i}} is the space ℓ 2 , {\displaystyle \ell _{2},} which consists of all the sequences ( a i ) {\displaystyle \left(a_{i}\right)} of reals with finite norm ‖ a ‖ = ∑ i ‖ a i ‖ 2 . {\textstyle \|a\|={\sqrt {\sum _{i}\left\|a_{i}\right\|^{2}}}.} Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different. Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either R or C . {\displaystyle \mathbb {R} {\text{ or }}\mathbb {C} .} This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space.
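The difference between the Banach-space (ℓ1) and Hilbert-space (ℓ2) direct-sum norms in the examples above can be made concrete on finitely supported real sequences; a small sketch, with illustrative function names:

```python
import math

def l1_norm(seq):
    """Banach direct-sum norm from the l1 example: sum of |a_i|."""
    return sum(abs(a) for a in seq)

def l2_norm(seq):
    """Hilbert direct-sum norm from the l2 example: square root of
    the sum of squares."""
    return math.sqrt(sum(a * a for a in seq))
```

In finite dimensions the two norms are equivalent (‖a‖₂ ≤ ‖a‖₁ ≤ √n ‖a‖₂ for n summands), which is why the finite direct sums are isomorphic as in the text even though the norms themselves differ.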
== See also == Biproduct – in category theory, an object that is both product and coproduct in compatible ways Indecomposable module Jordan–Hölder theorem – Decomposition of an algebraic structure Krull–Schmidt theorem – Mathematical theorem Split exact sequence – Type of short exact sequence in mathematics == References == Adamson, Iain T. (1972), Elementary rings and modules, University Mathematical Texts, Oliver and Boyd, ISBN 0-05-002192-3. Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9. Dummit, David S.; Foote, Richard M. (1991), Abstract algebra, Englewood Cliffs, NJ: Prentice Hall, Inc., ISBN 0-13-004771-6. Halmos, Paul (1974), Finite dimensional vector spaces, Springer, ISBN 0-387-90093-4. Mac Lane, S.; Birkhoff, G. (1999), Algebra, AMS Chelsea, ISBN 0-8218-1646-2.
Wikipedia:Dirichlet eigenvalue#0
In mathematics, the Dirichlet eigenvalues are the fundamental modes of vibration of an idealized drum with a given shape. The problem of whether one can hear the shape of a drum is: given the Dirichlet eigenvalues, what features of the shape of the drum can one deduce? Here a "drum" is thought of as an elastic membrane Ω, which is represented as a planar domain whose boundary is fixed. The Dirichlet eigenvalues are found by solving the following problem for an unknown function u ≠ 0 and eigenvalue λ: Δ u + λ u = 0 in Ω , u = 0 on ∂ Ω ( 1 ) {\displaystyle \Delta u+\lambda u=0{\text{ in }}\Omega ,\qquad u=0{\text{ on }}\partial \Omega \qquad (1)} Here Δ is the Laplacian, which is given in xy-coordinates by Δ u = ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 . {\displaystyle \Delta u={\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}.} The boundary value problem (1) is the Dirichlet problem for the Helmholtz equation, and so λ is known as a Dirichlet eigenvalue for Ω. Dirichlet eigenvalues are contrasted with Neumann eigenvalues: eigenvalues for the corresponding Neumann problem. The Laplace operator Δ appearing in (1) is often known as the Dirichlet Laplacian when it is considered as accepting only functions u satisfying the Dirichlet boundary condition. More generally, in spectral geometry one considers (1) on a manifold with boundary Ω. Then Δ is taken to be the Laplace–Beltrami operator, also with Dirichlet boundary conditions. It can be shown, using the spectral theorem for compact self-adjoint operators, that the eigenspaces are finite-dimensional and that the Dirichlet eigenvalues λ are real, positive, and have no limit point. Thus they can be arranged in increasing order: 0 < λ 1 ≤ λ 2 ≤ ⋯ , λ n → ∞ , {\displaystyle 0<\lambda _{1}\leq \lambda _{2}\leq \cdots ,\quad \lambda _{n}\to \infty ,} where each eigenvalue is counted according to its geometric multiplicity. The eigenspaces are orthogonal in the space of square-integrable functions, and consist of smooth functions.
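As a numerical illustration (not part of the article), the one-dimensional analogue of problem (1), −u″ = λu on (0, π) with u(0) = u(π) = 0, has exact eigenvalues λ_n = n². A standard second-order finite-difference discretization reproduces the ordering 0 < λ1 ≤ λ2 ≤ ⋯; this sketch assumes NumPy is available, and the function name is illustrative.

```python
import numpy as np

def dirichlet_eigenvalues_1d(n_grid, length=np.pi):
    """Approximate Dirichlet eigenvalues of -d^2/dx^2 on (0, length)
    using the standard tridiagonal finite-difference matrix on a
    uniform grid of n_grid interior points."""
    h = length / (n_grid + 1)
    main = np.full(n_grid, 2.0) / h ** 2
    off = np.full(n_grid - 1, -1.0) / h ** 2
    matrix = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(matrix)   # real, sorted increasingly
```

For the interval (0, π) the computed values approach 1, 4, 9, … as the grid is refined, and they are positive and increasing, matching the spectral facts above.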
In fact, the Dirichlet Laplacian has a continuous extension to an operator from the Sobolev space H 0 2 ( Ω ) {\displaystyle H_{0}^{2}(\Omega )} into L 2 ( Ω ) {\displaystyle L^{2}(\Omega )} . This operator is invertible, and its inverse is compact and self-adjoint, so that the usual spectral theorem can be applied to obtain the eigenspaces of Δ and the reciprocals 1/λ of its eigenvalues. One of the primary tools in the study of the Dirichlet eigenvalues is the max-min principle: the first eigenvalue λ1 minimizes the Dirichlet energy. To wit, λ 1 = inf u ≠ 0 ∫ Ω | ∇ u | 2 ∫ Ω | u | 2 , {\displaystyle \lambda _{1}=\inf _{u\not =0}{\frac {\int _{\Omega }|\nabla u|^{2}}{\int _{\Omega }|u|^{2}}},} where the infimum is taken over all u of compact support that do not vanish identically in Ω. By a density argument, this infimum agrees with that taken over nonzero u ∈ H 0 1 ( Ω ) {\displaystyle u\in H_{0}^{1}(\Omega )} . Moreover, using results from the calculus of variations analogous to the Lax–Milgram theorem, one can show that a minimizer exists in H 0 1 ( Ω ) {\displaystyle H_{0}^{1}(\Omega )} . More generally, one has λ k = sup inf ∫ Ω | ∇ u | 2 ∫ Ω | u | 2 {\displaystyle \lambda _{k}=\sup \inf {\frac {\int _{\Omega }|\nabla u|^{2}}{\int _{\Omega }|u|^{2}}}} where the supremum is taken over all (k−1)-tuples ϕ 1 , … , ϕ k − 1 ∈ H 0 1 ( Ω ) {\displaystyle \phi _{1},\dots ,\phi _{k-1}\in H_{0}^{1}(\Omega )} and the infimum over all u orthogonal to the ϕ i {\displaystyle \phi _{i}} . == Applications == The Dirichlet Laplacian may arise from various problems of mathematical physics; it may describe the modes of an idealized drum, small waves at the surface of an idealized pool, as well as the modes of an idealized optical fiber in the paraxial approximation. The last application is the most practical in connection with double-clad fibers; in such fibers, it is important that most of the modes fill the domain uniformly, or that most of the rays cross the core.
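The variational characterization of λ1 can be evaluated numerically for trial functions on the interval (0, π), where λ1 = 1 with minimizer sin x. A midpoint-rule sketch (the function name and quadrature choice are illustrative, not from the article):

```python
import math

def rayleigh_quotient(u, du, a=0.0, b=math.pi, n=20000):
    """Midpoint-rule estimate of (integral of |u'|^2) / (integral of
    |u|^2) on (a, b), the quantity minimized by the first
    Dirichlet eigenfunction."""
    h = (b - a) / n
    num = sum(du(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    den = sum(u(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    return num / den
```

The minimizer sin x gives exactly λ1 = 1, while any other admissible trial function gives a larger value; for example u(x) = x(π − x) gives 10/π² ≈ 1.013, a strict upper bound for λ1.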
The poorest shape seems to be the circularly symmetric domain. The pump modes should not avoid the active core used in double-clad fiber amplifiers. The spiral-shaped domain happens to be especially efficient for such an application due to the boundary behavior of the modes of the Dirichlet Laplacian. The theorem about the boundary behavior of the Dirichlet Laplacian is an analogue of a property of rays in geometrical optics (Fig. 1): the angular momentum of a ray (green) increases at each reflection from the spiral part of the boundary (blue), until the ray hits the chunk (red); all rays (except those parallel to the optical axis) unavoidably visit the region in the vicinity of the chunk to drop the excess angular momentum. Similarly, all the modes of the Dirichlet Laplacian have non-zero values in the vicinity of the chunk. The normal component of the derivative of a mode at the boundary can be interpreted as a pressure; the pressure integrated over the surface gives a force. As the mode is a steady-state solution of the propagation equation (with trivial dependence on the longitudinal coordinate), the total force should be zero. Similarly, the angular momentum of the pressure force should also be zero. However, there exists a formal proof, which does not refer to the analogy with the physical system. == See also == Rayleigh–Faber–Krahn inequality == Notes == == References == Benguria, Rafael D. "Dirichlet Eigenvalue". Encyclopedia of Mathematics. Springer. Retrieved 28 October 2021. Chavel, Isaac (1984). Eigenvalues in Riemannian geometry. Pure Appl. Math. Vol. 115. Academic Press. ISBN 978-0-12-170640-1. Courant, Richard; Hilbert, David (1962). Methods of Mathematical Physics, Volume I. Wiley-Interscience.
Wikipedia:Dirichlet kernel#0
In mathematical analysis, the Dirichlet kernel, named after the German mathematician Peter Gustav Lejeune Dirichlet, is the collection of periodic functions defined as D n ( x ) = ∑ k = − n n e i k x = ( 1 + 2 ∑ k = 1 n cos ( k x ) ) = sin ( ( n + 1 / 2 ) x ) sin ( x / 2 ) , {\displaystyle D_{n}(x)=\sum _{k=-n}^{n}e^{ikx}=\left(1+2\sum _{k=1}^{n}\cos(kx)\right)={\frac {\sin \left(\left(n+1/2\right)x\right)}{\sin(x/2)}},} where n is any nonnegative integer. The kernel functions are periodic with period 2 π {\displaystyle 2\pi } . The importance of the Dirichlet kernel comes from its relation to Fourier series. The convolution of Dn(x) with any function f of period 2π is the nth-degree Fourier series approximation to f, i.e., we have ( D n ∗ f ) ( x ) = ∫ − π π f ( y ) D n ( x − y ) d y = 2 π ∑ k = − n n f ^ ( k ) e i k x , {\displaystyle (D_{n}*f)(x)=\int _{-\pi }^{\pi }f(y)D_{n}(x-y)\,dy=2\pi \sum _{k=-n}^{n}{\hat {f}}(k)e^{ikx},} where f ^ ( k ) = 1 2 π ∫ − π π f ( x ) e − i k x d x {\displaystyle {\widehat {f}}(k)={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(x)e^{-ikx}\,dx} is the kth Fourier coefficient of f. This implies that in order to study convergence of Fourier series it is enough to study properties of the Dirichlet kernel. == Applications == In signal processing, the Dirichlet kernel is often called the periodic sinc function: P ( ω ) = D n ( x ) | x = 2 π ω / ω 0 = sin ( π M ω / ω 0 ) sin ( π ω / ω 0 ) {\displaystyle P(\omega )=D_{n}(x)|_{x=2\pi \omega /\omega _{0}}={\sin(\pi M\omega /\omega _{0}) \over \sin(\pi \omega /\omega _{0})}} where M = 2 n + 1 ≥ 3 {\displaystyle M=2n+1\geq 3} is an odd integer. In this form, ω {\displaystyle \omega } is the angular frequency, and ω 0 {\displaystyle \omega _{0}} is half of the periodicity in frequency. 
In this case, the periodic sinc function in the frequency domain can be thought of as the Fourier transform of a time bounded impulse train in the time domain: p ( t ) = ∑ k = − n n δ ( t − k T ) {\displaystyle p(t)=\sum _{k=-n}^{n}\delta (t-kT)} where T = π ω 0 {\displaystyle T={\pi \over \omega _{0}}} is the time increment between each impulse and M = 2 n + 1 {\displaystyle M=2n+1} represents the number of impulses in the impulse train. In optics, the Dirichlet kernel is part of the mathematical description of the diffraction pattern formed when monochromatic light passes through an aperture with multiple narrow slits of equal width and equally spaced along an axis perpendicular to the optical axis. In this case, M {\displaystyle M} is the number of slits. == L1 norm of the kernel function == Of particular importance is the fact that the L1 norm of Dn on [ 0 , 2 π ] {\displaystyle [0,2\pi ]} diverges to infinity as n → ∞. One can estimate that ‖ D n ‖ L 1 = Ω ( log n ) . {\displaystyle \|D_{n}\|_{L^{1}}=\Omega (\log n).} By using a Riemann-sum argument to estimate the contribution in the largest neighbourhood of zero in which D n {\displaystyle D_{n}} is positive, and Jensen's inequality for the remaining part, it is also possible to show that: ‖ D n ‖ L 1 ≥ 4 Si ( π ) + 8 π log n {\displaystyle \|D_{n}\|_{L^{1}}\geq 4\operatorname {Si} (\pi )+{\frac {8}{\pi }}\log n} where Si ( x ) {\textstyle \operatorname {Si} (x)} is the sine integral ∫ 0 x ( sin t ) / t d t . {\textstyle \int _{0}^{x}(\sin t)/t\,dt.} This lack of uniform integrability is behind many divergence phenomena for the Fourier series. For example, together with the uniform boundedness principle, it can be used to show that the Fourier series of a continuous function may fail to converge pointwise, in rather dramatic fashion. See convergence of Fourier series for further details. 
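As a numerical aside (not from the article), the closed form of D_n makes both the defining identity and the logarithmic growth of the L1 norm easy to spot-check; the function names are illustrative, and the integral is a plain midpoint-rule estimate.

```python
import cmath
import math

def dirichlet_kernel_sum(n, x):
    """D_n(x) as the defining exponential sum; the imaginary parts of
    e^{ikx} and e^{-ikx} cancel, so the result is real."""
    return sum(cmath.exp(1j * k * x) for k in range(-n, n + 1)).real

def dirichlet_kernel_closed(n, x):
    """Closed form sin((n + 1/2)x) / sin(x/2), valid when x is not a
    multiple of 2*pi (where the singularity is removable)."""
    return math.sin((n + 0.5) * x) / math.sin(x / 2)

def dirichlet_l1_norm(n, points=200000):
    """Midpoint-rule estimate of the L^1 norm of D_n over [0, 2*pi];
    it grows like (8/pi) log n plus a constant, as in the lower bound."""
    h = 2 * math.pi / points
    return sum(abs(dirichlet_kernel_closed(n, (k + 0.5) * h)) * h
               for k in range(points))
```

At x = 0 the exponential sum gives D_n(0) = 2n + 1, the limiting value of the closed form, and the estimated L1 norms increase with n, consistent with the divergence discussed above.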
A precise proof of the first result that ‖ D n ‖ L 1 [ 0 , 2 π ] = Ω ( log n ) {\displaystyle \|D_{n}\|_{L^{1}[0,2\pi ]}=\Omega (\log n)} is given by ∫ 0 2 π | D n ( x ) | d x ≥ ∫ 0 π | sin [ ( 2 n + 1 ) x ] | x d x ≥ ∑ k = 0 2 n ∫ k π ( k + 1 ) π | sin s | s d s ≥ | ∑ k = 0 2 n ∫ 0 π sin s ( k + 1 ) π d s | = 2 π H 2 n + 1 ≥ 2 π log ( 2 n + 1 ) , {\displaystyle {\begin{aligned}\int _{0}^{2\pi }|D_{n}(x)|\,dx&\geq \int _{0}^{\pi }{\frac {\left|\sin[(2n+1)x]\right|}{x}}\,dx\\[5pt]&\geq \sum _{k=0}^{2n}\int _{k\pi }^{(k+1)\pi }{\frac {\left|\sin s\right|}{s}}\,ds\\[5pt]&\geq \left|\sum _{k=0}^{2n}\int _{0}^{\pi }{\frac {\sin s}{(k+1)\pi }}\,ds\right|\\[5pt]&={\frac {2}{\pi }}H_{2n+1}\\[5pt]&\geq {\frac {2}{\pi }}\log(2n+1),\end{aligned}}} where we have used the elementary inequality 2 / x ≤ 1 / | sin ( x / 2 ) | {\displaystyle 2/x\leq 1/\left|\sin(x/2)\right|} (valid for x > 0 since | sin ( x / 2 ) | ≤ x / 2 {\displaystyle \left|\sin(x/2)\right|\leq x/2} ) and where H n {\displaystyle H_{n}} are the first-order harmonic numbers. == Relation to the periodic delta function == The Dirichlet kernel is a periodic function which becomes the Dirac comb, i.e. the periodic delta function, in the limit lim m → ∞ D m ( x ) = ∑ m = − ∞ ∞ e ± i ω m T = 2 π T ∑ k = − ∞ ∞ δ ( ω − 2 π k / T ) = 1 T ∑ k = − ∞ ∞ δ ( ξ − k / T ) , {\displaystyle \lim _{m\to \infty }D_{m}(x)=\sum _{m=-\infty }^{\infty }e^{\pm i\omega mT}={\frac {2\pi }{T}}\sum _{k=-\infty }^{\infty }\delta (\omega -2\pi k/T)={\frac {1}{T}}\sum _{k=-\infty }^{\infty }\delta (\xi -k/T)~,} with the angular frequency ω = 2 π ξ {\displaystyle \omega =2\pi \xi } .
This can be inferred from the autoconjugation property of the Dirichlet kernel under forward and inverse Fourier transform: F [ D n ( 2 π x ) ] ( ξ ) = F − 1 [ D n ( 2 π x ) ] ( ξ ) = ∫ − ∞ ∞ D n ( 2 π x ) e ± i 2 π ξ x d x = ∑ k = − n + n δ ( ξ − k ) ≡ comb n ( ξ ) {\displaystyle {\mathcal {F}}\left[D_{n}(2\pi x)\right](\xi )={\mathcal {F}}^{-1}\left[D_{n}(2\pi x)\right](\xi )=\int _{-\infty }^{\infty }D_{n}(2\pi x)e^{\pm i2\pi \xi x}\,dx=\sum _{k=-n}^{+n}\delta (\xi -k)\equiv \operatorname {comb} _{n}(\xi )} F [ comb n ] ( x ) = F − 1 [ comb n ] ( x ) = ∫ − ∞ ∞ comb n ( ξ ) e ± i 2 π ξ x d ξ = D n ( 2 π x ) , {\displaystyle {\mathcal {F}}\left[\operatorname {comb} _{n}\right](x)={\mathcal {F}}^{-1}\left[\operatorname {comb} _{n}\right](x)=\int _{-\infty }^{\infty }\operatorname {comb} _{n}(\xi )e^{\pm i2\pi \xi x}\,d\xi =D_{n}(2\pi x),} and comb n ( x ) {\displaystyle \operatorname {comb} _{n}(x)} goes to the Dirac comb Ш {\displaystyle \operatorname {\text{Ш}} } of period T = 1 {\displaystyle T=1} as n → ∞ {\displaystyle n\rightarrow \infty } , which remains invariant under Fourier transform: F [ Ш ] = Ш {\displaystyle {\mathcal {F}}[\operatorname {\text{Ш}} ]=\operatorname {\text{Ш}} } . Thus D n ( 2 π x ) {\displaystyle D_{n}(2\pi x)} must also have converged to Ш {\displaystyle \operatorname {\text{Ш}} } as n → ∞ {\displaystyle n\rightarrow \infty } . In a different vein, consider ∆(x) as the identity element for convolution on functions of period 2π. In other words, we have f ∗ ( Δ ) = f {\displaystyle f*(\Delta )=f} for every function f of period 2π. The Fourier series representation of this "function" is Δ ( x ) ∼ ∑ k = − ∞ ∞ e i k x = ( 1 + 2 ∑ k = 1 ∞ cos ( k x ) ) . {\displaystyle \Delta (x)\sim \sum _{k=-\infty }^{\infty }e^{ikx}=\left(1+2\sum _{k=1}^{\infty }\cos(kx)\right).} (This Fourier series converges to the function almost nowhere.) 
Therefore, the Dirichlet kernel, which is just the sequence of partial sums of this series, can be thought of as an approximate identity. Abstractly speaking it is not however an approximate identity of positive elements (hence the failures in pointwise convergence mentioned above). == Proof of the trigonometric identity == The trigonometric identity ∑ k = − n n e i k x = sin ( ( n + 1 / 2 ) x ) sin ( x / 2 ) {\displaystyle \sum _{k=-n}^{n}e^{ikx}={\frac {\sin((n+1/2)x)}{\sin(x/2)}}} displayed at the top of this article may be established as follows. First recall that the sum of a finite geometric series is ∑ k = 0 n a r k = a 1 − r n + 1 1 − r . {\displaystyle \sum _{k=0}^{n}ar^{k}=a{\frac {1-r^{n+1}}{1-r}}.} In particular, we have ∑ k = − n n r k = r − n ⋅ 1 − r 2 n + 1 1 − r . {\displaystyle \sum _{k=-n}^{n}r^{k}=r^{-n}\cdot {\frac {1-r^{2n+1}}{1-r}}.} Multiply both the numerator and the denominator by r − 1 / 2 {\displaystyle r^{-1/2}} , getting r − n − 1 / 2 r − 1 / 2 ⋅ 1 − r 2 n + 1 1 − r = r − n − 1 / 2 − r n + 1 / 2 r − 1 / 2 − r 1 / 2 . {\displaystyle {\frac {r^{-n-1/2}}{r^{-1/2}}}\cdot {\frac {1-r^{2n+1}}{1-r}}={\frac {r^{-n-1/2}-r^{n+1/2}}{r^{-1/2}-r^{1/2}}}.} In the case r = e i x {\displaystyle r=e^{ix}} we have ∑ k = − n n e i k x = e − ( n + 1 / 2 ) i x − e ( n + 1 / 2 ) i x e − i x / 2 − e i x / 2 = − 2 i sin ( ( n + 1 / 2 ) x ) − 2 i sin ( x / 2 ) = sin ( ( n + 1 / 2 ) x ) sin ( x / 2 ) {\displaystyle \sum _{k=-n}^{n}e^{ikx}={\frac {e^{-(n+1/2)ix}-e^{(n+1/2)ix}}{e^{-ix/2}-e^{ix/2}}}={\frac {-2i\sin((n+1/2)x)}{-2i\sin(x/2)}}={\frac {\sin((n+1/2)x)}{\sin(x/2)}}} as required. === Alternative proof of the trigonometric identity === Start with the series f ( x ) = 1 + 2 ∑ k = 1 n cos ( k x ) . 
{\displaystyle f(x)=1+2\sum _{k=1}^{n}\cos(kx).} Multiply both sides by sin ( x / 2 ) {\textstyle \sin(x/2)} and use the trigonometric identity cos ( a ) sin ( b ) = sin ( a + b ) − sin ( a − b ) 2 {\displaystyle \cos(a)\sin(b)={\frac {\sin(a+b)-\sin(a-b)}{2}}} to reduce the terms in the sum. sin ( x / 2 ) f ( x ) = sin ( x / 2 ) + ∑ k = 1 n ( sin ( ( k + 1 2 ) x ) − sin ( ( k − 1 2 ) x ) ) {\displaystyle \sin(x/2)f(x)=\sin(x/2)+\sum _{k=1}^{n}\left(\sin((k+{\tfrac {1}{2}})x)-\sin((k-{\tfrac {1}{2}})x)\right)} which telescopes down to the result. == Variant of identity == If the sum is only over non-negative integers (which may arise when computing a discrete Fourier transform that is not centered), then using similar techniques we can show the following identity: ∑ k = 0 N − 1 e i k x = e i ( N − 1 ) x / 2 sin ( N x / 2 ) sin ( x / 2 ) {\displaystyle \sum _{k=0}^{N-1}e^{ikx}=e^{i(N-1)x/2}{\frac {\sin(N\,x/2)}{\sin(x/2)}}} Another variant is D n ( x ) − 1 2 cos ( n x ) = sin ( n x ) 2 tan ( x 2 ) {\displaystyle D_{n}(x)-{\frac {1}{2}}\cos(nx)={\frac {\sin \left(nx\right)}{2\tan({\frac {x}{2}})}}} and this can be easily proved by using an identity sin ( α + β ) = sin ( α ) cos ( β ) + cos ( α ) sin ( β ) {\displaystyle \sin(\alpha +\beta )=\sin(\alpha )\cos(\beta )+\cos(\alpha )\sin(\beta )} . == See also == Fejér kernel == References == == Sources == Bruckner, Andrew M.; Bruckner, Judith B.; Thomson, Brian S. (1997). "15 Fourier Series §15.2 Dirichlet's Kernel". Real Analysis. Prentice-Hall. pp. 619–622. ISBN 0-13-458886-X. Podkorytov, A.N. (1988). "Asymptotic behavior of the Dirichlet kernel of Fourier sums with respect to a polygon". Journal of Soviet Mathematics. 42 (2): 1640–6. doi:10.1007/BF01665052. Levi, H. (1974). "A geometric construction of the Dirichlet kernel". Transactions of the New York Academy of Sciences. 36 (7 Series II): 640–3. doi:10.1111/j.2164-0947.1974.tb03023.x. 
"Dirichlet kernel", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Dirichlet kernel at PlanetMath
Wikipedia:Dirichlet series#0
In mathematics, a Dirichlet series is any series of the form ∑ n = 1 ∞ a n n s , {\displaystyle \sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}},} where s is complex, and a n {\displaystyle a_{n}} is a complex sequence. It is a special case of general Dirichlet series. Dirichlet series play a variety of important roles in analytic number theory. The most usually seen definition of the Riemann zeta function is a Dirichlet series, as are the Dirichlet L-functions. Specifically, the Riemann zeta function ζ(s) is the Dirichlet series of the constant unit function u(n), namely: ζ ( s ) = ∑ n = 1 ∞ 1 n s = ∑ n = 1 ∞ u ( n ) n s = D ( u , s ) , {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\sum _{n=1}^{\infty }{\frac {u(n)}{n^{s}}}=D(u,s),} where D(u, s) denotes the Dirichlet series of u(n). It is conjectured that the Selberg class of series obeys the generalized Riemann hypothesis. The series is named in honor of Peter Gustav Lejeune Dirichlet. == Combinatorial importance == Dirichlet series can be used as generating series for counting weighted sets of objects with respect to a weight which is combined multiplicatively when taking Cartesian products. Suppose that A is a set with a function w: A → N assigning a weight to each of the elements of A, and suppose additionally that the fibre over any natural number under that weight is a finite set. (We call such an arrangement (A,w) a weighted set.) Suppose additionally that an is the number of elements of A with weight n. Then we define the formal Dirichlet generating series for A with respect to w as follows: D w A ( s ) = ∑ a ∈ A 1 w ( a ) s = ∑ n = 1 ∞ a n n s {\displaystyle {\mathfrak {D}}_{w}^{A}(s)=\sum _{a\in A}{\frac {1}{w(a)^{s}}}=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}} Note that if A and B are disjoint subsets of some weighted set (U, w), then the Dirichlet series for their (disjoint) union is equal to the sum of their Dirichlet series: D w A ⊎ B ( s ) = D w A ( s ) + D w B ( s ) . 
{\displaystyle {\mathfrak {D}}_{w}^{A\uplus B}(s)={\mathfrak {D}}_{w}^{A}(s)+{\mathfrak {D}}_{w}^{B}(s).} Moreover, if (A, u) and (B, v) are two weighted sets, and we define a weight function w: A × B → N by w ( a , b ) = u ( a ) v ( b ) , {\displaystyle w(a,b)=u(a)v(b),} for all a in A and b in B, then we have the following decomposition for the Dirichlet series of the Cartesian product: D w A × B ( s ) = D u A ( s ) ⋅ D v B ( s ) . {\displaystyle {\mathfrak {D}}_{w}^{A\times B}(s)={\mathfrak {D}}_{u}^{A}(s)\cdot {\mathfrak {D}}_{v}^{B}(s).} This follows ultimately from the simple fact that n − s ⋅ m − s = ( n m ) − s . {\displaystyle n^{-s}\cdot m^{-s}=(nm)^{-s}.} == Examples == The most famous example of a Dirichlet series is ζ ( s ) = ∑ n = 1 ∞ 1 n s , {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}},} whose analytic continuation to C {\displaystyle \mathbb {C} } (apart from a simple pole at s = 1 {\displaystyle s=1} ) is the Riemann zeta function. Provided that f is real-valued at all natural numbers n, the respective real and imaginary parts of the Dirichlet series F have known formulas where we write s ≡ σ + i t {\displaystyle s\equiv \sigma +it} : ℜ [ F ( s ) ] = ∑ n ≥ 1 f ( n ) cos ( t log n ) n σ ℑ [ F ( s ) ] = − ∑ n ≥ 1 f ( n ) sin ( t log n ) n σ . 
{\displaystyle {\begin{aligned}\Re [F(s)]&=\sum _{n\geq 1}{\frac {~f(n)\,\cos(t\log n)~}{n^{\sigma }}}\\\Im [F(s)]&=-\sum _{n\geq 1}{\frac {~f(n)\,\sin(t\log n)~}{n^{\sigma }}}\,.\end{aligned}}} Treating these as formal Dirichlet series for the time being in order to be able to ignore matters of convergence, note that we have: ζ ( s ) = D id N ( s ) = ∏ p prime D id { p n : n ∈ N } ( s ) = ∏ p prime ∑ n ∈ N D id { p n } ( s ) = ∏ p prime ∑ n ∈ N 1 ( p n ) s = ∏ p prime ∑ n ∈ N ( 1 p s ) n = ∏ p prime 1 1 − p − s {\displaystyle {\begin{aligned}\zeta (s)&={\mathfrak {D}}_{\operatorname {id} }^{\mathbb {N} }(s)=\prod _{p{\text{ prime}}}{\mathfrak {D}}_{\operatorname {id} }^{\{p^{n}:n\in \mathbb {N} \}}(s)=\prod _{p{\text{ prime}}}\sum _{n\in \mathbb {N} }{\mathfrak {D}}_{\operatorname {id} }^{\{p^{n}\}}(s)\\&=\prod _{p{\text{ prime}}}\sum _{n\in \mathbb {N} }{\frac {1}{(p^{n})^{s}}}=\prod _{p{\text{ prime}}}\sum _{n\in \mathbb {N} }\left({\frac {1}{p^{s}}}\right)^{n}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}\end{aligned}}} as each natural number has a unique multiplicative decomposition into powers of primes. It is this bit of combinatorics which inspires the Euler product formula. Another is: 1 ζ ( s ) = ∑ n = 1 ∞ μ ( n ) n s {\displaystyle {\frac {1}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}} where μ(n) is the Möbius function. This and many of the following series may be obtained by applying Möbius inversion and Dirichlet convolution to known series. For example, given a Dirichlet character χ(n) one has 1 L ( χ , s ) = ∑ n = 1 ∞ μ ( n ) χ ( n ) n s {\displaystyle {\frac {1}{L(\chi ,s)}}=\sum _{n=1}^{\infty }{\frac {\mu (n)\chi (n)}{n^{s}}}} where L(χ, s) is a Dirichlet L-function. 
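As a numerical aside, the Euler product and the series for ζ(s) can be compared directly at s = 2, where both approach ζ(2) = π²/6. The helper names are illustrative, and primes are generated by plain trial division:

```python
import math

def primes_up_to(limit):
    """Primes by trial division; adequate at this small scale."""
    return [p for p in range(2, limit + 1)
            if all(p % q for q in range(2, math.isqrt(p) + 1))]

def zeta_partial_sum(s, terms):
    """Partial sum of the Dirichlet series for zeta(s)."""
    return sum(n ** (-s) for n in range(1, terms + 1))

def euler_product(s, limit):
    """Truncated Euler product over primes up to `limit`."""
    product = 1.0
    for p in primes_up_to(limit):
        product *= 1.0 / (1.0 - p ** (-s))
    return product
```

Each factor exceeds 1, so the truncated product increases toward ζ(s) as more primes are included, mirroring how unique factorization into prime powers assembles the full series.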
If the arithmetic function f has a Dirichlet inverse function f − 1 ( n ) {\displaystyle f^{-1}(n)} , i.e., if there exists an inverse function such that the Dirichlet convolution of f with its inverse yields the multiplicative identity ∑ d | n f ( d ) f − 1 ( n / d ) = δ n , 1 {\textstyle \sum _{d|n}f(d)f^{-1}(n/d)=\delta _{n,1}} , then the DGF of the inverse function is given by the reciprocal of F: ∑ n ≥ 1 f − 1 ( n ) n s = ( ∑ n ≥ 1 f ( n ) n s ) − 1 . {\displaystyle \sum _{n\geq 1}{\frac {f^{-1}(n)}{n^{s}}}=\left(\sum _{n\geq 1}{\frac {f(n)}{n^{s}}}\right)^{-1}.} Other identities include ζ ( s − 1 ) ζ ( s ) = ∑ n = 1 ∞ φ ( n ) n s {\displaystyle {\frac {\zeta (s-1)}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {\varphi (n)}{n^{s}}}} where φ ( n ) {\displaystyle \varphi (n)} is the totient function, ζ ( s − k ) ζ ( s ) = ∑ n = 1 ∞ J k ( n ) n s {\displaystyle {\frac {\zeta (s-k)}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {J_{k}(n)}{n^{s}}}} where Jk is the Jordan function, and ζ ( s ) ζ ( s − a ) = ∑ n = 1 ∞ σ a ( n ) n s ζ ( s ) ζ ( s − a ) ζ ( s − 2 a ) ζ ( 2 s − 2 a ) = ∑ n = 1 ∞ σ a ( n 2 ) n s ζ ( s ) ζ ( s − a ) ζ ( s − b ) ζ ( s − a − b ) ζ ( 2 s − a − b ) = ∑ n = 1 ∞ σ a ( n ) σ b ( n ) n s {\displaystyle {\begin{aligned}&\zeta (s)\zeta (s-a)=\sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)}{n^{s}}}\\[6pt]&{\frac {\zeta (s)\zeta (s-a)\zeta (s-2a)}{\zeta (2s-2a)}}=\sum _{n=1}^{\infty }{\frac {\sigma _{a}(n^{2})}{n^{s}}}\\[6pt]&{\frac {\zeta (s)\zeta (s-a)\zeta (s-b)\zeta (s-a-b)}{\zeta (2s-a-b)}}=\sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)\sigma _{b}(n)}{n^{s}}}\end{aligned}}} where σa(n) is the divisor function. By specialization to the divisor function d = σ0 we have ζ 2 ( s ) = ∑ n = 1 ∞ d ( n ) n s ζ 3 ( s ) ζ ( 2 s ) = ∑ n = 1 ∞ d ( n 2 ) n s ζ 4 ( s ) ζ ( 2 s ) = ∑ n = 1 ∞ d ( n ) 2 n s . 
{\displaystyle {\begin{aligned}\zeta ^{2}(s)&=\sum _{n=1}^{\infty }{\frac {d(n)}{n^{s}}}\\[6pt]{\frac {\zeta ^{3}(s)}{\zeta (2s)}}&=\sum _{n=1}^{\infty }{\frac {d(n^{2})}{n^{s}}}\\[6pt]{\frac {\zeta ^{4}(s)}{\zeta (2s)}}&=\sum _{n=1}^{\infty }{\frac {d(n)^{2}}{n^{s}}}.\end{aligned}}} The logarithm of the zeta function is given by log ζ ( s ) = ∑ n = 2 ∞ Λ ( n ) log ( n ) 1 n s , ℜ ( s ) > 1 {\displaystyle \log \zeta (s)=\sum _{n=2}^{\infty }{\frac {\Lambda (n)}{\log(n)}}{\frac {1}{n^{s}}},\qquad \Re (s)>1} where Λ(n) is the von Mangoldt function. Similarly, we have that − ζ ′ ( s ) = ∑ n = 2 ∞ log ( n ) n s , ℜ ( s ) > 1. {\displaystyle -\zeta '(s)=\sum _{n=2}^{\infty }{\frac {\log(n)}{n^{s}}},\qquad \Re (s)>1.} The logarithmic derivative of the zeta function is then ζ ′ ( s ) ζ ( s ) = − ∑ n = 1 ∞ Λ ( n ) n s . {\displaystyle {\frac {\zeta '(s)}{\zeta (s)}}=-\sum _{n=1}^{\infty }{\frac {\Lambda (n)}{n^{s}}}.} These last three are special cases of a more general relationship for derivatives of Dirichlet series, given below. Given the Liouville function λ(n), one has ζ ( 2 s ) ζ ( s ) = ∑ n = 1 ∞ λ ( n ) n s . {\displaystyle {\frac {\zeta (2s)}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {\lambda (n)}{n^{s}}}.} Yet another example involves Ramanujan's sum: σ 1 − s ( m ) ζ ( s ) = ∑ n = 1 ∞ c n ( m ) n s . {\displaystyle {\frac {\sigma _{1-s}(m)}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {c_{n}(m)}{n^{s}}}.} Another pair of examples involves the Möbius function and the prime omega function: ζ ( s ) ζ ( 2 s ) = ∑ n = 1 ∞ | μ ( n ) | n s ≡ ∑ n = 1 ∞ μ 2 ( n ) n s . {\displaystyle {\frac {\zeta (s)}{\zeta (2s)}}=\sum _{n=1}^{\infty }{\frac {|\mu (n)|}{n^{s}}}\equiv \sum _{n=1}^{\infty }{\frac {\mu ^{2}(n)}{n^{s}}}.} ζ 2 ( s ) ζ ( 2 s ) = ∑ n = 1 ∞ 2 ω ( n ) n s . 
{\displaystyle {\frac {\zeta ^{2}(s)}{\zeta (2s)}}=\sum _{n=1}^{\infty }{\frac {2^{\omega (n)}}{n^{s}}}.} We have that the Dirichlet series for the prime zeta function, which is the analog to the Riemann zeta function summed only over indices n which are prime, is given by a sum over the Moebius function and the logarithms of the zeta function: P ( s ) := ∑ p prime p − s = ∑ n ≥ 1 μ ( n ) n log ζ ( n s ) . {\displaystyle P(s):=\sum _{p{\text{ prime}}}p^{-s}=\sum _{n\geq 1}{\frac {\mu (n)}{n}}\log \zeta (ns).} A large tabular catalog listing of other examples of sums corresponding to known Dirichlet series representations is found here. Examples of Dirichlet series DGFs corresponding to additive (rather than multiplicative) f are given here for the prime omega functions ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} , which respectively count the number of distinct prime factors of n (with multiplicity or not). For example, the DGF of the first of these functions is expressed as the product of the Riemann zeta function and the prime zeta function for any complex s with ℜ ( s ) > 1 {\displaystyle \Re (s)>1} : ∑ n ≥ 1 ω ( n ) n s = ζ ( s ) ⋅ P ( s ) , ℜ ( s ) > 1. {\displaystyle \sum _{n\geq 1}{\frac {\omega (n)}{n^{s}}}=\zeta (s)\cdot P(s),\Re (s)>1.} If f is a multiplicative function such that its DGF F converges absolutely for all ℜ ( s ) > σ a , f {\displaystyle \Re (s)>\sigma _{a,f}} , and if p is any prime number, we have that ( 1 + f ( p ) p − s ) × ∑ n ≥ 1 f ( n ) μ ( n ) n s = ( 1 − f ( p ) p − s ) × ∑ n ≥ 1 f ( n ) μ ( n ) μ ( gcd ( p , n ) ) n s , ∀ ℜ ( s ) > σ a , f , {\displaystyle \left(1+f(p)p^{-s}\right)\times \sum _{n\geq 1}{\frac {f(n)\mu (n)}{n^{s}}}=\left(1-f(p)p^{-s}\right)\times \sum _{n\geq 1}{\frac {f(n)\mu (n)\mu (\gcd(p,n))}{n^{s}}},\forall \Re (s)>\sigma _{a,f},} where μ ( n ) {\displaystyle \mu (n)} is the Moebius function. 
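The Möbius-sum formula for P(s) converges quickly and can be spot-checked at s = 2, where P(2) = Σ_p p⁻² ≈ 0.452247. In this sketch ζ is evaluated by plain partial sums, so all values are approximate, and the function names are illustrative:

```python
import math

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # n has a squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result          # one remaining prime factor
    return result

def zeta_partial_sum(s, terms=100000):
    """Plain partial sum of the zeta series, adequate for s >= 2."""
    return sum(k ** (-s) for k in range(1, terms + 1))

def prime_zeta(s, terms=10):
    """P(s) = sum over n of mu(n)/n * log zeta(ns), truncated; the
    terms decay rapidly, so few are needed."""
    return sum(mobius(n) / n * math.log(zeta_partial_sum(n * s))
               for n in range(1, terms + 1) if mobius(n) != 0)
```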
Another notable Dirichlet series identity generates the summatory function of some arithmetic function f evaluated at GCD inputs given by ∑ n ≥ 1 ( ∑ k = 1 n f ( gcd ( k , n ) ) ) 1 n s = ζ ( s − 1 ) ζ ( s ) × ∑ n ≥ 1 f ( n ) n s , ∀ ℜ ( s ) > σ a , f + 1. {\displaystyle \sum _{n\geq 1}\left(\sum _{k=1}^{n}f(\gcd(k,n))\right){\frac {1}{n^{s}}}={\frac {\zeta (s-1)}{\zeta (s)}}\times \sum _{n\geq 1}{\frac {f(n)}{n^{s}}},\forall \Re (s)>\sigma _{a,f}+1.} We also have a formula between the DGFs of two arithmetic functions f and g related by Möbius inversion. In particular, if g ( n ) = ( f ∗ 1 ) ( n ) {\displaystyle g(n)=(f\ast 1)(n)} , then by Möbius inversion we have that f ( n ) = ( g ∗ μ ) ( n ) {\displaystyle f(n)=(g\ast \mu )(n)} . Hence, if F and G are the two respective DGFs of f and g, then we can relate these two DGFs by the formula: F ( s ) = G ( s ) ζ ( s ) , ℜ ( s ) > max ( σ a , f , σ a , g ) . {\displaystyle F(s)={\frac {G(s)}{\zeta (s)}},\Re (s)>\max(\sigma _{a,f},\sigma _{a,g}).} There is a known formula for the exponential of a Dirichlet series. If F ( s ) = exp ( G ( s ) ) {\displaystyle F(s)=\exp(G(s))} is the DGF of some arithmetic function f with f ( 1 ) ≠ 0 {\displaystyle f(1)\neq 0} , then G is expressed by the sum G ( s ) = log ( f ( 1 ) ) + ∑ n ≥ 2 ( f ′ ∗ f − 1 ) ( n ) log ( n ) ⋅ n s , {\displaystyle G(s)=\log(f(1))+\sum _{n\geq 2}{\frac {(f^{\prime }\ast f^{-1})(n)}{\log(n)\cdot n^{s}}},} where f − 1 ( n ) {\displaystyle f^{-1}(n)} is the Dirichlet inverse of f and where the arithmetic derivative of f is given by the formula f ′ ( n ) = log ( n ) ⋅ f ( n ) {\displaystyle f^{\prime }(n)=\log(n)\cdot f(n)} for all natural numbers n ≥ 2 {\displaystyle n\geq 2} . == Analytic properties == Given a sequence { a n } n ∈ N {\displaystyle \{a_{n}\}_{n\in \mathbb {N} }} of complex numbers we try to consider the value of f ( s ) = ∑ n = 1 ∞ a n n s {\displaystyle f(s)=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}} as a function of the complex variable s.
In order for this to make sense, we need to consider the convergence properties of the above infinite series: If { a n } n ∈ N {\displaystyle \{a_{n}\}_{n\in \mathbb {N} }} is a bounded sequence of complex numbers, then the corresponding Dirichlet series f converges absolutely on the open half-plane Re(s) > 1. In general, if an = O(nk), the series converges absolutely in the half plane Re(s) > k + 1. If the set of sums a n + a n + 1 + ⋯ + a n + k {\displaystyle a_{n}+a_{n+1}+\cdots +a_{n+k}} is bounded for n and k ≥ 0, then the above infinite series converges on the open half-plane of s such that Re(s) > 0. In both cases f is an analytic function on the corresponding open half plane. In general σ {\displaystyle \sigma } is the abscissa of convergence of a Dirichlet series if it converges for ℜ ( s ) > σ {\displaystyle \Re (s)>\sigma } and diverges for ℜ ( s ) < σ . {\displaystyle \Re (s)<\sigma .} This is the analogue for Dirichlet series of the radius of convergence for power series. The Dirichlet series case is more complicated, though: absolute convergence and uniform convergence may occur in distinct half-planes. In many cases, the analytic function associated with a Dirichlet series has an analytic extension to a larger domain. === Abscissa of convergence === Suppose ∑ n = 1 ∞ a n n s 0 {\displaystyle \sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s_{0}}}}} converges for some s 0 ∈ C , ℜ ( s 0 ) > 0. {\displaystyle s_{0}\in \mathbb {C} ,\Re (s_{0})>0.} Proposition 1. A ( N ) := ∑ n = 1 N a n = o ( N s 0 ) . {\displaystyle A(N):=\sum _{n=1}^{N}a_{n}=o(N^{s_{0}}).} Proof. Note that: ( n + 1 ) s − n s = ∫ n n + 1 s x s − 1 d x = O ( n s − 1 ) . {\displaystyle (n+1)^{s}-n^{s}=\int _{n}^{n+1}sx^{s-1}\,dx={\mathcal {O}}(n^{s-1}).} and define B ( N ) = ∑ n = 1 N a n n s 0 = ℓ + o ( 1 ) {\displaystyle B(N)=\sum _{n=1}^{N}{\frac {a_{n}}{n^{s_{0}}}}=\ell +o(1)} where ℓ = ∑ n = 1 ∞ a n n s 0 . 
{\displaystyle \ell =\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s_{0}}}}.} By summation by parts we have A ( N ) = ∑ n = 1 N a n n s 0 n s 0 = B ( N ) N s 0 + ∑ n = 1 N − 1 B ( n ) ( n s 0 − ( n + 1 ) s 0 ) = ( B ( N ) − ℓ ) N s 0 + ∑ n = 1 N − 1 ( B ( n ) − ℓ ) ( n s 0 − ( n + 1 ) s 0 ) = o ( N s 0 ) + ∑ n = 1 N − 1 o ( n s 0 − 1 ) = o ( N s 0 ) {\displaystyle {\begin{aligned}A(N)&=\sum _{n=1}^{N}{\frac {a_{n}}{n^{s_{0}}}}n^{s_{0}}\\&=B(N)N^{s_{0}}+\sum _{n=1}^{N-1}B(n)\left(n^{s_{0}}-(n+1)^{s_{0}}\right)\\&=(B(N)-\ell )N^{s_{0}}+\sum _{n=1}^{N-1}(B(n)-\ell )\left(n^{s_{0}}-(n+1)^{s_{0}}\right)\\&=o(N^{s_{0}})+\sum _{n=1}^{N-1}{\mathcal {o}}(n^{s_{0}-1})\\&=o(N^{s_{0}})\end{aligned}}} Proposition 2. Define L = { ∑ n = 1 ∞ a n If convergent 0 otherwise {\displaystyle L={\begin{cases}\sum _{n=1}^{\infty }a_{n}&{\text{If convergent}}\\0&{\text{otherwise}}\end{cases}}} Then: σ = lim sup N → ∞ ln | A ( N ) − L | ln N = inf σ ′ { A ( N ) − L = O ( N σ ′ ) } {\displaystyle \sigma =\limsup _{N\to \infty }{\frac {\ln |A(N)-L|}{\ln N}}=\inf _{\sigma '}\left\{A(N)-L={\mathcal {O}}(N^{\sigma '})\right\}} is the abscissa of convergence of the Dirichlet series. Proof. From the definition ∀ ε > 0 A ( N ) − L = O ( N σ + ε ) {\displaystyle \forall \varepsilon >0\qquad A(N)-L={\mathcal {O}}(N^{\sigma +\varepsilon })} so that ∑ n = 1 N a n n s = A ( N ) N − s + ∑ n = 1 N − 1 A ( n ) ( n − s − ( n + 1 ) − s ) = ( A ( N ) − L ) N − s + ∑ n = 1 N − 1 ( A ( n ) − L ) ( n − s − ( n + 1 ) − s ) = O ( N σ + ε − s ) + ∑ n = 1 N − 1 O ( n σ + ε − s − 1 ) {\displaystyle {\begin{aligned}\sum _{n=1}^{N}{\frac {a_{n}}{n^{s}}}&=A(N)N^{-s}+\sum _{n=1}^{N-1}A(n)(n^{-s}-(n+1)^{-s})\\&=(A(N)-L)N^{-s}+\sum _{n=1}^{N-1}(A(n)-L)(n^{-s}-(n+1)^{-s})\\&={\mathcal {O}}(N^{\sigma +\varepsilon -s})+\sum _{n=1}^{N-1}{\mathcal {O}}(n^{\sigma +\varepsilon -s-1})\end{aligned}}} which converges as N → ∞ {\displaystyle N\to \infty } whenever ℜ ( s ) > σ . 
{\displaystyle \Re (s)>\sigma .} Hence, for every s {\displaystyle s} such that ∑ n = 1 ∞ a n n − s {\textstyle \sum _{n=1}^{\infty }a_{n}n^{-s}} diverges, we have σ ≥ ℜ ( s ) , {\displaystyle \sigma \geq \Re (s),} and this finishes the proof. Proposition 3. If ∑ n = 1 ∞ a n {\displaystyle \sum _{n=1}^{\infty }a_{n}} converges then f ( σ + i t ) = o ( 1 σ ) {\displaystyle f(\sigma +it)=o\left({\tfrac {1}{\sigma }}\right)} as σ → 0 + {\displaystyle \sigma \to 0^{+}} and where it is meromorphic ( f ( s ) {\displaystyle f(s)} has no poles on ℜ ( s ) = 0 {\displaystyle \Re (s)=0} ). Proof. Note that n − s − ( n + 1 ) − s = s n − s − 1 + O ( n − s − 2 ) {\displaystyle n^{-s}-(n+1)^{-s}=sn^{-s-1}+O(n^{-s-2})} and A ( N ) − f ( 0 ) → 0 {\displaystyle A(N)-f(0)\to 0} we have by summation by parts, for ℜ ( s ) > 0 {\displaystyle \Re (s)>0} f ( s ) = lim N → ∞ ∑ n = 1 N a n n s = lim N → ∞ A ( N ) N − s + ∑ n = 1 N − 1 A ( n ) ( n − s − ( n + 1 ) − s ) = s ∑ n = 1 ∞ A ( n ) n − s − 1 + O ( ∑ n = 1 ∞ A ( n ) n − s − 2 ) ⏟ = O ( 1 ) {\displaystyle {\begin{aligned}f(s)&=\lim _{N\to \infty }\sum _{n=1}^{N}{\frac {a_{n}}{n^{s}}}\\&=\lim _{N\to \infty }A(N)N^{-s}+\sum _{n=1}^{N-1}A(n)(n^{-s}-(n+1)^{-s})\\&=s\sum _{n=1}^{\infty }A(n)n^{-s-1}+\underbrace {{\mathcal {O}}\left(\sum _{n=1}^{\infty }A(n)n^{-s-2}\right)} _{={\mathcal {O}}(1)}\end{aligned}}} Now find N such that for n > N, | A ( n ) − f ( 0 ) | < ε {\displaystyle |A(n)-f(0)|<\varepsilon } s ∑ n = 1 ∞ A ( n ) n − s − 1 = s f ( 0 ) ζ ( s + 1 ) + s ∑ n = 1 N ( A ( n ) − f ( 0 ) ) n − s − 1 ⏟ = O ( 1 ) + s ∑ n = N + 1 ∞ ( A ( n ) − f ( 0 ) ) n − s − 1 ⏟ < ε | s | ∫ N ∞ x − ℜ ( s ) − 1 d x {\displaystyle s\sum _{n=1}^{\infty }A(n)n^{-s-1}=\underbrace {sf(0)\zeta (s+1)+s\sum _{n=1}^{N}(A(n)-f(0))n^{-s-1}} _{={\mathcal {O}}(1)}+\underbrace {s\sum _{n=N+1}^{\infty }(A(n)-f(0))n^{-s-1}} _{<\varepsilon |s|\int _{N}^{\infty }x^{-\Re (s)-1}\,dx}} and hence, for every ε > 0 {\displaystyle \varepsilon >0} there is a C {\displaystyle C} 
such that for σ > 0 {\displaystyle \sigma >0} : | f ( σ + i t ) | < C + ε | σ + i t | 1 σ . {\displaystyle |f(\sigma +it)|<C+\varepsilon |\sigma +it|{\frac {1}{\sigma }}.} == Formal Dirichlet series == A formal Dirichlet series over a ring R is associated to a function a from the positive integers to R D ( a , s ) = ∑ n = 1 ∞ a ( n ) n − s {\displaystyle D(a,s)=\sum _{n=1}^{\infty }a(n)n^{-s}\ } with addition and multiplication defined by D ( a , s ) + D ( b , s ) = ∑ n = 1 ∞ ( a + b ) ( n ) n − s {\displaystyle D(a,s)+D(b,s)=\sum _{n=1}^{\infty }(a+b)(n)n^{-s}\ } D ( a , s ) ⋅ D ( b , s ) = ∑ n = 1 ∞ ( a ∗ b ) ( n ) n − s {\displaystyle D(a,s)\cdot D(b,s)=\sum _{n=1}^{\infty }(a*b)(n)n^{-s}\ } where ( a + b ) ( n ) = a ( n ) + b ( n ) {\displaystyle (a+b)(n)=a(n)+b(n)\ } is the pointwise sum and ( a ∗ b ) ( n ) = ∑ k ∣ n a ( k ) b ( n / k ) {\displaystyle (a*b)(n)=\sum _{k\mid n}a(k)b(n/k)\ } is the Dirichlet convolution of a and b. The formal Dirichlet series form a ring Ω, indeed an R-algebra, with the zero function as additive zero element and the function δ defined by δ(1) = 1, δ(n) = 0 for n > 1 as multiplicative identity. An element of this ring is invertible if a(1) is invertible in R. If R is commutative, so is Ω; if R is an integral domain, so is Ω. The non-zero multiplicative functions form a subgroup of the group of units of Ω. The ring of formal Dirichlet series over C is isomorphic to a ring of formal power series in countably many variables. == Derivatives == Given F ( s ) = ∑ n = 1 ∞ f ( n ) n s {\displaystyle F(s)=\sum _{n=1}^{\infty }{\frac {f(n)}{n^{s}}}} it is possible to show that F ′ ( s ) = − ∑ n = 1 ∞ f ( n ) log ( n ) n s {\displaystyle F'(s)=-\sum _{n=1}^{\infty }{\frac {f(n)\log(n)}{n^{s}}}} assuming the right hand side converges. 
For a completely multiplicative function ƒ(n), if the series converges for Re(s) > σ0, then F ′ ( s ) F ( s ) = − ∑ n = 1 ∞ f ( n ) Λ ( n ) n s {\displaystyle {\frac {F^{\prime }(s)}{F(s)}}=-\sum _{n=1}^{\infty }{\frac {f(n)\Lambda (n)}{n^{s}}}} converges for Re(s) > σ0. Here, Λ(n) is the von Mangoldt function. == Products == Suppose F ( s ) = ∑ n = 1 ∞ f ( n ) n − s {\displaystyle F(s)=\sum _{n=1}^{\infty }f(n)n^{-s}} and G ( s ) = ∑ n = 1 ∞ g ( n ) n − s . {\displaystyle G(s)=\sum _{n=1}^{\infty }g(n)n^{-s}.} If both F(s) and G(s) are absolutely convergent for s > a and s > b, then we have 1 2 T ∫ − T T F ( a + i t ) G ( b − i t ) d t = ∑ n = 1 ∞ f ( n ) g ( n ) n − a − b as T ∼ ∞ . {\displaystyle {\frac {1}{2T}}\int _{-T}^{T}\,F(a+it)G(b-it)\,dt=\sum _{n=1}^{\infty }f(n)g(n)n^{-a-b}{\text{ as }}T\sim \infty .} If a = b and ƒ(n) = g(n) we have 1 2 T ∫ − T T | F ( a + i t ) | 2 d t = ∑ n = 1 ∞ [ f ( n ) ] 2 n − 2 a as T ∼ ∞ . {\displaystyle {\frac {1}{2T}}\int _{-T}^{T}|F(a+it)|^{2}\,dt=\sum _{n=1}^{\infty }[f(n)]^{2}n^{-2a}{\text{ as }}T\sim \infty .} == Coefficient inversion (integral formula) == For all positive integers x ≥ 1 {\displaystyle x\geq 1} , the function f at x, f ( x ) {\displaystyle f(x)} , can be recovered from the Dirichlet generating function (DGF) F of f (or the Dirichlet series over f) using the following integral formula whenever σ > σ a , f {\displaystyle \sigma >\sigma _{a,f}} , the abscissa of absolute convergence of the DGF F: f ( x ) = lim T → ∞ 1 2 T ∫ − T T x σ + i t F ( σ + i t ) d t . {\displaystyle f(x)=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}x^{\sigma +it}F(\sigma +it)dt.} It is also possible to invert the Mellin transform of the summatory function of f that defines the DGF F of f to obtain the coefficients of the Dirichlet series (see section below). In this case, we arrive at a complex contour integral formula related to Perron's theorem.
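The mean-value inversion formula can be tested numerically for a truncated DGF. In this Python sketch (the parameters σ, T and the step size are illustrative choices, not canonical), F is the Dirichlet polynomial with f(n) = 1 for n ≤ 50, and the formula recovers f(2) = 1 up to an O(1/T) oscillatory error:

```python
import cmath

# Recover f(2) = 1 from the DGF of the Dirichlet polynomial
# F(s) = sum_{n<=50} n^{-s} via the mean-value inversion formula.
sigma, T, dt, x = 2.0, 400.0, 0.1, 2
terms = [(n ** -sigma, cmath.log(n)) for n in range(1, 51)]

def F(t):
    # F(sigma + it) for the coefficients f(n) = 1, n <= 50
    return sum(a * cmath.exp(-1j * t * ln) for a, ln in terms)

steps = int(2 * T / dt)
acc = 0j
for k in range(steps):
    t = -T + (k + 0.5) * dt            # midpoint quadrature rule
    acc += x ** complex(sigma, t) * F(t) * dt
approx = (acc / (2 * T)).real
print(approx)  # close to 1.0 = f(2)
```

Each term n ≠ x contributes (x/n)^σ · sin(T log(x/n)) / (T log(x/n)), which vanishes as T grows, while the n = x term contributes exactly f(x).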
Practically speaking, the rates of convergence of the above formula as a function of T are variable, and if the Dirichlet series F is sensitive to sign changes as a slowly converging series, it may require very large T to approximate the coefficients of F using this formula without taking the formal limit. Another variant of the previous formula stated in Apostol's book provides an integral formula for an alternate sum in the following form for c , x > 0 {\displaystyle c,x>0} and any real ℜ ( s ) ≡ σ > σ a , f − c {\displaystyle \Re (s)\equiv \sigma >\sigma _{a,f}-c} where we denote ℜ ( s ) := σ {\displaystyle \Re (s):=\sigma } : ∑ n ≤ x ′ f ( n ) n s = 1 2 π i ∫ c − i ∞ c + i ∞ D f ( s + z ) x z z d z . {\displaystyle {\sum _{n\leq x}}^{\prime }{\frac {f(n)}{n^{s}}}={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }D_{f}(s+z){\frac {x^{z}}{z}}dz.} == Integral and series transformations == The inverse Mellin transform of a Dirichlet series, divided by s, is given by Perron's formula. Additionally, if F ( z ) := ∑ n ≥ 0 f n z n {\textstyle F(z):=\sum _{n\geq 0}f_{n}z^{n}} is the (formal) ordinary generating function of the sequence of { f n } n ≥ 0 {\displaystyle \{f_{n}\}_{n\geq 0}} , then an integral representation for the Dirichlet series of the generating function sequence, { f n z n } n ≥ 0 {\displaystyle \{f_{n}z^{n}\}_{n\geq 0}} , is given by ∑ n ≥ 0 f n z n ( n + 1 ) s = ( − 1 ) s − 1 ( s − 1 ) ! ∫ 0 1 log s − 1 ( t ) F ( t z ) d t , s ≥ 1. {\displaystyle \sum _{n\geq 0}{\frac {f_{n}z^{n}}{(n+1)^{s}}}={\frac {(-1)^{s-1}}{(s-1)!}}\int _{0}^{1}\log ^{s-1}(t)F(tz)\,dt,\ s\geq 1.} Another class of related derivative and series-based generating function transformations on the ordinary generating function of a sequence which effectively produces the left-hand-side expansion in the previous equation are respectively defined in. 
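The integral transform above can be checked for a concrete case. The Python sketch below (the parameters are assumed for illustration, not from the article) takes f_n = 1, so F(z) = 1/(1 − z), with z = 1/2 and s = 2, and compares the series side against a midpoint-rule evaluation of the integral:

```python
import math

# Check of the integral transform for f_n = 1 (so F(z) = 1/(1 - z)),
# with z = 1/2 and s = 2; the grid size is an arbitrary illustrative choice.
z, s = 0.5, 2
lhs = sum(z ** n / (n + 1) ** s for n in range(200))  # Dirichlet-type series

def F(w):
    return 1.0 / (1.0 - w)  # ordinary generating function of f_n = 1

N = 20_000
rhs = 0.0
for k in range(N):
    t = (k + 0.5) / N  # midpoint rule avoids the log singularity at t = 0
    rhs += math.log(t) ** (s - 1) * F(t * z) / N
rhs *= (-1) ** (s - 1) / math.factorial(s - 1)
print(lhs, rhs)  # both approximately 2*Li_2(1/2) = 1.1645...
```

The identity follows term by term from ∫₀¹ tⁿ log(t) dt = −1/(n+1)², and the common value here is 2 Li₂(1/2).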
== Relation to power series == The sequence an generated by a Dirichlet series generating function corresponding to: ζ ( s ) m = ∑ n = 1 ∞ a n n s {\displaystyle \zeta (s)^{m}=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}} where ζ(s) is the Riemann zeta function, has the ordinary generating function: ∑ n = 1 ∞ a n x n = x + ( m 1 ) ∑ a = 2 ∞ x a + ( m 2 ) ∑ a = 2 ∞ ∑ b = 2 ∞ x a b + ( m 3 ) ∑ a = 2 ∞ ∑ b = 2 ∞ ∑ c = 2 ∞ x a b c + ( m 4 ) ∑ a = 2 ∞ ∑ b = 2 ∞ ∑ c = 2 ∞ ∑ d = 2 ∞ x a b c d + ⋯ {\displaystyle \sum _{n=1}^{\infty }a_{n}x^{n}=x+{m \choose 1}\sum _{a=2}^{\infty }x^{a}+{m \choose 2}\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }x^{ab}+{m \choose 3}\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }\sum _{c=2}^{\infty }x^{abc}+{m \choose 4}\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }\sum _{c=2}^{\infty }\sum _{d=2}^{\infty }x^{abcd}+\cdots } == Relation to the summatory function of an arithmetic function via Mellin transforms == If f is an arithmetic function with corresponding DGF F, and the summatory function of f is defined by S f ( x ) := { ∑ n ≤ x f ( n ) , x ≥ 1 ; 0 , 0 < x < 1 , {\displaystyle S_{f}(x):={\begin{cases}\sum _{n\leq x}f(n),&x\geq 1;\\0,&0<x<1,\end{cases}}} then we can express F by the Mellin transform of the summatory function at − s {\displaystyle -s} . Namely, we have that F ( s ) = s ⋅ ∫ 1 ∞ S f ( x ) x s + 1 d x , ℜ ( s ) > σ a , f . {\displaystyle F(s)=s\cdot \int _{1}^{\infty }{\frac {S_{f}(x)}{x^{s+1}}}dx,\Re (s)>\sigma _{a,f}.} For σ := ℜ ( s ) > 0 {\displaystyle \sigma :=\Re (s)>0} and any natural numbers N ≥ 1 {\displaystyle N\geq 1} , we also have the approximation to the DGF F of f given by F ( s ) = ∑ n ≤ N f ( n ) n − s − S f ( N ) N s + s ⋅ ∫ N ∞ S f ( y ) y s + 1 d y . {\displaystyle F(s)=\sum _{n\leq N}f(n)n^{-s}-{\frac {S_{f}(N)}{N^{s}}}+s\cdot \int _{N}^{\infty }{\frac {S_{f}(y)}{y^{s+1}}}dy.} == See also == General Dirichlet series Zeta function regularization Euler product Dirichlet convolution == References == Apostol, Tom M. 
(1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001 Hardy, G.H.; Riesz, Marcel (1915). The general theory of Dirichlet's series. Cambridge Tracts in Mathematics. Vol. 18. Cambridge University Press. Reprinted by Cornell University Library Digital Collections (Historical Math Monographs). Gould, Henry W.; Shonhiwa, Temba (2008). "A catalogue of interesting Dirichlet series". Miss. J. Math. Sci. 20 (1). Archived from the original on 2011-10-02. Mathar, Richard J. (2011). "Survey of Dirichlet series of multiplicative arithmetic functions". arXiv:1106.4038 [math.NT]. Tenenbaum, Gérald (1995). Introduction to Analytic and Probabilistic Number Theory. Cambridge Studies in Advanced Mathematics. Vol. 46. Cambridge University Press. ISBN 0-521-41261-7. Zbl 0831.11001. "Dirichlet series". PlanetMath.
|
Wikipedia:Dirichlet–Jordan test#0
|
In mathematics, the Dirichlet–Jordan test gives sufficient conditions for a complex-valued, periodic function f {\displaystyle f} to be equal to the sum of its Fourier series at a point of continuity. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the one-sided limits at the discontinuity). It is one of many conditions for the convergence of Fourier series. The original test was established by Peter Gustav Lejeune Dirichlet in 1829, for piecewise monotone functions (functions with a finite number of sections per period each of which is monotonic). It was extended in the late 19th century by Camille Jordan to functions of bounded variation in each period (any function of bounded variation is the difference of two monotonically increasing functions). == Dirichlet–Jordan test for Fourier series == Let f ( x ) {\displaystyle f(x)} be a complex-valued integrable function on the interval [ − π , π ] {\displaystyle [-\pi ,\pi ]} and let S n f ( x ) {\displaystyle S_{n}f(x)} denote the partial sums of its Fourier series, given by S n f ( x ) = ∑ k = − n n c k e i k x , {\displaystyle S_{n}f(x)=\sum _{k=-n}^{n}c_{k}e^{ikx},} with Fourier coefficients c k {\displaystyle c_{k}} defined as c k = 1 2 π ∫ − π π f ( x ) e − i k x d x . {\displaystyle c_{k}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(x)e^{-ikx}\,dx.} The Dirichlet–Jordan test states that if f {\displaystyle f} is of bounded variation, then for each x ∈ [ − π , π ] {\displaystyle x\in [-\pi ,\pi ]} the limit of S n f ( x ) {\displaystyle S_{n}f(x)} exists and is equal to lim n → ∞ S n f ( x ) = lim ε → 0 f ( x + ε ) + f ( x − ε ) 2 .
{\displaystyle \lim _{n\to \infty }S_{n}f(x)=\lim _{\varepsilon \to 0}{\frac {f(x+\varepsilon )+f(x-\varepsilon )}{2}}.} Alternatively, Jordan's test states that if f ∈ L 1 {\displaystyle f\in L^{1}} is of bounded variation in a neighborhood of x {\displaystyle x} , then the limit of S n f ( x ) {\displaystyle S_{n}f(x)} exists and converges in a similar manner. If, in addition, f {\displaystyle f} is continuous at x {\displaystyle x} , then lim n → ∞ S n f ( x ) = f ( x ) . {\displaystyle \lim _{n\to \infty }S_{n}f(x)=f(x).} Moreover, if f {\displaystyle f} is continuous at every point in [ − π , π ] {\displaystyle [-\pi ,\pi ]} , then the convergence is uniform rather than just pointwise. The analogous statement holds irrespective of the choice of period of f {\displaystyle f} , or which version of the Fourier series is chosen. == Jordan test for Fourier integrals == For the Fourier transform on the real line, there is a version of the test as well. Suppose that f ( x ) {\displaystyle f(x)} is in L 1 ( − ∞ , ∞ ) {\displaystyle L^{1}(-\infty ,\infty )} and of bounded variation in a neighborhood of the point x {\displaystyle x} . Then 1 π lim M → ∞ ∫ 0 M d u ∫ − ∞ ∞ f ( t ) cos u ( x − t ) d t = lim ε → 0 f ( x + ε ) + f ( x − ε ) 2 . {\displaystyle {\frac {1}{\pi }}\lim _{M\to \infty }\int _{0}^{M}du\int _{-\infty }^{\infty }f(t)\cos u(x-t)\,dt=\lim _{\varepsilon \to 0}{\frac {f(x+\varepsilon )+f(x-\varepsilon )}{2}}.} If f {\displaystyle f} is continuous in an open interval, then the integral on the left-hand side converges uniformly in the interval, and the limit on the right-hand side is f ( x ) {\displaystyle f(x)} . This version of the test (although not satisfying modern demands for rigor) is historically prior to Dirichlet, being due to Joseph Fourier. 
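A standard illustration of the test is the square wave f(x) = sign(x) on [−π, π], a function of bounded variation with a jump at 0. The Python sketch below (illustrative, not from the article) evaluates the partial sums from the sine coefficients b_k = 2(1 − (−1)^k)/(πk) and exhibits convergence to the midpoint 0 at the jump and to f(x) at a point of continuity:

```python
import math

# Partial Fourier sums of the square wave f(x) = sign(x) on [-pi, pi],
# a bounded-variation function with a jump of size 2 at x = 0.
# Sine coefficients: b_k = 2*(1 - (-1)^k)/(pi*k), i.e. 4/(pi*k) for odd k.
def S(n, x):
    return sum(2 * (1 - (-1) ** k) / (math.pi * k) * math.sin(k * x)
               for k in range(1, n + 1))

print(S(2001, 0.0))          # 0: the midpoint of the jump between -1 and 1
print(S(2001, math.pi / 2))  # close to 1 = f(pi/2), a point of continuity
```

At x = π/2 the partial sums reduce to the alternating Leibniz series 4/π · (1 − 1/3 + 1/5 − …), so the error after n terms is of order 1/n.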
== Dirichlet conditions in signal processing == In signal processing, the test is often retained in the original form due to Dirichlet: a piecewise monotone bounded periodic function f {\displaystyle f} (having a finite number of monotonic intervals per period) has a convergent Fourier series whose value at each point is the arithmetic mean of the left and right limits of the function. The condition of piecewise monotonicity stipulates having only finitely many local extrema per period, which implies f {\displaystyle f} is of bounded variation (though the reverse is not true). (Dirichlet required in addition that the function have only finitely many discontinuities, but this constraint is unnecessarily stringent.) Any signal that can be physically produced in a laboratory satisfies these conditions. As in the pointwise case of the Jordan test, the condition of boundedness can be relaxed if the function is assumed to be absolutely integrable (i.e., L 1 {\displaystyle L^{1}} ) over a period, provided it satisfies the other conditions of the test in a neighborhood of the point x {\displaystyle x} where the limit is taken. == See also == Dini test == Notes == == References == Edwards, R. E. (1979). Fourier Series. Vol. 64. New York, NY: Springer New York. doi:10.1007/978-1-4612-6208-4. ISBN 978-1-4612-6210-7. Lanczos, Cornelius (2016-09-12). Discourse on Fourier Series. Philadelphia, PA: Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611974522. ISBN 978-1-61197-451-5. Retrieved 2024-12-15. Lion, Georges A. (1986). "A Simple Proof of the Dirichlet-Jordan Convergence Test". The American Mathematical Monthly. 93 (4): 281–282. doi:10.1080/00029890.1986.11971805. ISSN 0002-9890. Khare, Kedar; Butola, Mansi; Rajora, Sunaina (2023). Fourier Optics and Computational Imaging. Cham: Springer International Publishing. doi:10.1007/978-3-031-18353-9. ISBN 978-3-031-18352-2. Proakis, John G.; Manolakis, Dimitris G. (1996). 
Digital Signal Processing: Principles, Algorithms, and Applications (3rd ed.). Prentice Hall. ISBN 978-0-13-373762-2. Zygmund, A.; Fefferman, Robert (2003-02-06). Trigonometric Series. Cambridge University Press. doi:10.1017/cbo9781316036587. ISBN 978-0-521-89053-3. == External links == "Dirichlet conditions". PlanetMath.
|
Wikipedia:Dirk Kroese#0
|
Dirk Pieter Kroese (born 1963) is a Dutch-Australian mathematician and statistician, and Professor at the University of Queensland. He is known for several contributions to applied probability, kernel density estimation, Monte Carlo methods and rare-event simulation. He is, with Reuven Rubinstein, a pioneer of the Cross-Entropy (CE) method. == Biography == Born in Wapenveld (municipality of Heerde), Dirk Kroese received his MSc (Netherlands Ingenieur (ir) degree) in 1986 and his Ph.D. (cum laude) in 1990, both from the Department of Applied Mathematics at the University of Twente. His dissertation was entitled Stochastic Models in Reliability. His PhD advisors were Joseph H. A. de Smit and Wilbert C. M. Kallenberg. Part of his PhD research was carried out at Princeton University under the guidance of Erhan Çınlar. He has held teaching and research positions at University of Texas at Austin (1986), Princeton University (1988–1989), the University of Twente (1991–1998), the University of Melbourne (1997), and the University of Adelaide (1998–2000). Since 2000 he has been working at the University of Queensland, where he became a full professor in 2010. == Work == Kroese's work spans a wide range of topics in applied probability and mathematical statistics, including telecommunication networks, reliability engineering, point processes, kernel density estimation, Monte Carlo methods, rare-event simulation, cross-entropy methods, randomized optimization, and machine learning. He is a Chief Investigator of the Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers (ACEMS). He has over 120 peer-reviewed publications, including six monographs. == Publications == === Books === Rubinstein, R.Y., Kroese, D.P. (2004). The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning, Springer, New York. Rubinstein, R. Y., Kroese, D. P. (2007). 
Simulation and the Monte Carlo Method, 2nd edition, John Wiley & Sons. Kroese, D.P., Taimre, T., and Botev, Z.I. (2011). Handbook of Monte Carlo Methods, Wiley Series in Probability and Statistics, John Wiley & Sons, New York. Kroese, D.P. and Chan, J.C.C. (2014). Statistical Modeling and Computation, Springer, New York. Rubinstein, R. Y., Kroese, D. P. (2017). Simulation and the Monte Carlo Method, 3rd edition, John Wiley & Sons. Kroese, D.P., Botev, Z.I., Taimre, T and Vaisman, R. (2019). Data Science and Machine Learning: Mathematical and Statistical Methods, Chapman & Hall/CRC. Kroese, D.P., Botev, Z.I. (2023). An Advanced Course in Probability and Stochastic Processes, Chapman & Hall/CRC. === Selected articles === de Boer, Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research 134 (1), 19–67. Botev, Z.I., Grotowski J.F., Kroese, D.P. (2010). Kernel density estimation via diffusion. The Annals of Statistics 38 (5), 2916–2957. Kroese, D.P., Brereton. T., Taimre, T. and Botev Z.I. (2014). Why the Monte Carlo method is so important today. Wiley Interdisciplinary Reviews: Computational Statistics 6 (6), 386–392. Kroese, D.P., Porotsky S., Rubinstein, R.Y. (2006). The cross-entropy method for continuous multi-extremal optimization. Methodology and Computing in Applied Probability 8 (3), 383–407. Asmussen, S. and Kroese, D.P. Improved algorithms for rare event simulation with heavy tails (2006). Advances in Applied Probability 38 (2), 545–558. Botev, Z.I. and Kroese, D.P. (2012). Efficient Monte Carlo simulation via the generalized splitting method. Statistics and Computing 22 (1), 1–16. == References ==
|
Wikipedia:Dirk van Dalen#0
|
Dirk van Dalen (born 20 December 1932, Amsterdam) is a Dutch mathematician and historian of science. == Life == Van Dalen studied mathematics, physics, and astronomy at the University of Amsterdam. Inspired by the work of Brouwer and Heyting, he received his Ph.D. in 1963 from the University of Amsterdam for the thesis Extension problems in intuitionistic plane projective geometry. From 1964 to 1966 Van Dalen taught logic and mathematics at MIT, and later at Oxford. From 1967 he was professor at the University of Utrecht. In 2003 Dirk van Dalen was awarded the Academy Medal 2003 of the Royal Dutch Academy of Sciences for bringing the works of Brouwer to international attention. == Works == === As (co-)author === Fraenkel, Abraham; Bar-Hillel, Yehoshua; Levy, Azriel; — (1973) [1958]. Foundations of Set Theory. Studies in Logic and Foundations of Mathematics. Vol. 67 (2nd revised ed.). Amsterdam: North Holland Publishing. ISBN 9780080887050. — (1963). Extension problems in intuitionistic plane projective geometry (Thesis). —; Monna, Antonie Frans (1972). Sets and Integration. An Outline of the Development. Groningen: Wolters-Noordhoff. doi:10.1007/978-94-010-2718-2. ISBN 90-01-59775-0. —; Doets, H.C.; De Swart, H.C.M. (1975). Verzamelingen - naïef, axiomatisch en toegepast (in Dutch). Utrecht: Oosthoek, Scheltema & Holkema. p. 364. ISBN 9789031300587. —; Doets, H.C.; De Swart, H.C.M. (1978) [1975]. Sets: Naive, Axiomatic and Applied. Pergamon Press. ISBN 0-08-021166-6. — (1978). Filosofische grondslagen van de Wiskunde. Terreinverkenningen in de filosofie (in Dutch). Vol. 4. Assen: Van Gorcum. ISBN 90-232-1540-0. — (2013) [1980]. Logic and Structure. Universitext (5 ed.). London, Heidelberg, New York, Dordrecht: Springer. doi:10.1007/978-1-4471-4558-5. ISBN 978-1447145578. — (1982). "Braucht die konstruktive Mathematik Grundlagen?". Jahresbericht der Deutschen Mathematiker-Vereinigung (in German). 84. Berlin: G. Reimer: 57–78. ISSN 0012-0456.
Troelstra, Anne Sjerp; — (1988). Constructivism in Mathematics: An Introduction. Amsterdam: North-Holland Publishing. Volume 1. Studies in Logic and the Foundations of Mathematics; 121. ISBN 0-444-70266-0. Volume 2. Studies in Logic and the Foundations of Mathematics; 123. ISBN 0-444-70358-6. — (1990). "The War of the Frogs and the Mice, or the Crisis of the Mathematische Annalen". Mathematical Intelligencer. 12 (4). Springer Verlag: 17–31. doi:10.1007/BF03024028. ISSN 0343-6993. S2CID 123400249. —; Ebbinghaus, Heinz-Dieter (June 2000). "Zermelo and the Skolem Paradox". Bulletin of Symbolic Logic. 6 (2): 145–161. doi:10.2307/421203. hdl:1874/27769. JSTOR 421203. S2CID 8530810. — (2001). "Intuitionistic Logic". In Goble, Lou (ed.). The Blackwell Guide to Philosophical Logic. New York: Blackwell Publishing. pp. 224–257. doi:10.1002/9781405164801.ch11. ISBN 9780631206934. — (2002). L.E.J. Brouwer (1881-1966). Een Biografie. Het heldere licht der wiskunde (in Dutch). Amsterdam: Bert Bakker. p. 561. ISBN 9789035124820. —. Mystic, Geometer and Intuitionist. The life of L. E. J. Brouwer. London: Oxford University Press. Dalen, Dirk (2002) [1999]. The Dawning Revolution. Clarendon Press. ISBN 0-19-850297-4. Dalen, Dirk van (2005). Hope and Disillusion. Clarendon Press. ISBN 978-0-19-851620-0. — (2005). L.E.J. Brouwer en de Grondslagen van de Wiskunde (in Dutch). Utrecht: Epsilon. p. 209. ISBN 9789050410939. — (20 June 2011). "Brouwer's ϵ-fixed point and Sperner's lemma". Theoretical Computer Science. 412 (28): 3140–3144. doi:10.1016/j.tcs.2011.04.002. — (2013). L.E.J. Brouwer - Topologist, Intuitionist, Philosopher: How mathematics is rooted in life. Springer Verlag. ISBN 9781447146155. === As (co-)editor === —, ed. (1981). Brouwer's Cambridge Lectures on Intuitionism. Cambridge University Press. ISBN 0521234417. Freudenthal, Hans (2009). Springer, Tonny A.; — (eds.). Selecta (Heritage of European Mathematics).
Sources and studies in the history of mathematics and physical sciences. Zürich: European Mathematical Society. ISBN 978-3-03-719058-6. —, ed. (2011). The selected correspondence of L. E. J. Brouwer. Sources and studies in the history of mathematics and physical sciences. London: Springer Verlag. ISBN 978-0-85729-527-9. == References == == Further reading == Barendregt, Henk; Bezem, Marc; Klop, Jan Willem, eds. (1993). Dirk van Dalen Festschrift. University of Utrecht, Department of Philosophy. p. 229. ISBN 9039303355. Gurevich, Yuri; e.a., eds. (1995). Special issue: a tribute to Dirk van Dalen. Amsterdam: North-Holland. == External links == KNAW (21 May 2003). "Berichten aan de pers: Akademiepenning 2003 voor Dirk van Dalen" (in Dutch). Archived from the original on 26 December 2010. Retrieved 2 December 2023. "Over Dirk van Dalen". Koninklijke Bibliotheek (KB) (in Dutch). Archived from the original on 3 March 2016. Retrieved 2 December 2023. "Dirk van Dalen, Emeritus Professor - Logic and Philosophy of Mathematics". Archived from the original on 28 March 2010. Retrieved 2 December 2023. Dirk van Dalen at the Mathematics Genealogy Project Interview Dirk van Dalen over L.E.J. Brouwer (2015) on YouTube
|
Wikipedia:Discontinuous group#0
|
In mathematics, a group action of a group G {\displaystyle G} on a set S {\displaystyle S} is a group homomorphism from G {\displaystyle G} to some group (under function composition) of functions from S {\displaystyle S} to itself. It is said that G {\displaystyle G} acts on S {\displaystyle S} . Many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles. If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron. A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group GL ( n , K ) {\displaystyle \operatorname {GL} (n,K)} , the group of the invertible matrices of dimension n {\displaystyle n} over a field K {\displaystyle K} . The symmetric group S n {\displaystyle S_{n}} acts on any set with n {\displaystyle n} elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality. 
== Definition == === Left group action === If G {\displaystyle G} is a group with identity element e {\displaystyle e} , and X {\displaystyle X} is a set, then a (left) group action α {\displaystyle \alpha } of G {\displaystyle G} on X is a function α : G × X → X {\displaystyle \alpha :G\times X\to X} that satisfies the following two axioms: identity, α(e, x) = x, and compatibility, α(g, α(h, x)) = α(gh, x), for all g and h in G and all x in X {\displaystyle X} . The group G {\displaystyle G} is then said to act on X {\displaystyle X} (from the left). A set X {\displaystyle X} together with an action of G {\displaystyle G} is called a (left) G {\displaystyle G} -set. It can be notationally convenient to curry the action α {\displaystyle \alpha } , so that, instead, one has a collection of transformations αg : X → X, with one transformation αg for each group element g ∈ G. The identity and compatibility relations then read α e ( x ) = x {\displaystyle \alpha _{e}(x)=x} and α g ( α h ( x ) ) = ( α g ∘ α h ) ( x ) = α g h ( x ) {\displaystyle \alpha _{g}(\alpha _{h}(x))=(\alpha _{g}\circ \alpha _{h})(x)=\alpha _{gh}(x)} The second axiom states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as α g ∘ α h = α g h {\displaystyle \alpha _{g}\circ \alpha _{h}=\alpha _{gh}} . With the above understanding, it is very common to avoid writing α {\displaystyle \alpha } entirely, and to replace it with either a dot, or with nothing at all. Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then e ⋅ x = x {\displaystyle e{\cdot }x=x} g ⋅ ( h ⋅ x ) = ( g h ) ⋅ x {\displaystyle g{\cdot }(h{\cdot }x)=(gh){\cdot }x} From these two axioms, it follows that for any fixed g in G {\displaystyle G} , the function from X to itself which maps x to g⋅x is a bijection, with inverse bijection the corresponding map for g−1.
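The two axioms can be verified exhaustively for a small example; the following sketch (ad hoc code, not from the article) takes Z/4Z acting on the four vertices of a square by rotation, and also checks that each curried map αg is a bijection.

```python
# Sketch (illustrative, ad hoc): Z/4Z acting on the four vertices {0, 1, 2, 3}
# of a square by rotation, with the identity and compatibility axioms checked
# exhaustively.
from itertools import product

G = range(4)          # Z/4Z, with addition mod 4 as the group law
X = range(4)          # the four vertices
e = 0                 # identity element

def op(g, h):         # group multiplication
    return (g + h) % 4

def act(g, x):        # the action alpha : G x X -> X
    return (x + g) % 4

# Axiom 1 (identity): the identity element acts trivially.
assert all(act(e, x) == x for x in X)

# Axiom 2 (compatibility): acting by h then g equals acting by gh.
assert all(act(g, act(h, x)) == act(op(g, h), x)
           for g, h, x in product(G, G, X))

# Each curried map alpha_g is a bijection of X.
for g in G:
    assert sorted(act(g, x) for x in X) == list(X)
print("group action axioms verified")
```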
Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym(X) of all bijections from X to itself. === Right group action === Likewise, a right group action of G {\displaystyle G} on X {\displaystyle X} is a function α : X × G → X , {\displaystyle \alpha :X\times G\to X,} that satisfies the analogous axioms: identity, α(x, e) = x, and compatibility, α(α(x, g), h) = α(x, gh) (with α(x, g) often shortened to xg or x⋅g when the action being considered is clear from context), for all g and h in G and all x in X. The difference between left and right actions is in the order in which a product gh acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula (gh)−1 = h−1g−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group G^op on X. Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively. == Notable properties of actions == Let G be a group acting on a set X. The action is called faithful or effective if g⋅x = x for all x ∈ X implies that g = eG. Equivalently, the homomorphism from G to the group of bijections of X corresponding to the action is injective. The action is called free (or semiregular or fixed-point free) if the statement that g⋅x = x for some x ∈ X already implies that g = eG. In other words, no non-trivial element of G fixes a point of X. This is a much stronger property than faithfulness. For example, the action of any group on itself by left multiplication is free.
This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z / 2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case, for example the cyclic group Z / 2^nZ cannot act faithfully on a set of size less than 2^n. In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S5, the icosahedral group A5 × Z / 2Z and the cyclic group Z / 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively. === Transitivity properties === The action of G on X is called transitive if for any two points x, y ∈ X there exists a g ∈ G so that g ⋅ x = y. The action is simply transitive (or sharply transitive, or regular) if it is both transitive and free. This means that given x, y ∈ X there is exactly one g ∈ G such that g ⋅ x = y. If X is acted upon simply transitively by a group G then it is called a principal homogeneous space for G or a G-torsor. For an integer n ≥ 1, the action is n-transitive if X has at least n elements, and for any pair of n-tuples (x1, ..., xn), (y1, ..., yn) ∈ X^n with pairwise distinct entries (that is xi ≠ xj, yi ≠ yj when i ≠ j) there exists a g ∈ G such that g⋅xi = yi for i = 1, ..., n. In other words, the action on the subset of X^n of tuples without repeated entries is transitive. For n = 2, 3 this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory.
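Transitivity degrees can be tested by brute force on small groups. In the following sketch (ad hoc code, not from the article), S3 is 2-transitive on three points, while A3 is transitive but not 2-transitive, matching the general statement that the alternating group is (n − 2)- but not (n − 1)-transitive.

```python
# Sketch (illustrative, ad hoc): brute-force k-transitivity checks for the
# symmetric and alternating groups acting naturally on {0, 1, 2}.
from itertools import permutations

X = (0, 1, 2)
S3 = list(permutations(X))                 # all 6 permutations of {0, 1, 2}

def sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return (-1) ** inv

A3 = [p for p in S3 if sign(p) == 1]       # the alternating group, order 3

def k_transitive(group, k):
    """Transitivity on k-tuples with pairwise distinct entries."""
    tuples = list(permutations(X, k))
    return all(any(tuple(p[i] for i in s) == t for p in group)
               for s in tuples for t in tuples)

assert k_transitive(S3, 2)                            # S3 is 2-transitive
assert k_transitive(A3, 1) and not k_transitive(A3, 2)  # A3 only 1-transitive
print("S3 is 2-transitive; A3 is transitive but not 2-transitive")
```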
An action is sharply n-transitive when the action on tuples without repeated entries in X^n is sharply transitive. ==== Examples ==== The action of the symmetric group of X is transitive, in fact n-transitive for any n up to the cardinality of X. If X has cardinality n, the action of the alternating group is (n − 2)-transitive but not (n − 1)-transitive. The action of the general linear group of a vector space V on the set V ∖ {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of V is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere. === Primitive actions === The action of G on X is called primitive if there is no partition of X preserved by all elements of G apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons). === Topological properties === Assume that X is a topological space and the action of G is by homeomorphisms. The action is wandering if every x ∈ X has a neighbourhood U such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. More generally, a point x ∈ X is called a point of discontinuity for the action of G if there is an open subset U ∋ x such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. The domain of discontinuity of the action is the set of all points of discontinuity. Equivalently it is the largest G-stable open subset Ω ⊂ X such that the action of G on Ω is wandering. In a dynamical context this is also called a wandering set. The action is properly discontinuous if for every compact subset K ⊂ X there are only finitely many g ∈ G such that g⋅K ∩ K ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R^2 ∖ {(0, 0)} given by n⋅(x, y) = (2^nx, 2^−ny) is wandering and free but not properly discontinuous.
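The failure of proper discontinuity in the last example can be witnessed directly. In this sketch (ad hoc code; the compact set K, a "square ring" avoiding the origin, is my choice), the points (2^−n, 1) show that n⋅K meets K for every n.

```python
# Sketch (illustrative, ad hoc): the Z-action n.(x, y) = (2^n x, 2^-n y) on
# R^2 minus the origin.  For the compact set K = {1/2 <= max(|x|,|y|) <= 1},
# infinitely many n satisfy n.K meet K, so the action is not properly
# discontinuous (although it is wandering and free).
def act(n, p):
    x, y = p
    return (2**n * x, 2**-n * y)

def in_K(p):
    """K: the square ring between the squares of 'radius' 1/2 and 1."""
    m = max(abs(p[0]), abs(p[1]))
    return 0.5 <= m <= 1.0

for n in range(1, 60):
    w = (2.0**-n, 1.0)            # witness: w lies in K ...
    assert in_K(w)
    assert in_K(act(n, w))        # ... and so does its image n.w = (1, 2^-n)
print("n.K meets K for arbitrarily large n: not properly discontinuous")
```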
The action by deck transformations of the fundamental group of a locally simply connected space on a universal cover is wandering and free. Such actions can be characterized by the following property: every x ∈ X has a neighbourhood U such that g⋅U ∩ U = ∅ for every g ∈ G ∖ {eG}. Actions with this property are sometimes called freely discontinuous, and the largest subset on which the action is freely discontinuous is then called the free regular set. An action of a group G on a locally compact space X is called cocompact if there exists a compact subset A ⊂ X such that X = G ⋅ A. For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space X / G. === Actions of topological groups === Now assume G is a topological group and X a topological space on which it acts by homeomorphisms. The action is said to be continuous if the map G × X → X is continuous for the product topology. The action is said to be proper if the map G × X → X × X defined by (g, x) ↦ (x, g⋅x) is proper. This means that given compact sets K, K′ the set of g ∈ G such that g⋅K ∩ K′ ≠ ∅ is compact. In particular, this is equivalent to proper discontinuity if G is a discrete group. It is said to be locally free if there exists a neighbourhood U of eG such that g⋅x ≠ x for all x ∈ X and g ∈ U ∖ {eG}. The action is said to be strongly continuous if the orbital map g ↦ g⋅x is continuous for every x ∈ X. Contrary to what the name suggests, this is a weaker property than continuity of the action. If G is a Lie group and X a differentiable manifold, then the subspace of smooth points for the action is the set of points x ∈ X such that the map g ↦ g⋅x is smooth. There is a well-developed theory of Lie group actions, i.e. actions which are smooth on the whole space. === Linear actions === If G acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero G-invariant submodules.
It is said to be semisimple if it decomposes as a direct sum of irreducible actions. == Orbits and stabilizers == Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x: G ⋅ x = { g ⋅ x : g ∈ G } . {\displaystyle G{\cdot }x=\{g{\cdot }x:g\in G\}.} The defining properties of a group guarantee that the set of orbits of (points x in) X under the action of G form a partition of X. The associated equivalence relation is defined by saying x ~ y if and only if there exists a g in G with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements x and y are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y. The group action is transitive if and only if it has exactly one orbit, that is, if there exists x in X with G⋅x = X. This is the case if and only if G⋅x = X for all x in X (given that X is non-empty). The set of all orbits of X under the action of G is written as X / G (or, less frequently, as G \ X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention. === Invariant subsets === If Y is a subset of X, then G⋅Y denotes the set {g⋅y : g ∈ G and y ∈ Y}. The subset Y is said to be invariant under G if G⋅Y = Y (which is equivalent to G⋅Y ⊆ Y). In that case, G also operates on Y by restricting the action to Y. The subset Y is called fixed under G if g⋅y = y for all g in G and all y in Y. Every subset that is fixed under G is also invariant under G, but not conversely.
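The fact that orbits partition the underlying set can be seen in a short sketch (ad hoc code, not from the article), here for the cyclic group generated by the permutation (0 1 2)(3 4) acting on {0, ..., 4}.

```python
# Sketch (illustrative, ad hoc): computing the orbits of the cyclic group
# generated by the permutation (0 1 2)(3 4) on {0, ..., 4}, and checking that
# they partition the set.
g = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}   # the generating permutation

def orbit(x):
    """Orbit of x under the cyclic group <g>: iterate g until it closes up."""
    seen = {x}
    y = g[x]
    while y not in seen:
        seen.add(y)
        y = g[y]
    return frozenset(seen)

orbits = {orbit(x) for x in range(5)}
assert orbits == {frozenset({0, 1, 2}), frozenset({3, 4})}

# Pairwise disjoint and covering: a partition of the 5-element set.
assert sum(len(o) for o in orbits) == 5
print(sorted(sorted(o) for o in orbits))
```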
Every orbit is an invariant subset of X on which G acts transitively. Conversely, any invariant subset of X is a union of orbits. The action of G on X is transitive if and only if all elements are equivalent, meaning that there is only one orbit. A G-invariant element of X is x ∈ X such that g⋅x = x for all g ∈ G. The set of all such x is denoted X^G and called the G-invariants of X. When X is a G-module, X^G is the zeroth cohomology group of G with coefficients in X, and the higher cohomology groups are the derived functors of the functor of G-invariants. === Fixed points and stabilizer subgroups === Given g in G and x in X with g⋅x = x, it is said that "x is a fixed point of g" or that "g fixes x". For every x in X, the stabilizer subgroup of G with respect to x (also called the isotropy group or little group) is the set of all elements in G that fix x: G x = { g ∈ G : g ⋅ x = x } . {\displaystyle G_{x}=\{g\in G:g{\cdot }x=x\}.} This is a subgroup of G, though typically not a normal one. The action of G on X is free if and only if all stabilizers are trivial. The kernel N of the homomorphism to the symmetric group, G → Sym(X), is given by the intersection of the stabilizers Gx for all x in X. If N is trivial, the action is said to be faithful (or effective). Let x and y be two elements in X, and let g be a group element such that y = g⋅x. Then the two stabilizer groups Gx and Gy are related by Gy = gGxg−1. Proof: by definition, h ∈ Gy if and only if h⋅(g⋅x) = g⋅x. Applying g−1 to both sides of this equality yields (g−1hg)⋅x = x; that is, g−1hg ∈ Gx. The opposite inclusion follows similarly by taking h ∈ Gx and x = g−1⋅y. The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of G (that is, the set of all conjugates of the subgroup). Let (H) denote the conjugacy class of H.
Then the orbit O has type (H) if the stabilizer Gx of some/any x in O belongs to (H). A maximal orbit type is often called a principal orbit type. === Orbit-stabilizer theorem === Orbits and stabilizers are closely related. For a fixed x in X, consider the map f : G → X given by g ↦ g⋅x. By definition the image f(G) of this map is the orbit G⋅x. The condition for two elements to have the same image is f ( g ) = f ( h ) ⟺ g ⋅ x = h ⋅ x ⟺ g − 1 h ⋅ x = x ⟺ g − 1 h ∈ G x ⟺ h ∈ g G x . {\displaystyle f(g)=f(h)\iff g{\cdot }x=h{\cdot }x\iff g^{-1}h{\cdot }x=x\iff g^{-1}h\in G_{x}\iff h\in gG_{x}.} In other words, f(g) = f(h) if and only if g and h lie in the same coset for the stabilizer subgroup Gx. Thus, the fiber f−1({y}) of f over any y in G⋅x is contained in such a coset, and every such coset also occurs as a fiber. Therefore f induces a bijection between the set G / Gx of cosets for the stabilizer subgroup and the orbit G⋅x, which sends gGx ↦ g⋅x. This result is known as the orbit-stabilizer theorem. If G is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives | G ⋅ x | = [ G : G x ] = | G | / | G x | , {\displaystyle |G\cdot x|=[G\,:\,G_{x}]=|G|/|G_{x}|,} in other words the length of the orbit of x times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order. Example: Let G be a group of prime order p acting on a set X with k elements. Since each orbit has either 1 or p elements, there are at least k mod p orbits of length 1 which are G-invariant elements. More specifically, k and the number of G-invariant elements are congruent modulo p. This result is especially useful since it can be employed for counting arguments (typically in situations where X is finite as well). Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let G denote its automorphism group. 
Then G acts on the set of vertices {1, 2, ..., 8}, and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, |G| = |G ⋅ 1| |G1| = 8 |G1|. Applying the theorem now to the stabilizer G1, we can obtain |G1| = |(G1) ⋅ 2| |(G1)2|. Any element of G that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by 2π/3, which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, |(G1) ⋅ 2| = 3. Applying the theorem a third time gives |(G1)2| = |((G1)2) ⋅ 3| |((G1)2)3|. Any element of G that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus |((G1)2) ⋅ 3| = 2. One also sees that ((G1)2)3 consists only of the identity automorphism, as any element of G fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain |G| = 8 ⋅ 3 ⋅ 2 ⋅ 1 = 48. === Burnside's lemma === A result closely related to the orbit-stabilizer theorem is Burnside's lemma: | X / G | = 1 | G | ∑ g ∈ G | X g | , {\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|,} where Xg is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element. Fixing a group G, the set of formal differences of finite G-sets forms a ring called the Burnside ring of G, where addition corresponds to disjoint union, and multiplication to Cartesian product. == Examples == The trivial action of any group G on any set X is defined by g⋅x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X. In every group G, left multiplication is an action of G on G: g⋅x = gx for all g, x in G. 
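The left-multiplication action just mentioned can be checked to be both free and transitive in a minimal sketch (ad hoc code, using Z/6Z written additively; the group is my arbitrary choice).

```python
# Sketch (illustrative, ad hoc): the left-multiplication (here, left-addition)
# action of Z/6Z on itself is free and transitive, i.e. regular.
G = list(range(6))

def act(g, x):                     # g.x = g + x in Z/6Z
    return (g + x) % 6

# Transitive: for every pair (x, y), some g carries x to y (namely g = y - x).
assert all(any(act(g, x) == y for g in G) for x in G for y in G)

# Free: the only element fixing any point at all is the identity.
assert all(g == 0 for g in G for x in G if act(g, x) == x)
print("left multiplication on Z/6Z is free and transitive (regular)")
```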
This action is free and transitive (regular), and forms the basis of a rapid proof of Cayley's theorem – that every group is isomorphic to a subgroup of the symmetric group of permutations of the set G. In every group G with subgroup H, left multiplication is an action of G on the set of cosets G / H: g⋅aH = gaH for all g, a in G. In particular if H contains no nontrivial normal subgroups of G this induces an isomorphism from G to a subgroup of the permutation group of degree [G : H]. In every group G, conjugation is an action of G on G: g⋅x = gxg^−1. An exponential notation is commonly used for the right-action variant: x^g = g^−1xg; it satisfies (x^g)^h = x^{gh}. In every group G with subgroup H, conjugation is an action of G on conjugates of H: g⋅K = gKg^−1 for all g in G and K conjugates of H. An action of Z on a set X uniquely determines and is determined by an automorphism of X, given by the action of 1. Similarly, an action of Z / 2Z on X is equivalent to the data of an involution of X. The symmetric group Sn and its subgroups act on the set {1, ..., n} by permuting its elements. The symmetry group of a polyhedron acts on the set of vertices of that polyhedron. It also acts on the set of faces or the set of edges of the polyhedron. The symmetry group of any geometrical object acts on the set of points of that object. For a coordinate space V over a field F with group of units F*, the mapping F* × V → V given by a × (x1, x2, ..., xn) ↦ (ax1, ax2, ..., axn) is a group action called scalar multiplication. The automorphism group of a vector space (or graph, or group, or ring ...) acts on the vector space (or set of vertices of the graph, or group, or ring ...). The general linear group GL(n, K) and its subgroups, particularly its Lie subgroups (including the special linear group SL(n, K), orthogonal group O(n, K), special orthogonal group SO(n, K), and symplectic group Sp(n, K)) are Lie groups that act on the vector space K^n.
The group operations are given by multiplying the matrices from the groups with the vectors from K^n. The general linear group GL(n, Z) acts on Z^n by natural matrix action. The orbits of its action are classified by the greatest common divisor of coordinates of the vector in Z^n. The affine group acts transitively on the points of an affine space, and the subgroup V of the affine group (that is, a vector space) has transitive and free (that is, regular) action on these points; indeed this can be used to give a definition of an affine space. The projective linear group PGL(n + 1, K) and its subgroups, particularly its Lie subgroups, are Lie groups that act on the projective space P^n(K). This is a quotient of the action of the general linear group on projective space. Particularly notable is PGL(2, K), the symmetries of the projective line, which is sharply 3-transitive, preserving the cross ratio; the Möbius group PGL(2, C) is of particular interest. The isometries of the plane act on the set of 2D images and patterns, such as wallpaper patterns. The definition can be made more precise by specifying what is meant by image or pattern, for example, a function of position with values in a set of colors. Isometries are in fact one example of affine group (action). The sets acted on by a group G comprise the category of G-sets in which the objects are G-sets and the morphisms are G-set homomorphisms: functions f : X → Y such that g⋅(f(x)) = f(g⋅x) for every g in G. The Galois group of a field extension L / K acts on the field L but has only a trivial action on elements of the subfield K. Subgroups of Gal(L / K) correspond to subfields of L that contain K, that is, intermediate field extensions between L and K.
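The orbit-stabilizer theorem from the earlier section can also be verified by brute force on one of these examples. In this sketch (ad hoc code, not from the article), the dihedral group of the square, generated by a rotation and a reflection, acts on the four vertices.

```python
# Sketch (illustrative, ad hoc): generating the dihedral group of the square
# (order 8) from a rotation and a reflection, then checking the
# orbit-stabilizer theorem |G.x| * |G_x| = |G| at the vertex x = 0.
def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)                  # rotation of the square by 90 degrees
s = (0, 3, 2, 1)                  # reflection through the diagonal at vertex 0
G = {(0, 1, 2, 3), r, s}
while True:                        # close the generating set under composition
    new = {compose(a, b) for a in G for b in G} | G
    if new == G:
        break
    G = new
assert len(G) == 8                 # the dihedral group of order 8

x = 0
orbit = {g[x] for g in G}
stab = {g for g in G if g[x] == x}
assert len(orbit) * len(stab) == len(G)    # orbit-stabilizer: 4 * 2 = 8
print(len(orbit), len(stab), len(G))
```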
The additive group of the real numbers (R, +) acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems) by time translation: if t is in R and x is in the phase space, then x describes a state of the system, and t + x is defined to be the state of the system t seconds later if t is positive or −t seconds ago if t is negative. The additive group of the real numbers (R, +) acts on the set of real functions of a real variable in various ways, with (t⋅f)(x) equal to, for example, f(x + t), f(x) + t, f(xe^t), f(x)e^t, f(x + t)e^t, or f(xe^t) + t, but not f(xe^t + t). Given a group action of G on X, we can define an induced action of G on the power set of X, by setting g⋅U = {g⋅u : u ∈ U} for every subset U of X and every g in G. This is useful, for instance, in studying the action of the large Mathieu group on a 24-set and in studying symmetry in certain models of finite geometries. The quaternions with norm 1 (the versors), as a multiplicative group, act on R^3: for any such quaternion z = cos α/2 + v sin α/2, the mapping f(x) = zxz* is a counterclockwise rotation through an angle α about an axis given by a unit vector v; −z is the same rotation; see quaternions and spatial rotation. This is not a faithful action because the quaternion −1 leaves all points where they were, as does the quaternion 1. Given left G-sets X, Y, there is a left G-set Y^X whose elements are G-equivariant maps α : X × G → Y, and with left G-action given by g⋅α = α ∘ (idX × –g) (where "–g" indicates right multiplication by g). This G-set has the property that its fixed points correspond to equivariant maps X → Y; more generally, it is an exponential object in the category of G-sets. == Group actions and groupoids == The notion of group action can be encoded by the action groupoid G′ = G ⋉ X associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components.
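Burnside's lemma from the orbit-counting section above likewise admits a direct check. In this sketch (ad hoc code; the colouring problem is my choice of example), both the fixed-point average and direct orbit counting give 6 rotation classes of 2-colourings of the square's vertices.

```python
# Sketch (illustrative, ad hoc): Burnside's lemma for the rotation group Z/4Z
# acting on 2-colourings of the 4 vertices of a square.  The number of orbits
# equals the average number of colourings fixed per group element.
from itertools import product

rotations = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]
colourings = list(product([0, 1], repeat=4))

def act(g, c):                     # the rotated colouring
    return tuple(c[g[i]] for i in range(4))

# Burnside count: average number of fixed colourings over the group.
fixed = [sum(1 for c in colourings if act(g, c) == c) for g in rotations]
burnside = sum(fixed) // len(rotations)

# Direct count: one representative frozenset per orbit.
orbits = {frozenset(act(g, c) for g in rotations) for c in colourings}

assert burnside == len(orbits) == 6
print(burnside)
```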
== Morphisms and isomorphisms between G-sets == If X and Y are two G-sets, a morphism from X to Y is a function f : X → Y such that f(g⋅x) = g⋅f(x) for all g in G and all x in X. Morphisms of G-sets are also called equivariant maps or G-maps. The composition of two morphisms is again a morphism. If a morphism f is bijective, then its inverse is also a morphism. In this case f is called an isomorphism, and the two G-sets X and Y are called isomorphic; for all practical purposes, isomorphic G-sets are indistinguishable. Some example isomorphisms: Every regular G action is isomorphic to the action of G on G given by left multiplication. Every free G action is isomorphic to G × S, where S is some set and G acts on G × S by left multiplication on the first coordinate. (S can be taken to be the set of orbits X / G.) Every transitive G action is isomorphic to left multiplication by G on the set of left cosets of some subgroup H of G. (H can be taken to be the stabilizer group of any element of the original G-set.) With this notion of morphism, the collection of all G-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean). == Variants and generalizations == We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action. Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object X of some category, and then define an action on X as a monoid homomorphism into the monoid of endomorphisms of X. If X has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion. We can view a group G as a category with a single object in which every morphism is invertible. 
A (left) group action is then nothing but a (covariant) functor from G to the category of sets, and a group representation is a functor from G to the category of vector spaces. A morphism between G-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category. In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category. == Gallery == == See also == Gain graph Group with operators Measurable group action Monoid action Young–Deruyts development == Notes == == Citations == == References == Aschbacher, Michael (2000). Finite Group Theory. Cambridge University Press. ISBN 978-0-521-78675-1. MR 1777008. Dummit, David; Richard Foote (2003). Abstract Algebra (3rd ed.). Wiley. ISBN 0-471-43334-9. Eie, Minking; Chang, Shou-Te (2010). A Course on Abstract Algebra. World Scientific. ISBN 978-981-4271-88-2. Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 978-0-521-79540-1, MR 1867354. Rotman, Joseph (1995). An Introduction to the Theory of Groups. Graduate Texts in Mathematics 148 (4th ed.). Springer-Verlag. ISBN 0-387-94285-8. Smith, Jonathan D.H. (2008). Introduction to abstract algebra. Textbooks in mathematics. CRC Press. ISBN 978-1-4200-6371-4. Kapovich, Michael (2009), Hyperbolic manifolds and discrete groups, Modern Birkhäuser Classics, Birkhäuser, pp. xxvii+467, ISBN 978-0-8176-4912-8, Zbl 1180.57001 Maskit, Bernard (1988), Kleinian groups, Grundlehren der Mathematischen Wissenschaften, vol. 287, Springer-Verlag, pp. 
XIII+326, Zbl 0627.30039 Perrone, Paolo (2024), Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1 Thurston, William (1980), The geometry and topology of three-manifolds, Princeton lecture notes, p. 175, archived from the original on 2020-07-27, retrieved 2016-02-08 Thurston, William P. (1997), Three-dimensional geometry and topology. Vol. 1., Princeton Mathematical Series, vol. 35, Princeton University Press, pp. x+311, Zbl 0873.57001 tom Dieck, Tammo (1987), Transformation groups, de Gruyter Studies in Mathematics, vol. 8, Berlin: Walter de Gruyter & Co., p. 29, doi:10.1515/9783110858372.312, ISBN 978-3-11-009745-0, MR 0889050 == External links == "Action of a group on a manifold", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Group Action". MathWorld.
Wikipedia:Discontinuous linear map#0
In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example. == A linear map from a finite-dimensional space is always continuous == Let X and Y be two normed spaces and f : X → Y {\displaystyle f:X\to Y} a linear map from X to Y. If X is finite-dimensional, choose a basis ( e 1 , e 2 , … , e n ) {\displaystyle \left(e_{1},e_{2},\ldots ,e_{n}\right)} in X which may be taken to be unit vectors. Then, f ( x ) = ∑ i = 1 n x i f ( e i ) , {\displaystyle f(x)=\sum _{i=1}^{n}x_{i}f(e_{i}),} and so by the triangle inequality, ‖ f ( x ) ‖ = ‖ ∑ i = 1 n x i f ( e i ) ‖ ≤ ∑ i = 1 n | x i | ‖ f ( e i ) ‖ . {\displaystyle \|f(x)\|=\left\|\sum _{i=1}^{n}x_{i}f(e_{i})\right\|\leq \sum _{i=1}^{n}|x_{i}|\|f(e_{i})\|.} Letting M = sup i { ‖ f ( e i ) ‖ } , {\displaystyle M=\sup _{i}\{\|f(e_{i})\|\},} and using the fact that ∑ i = 1 n | x i | ≤ C ‖ x ‖ {\displaystyle \sum _{i=1}^{n}|x_{i}|\leq C\|x\|} for some C>0 which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds ‖ f ( x ) ‖ ≤ ( ∑ i = 1 n | x i | ) M ≤ C M ‖ x ‖ . {\displaystyle \|f(x)\|\leq \left(\sum _{i=1}^{n}|x_{i}|\right)M\leq CM\|x\|.} Thus, f {\displaystyle f} is a bounded linear operator and so is continuous. 
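The derived bound ‖f(x)‖ ≤ CM‖x‖ can be sampled numerically in a small sketch (ad hoc code; the matrix is an arbitrary choice), with C = √2 for the Euclidean norm on R² coming from the Cauchy–Schwarz inequality Σ|x_i| ≤ √n ‖x‖.

```python
# Sketch (illustrative, ad hoc): checking ||f(x)|| <= C*M*||x|| for a concrete
# linear map f : R^2 -> R^2 given by a matrix, with M = max_j ||f(e_j)|| and
# C = sqrt(2) (Cauchy-Schwarz: |x_1| + |x_2| <= sqrt(2) * ||x||).
from math import hypot, sqrt
from random import uniform, seed

A = [[3.0, -1.0],
     [2.0,  4.0]]                  # f(x) = A x

def f(x):
    return (A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1])

M = max(hypot(A[0][j], A[1][j]) for j in range(2))   # max ||f(e_j)||
C = sqrt(2)

seed(0)
for _ in range(1000):
    x = (uniform(-10, 10), uniform(-10, 10))
    assert hypot(*f(x)) <= C * M * hypot(*x) + 1e-9
print("bound ||f(x)|| <= C*M*||x|| holds on all samples")
```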
In fact, to see this, simply note that f is linear, and therefore ‖ f ( x ) − f ( x ′ ) ‖ = ‖ f ( x − x ′ ) ‖ ≤ K ‖ x − x ′ ‖ {\displaystyle \|f(x)-f(x')\|=\|f(x-x')\|\leq K\|x-x'\|} for some universal constant K. Thus for any ϵ > 0 , {\displaystyle \epsilon >0,} we can choose δ ≤ ϵ / K {\displaystyle \delta \leq \epsilon /K} so that f ( B ( x , δ ) ) ⊆ B ( f ( x ) , ϵ ) {\displaystyle f(B(x,\delta ))\subseteq B(f(x),\epsilon )} ( B ( x , δ ) {\displaystyle B(x,\delta )} and B ( f ( x ) , ϵ ) {\displaystyle B(f(x),\epsilon )} are the normed balls around x {\displaystyle x} and f ( x ) {\displaystyle f(x)} ), which gives continuity. If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y. == A concrete example == Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence e i {\displaystyle e_{i}} of linearly independent vectors which does not have a limit, there is a linear operator T {\displaystyle T} such that the quantities ‖ T ( e i ) ‖ / ‖ e i ‖ {\displaystyle \|T(e_{i})\|/\|e_{i}\|} grow without bound. In a sense, the linear operators are not continuous because the space has "holes". For example, consider the space X {\displaystyle X} of real-valued smooth functions on the interval [0, 1] with the uniform norm, that is, ‖ f ‖ = sup x ∈ [ 0 , 1 ] | f ( x ) | . {\displaystyle \|f\|=\sup _{x\in [0,1]}|f(x)|.} The derivative-at-a-point map, given by T ( f ) = f ′ ( 0 ) {\displaystyle T(f)=f'(0)\,} defined on X {\displaystyle X} and with real values, is linear, but not continuous. Indeed, consider the sequence f n ( x ) = sin ( n 2 x ) n {\displaystyle f_{n}(x)={\frac {\sin(n^{2}x)}{n}}} for n ≥ 1 {\displaystyle n\geq 1} . 
This sequence converges uniformly to the constantly zero function, but T ( f n ) = n 2 cos ( n 2 ⋅ 0 ) n = n → ∞ {\displaystyle T(f_{n})={\frac {n^{2}\cos(n^{2}\cdot 0)}{n}}=n\to \infty } as n → ∞ {\displaystyle n\to \infty } instead of T ( f n ) → T ( 0 ) = 0 {\displaystyle T(f_{n})\to T(0)=0} , as would hold for a continuous map. Note that T {\displaystyle T} is real-valued, and so is actually a linear functional on X {\displaystyle X} (an element of the algebraic dual space X ∗ {\displaystyle X^{*}} ). The linear map X → X {\displaystyle X\to X} which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it is closed. The fact that the domain is not complete here is important: discontinuous operators on complete spaces require a little more work. == A nonconstructive example == An algebraic basis for the real numbers as a vector space over the rationals is known as a Hamel basis (note that some authors use this term in a broader sense to mean an algebraic basis of any vector space). Note that any two noncommensurable numbers, say 1 and π {\displaystyle \pi } , are linearly independent. One may find a Hamel basis containing them, and define a map f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } so that f ( π ) = 0 , {\displaystyle f(\pi )=0,} f acts as the identity on the rest of the Hamel basis, and extend to all of R {\displaystyle \mathbb {R} } by linearity. Let {r_n}_n be any sequence of rationals which converges to π {\displaystyle \pi } . Then lim_n f(r_n) = lim_n r_n = π (since f fixes every rational), but f ( π ) = 0. {\displaystyle f(\pi )=0.} By construction, f is linear over Q {\displaystyle \mathbb {Q} } (not over R {\displaystyle \mathbb {R} } ), but not continuous. Note that f is also not measurable; an additive real function is linear if and only if it is measurable, so for every such function there is a Vitali set. The construction of f relies on the axiom of choice.
This example can be extended into a general theorem about the existence of discontinuous linear maps on any infinite-dimensional normed space (as long as the codomain is not trivial). == General existence theorem == Discontinuous linear maps can be proven to exist more generally, even if the space is complete. Let X and Y be normed spaces over the field K where K = R {\displaystyle K=\mathbb {R} } or K = C . {\displaystyle K=\mathbb {C} .} Assume that X is infinite-dimensional and Y is not the zero space. We will find a discontinuous linear map f from X to K, which will imply the existence of a discontinuous linear map g from X to Y given by the formula g ( x ) = f ( x ) y 0 {\displaystyle g(x)=f(x)y_{0}} where y 0 {\displaystyle y_{0}} is an arbitrary nonzero vector in Y. Since X is infinite-dimensional, showing the existence of a linear functional which is not continuous amounts to constructing f which is not bounded. For that, consider a sequence ( e n ) n {\displaystyle (e_{n})_{n}} ( n ≥ 1 {\displaystyle n\geq 1} ) of linearly independent vectors in X, which we normalize. Then, we define f ( e n ) = n ‖ e n ‖ {\displaystyle f(e_{n})=n\|e_{n}\|\,} for each n = 1 , 2 , … {\displaystyle n=1,2,\ldots } Complete this sequence of linearly independent vectors to a vector space basis of X, and define f to be zero at the other vectors in the basis. f so defined will extend uniquely to a linear map on X, and since it is clearly not bounded, it is not continuous. Notice that by using the fact that any set of linearly independent vectors can be completed to a basis, we implicitly used the axiom of choice, which was not needed for the concrete example in the previous section.
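On an incomplete space the same construction needs no choice at all. The sketch below carries it out on the space of finitely supported real sequences with the sup norm (this particular space is a choice made here for illustration): the standard unit vectors are already normalized and linearly independent, and sending the n-th one to n gives an unbounded functional.

```python
# Sketch of the construction in the theorem on the (incomplete) normed space
# of finitely supported real sequences with the sup norm. A sequence is
# represented as a dict {index: value}; the unit vector e_n is {n: 1.0}.

def sup_norm(x):
    """Sup norm of a finitely supported sequence."""
    return max((abs(v) for v in x.values()), default=0.0)

def f(x):
    """The functional f(sum_n x_n e_n) = sum_n n * x_n, i.e. f(e_n) = n."""
    return sum(n * v for n, v in x.items())

# On the unit vectors the norm is 1 but f(e_n) = n: |f(x)|/||x|| is unbounded.
ratios = [f({n: 1.0}) / sup_norm({n: 1.0}) for n in (1, 10, 100)]
print(ratios)  # [1.0, 10.0, 100.0]
```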
In analysis as it is usually practiced by working mathematicians, the axiom of choice is always employed (it is an axiom of ZFC set theory); thus, to the analyst, all infinite-dimensional topological vector spaces admit discontinuous linear maps. On the other hand, in 1970 Robert M. Solovay exhibited a model of set theory in which every set of reals is measurable. This implies that there are no discontinuous linear real functions. Clearly AC does not hold in the model. Solovay's result shows that it is not necessary to assume that all infinite-dimensional vector spaces admit discontinuous linear maps, and there are schools of analysis which adopt a more constructivist viewpoint. For example, H. G. Garnir, in searching for so-called "dream spaces" (topological vector spaces on which every linear map into a normed space is continuous), was led to adopt ZF + DC + BP as his axioms (dependent choice is a weakened form of the axiom of choice, and BP, the statement that every set of reals has the Baire property, contradicts full AC) to prove the Garnir–Wright closed graph theorem which states, among other things, that any linear map from an F-space to a TVS is continuous. Going to the extreme of constructivism, there is Ceitin's theorem, which states that every function is continuous (this is to be understood in the terminology of constructivism, according to which only representable functions are considered to be functions). Such stances are held by only a small minority of working mathematicians. The upshot is that the existence of discontinuous linear maps depends on AC; it is consistent with set theory without AC that there are no discontinuous linear maps on complete spaces. In particular, no concrete construction such as the derivative can succeed in defining a discontinuous linear map everywhere on a complete space.
It makes sense to ask which linear operators on a given space are closed. The closed graph theorem asserts that an everywhere-defined closed operator on a complete domain is continuous, so to obtain a discontinuous closed operator, one must permit operators which are not defined everywhere. To be more concrete, let T {\displaystyle T} be a map from X {\displaystyle X} to Y {\displaystyle Y} with domain Dom ( T ) , {\displaystyle \operatorname {Dom} (T),} written T : Dom ( T ) ⊆ X → Y . {\displaystyle T:\operatorname {Dom} (T)\subseteq X\to Y.} We don't lose much if we replace X by the closure of Dom ( T ) . {\displaystyle \operatorname {Dom} (T).} That is, in studying operators that are not everywhere-defined, one may restrict one's attention to densely defined operators without loss of generality. If the graph Γ ( T ) {\displaystyle \Gamma (T)} of T {\displaystyle T} is closed in X × Y , {\displaystyle X\times Y,} we call T closed. Otherwise, consider its closure Γ ( T ) ¯ {\displaystyle {\overline {\Gamma (T)}}} in X × Y . {\displaystyle X\times Y.} If Γ ( T ) ¯ {\displaystyle {\overline {\Gamma (T)}}} is itself the graph of some operator T ¯ , {\displaystyle {\overline {T}},} T {\displaystyle T} is called closable, and T ¯ {\displaystyle {\overline {T}}} is called the closure of T . {\displaystyle T.} So the natural question to ask about linear operators that are not everywhere-defined is whether they are closable. The answer is, "not necessarily"; indeed, every infinite-dimensional normed space admits linear operators that are not closable. As in the case of discontinuous operators considered above, the proof requires the axiom of choice and so is in general nonconstructive, though again, if X is not complete, there are constructible examples. In fact, there is even an example of a linear operator whose graph has closure all of X × Y . {\displaystyle X\times Y.} Such an operator is not closable. 
Let X be the space of polynomial functions from [0,1] to R {\displaystyle \mathbb {R} } and Y the space of polynomial functions from [2,3] to R {\displaystyle \mathbb {R} } . They are subspaces of C([0,1]) and C([2,3]) respectively, and so normed spaces. Define an operator T which takes the polynomial function x ↦ p(x) on [0,1] to the same function on [2,3]. As a consequence of the Stone–Weierstrass theorem, the graph of this operator is dense in X × Y , {\displaystyle X\times Y,} so this provides a sort of maximally discontinuous linear map (confer nowhere continuous function). Note that X is not complete here, as must be the case when there is such a constructible map. == Impact for dual spaces == The dual space of a topological vector space is the collection of continuous linear maps from the space into the underlying field. Thus the failure of some linear maps to be continuous for infinite-dimensional normed spaces implies that for these spaces, one needs to distinguish the algebraic dual space from the continuous dual space which is then a proper subset. It illustrates the fact that an extra dose of caution is needed in doing analysis on infinite-dimensional spaces as compared to finite-dimensional ones. == Beyond normed spaces == The argument for the existence of discontinuous linear maps on normed spaces can be generalized to all metrizable topological vector spaces, especially to all Fréchet spaces, but there exist infinite-dimensional locally convex topological vector spaces such that every functional is continuous. On the other hand, the Hahn–Banach theorem, which applies to all locally convex spaces, guarantees the existence of many continuous linear functionals, and so a large dual space. In fact, to every convex set, the Minkowski gauge associates a continuous linear functional. The upshot is that spaces with fewer convex sets have fewer functionals, and in the worst-case scenario, a space may have no functionals at all other than the zero functional. 
This is the case for the L p ( R , d x ) {\displaystyle L^{p}(\mathbb {R} ,dx)} spaces with 0 < p < 1 , {\displaystyle 0<p<1,} from which it follows that these spaces fail to be locally convex. Here d x {\displaystyle dx} denotes Lebesgue measure on the real line. There are other L p {\displaystyle L^{p}} spaces with 0 < p < 1 {\displaystyle 0<p<1} which do have nontrivial dual spaces. Another example of a space with trivial dual is the space of real-valued measurable functions on the unit interval with the quasinorm given by ‖ f ‖ = ∫ I | f ( x ) | 1 + | f ( x ) | d x . {\displaystyle \|f\|=\int _{I}{\frac {|f(x)|}{1+|f(x)|}}dx.} This non-locally convex space has a trivial dual space. One can consider even more general spaces. For example, the existence of a discontinuous homomorphism between complete separable metric groups can also be shown nonconstructively. == See also == Finest locally convex topology – Vector space with a topology defined by convex open sets Sublinear function – Type of function in linear algebra == References == Constantin Costara, Dumitru Popa, Exercises in Functional Analysis, Springer, 2003. ISBN 1-4020-1560-7. Schechter, Eric, Handbook of Analysis and its Foundations, Academic Press, 1997. ISBN 0-12-622760-8.
Wikipedia:Discrete Analysis#0
Discrete Analysis is a mathematics journal covering the applications of analysis to discrete structures. Discrete Analysis is an arXiv overlay journal, meaning that the journal's content is hosted on the arXiv. == History == Discrete Analysis was created by Timothy Gowers to demonstrate that a high-quality mathematics journal could be produced inexpensively outside of the traditional academic publishing industry. The journal is open access, and submissions are free for authors. The journal's 2018 MCQ (Mathematical Citation Quotient) is 1.21. == References == Ball, Philip (2015). "Leading mathematician launches arXiv 'overlay' journal". Nature. 526 (7571): 146. Bibcode:2015Natur.526..146B. doi:10.1038/nature.2015.18351. PMID 26432251. Day, Charles (2015-09-18). "Meet the overlay journal". Physics Today Daily edition. == External links == Official website
Wikipedia:Discrete Laplace operator#0
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix. The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs. == Definitions == === Graph Laplacians === There are various definitions of the discrete Laplacian for graphs, differing by sign and scale factor (sometimes one averages over the neighboring vertices, other times one just sums; this makes no difference for a regular graph). The traditional definition of the graph Laplacian, given below, corresponds to the negative continuous Laplacian on a domain with a free boundary. Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph with vertices V {\displaystyle V} and edges E {\displaystyle E} . Let ϕ : V → R {\displaystyle \phi \colon V\to R} be a function of the vertices taking values in a ring. Then, the discrete Laplacian Δ {\displaystyle \Delta } acting on ϕ {\displaystyle \phi } is defined by ( Δ ϕ ) ( v ) = ∑ w : d ( w , v ) = 1 [ ϕ ( v ) − ϕ ( w ) ] {\displaystyle (\Delta \phi )(v)=\sum _{w:\,d(w,v)=1}\left[\phi (v)-\phi (w)\right]} where d ( w , v ) {\displaystyle d(w,v)} is the graph distance between vertices w and v. Thus, this sum is over the nearest neighbors of the vertex v. For a graph with a finite number of edges and vertices, this definition is identical to that of the Laplacian matrix. 
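The defining sum over nearest neighbors can be computed directly from an adjacency list. The sketch below uses a path graph on four vertices and the test function φ(v) = v², both choices made here for illustration; interior vertices return the negative second difference, matching the Laplacian matrix L = D − A applied to φ.

```python
# (Δφ)(v) = Σ_{w ~ v} [φ(v) − φ(w)] on the path graph 0 — 1 — 2 — 3.

adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def laplacian(phi, adj):
    """Apply the discrete graph Laplacian to phi (a dict vertex -> value)."""
    return {v: sum(phi[v] - phi[w] for w in nbrs) for v, nbrs in adj.items()}

phi = {0: 0.0, 1: 1.0, 2: 4.0, 3: 9.0}  # φ(v) = v², second difference is constant
print(laplacian(phi, adjacency))
# Interior vertices give −2 (minus the second difference of v²);
# the endpoints pick up one-sided differences.
```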
That is, ϕ {\displaystyle \phi } can be written as a column vector; and so Δ ϕ {\displaystyle \Delta \phi } is the product of the column vector and the Laplacian matrix, while ( Δ ϕ ) ( v ) {\displaystyle (\Delta \phi )(v)} is just the v'th entry of the product vector. If the graph has weighted edges, that is, a weighting function γ : E → R {\displaystyle \gamma \colon E\to R} is given, then the definition can be generalized to ( Δ γ ϕ ) ( v ) = ∑ w : d ( w , v ) = 1 γ w v [ ϕ ( v ) − ϕ ( w ) ] {\displaystyle (\Delta _{\gamma }\phi )(v)=\sum _{w:\,d(w,v)=1}\gamma _{wv}\left[\phi (v)-\phi (w)\right]} where γ w v {\displaystyle \gamma _{wv}} is the weight value on the edge w v ∈ E {\displaystyle wv\in E} . Closely related to the discrete Laplacian is the averaging operator: ( M ϕ ) ( v ) = 1 deg v ∑ w : d ( w , v ) = 1 ϕ ( w ) . {\displaystyle (M\phi )(v)={\frac {1}{\deg v}}\sum _{w:\,d(w,v)=1}\phi (w).} === Mesh Laplacians === In addition to considering the connectivity of nodes and edges in a graph, mesh Laplace operators take into account the geometry of a surface (e.g. the angles at the nodes). For a two-dimensional manifold triangle mesh, the Laplace–Beltrami operator of a scalar function u {\displaystyle u} at a vertex i {\displaystyle i} can be approximated as ( Δ u ) i ≡ 1 2 A i ∑ j ( cot α i j + cot β i j ) ( u j − u i ) , {\displaystyle (\Delta u)_{i}\equiv {\frac {1}{2A_{i}}}\sum _{j}(\cot \alpha _{ij}+\cot \beta _{ij})(u_{j}-u_{i}),} where the sum is over all adjacent vertices j {\displaystyle j} of i {\displaystyle i} , α i j {\displaystyle \alpha _{ij}} and β i j {\displaystyle \beta _{ij}} are the two angles opposite of the edge i j {\displaystyle ij} , and A i {\displaystyle A_{i}} is the vertex area of i {\displaystyle i} ; that is, e.g. one third of the summed areas of triangles incident to i {\displaystyle i} . 
It is important to note that the sign of the discrete Laplace–Beltrami operator is conventionally opposite the sign of the ordinary Laplace operator. The above cotangent formula can be derived using many different methods among which are piecewise linear finite elements, finite volumes, and discrete exterior calculus. To facilitate computation, the Laplacian is encoded in a matrix L ∈ R | V | × | V | {\displaystyle L\in \mathbb {R} ^{|V|\times |V|}} such that L u = ( Δ u ) i {\displaystyle Lu=(\Delta u)_{i}} . Let C {\displaystyle C} be the (sparse) cotangent matrix with entries C i j = { 1 2 ( cot α i j + cot β i j ) i j is an edge, that is j ∈ N ( i ) , − ∑ k ∈ N ( i ) C i k i = j , 0 otherwise {\displaystyle C_{ij}={\begin{cases}{\frac {1}{2}}(\cot \alpha _{ij}+\cot \beta _{ij})&ij{\text{ is an edge, that is }}j\in N(i),\\-\sum \limits _{k\in N(i)}C_{ik}&i=j,\\0&{\text{otherwise}}\end{cases}}} where N ( i ) {\displaystyle N(i)} denotes the neighborhood of i {\displaystyle i} , and let M {\displaystyle M} be the diagonal mass matrix M {\displaystyle M} whose i {\displaystyle i} -th entry along the diagonal is the vertex area A i {\displaystyle A_{i}} . Then L = M − 1 C {\displaystyle L=M^{-1}C} is the sought discretization of the Laplacian. A more general overview of mesh operators is given in. === Finite differences === Approximations of the Laplacian, obtained by the finite-difference method or by the finite-element method, can also be called discrete Laplacians. 
For example, the Laplacian in two dimensions can be approximated using the five-point stencil finite-difference method, resulting in Δ f ( x , y ) ≈ f ( x − h , y ) + f ( x + h , y ) + f ( x , y − h ) + f ( x , y + h ) − 4 f ( x , y ) h 2 , {\displaystyle \Delta f(x,y)\approx {\frac {f(x-h,y)+f(x+h,y)+f(x,y-h)+f(x,y+h)-4f(x,y)}{h^{2}}},} where the grid size is h in both dimensions, so that the five-point stencil of a point (x, y) in the grid is { ( x − h , y ) , ( x , y ) , ( x + h , y ) , ( x , y − h ) , ( x , y + h ) } . {\displaystyle \{(x-h,y),(x,y),(x+h,y),(x,y-h),(x,y+h)\}.} If the grid size h = 1, the result is the negative discrete Laplacian on the graph, which is the square lattice grid. There are no constraints here on the values of the function f(x, y) on the boundary of the lattice grid, thus this is the case of no source at the boundary, that is, a no-flux boundary condition (aka, insulation, or homogeneous Neumann boundary condition). The control of the state variable at the boundary, as f(x, y) given on the boundary of the grid (aka, Dirichlet boundary condition), is rarely used for graph Laplacians, but is common in other applications. Multidimensional discrete Laplacians on rectangular cuboid regular grids have very special properties, e.g., they are Kronecker sums of one-dimensional discrete Laplacians, see Kronecker sum of discrete Laplacians, in which case all its eigenvalues and eigenvectors can be explicitly calculated. === Finite-element method === In this approach, the domain is discretized into smaller elements, often triangles or tetrahedra, but other elements such as rectangles or cuboids are possible. The solution space is then approximated using so called form-functions of a pre-defined degree. The differential equation containing the Laplace operator is then transformed into a variational formulation, and a system of equations is constructed (linear or eigenvalue problems). 
The resulting matrices are usually very sparse and can be solved with iterative methods. === Image processing === The discrete Laplace operator is often used in image processing, e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives and is calculated as a sum of differences over the nearest neighbours of the central pixel. Since derivative filters are often sensitive to noise in an image, the Laplace operator is often preceded by a smoothing filter (such as a Gaussian filter) in order to remove the noise before calculating the derivative. The smoothing filter and Laplace filter are often combined into a single filter. ==== Implementation via operator discretization ==== For one-, two- and three-dimensional signals, the discrete Laplacian can be given as convolution with the following kernels: 1D filter: D → x 2 = [ 1 − 2 1 ] {\displaystyle {\vec {D}}_{x}^{2}={\begin{bmatrix}1&-2&1\end{bmatrix}}} , 2D filter: D x y 2 = [ 0 1 0 1 − 4 1 0 1 0 ] {\displaystyle \mathbf {D} _{xy}^{2}={\begin{bmatrix}0&1&0\\1&-4&1\\0&1&0\end{bmatrix}}} . D x y 2 {\displaystyle \mathbf {D} _{xy}^{2}} corresponds to the five-point-stencil finite-difference formula seen previously.
It is stable for very smoothly varying fields, but for equations with rapidly varying solutions a more stable and isotropic form of the Laplacian operator is required, such as the nine-point stencil, which includes the diagonals: 2D filter: D x y 2 = [ 0.25 0.5 0.25 0.5 − 3 0.5 0.25 0.5 0.25 ] {\displaystyle \mathbf {D} _{xy}^{2}={\begin{bmatrix}0.25&0.5&0.25\\0.5&-3&0.5\\0.25&0.5&0.25\end{bmatrix}}} , 3D filter: D x y z 2 {\displaystyle \mathbf {D} _{xyz}^{2}} using seven-point stencil is given by: first plane = [ 0 0 0 0 1 0 0 0 0 ] {\displaystyle {\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}}} ; second plane = [ 0 1 0 1 − 6 1 0 1 0 ] {\displaystyle {\begin{bmatrix}0&1&0\\1&-6&1\\0&1&0\end{bmatrix}}} ; third plane = [ 0 0 0 0 1 0 0 0 0 ] {\displaystyle {\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}}} . and using 27-point stencil by: first plane = 1 26 [ 2 3 2 3 6 3 2 3 2 ] {\displaystyle {\frac {1}{26}}{\begin{bmatrix}2&3&2\\3&6&3\\2&3&2\end{bmatrix}}} ; second plane = 1 26 [ 3 6 3 6 − 88 6 3 6 3 ] {\displaystyle {\frac {1}{26}}{\begin{bmatrix}3&6&3\\6&-88&6\\3&6&3\end{bmatrix}}} ; third plane = 1 26 [ 2 3 2 3 6 3 2 3 2 ] {\displaystyle {\frac {1}{26}}{\begin{bmatrix}2&3&2\\3&6&3\\2&3&2\end{bmatrix}}} . nD filter: For the element a x 1 , x 2 , … , x n {\displaystyle a_{x_{1},x_{2},\dots ,x_{n}}} of the kernel D x 1 , x 2 , … , x n 2 , {\displaystyle \mathbf {D} _{x_{1},x_{2},\dots ,x_{n}}^{2},} a x 1 , x 2 , … , x n = { − 2 n if s = n , 1 if s = n − 1 , 0 otherwise, {\displaystyle a_{x_{1},x_{2},\dots ,x_{n}}=\left\{{\begin{array}{ll}-2n&{\text{if }}s=n,\\1&{\text{if }}s=n-1,\\0&{\text{otherwise,}}\end{array}}\right.} where xi is the position (either −1, 0 or 1) of the element in the kernel in the i-th direction, and s is the number of directions i for which xi = 0. 
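The nD kernel rule just stated is straightforward to implement and to check against the lower-dimensional filters. A sketch (the dict-of-offsets representation is a choice made here):

```python
from itertools import product

# The nD rule above: the centre element gets −2n, the 2n axis neighbours get 1,
# everything else 0, where s counts the zero coordinates of the offset.

def laplace_kernel(n):
    kernel = {}
    for offset in product((-1, 0, 1), repeat=n):
        s = sum(1 for x in offset if x == 0)
        kernel[offset] = -2 * n if s == n else (1 if s == n - 1 else 0)
    return kernel

k2 = laplace_kernel(2)
# n = 2 recovers the 2D five-point filter: centre −4, four edge neighbours 1, corners 0.
print([[k2[(i, j)] for j in (-1, 0, 1)] for i in (-1, 0, 1)])
# [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
```

Note that every kernel produced this way sums to zero, as a Laplacian filter must (it annihilates constants).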
Note that the nD version, which is based on the graph generalization of the Laplacian, assumes all neighbors to be at an equal distance, and hence leads to the following 2D filter with diagonals included, rather than the version above: 2D filter: D x y 2 = [ 1 1 1 1 − 8 1 1 1 1 ] . {\displaystyle \mathbf {D} _{xy}^{2}={\begin{bmatrix}1&1&1\\1&-8&1\\1&1&1\end{bmatrix}}.} These kernels are deduced by using discrete differential quotients. It can be shown that the following discrete approximation of the two-dimensional Laplacian operator as a convex combination of difference operators ∇ γ 2 = ( 1 − γ ) ∇ 5 2 + γ ∇ × 2 = ( 1 − γ ) [ 0 1 0 1 − 4 1 0 1 0 ] + γ [ 1 / 2 0 1 / 2 0 − 2 0 1 / 2 0 1 / 2 ] {\displaystyle \nabla _{\gamma }^{2}=(1-\gamma )\nabla _{5}^{2}+\gamma \nabla _{\times }^{2}=(1-\gamma ){\begin{bmatrix}0&1&0\\1&-4&1\\0&1&0\end{bmatrix}}+\gamma {\begin{bmatrix}1/2&0&1/2\\0&-2&0\\1/2&0&1/2\end{bmatrix}}} for γ ∈ [0, 1] is compatible with discrete scale-space properties, where specifically the value γ = 1/3 gives the best approximation of rotational symmetry. 
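The convex combination ∇²γ can be checked directly: both component stencils reproduce the exact value 4 on the quadratic f(x, y) = x² + y², so the combination does too, for every γ (the sample point and grid size below are arbitrary choices made here).

```python
# ∇²_γ = (1−γ)·(five-point stencil) + γ·(diagonal cross stencil), applied as a
# 3×3 convolution at a single grid point with spacing h = 1.

def gamma_laplacian(f, x, y, gamma, h=1.0):
    five = [(0, 1, 0), (1, -4, 1), (0, 1, 0)]
    cross = [(0.5, 0, 0.5), (0, -2, 0), (0.5, 0, 0.5)]
    total = 0.0
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            w = (1 - gamma) * five[i + 1][j + 1] + gamma * cross[i + 1][j + 1]
            total += w * f(x + i * h, y + j * h)
    return total / h**2

f = lambda x, y: x**2 + y**2  # exact Laplacian is 4 everywhere
print([gamma_laplacian(f, 1.0, 2.0, g) for g in (0.0, 1 / 3, 1.0)])  # all ≈ 4.0
```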
Regarding three-dimensional signals, it is shown that the Laplacian operator can be approximated by the two-parameter family of difference operators ∇ γ 1 , γ 2 2 = ( 1 − γ 1 − γ 2 ) ∇ 7 2 + γ 1 ∇ + 3 2 + γ 2 ∇ × 3 2 ) , {\displaystyle \nabla _{\gamma _{1},\gamma _{2}}^{2}=(1-\gamma _{1}-\gamma _{2})\,\nabla _{7}^{2}+\gamma _{1}\,\nabla _{+^{3}}^{2}+\gamma _{2}\,\nabla _{\times ^{3}}^{2}),} where ( ∇ 7 2 f ) 0 , 0 , 0 = f − 1 , 0 , 0 + f + 1 , 0 , 0 + f 0 , − 1 , 0 + f 0 , + 1 , 0 + f 0 , 0 , − 1 + f 0 , 0 , + 1 − 6 f 0 , 0 , 0 , {\displaystyle (\nabla _{7}^{2}f)_{0,0,0}=f_{-1,0,0}+f_{+1,0,0}+f_{0,-1,0}+f_{0,+1,0}+f_{0,0,-1}+f_{0,0,+1}-6f_{0,0,0},} ( ∇ + 3 2 f ) 0 , 0 , 0 = 1 4 ( f − 1 , − 1 , 0 + f − 1 , + 1 , 0 + f + 1 , − 1 , 0 + f + 1 , + 1 , 0 + f − 1 , 0 , − 1 + f − 1 , 0 , + 1 + f + 1 , 0 , − 1 + f + 1 , 0 , + 1 + f 0 , − 1 , − 1 + f 0 , − 1 , + 1 + f 0 , + 1 , − 1 + f 0 , + 1 , + 1 − 12 f 0 , 0 , 0 ) , {\displaystyle (\nabla _{+^{3}}^{2}f)_{0,0,0}={\frac {1}{4}}(f_{-1,-1,0}+f_{-1,+1,0}+f_{+1,-1,0}+f_{+1,+1,0}+f_{-1,0,-1}+f_{-1,0,+1}+f_{+1,0,-1}+f_{+1,0,+1}+f_{0,-1,-1}+f_{0,-1,+1}+f_{0,+1,-1}+f_{0,+1,+1}-12f_{0,0,0}),} ( ∇ × 3 2 f ) 0 , 0 , 0 = 1 4 ( f − 1 , − 1 , − 1 + f − 1 , − 1 , + 1 + f − 1 , + 1 , − 1 + f − 1 , + 1 , + 1 + f + 1 , − 1 , − 1 + f + 1 , − 1 , + 1 + f + 1 , + 1 , − 1 + f + 1 , + 1 , + 1 − 8 f 0 , 0 , 0 ) . {\displaystyle (\nabla _{\times ^{3}}^{2}f)_{0,0,0}={\frac {1}{4}}(f_{-1,-1,-1}+f_{-1,-1,+1}+f_{-1,+1,-1}+f_{-1,+1,+1}+f_{+1,-1,-1}+f_{+1,-1,+1}+f_{+1,+1,-1}+f_{+1,+1,+1}-8f_{0,0,0}).} It can be shown by Taylor series analysis that combinations of values of γ 1 {\displaystyle \gamma _{1}} and γ 2 {\displaystyle \gamma _{2}} for which 3 γ 1 + 6 γ 2 = 2 {\displaystyle 3\gamma _{1}+6\gamma _{2}=2} give the best approximations of rotational symmetry. 
==== Implementation via continuous reconstruction ==== A discrete signal, such as an image, can be viewed as a discrete representation of a continuous function f ( r ¯ ) {\displaystyle f({\bar {r}})} , where the coordinate vector r ¯ ∈ R n {\displaystyle {\bar {r}}\in R^{n}} and the value domain is real f ∈ R {\displaystyle f\in R} . Differentiation is therefore directly applicable to the continuous function f {\displaystyle f} . In particular, any discrete image, under reasonable assumptions on the discretization process, e.g. assuming band-limited functions or wavelet-expandable functions, can be reconstructed by means of well-behaved interpolation functions underlying the reconstruction formulation, f ( r ¯ ) = ∑ k ∈ K f k μ k ( r ¯ ) {\displaystyle f({\bar {r}})=\sum _{k\in K}f_{k}\mu _{k}({\bar {r}})} where f k ∈ R {\displaystyle f_{k}\in R} are discrete representations of f {\displaystyle f} on grid K {\displaystyle K} and μ k {\displaystyle \mu _{k}} are interpolation functions specific to the grid K {\displaystyle K} . On a uniform grid, such as images, and for band-limited functions, the interpolation functions are shift invariant, amounting to μ k ( r ¯ ) = μ ( r ¯ − r ¯ k ) {\displaystyle \mu _{k}({\bar {r}})=\mu ({\bar {r}}-{\bar {r}}_{k})} with μ {\displaystyle \mu } being an appropriately dilated sinc function defined in n {\displaystyle n} dimensions, i.e. r ¯ = ( x 1 , x 2 . . . x n ) T {\displaystyle {\bar {r}}=(x_{1},x_{2}...x_{n})^{T}} . Other approximations of μ {\displaystyle \mu } on uniform grids are appropriately dilated Gaussian functions in n {\displaystyle n} dimensions.
Accordingly, the discrete Laplacian becomes a discrete version of the Laplacian of the continuous f ( r ¯ ) {\displaystyle f({\bar {r}})} ∇ 2 f ( r ¯ k ) = ∑ k ′ ∈ K f k ′ ( ∇ 2 μ ( r ¯ − r ¯ k ′ ) ) | r ¯ = r ¯ k {\displaystyle \nabla ^{2}f({\bar {r}}_{k})=\sum _{k'\in K}f_{k'}(\nabla ^{2}\mu ({\bar {r}}-{\bar {r}}_{k'}))|_{{\bar {r}}={\bar {r}}_{k}}} which in turn is a convolution with the Laplacian of the interpolation function on the uniform (image) grid K {\displaystyle K} . An advantage of using Gaussians as interpolation functions is that they yield linear operators, including Laplacians, that are free from rotational artifacts of the coordinate frame in which f {\displaystyle f} is represented via f k {\displaystyle f_{k}} , in n {\displaystyle n} -dimensions, and are frequency aware by definition. A linear operator has not only a limited range in the r ¯ {\displaystyle {\bar {r}}} domain but also an effective range in the frequency domain (alternatively Gaussian scale space) which can be controlled explicitly via the variance of the Gaussian in a principled manner. The resulting filtering can be implemented by separable filters and decimation (signal processing)/pyramid (image processing) representations for further computational efficiency in n {\displaystyle n} -dimensions. In other words, the discrete Laplacian filter of any size can be generated conveniently as the sampled Laplacian of Gaussian with spatial size befitting the needs of a particular application as controlled by its variance. Monomials which are non-linear operators can also be implemented using a similar reconstruction and approximation approach provided that the signal is sufficiently over-sampled. Thereby, such non-linear operators e.g. Structure Tensor, and Generalized Structure Tensor which are used in pattern recognition for their total least-square optimality in orientation estimation, can be realized. 
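The "sampled Laplacian of Gaussian" idea can be sketched in one dimension (the values of σ, the grid spacing, the truncation radius, and the evaluation point below are all illustrative choices made here): convolving samples of f with samples of the second derivative of a Gaussian approximates the Gaussian-smoothed second derivative, which for f(x) = x² is exactly 2.

```python
import math

# 1D sampled Laplacian-of-Gaussian filter: g'' for a Gaussian g with standard
# deviation sigma, sampled with spacing h and truncated at 8 sigma.
sigma, h = 2.0, 0.5

def log_kernel(x):
    g = math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    return (x * x / sigma**4 - 1 / sigma**2) * g  # d²/dx² of the Gaussian

f = lambda x: x**2  # second derivative is 2 everywhere
x0 = 1.7            # arbitrary evaluation point
n = int(8 * sigma / h)
# Riemann-sum convolution (f * g'')(x0) = (f'' * g)(x0) = 2 · ∫g = 2.
result = sum(log_kernel(k * h) * f(x0 - k * h) * h for k in range(-n, n + 1))
print(result)  # ≈ 2.0
```

The variance σ² controls the effective frequency range of the filter, as described above; enlarging σ simply widens the sampled kernel without changing the result on quadratics.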
== Spectrum == The spectrum of the discrete Laplacian on an infinite grid is of key interest; since it is a self-adjoint operator, it has a real spectrum. For the convention Δ = I − M {\displaystyle \Delta =I-M} on Z {\displaystyle Z} , the spectrum lies within [ 0 , 2 ] {\displaystyle [0,2]} (as the averaging operator has spectral values in [ − 1 , 1 ] {\displaystyle [-1,1]} ). This may also be seen by applying the Fourier transform. Note that the discrete Laplacian on an infinite grid has purely absolutely continuous spectrum, and therefore, no eigenvalues or eigenfunctions. == Theorems == If the graph is an infinite square lattice grid, then this definition of the Laplacian can be shown to correspond to the continuous Laplacian in the limit of an infinitely fine grid. Thus, for example, on a one-dimensional grid we have ∂ 2 F ∂ x 2 = lim ϵ → 0 [ F ( x + ϵ ) − F ( x ) ] − [ F ( x ) − F ( x − ϵ ) ] ϵ 2 . {\displaystyle {\frac {\partial ^{2}F}{\partial x^{2}}}=\lim _{\epsilon \rightarrow 0}{\frac {[F(x+\epsilon )-F(x)]-[F(x)-F(x-\epsilon )]}{\epsilon ^{2}}}.} This definition of the Laplacian is commonly used in numerical analysis and in image processing. In image processing, it is considered to be a type of digital filter, more specifically an edge filter, called the Laplace filter. == Discrete heat equation == Suppose ϕ {\textstyle \phi } describes a temperature distribution across a graph, where ϕ i {\textstyle \phi _{i}} is the temperature at vertex i {\textstyle i} . According to Newton's law of cooling, the heat transferred from node i {\textstyle i} to node j {\textstyle j} is proportional to ϕ i − ϕ j {\textstyle \phi _{i}-\phi _{j}} if nodes i {\textstyle i} and j {\textstyle j} are connected (if they are not connected, no heat is transferred). 
Then, for thermal conductivity k {\textstyle k} , d ϕ i d t = − k ∑ j A i j ( ϕ i − ϕ j ) = − k ( ϕ i ∑ j A i j − ∑ j A i j ϕ j ) = − k ( ϕ i deg ( v i ) − ∑ j A i j ϕ j ) = − k ∑ j ( δ i j deg ( v i ) − A i j ) ϕ j = − k ∑ j ( L i j ) ϕ j . {\displaystyle {\begin{aligned}{\frac {d\phi _{i}}{dt}}&=-k\sum _{j}A_{ij}\left(\phi _{i}-\phi _{j}\right)\\&=-k\left(\phi _{i}\sum _{j}A_{ij}-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\left(\phi _{i}\ \deg(v_{i})-\sum _{j}A_{ij}\phi _{j}\right)\\&=-k\sum _{j}\left(\delta _{ij}\ \deg(v_{i})-A_{ij}\right)\phi _{j}\\&=-k\sum _{j}\left(L_{ij}\right)\phi _{j}.\end{aligned}}} In matrix-vector notation, d ϕ d t = − k ( D − A ) ϕ = − k L ϕ , {\displaystyle {\begin{aligned}{\frac {d\phi }{dt}}&=-k(D-A)\phi \\&=-kL\phi ,\end{aligned}}} which gives d ϕ d t + k L ϕ = 0. {\displaystyle {\frac {d\phi }{dt}}+kL\phi =0.} Notice that this equation takes the same form as the heat equation, where the matrix −L is replacing the Laplacian operator ∇ 2 {\textstyle \nabla ^{2}} ; hence, the "graph Laplacian". To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write ϕ {\textstyle \phi } as a linear combination of eigenvectors v i {\textstyle \mathbf {v} _{i}} of L (so that L v i = λ i v i {\textstyle L\mathbf {v} _{i}=\lambda _{i}\mathbf {v} _{i}} ) with time-dependent coefficients, ϕ ( t ) = ∑ i c i ( t ) v i . 
{\textstyle \phi (t)=\sum _{i}c_{i}(t)\mathbf {v} _{i}.} Plugging into the original expression (because L is a symmetric matrix, its unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} are orthogonal): 0 = d ( ∑ i c i ( t ) v i ) d t + k L ( ∑ i c i ( t ) v i ) = ∑ i [ d c i ( t ) d t v i + k c i ( t ) L v i ] = ∑ i [ d c i ( t ) d t v i + k c i ( t ) λ i v i ] ⇒ 0 = d c i ( t ) d t + k λ i c i ( t ) , {\displaystyle {\begin{aligned}0={}&{\frac {d\left(\sum _{i}c_{i}(t)\mathbf {v} _{i}\right)}{dt}}+kL\left(\sum _{i}c_{i}(t)\mathbf {v} _{i}\right)\\{}={}&\sum _{i}\left[{\frac {dc_{i}(t)}{dt}}\mathbf {v} _{i}+kc_{i}(t)L\mathbf {v} _{i}\right]\\{}={}&\sum _{i}\left[{\frac {dc_{i}(t)}{dt}}\mathbf {v} _{i}+kc_{i}(t)\lambda _{i}\mathbf {v} _{i}\right]\\\Rightarrow 0={}&{\frac {dc_{i}(t)}{dt}}+k\lambda _{i}c_{i}(t),\\\end{aligned}}} whose solution is c i ( t ) = c i ( 0 ) e − k λ i t . {\displaystyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}.} As shown before, the eigenvalues λ i {\textstyle \lambda _{i}} of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given λ i {\textstyle \lambda _{i}} and the initial condition c i ( 0 ) {\textstyle c_{i}(0)} , the solution at any time t can be found. To find c i ( 0 ) {\textstyle c_{i}(0)} for each i {\textstyle i} in terms of the overall initial condition ϕ ( 0 ) {\textstyle \phi (0)} , simply project ϕ ( 0 ) {\textstyle \phi (0)} onto the unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} ; c i ( 0 ) = ⟨ ϕ ( 0 ) , v i ⟩ {\displaystyle c_{i}(0)=\left\langle \phi (0),\mathbf {v} _{i}\right\rangle } . This approach has been applied to quantitative heat transfer modelling on unstructured grids. In the case of undirected graphs, this works because L {\textstyle L} is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. 
So the projection onto the eigenvectors of L {\textstyle L} is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other. === Equilibrium behavior === To understand lim t → ∞ ϕ ( t ) {\textstyle \lim _{t\to \infty }\phi (t)} , the only terms c i ( t ) = c i ( 0 ) e − k λ i t {\textstyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}} that remain are those where λ i = 0 {\textstyle \lambda _{i}=0} , since lim t → ∞ e − k λ i t = { 0 , if λ i > 0 1 , if λ i = 0 {\displaystyle \lim _{t\to \infty }e^{-k\lambda _{i}t}={\begin{cases}0,&{\text{if}}&\lambda _{i}>0\\1,&{\text{if}}&\lambda _{i}=0\end{cases}}} In other words, the equilibrium state of the system is determined completely by the kernel of L {\textstyle L} . Since by definition, ∑ j L i j = 0 {\textstyle \sum _{j}L_{ij}=0} , the vector v 1 {\textstyle \mathbf {v} ^{1}} of all ones is in the kernel. If there are k {\textstyle k} disjoint connected components in the graph, then this vector of all ones can be split into the sum of k {\textstyle k} independent λ = 0 {\textstyle \lambda =0} eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere. The consequence of this is that for a given initial condition ϕ ( 0 ) {\textstyle \phi (0)} for a graph with N {\textstyle N} vertices lim t → ∞ ϕ ( t ) = ⟨ ϕ ( 0 ) , v 1 ⟩ v 1 {\displaystyle \lim _{t\to \infty }\phi (t)=\left\langle \phi (0),\mathbf {v^{1}} \right\rangle \mathbf {v^{1}} } where v 1 = 1 N [ 1 , 1 , … , 1 ] {\displaystyle \mathbf {v^{1}} ={\frac {1}{\sqrt {N}}}[1,1,\ldots ,1]} For each element ϕ j {\textstyle \phi _{j}} of ϕ {\textstyle \phi } , i.e. for each vertex j {\textstyle j} in the graph, it can be rewritten as lim t → ∞ ϕ j ( t ) = 1 N ∑ i = 1 N ϕ i ( 0 ) {\displaystyle \lim _{t\to \infty }\phi _{j}(t)={\frac {1}{N}}\sum _{i=1}^{N}\phi _{i}(0)} . 
In other words, at steady state, the value of ϕ {\textstyle \phi } converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other. === Example of the operator on a grid === This section shows an example of a function ϕ {\textstyle \phi } diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid. The complete Matlab source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvectors of the Laplacian matrix, and simulating the exponential decay of these projected initial conditions. == Discrete Schrödinger operator == Let P : V → R {\displaystyle P\colon V\rightarrow R} be a potential function defined on the graph. Note that P can be considered to be a multiplicative operator acting diagonally on ϕ {\displaystyle \phi } ( P ϕ ) ( v ) = P ( v ) ϕ ( v ) . {\displaystyle (P\phi )(v)=P(v)\phi (v).} Then H = Δ + P {\displaystyle H=\Delta +P} is the discrete Schrödinger operator, an analog of the continuous Schrödinger operator. If the number of edges meeting at a vertex is uniformly bounded, and the potential is bounded, then H is bounded and self-adjoint.
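As a minimal sketch (the graph and potential here are arbitrary, chosen only for illustration), the operator H can be built as a matrix by adding the graph Laplacian and a diagonal potential:

```python
# Illustration (graph and potential chosen arbitrarily): the discrete
# Schroedinger operator H = Delta + P, with P acting diagonally.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle graph
Delta = np.diag(A.sum(axis=1)) - A       # graph Laplacian
P = np.diag([0.5, -1.0, 2.0])            # (P phi)(v) = P(v) phi(v)

H = Delta + P
assert np.allclose(H, H.T)               # H is symmetric (self-adjoint)
print(np.linalg.eigvalsh(H))             # its real spectrum
```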
The spectral properties of this Hamiltonian can be studied with Stone's theorem; this is a consequence of the duality between posets and Boolean algebras. On regular lattices, the operator typically has both traveling-wave as well as Anderson localization solutions, depending on whether the potential is periodic or random. The Green's function of the discrete Schrödinger operator is given in the resolvent formalism by G ( v , w ; λ ) = ⟨ δ v | 1 H − λ | δ w ⟩ {\displaystyle G(v,w;\lambda )=\left\langle \delta _{v}\left|{\frac {1}{H-\lambda }}\right|\delta _{w}\right\rangle } where δ w {\displaystyle \delta _{w}} is understood to be the Kronecker delta function on the graph: δ w ( v ) = δ w v {\displaystyle \delta _{w}(v)=\delta _{wv}} ; that is, it equals 1 if v=w and 0 otherwise. For fixed w ∈ V {\displaystyle w\in V} and λ {\displaystyle \lambda } a complex number, the Green's function considered to be a function of v is the unique solution to ( H − λ ) G ( v , w ; λ ) = δ w ( v ) . {\displaystyle (H-\lambda )G(v,w;\lambda )=\delta _{w}(v).} == ADE classification == Certain equations involving the discrete Laplacian only have solutions on the simply-laced Dynkin diagrams (all edges multiplicity 1), and are an example of the ADE classification. Specifically, the only positive solutions to the homogeneous equation: Δ ϕ = ϕ , {\displaystyle \Delta \phi =\phi ,} in words, "Twice any label is the sum of the labels on adjacent vertices," are on the extended (affine) ADE Dynkin diagrams, of which there are 2 infinite families (A and D) and 3 exceptions (E). The resulting numbering is unique up to scale, and if the smallest value is set at 1, the other numbers are integers, ranging up to 6. The ordinary ADE graphs are the only graphs that admit a positive labeling with the following property: Twice any label minus two is the sum of the labels on adjacent vertices. In terms of the Laplacian, the positive solutions to the inhomogeneous equation: Δ ϕ = ϕ − 2. 
{\displaystyle \Delta \phi =\phi -2.} The resulting numbering is unique (scale is specified by the "2"), and consists of integers; for E8 they range from 58 to 270, and have been observed as early as 1968. == See also == Spectral shape analysis Electrical network Kronecker sum of discrete Laplacians Discrete calculus == References == == External links == Ollivier, Yann (2004). "Spectral gap of a graph". Archived from the original on 2007-05-23.
|
Wikipedia:Discrete Poisson equation#0
|
In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics. == On a two-dimensional rectangular grid == Using the finite difference numerical method to discretize the 2-dimensional Poisson equation (assuming a uniform spatial discretization, Δ x = Δ y {\displaystyle \Delta x=\Delta y} ) on an m × n grid gives the following formula: ( ∇ 2 u ) i j = 1 Δ x 2 ( u i + 1 , j + u i − 1 , j + u i , j + 1 + u i , j − 1 − 4 u i j ) = g i j {\displaystyle ({\nabla }^{2}u)_{ij}={\frac {1}{\Delta x^{2}}}(u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}-4u_{ij})=g_{ij}} where 2 ≤ i ≤ m − 1 {\displaystyle 2\leq i\leq m-1} and 2 ≤ j ≤ n − 1 {\displaystyle 2\leq j\leq n-1} . The preferred arrangement of the solution vector is to use natural ordering which, prior to removing boundary elements, would look like: u = [ u 11 , u 21 , … , u m 1 , u 12 , u 22 , … , u m 2 , … , u m n ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{11},u_{21},\ldots ,u_{m1},u_{12},u_{22},\ldots ,u_{m2},\ldots ,u_{mn}\end{bmatrix}}^{\mathsf {T}}} This will result in an mn × mn linear system: A u = b {\displaystyle A\mathbf {u} =\mathbf {b} } where A = [ D − I 0 0 0 ⋯ 0 − I D − I 0 0 ⋯ 0 0 − I D − I 0 ⋯ 0 ⋮ ⋱ ⋱ ⋱ ⋱ ⋱ ⋮ 0 ⋯ 0 − I D − I 0 0 ⋯ ⋯ 0 − I D − I 0 ⋯ ⋯ ⋯ 0 − I D ] , {\displaystyle A={\begin{bmatrix}~D&-I&~0&~0&~0&\cdots &~0\\-I&~D&-I&~0&~0&\cdots &~0\\~0&-I&~D&-I&~0&\cdots &~0\\\vdots &\ddots &\ddots &\ddots &\ddots &\ddots &\vdots \\~0&\cdots &~0&-I&~D&-I&~0\\~0&\cdots &\cdots &~0&-I&~D&-I\\~0&\cdots &\cdots &\cdots &~0&-I&~D\end{bmatrix}},} I {\displaystyle I} is the m × m identity matrix, and D {\displaystyle D} , also m × m, is given by: D = [ 4 − 1 0 0 0 ⋯ 0 − 1 4 − 1 0 0 ⋯ 0 0 − 1 4 
− 1 0 ⋯ 0 ⋮ ⋱ ⋱ ⋱ ⋱ ⋱ ⋮ 0 ⋯ 0 − 1 4 − 1 0 0 ⋯ ⋯ 0 − 1 4 − 1 0 ⋯ ⋯ ⋯ 0 − 1 4 ] , {\displaystyle D={\begin{bmatrix}~4&-1&~0&~0&~0&\cdots &~0\\-1&~4&-1&~0&~0&\cdots &~0\\~0&-1&~4&-1&~0&\cdots &~0\\\vdots &\ddots &\ddots &\ddots &\ddots &\ddots &\vdots \\~0&\cdots &~0&-1&~4&-1&~0\\~0&\cdots &\cdots &~0&-1&~4&-1\\~0&\cdots &\cdots &\cdots &~0&-1&~4\end{bmatrix}},} and b {\displaystyle \mathbf {b} } is defined by b = − Δ x 2 [ g 11 , g 21 , … , g m 1 , g 12 , g 22 , … , g m 2 , … , g m n ] T . {\displaystyle \mathbf {b} =-\Delta x^{2}{\begin{bmatrix}g_{11},g_{21},\ldots ,g_{m1},g_{12},g_{22},\ldots ,g_{m2},\ldots ,g_{mn}\end{bmatrix}}^{\mathsf {T}}.} For each u i j {\displaystyle u_{ij}} equation, the columns of D {\displaystyle D} correspond to a block of m {\displaystyle m} components in u {\displaystyle u} : [ u 1 j , u 2 j , … , u i − 1 , j , u i j , u i + 1 , j , … , u m j ] T {\displaystyle {\begin{bmatrix}u_{1j},&u_{2j},&\ldots ,&u_{i-1,j},&u_{ij},&u_{i+1,j},&\ldots ,&u_{mj}\end{bmatrix}}^{\mathsf {T}}} while the columns of I {\displaystyle I} to the left and right of D {\displaystyle D} each correspond to other blocks of m {\displaystyle m} components within u {\displaystyle u} : [ u 1 , j − 1 , u 2 , j − 1 , … , u i − 1 , j − 1 , u i , j − 1 , u i + 1 , j − 1 , … , u m , j − 1 ] T {\displaystyle {\begin{bmatrix}u_{1,j-1},&u_{2,j-1},&\ldots ,&u_{i-1,j-1},&u_{i,j-1},&u_{i+1,j-1},&\ldots ,&u_{m,j-1}\end{bmatrix}}^{\mathsf {T}}} and [ u 1 , j + 1 , u 2 , j + 1 , … , u i − 1 , j + 1 , u i , j + 1 , u i + 1 , j + 1 , … , u m , j + 1 ] T {\displaystyle {\begin{bmatrix}u_{1,j+1},&u_{2,j+1},&\ldots ,&u_{i-1,j+1},&u_{i,j+1},&u_{i+1,j+1},&\ldots ,&u_{m,j+1}\end{bmatrix}}^{\mathsf {T}}} respectively. From the above, it can be inferred that there are n {\displaystyle n} block columns of m {\displaystyle m} in A {\displaystyle A} . 
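One convenient way to assemble A in practice (a sketch, not part of the article; function names are illustrative) is as a Kronecker sum of the one-dimensional (−1, 2, −1) second-difference matrix with itself:

```python
# Illustration: assembling the block-tridiagonal matrix A from Kronecker
# products of the one-dimensional (-1, 2, -1) second-difference matrix.
import numpy as np

def second_diff(p):
    """p x p tridiagonal matrix with 2 on the diagonal and -1 next to it."""
    return 2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)

def poisson_matrix(m, n):
    """mn x mn matrix A: blocks D on the diagonal and -I next to them."""
    return np.kron(np.eye(n), second_diff(m)) + np.kron(second_diff(n), np.eye(m))

m, n = 3, 4
A = poisson_matrix(m, n)
D = second_diff(m) + 2 * np.eye(m)              # the m x m block D (4 on its diagonal)
assert np.allclose(A[:m, :m], D)                # first diagonal block is D
assert np.allclose(A[:m, m:2 * m], -np.eye(m))  # neighbouring block is -I
```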
It is important to note that prescribed values of u {\displaystyle u} (usually lying on the boundary) would have their corresponding elements removed from I {\displaystyle I} and D {\displaystyle D} . For the common case that all the nodes on the boundary are set, we have 2 ≤ i ≤ m − 1 {\displaystyle 2\leq i\leq m-1} and 2 ≤ j ≤ n − 1 {\displaystyle 2\leq j\leq n-1} , and the system would have the dimensions (m − 2)(n − 2) × (m− 2)(n − 2), where D {\displaystyle D} and I {\displaystyle I} would have dimensions (m − 2) × (m − 2). == Example == For a 3×3 ( m = 3 {\displaystyle m=3} and n = 3 {\displaystyle n=3} ) grid with all the boundary nodes prescribed, the system would look like: [ U ] = [ u 22 , u 32 , u 42 , u 23 , u 33 , u 43 , u 24 , u 34 , u 44 ] T {\displaystyle {\begin{bmatrix}U\end{bmatrix}}={\begin{bmatrix}u_{22},u_{32},u_{42},u_{23},u_{33},u_{43},u_{24},u_{34},u_{44}\end{bmatrix}}^{\mathsf {T}}} with A = [ 4 − 1 0 − 1 0 0 0 0 0 − 1 4 − 1 0 − 1 0 0 0 0 0 − 1 4 0 0 − 1 0 0 0 − 1 0 0 4 − 1 0 − 1 0 0 0 − 1 0 − 1 4 − 1 0 − 1 0 0 0 − 1 0 − 1 4 0 0 − 1 0 0 0 − 1 0 0 4 − 1 0 0 0 0 0 − 1 0 − 1 4 − 1 0 0 0 0 0 − 1 0 − 1 4 ] {\displaystyle A=\left[{\begin{array}{ccc|ccc|ccc}~4&-1&~0&-1&~0&~0&~0&~0&~0\\-1&~4&-1&~0&-1&~0&~0&~0&~0\\~0&-1&~4&~0&~0&-1&~0&~0&~0\\\hline -1&~0&~0&~4&-1&~0&-1&~0&~0\\~0&-1&~0&-1&~4&-1&~0&-1&~0\\~0&~0&-1&~0&-1&~4&~0&~0&-1\\\hline ~0&~0&~0&-1&~0&~0&~4&-1&~0\\~0&~0&~0&~0&-1&~0&-1&~4&-1\\~0&~0&~0&~0&~0&-1&~0&-1&~4\end{array}}\right]} and b = [ − Δ x 2 g 22 + u 12 + u 21 − Δ x 2 g 32 + u 31 − Δ x 2 g 42 + u 52 + u 41 − Δ x 2 g 23 + u 13 − Δ x 2 g 33 − Δ x 2 g 43 + u 53 − Δ x 2 g 24 + u 14 + u 25 − Δ x 2 g 34 + u 35 − Δ x 2 g 44 + u 54 + u 45 ] . 
{\displaystyle \mathbf {b} =\left[{\begin{array}{l}-\Delta x^{2}g_{22}+u_{12}+u_{21}\\-\Delta x^{2}g_{32}+u_{31}~~~~~~~~\\-\Delta x^{2}g_{42}+u_{52}+u_{41}\\-\Delta x^{2}g_{23}+u_{13}~~~~~~~~\\-\Delta x^{2}g_{33}~~~~~~~~~~~~~~~~\\-\Delta x^{2}g_{43}+u_{53}~~~~~~~~\\-\Delta x^{2}g_{24}+u_{14}+u_{25}\\-\Delta x^{2}g_{34}+u_{35}~~~~~~~~\\-\Delta x^{2}g_{44}+u_{54}+u_{45}\end{array}}\right].} As can be seen, the boundary u {\displaystyle u} 's are brought to the right-hand side of the equation. The entire system is 9 × 9 while D {\displaystyle D} and I {\displaystyle I} are 3 × 3 and given by: D = [ 4 − 1 0 − 1 4 − 1 0 − 1 4 ] {\displaystyle D={\begin{bmatrix}~4&-1&~0\\-1&~4&-1\\~0&-1&~4\\\end{bmatrix}}} and − I = [ − 1 0 0 0 − 1 0 0 0 − 1 ] . {\displaystyle -I={\begin{bmatrix}-1&~0&~0\\~0&-1&~0\\~0&~0&-1\end{bmatrix}}.} == Methods of solution == Because [ A ] {\displaystyle {\begin{bmatrix}A\end{bmatrix}}} is block tridiagonal and sparse, many methods of solution have been developed to optimally solve this linear system for [ U ] {\displaystyle {\begin{bmatrix}U\end{bmatrix}}} . Among the methods are a generalized Thomas algorithm with a resulting computational complexity of O ( n 2.5 ) {\displaystyle O(n^{2.5})} , cyclic reduction, successive overrelaxation, which has a complexity of O ( n 1.5 ) {\displaystyle O(n^{1.5})} , and the fast Fourier transform, which is O ( n log ( n ) ) {\displaystyle O(n\log(n))} . An optimal O ( n ) {\displaystyle O(n)} solution can also be computed using multigrid methods. == Applications == In computational fluid dynamics, for the solution of an incompressible flow problem, the incompressibility condition acts as a constraint for the pressure. There is no explicit form available for pressure in this case due to a strong coupling of the velocity and pressure fields. In this condition, by taking the divergence of all terms in the momentum equation, one obtains the pressure Poisson equation.
For an incompressible flow this constraint is given by: ∂ v x ∂ x + ∂ v y ∂ y + ∂ v z ∂ z = 0 {\displaystyle {\frac {\partial v_{x}}{\partial x}}+{\frac {\partial v_{y}}{\partial y}}+{\frac {\partial v_{z}}{\partial z}}=0} where v x {\displaystyle v_{x}} is the velocity in the x {\displaystyle x} direction, v y {\displaystyle v_{y}} is the velocity in the y {\displaystyle y} direction, and v z {\displaystyle v_{z}} is the velocity in the z {\displaystyle z} direction. Taking the divergence of the momentum equation and using the incompressibility constraint, the pressure Poisson equation is formed, given by: ∇ 2 p = f ( ν , V ) {\displaystyle \nabla ^{2}p=f(\nu ,V)} where ν {\displaystyle \nu } is the kinematic viscosity of the fluid and V {\displaystyle V} is the velocity vector. The discrete Poisson's equation arises in the theory of Markov chains. It appears as the relative value function for the dynamic programming equation in a Markov decision process, and as the control variate for application in simulation variance reduction. == Footnotes == == References == Hoffman, Joe D., Numerical Methods for Engineers and Scientists, 4th Ed., McGraw–Hill Inc., New York, 1992. Sweet, Roland A., SIAM Journal on Numerical Analysis, Vol. 11, No. 3, June 1974, 506–520. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.4. Fourier and Cyclic Reduction Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on August 11, 2011. Retrieved August 18, 2011.
|
Wikipedia:Discrete calculus#0
|
Discrete calculus, or the calculus of discrete functions, is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change. Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus. The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment Δ x {\displaystyle \Delta x} of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally, the limit of discrete calculus as Δ x → 0 {\displaystyle \Delta x\to 0} is infinitesimal calculus. Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications. == Two initial constructions == Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function defined at several points of the real line, the difference quotient at one of those points is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function.
By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The difference quotient, however, can take the squaring function as an input. This means that the difference quotient takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function. Suppose the functions are defined at points separated by an increment Δ x = h > 0 {\displaystyle \Delta x=h>0} : a , a + h , a + 2 h , … , a + n h , … {\displaystyle a,a+h,a+2h,\ldots ,a+nh,\ldots } The "doubling function" may be denoted by g ( x ) = 2 x {\displaystyle g(x)=2x} and the "squaring function" by f ( x ) = x 2 {\displaystyle f(x)=x^{2}} . The "difference quotient" is the rate of change of the function over one of the intervals [ x , x + h ] {\displaystyle [x,x+h]} defined by the formula: f ( x + h ) − f ( x ) h . {\displaystyle {\frac {f(x+h)-f(x)}{h}}.} It takes the function f {\displaystyle f} as an input, that is, all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g ( x ) = 2 x + h {\displaystyle g(x)=2x+h} , as it will turn out.
As a matter of convenience, the new function may be defined at the middle points of the above intervals: a + h / 2 , a + h + h / 2 , a + 2 h + h / 2 , . . . , a + n h + h / 2 , . . . {\displaystyle a+h/2,a+h+h/2,a+2h+h/2,...,a+nh+h/2,...} As the rate of change is that for the whole interval [ x , x + h ] {\displaystyle [x,x+h]} , any point within it can be used as such a reference or, even better, the whole interval, which makes the difference quotient a 1 {\displaystyle 1} -cochain. The most common notation for the difference quotient is: Δ f Δ x ( x + h / 2 ) = f ( x + h ) − f ( x ) h . {\displaystyle {\frac {\Delta f}{\Delta x}}(x+h/2)={\frac {f(x+h)-f(x)}{h}}.} If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if f {\displaystyle f} is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of f {\displaystyle f} is how the position is changing in time, that is, it is the velocity of the ball. If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as y = m x + b {\displaystyle y=mx+b} , where x {\displaystyle x} is the independent variable, y {\displaystyle y} is the dependent variable, b {\displaystyle b} is the y {\displaystyle y} -intercept, and: m = rise run = change in y change in x = Δ y Δ x . {\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.} This gives an exact value for the slope of a straight line. If the function is not linear, however, then the change in y {\displaystyle y} divided by the change in x {\displaystyle x} varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input.
To be concrete, let f {\displaystyle f} be a function, and fix a point x {\displaystyle x} in the domain of f {\displaystyle f} . ( x , f ( x ) ) {\displaystyle (x,f(x))} is a point on the graph of the function. If h {\displaystyle h} is the increment of x {\displaystyle x} , then x + h {\displaystyle x+h} is the next value of x {\displaystyle x} . Therefore, ( x + h , f ( x + h ) ) {\displaystyle (x+h,f(x+h))} is the increment of ( x , f ( x ) ) {\displaystyle (x,f(x))} . The slope of the line between these two points is m = f ( x + h ) − f ( x ) ( x + h ) − x = f ( x + h ) − f ( x ) h . {\displaystyle m={\frac {f(x+h)-f(x)}{(x+h)-x}}={\frac {f(x+h)-f(x)}{h}}.} So m {\displaystyle m} is the slope of the line between ( x , f ( x ) ) {\displaystyle (x,f(x))} and ( x + h , f ( x + h ) ) {\displaystyle (x+h,f(x+h))} . Here is a particular example, the difference quotient of the squaring function. Let f ( x ) = x 2 {\displaystyle f(x)=x^{2}} be the squaring function. Then: Δ f Δ x ( x ) = ( x + h ) 2 − x 2 h = x 2 + 2 h x + h 2 − x 2 h = 2 h x + h 2 h = 2 x + h . {\displaystyle {\begin{aligned}{\frac {\Delta f}{\Delta x}}(x)&={(x+h)^{2}-x^{2} \over {h}}\\&={x^{2}+2hx+h^{2}-x^{2} \over {h}}\\&={2hx+h^{2} \over {h}}\\&=2x+h.\end{aligned}}} The difference quotient of the difference quotient is called the second difference quotient and it is defined at a + h , a + 2 h , a + 3 h , … , a + n h , … {\displaystyle a+h,a+2h,a+3h,\ldots ,a+nh,\ldots } and so on. Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator. The Riemann sum inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis. A motivating example is the distances traveled in a given time. 
distance = speed ⋅ time {\displaystyle {\text{distance}}={\text{speed}}\cdot {\text{time}}} If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval. When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by a {\displaystyle a} and b {\displaystyle b} ) is the area of the shaded region s {\displaystyle s} . So, the interval between a {\displaystyle a} and b {\displaystyle b} is divided into a number of equal segments, the length of each segment represented by the symbol Δ x {\displaystyle \Delta x} . For each small segment, we have one value of the function f ( x ) {\displaystyle f(x)} . Call that value v {\displaystyle v} . Then the area of the rectangle with base Δ x {\displaystyle \Delta x} and height v {\displaystyle v} gives the distance (time Δ x {\displaystyle \Delta x} multiplied by speed v {\displaystyle v} ) traveled in that segment.
Associated with each segment is the value of the function above it, f ( x ) = v {\displaystyle f(x)=v} . The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled. Suppose a function is defined at the mid-points of the intervals of equal length Δ x = h > 0 {\displaystyle \Delta x=h>0} : a + h / 2 , a + h + h / 2 , a + 2 h + h / 2 , … , a + n h − h / 2 , … {\displaystyle a+h/2,a+h+h/2,a+2h+h/2,\ldots ,a+nh-h/2,\ldots } Then the Riemann sum from a {\displaystyle a} to b = a + n h {\displaystyle b=a+nh} in sigma notation is: ∑ i = 1 n f ( a + i h − h / 2 ) Δ x . {\displaystyle \sum _{i=1}^{n}f(a+ih-h/2)\,\Delta x.} As this computation is carried out for each n {\displaystyle n} , the new function is defined at the points: a , a + h , a + 2 h , … , a + n h , … {\displaystyle a,a+h,a+2h,\ldots ,a+nh,\ldots } The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration. The fundamental theorem of calculus: If a function f {\displaystyle f} is defined on a partition of the interval [ a , b ] {\displaystyle [a,b]} , b = a + n h {\displaystyle b=a+nh} , and if F {\displaystyle F} is a function whose difference quotient is f {\displaystyle f} , then we have: ∑ i = 0 n − 1 f ( a + i h + h / 2 ) Δ x = F ( b ) − F ( a ) . {\displaystyle \sum _{i=0}^{n-1}f(a+ih+h/2)\,\Delta x=F(b)-F(a).} Furthermore, for every m = 0 , 1 , 2 , … , n − 1 {\textstyle m=0,1,2,\ldots ,n-1} , we have: Δ Δ x ∑ i = 0 m f ( a + i h + h / 2 ) Δ x = f ( a + m h + h / 2 ) . {\displaystyle {\frac {\Delta }{\Delta x}}\sum _{i=0}^{m}f(a+ih+h/2)\,\Delta x=f(a+mh+h/2).} This is also a prototype solution of a difference equation.
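Both the difference quotient of the squaring function and the telescoping behavior behind the fundamental theorem can be checked with a short numerical sketch (an illustration only, not part of the article):

```python
# Illustration: checking the preceding statements numerically for
# F(x) = x^2 on a uniform grid.
import numpy as np

a, h, n = 0.0, 0.5, 10
x = a + h * np.arange(n + 1)      # grid points a, a+h, ..., a+nh

F = x**2                          # F defined at the grid points
dq = (F[1:] - F[:-1]) / h         # difference quotient, one value per interval
mid = x[:-1] + h / 2              # midpoints a+ih+h/2, i = 0, ..., n-1
assert np.allclose(dq, 2 * mid)   # (Delta F / Delta x)(x + h/2) = 2x + h

# Fundamental theorem: the Riemann sum of the difference quotient telescopes.
assert np.isclose(np.sum(dq * h), F[-1] - F[0])
```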
Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences. == History == The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs. After the limit is taken, however, they are never to be seen again. However, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative. During the 20th century discrete calculus remains interlinked with infinitesimal calculus, especially differential forms, but also starts to draw from algebraic topology as both develop. The main contributions come from the following individuals: Henri Poincaré: triangulations (barycentric subdivision, dual triangulation), Poincaré lemma, the first proof of the general Stokes Theorem, and a lot more L. E. J. Brouwer: simplicial approximation theorem Élie Cartan, Georges de Rham: the notion of differential form, the exterior derivative as a coordinate-independent linear operator, exactness/closedness of forms Emmy Noether, Heinz Hopf, Leopold Vietoris, Walther Mayer: modules of chains, the boundary operator, chain complexes J. W. Alexander, Solomon Lefschetz, Lev Pontryagin, Andrey Kolmogorov, Norman Steenrod, Eduard Čech: the early cochain notions Hermann Weyl: the Kirchhoff laws stated in terms of the boundary and the coboundary operators W. V. D. Hodge: the Hodge star operator, the Hodge decomposition Samuel Eilenberg, Saunders Mac Lane, Norman Steenrod, J.H.C. Whitehead: the rigorous development of homology and cohomology theory including chain and cochain complexes, the cup product Hassler Whitney: cochains as integrands The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling.
== Applications == Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: historically stated it expressly uses the term "change of motion" which implies the difference quotient saying The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental because acceleration is the difference quotient of velocity with respect to time or second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use the Riemann sums to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus. Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay). In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling). 
In engineering, difference equations are used to plot the course of a spacecraft within zero-gravity environments, and to model heat transfer, diffusion, and wave propagation. The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, to rapidly extract features and detect objects; another algorithm that could be used is the summed area table. In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies. In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets. In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures. Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function.
== Calculus of differences and sums == Suppose a function (a 0 {\displaystyle 0} -cochain) f {\displaystyle f} is defined at points separated by an increment Δ x = h > 0 {\displaystyle \Delta x=h>0} : a , a + h , a + 2 h , … , a + n h , … {\displaystyle a,a+h,a+2h,\ldots ,a+nh,\ldots } The difference (or the exterior derivative, or the coboundary operator) of the function is given by: ( Δ f ) ( [ x , x + h ] ) = f ( x + h ) − f ( x ) . {\displaystyle {\big (}\Delta f{\big )}{\big (}[x,x+h]{\big )}=f(x+h)-f(x).} It is defined at each of the above intervals; it is a 1 {\displaystyle 1} -cochain. Suppose a 1 {\displaystyle 1} -cochain g {\displaystyle g} is defined at each of the above intervals. Then its sum is a function (a 0 {\displaystyle 0} -cochain) defined at each of the points by: ( ∑ g ) ( a + n h ) = ∑ i = 1 n g ( [ a + ( i − 1 ) h , a + i h ] ) . {\displaystyle \left(\sum g\right)\!(a+nh)=\sum _{i=1}^{n}g{\big (}[a+(i-1)h,a+ih]{\big )}.} These are their properties: Constant rule: If c {\displaystyle c} is a constant, then Δ c = 0 {\displaystyle \Delta c=0} Linearity: if a {\displaystyle a} and b {\displaystyle b} are constants, Δ ( a f + b g ) = a Δ f + b Δ g , ∑ ( a f + b g ) = a ∑ f + b ∑ g {\displaystyle \Delta (af+bg)=a\,\Delta f+b\,\Delta g,\quad \sum (af+bg)=a\,\sum f+b\,\sum g} Product rule: Δ ( f g ) = f Δ g + g Δ f + Δ f Δ g {\displaystyle \Delta (fg)=f\,\Delta g+g\,\Delta f+\Delta f\,\Delta g} Fundamental theorem of calculus I: ( ∑ Δ f ) ( a + n h ) = f ( a + n h ) − f ( a ) {\displaystyle \left(\sum \Delta f\right)\!(a+nh)=f(a+nh)-f(a)} Fundamental theorem of calculus II: Δ ( ∑ g ) = g {\displaystyle \Delta \!\left(\sum g\right)=g} The definitions are applied to graphs as follows. 
If a function (a 0 {\displaystyle 0} -cochain) f {\displaystyle f} is defined at the nodes of a graph: a , b , c , … {\displaystyle a,b,c,\ldots } then its exterior derivative (or the differential) is the difference, i.e., the following function defined on the edges of the graph ( 1 {\displaystyle 1} -cochain): ( d f ) ( [ a , b ] ) = f ( b ) − f ( a ) . {\displaystyle \left(df\right)\!{\big (}[a,b]{\big )}=f(b)-f(a).} If g {\displaystyle g} is a 1 {\displaystyle 1} -cochain, then its integral over a sequence of edges σ {\displaystyle \sigma } of the graph is the sum of its values over all edges of σ {\displaystyle \sigma } ("path integral"): ∫ σ g = ∑ σ g ( [ a , b ] ) . {\displaystyle \int _{\sigma }g=\sum _{\sigma }g{\big (}[a,b]{\big )}.} These are the properties: Constant rule: If c {\displaystyle c} is a constant, then d c = 0 {\displaystyle dc=0} Linearity: if a {\displaystyle a} and b {\displaystyle b} are constants, d ( a f + b g ) = a d f + b d g , ∫ σ ( a f + b g ) = a ∫ σ f + b ∫ σ g {\displaystyle d(af+bg)=a\,df+b\,dg,\quad \int _{\sigma }(af+bg)=a\,\int _{\sigma }f+b\,\int _{\sigma }g} Product rule: d ( f g ) = f d g + g d f + d f d g {\displaystyle d(fg)=f\,dg+g\,df+df\,dg} Fundamental theorem of calculus I: if a 1 {\displaystyle 1} -chain σ {\displaystyle \sigma } consists of the edges [ a 0 , a 1 ] , [ a 1 , a 2 ] , . . . , [ a n − 1 , a n ] {\displaystyle [a_{0},a_{1}],[a_{1},a_{2}],...,[a_{n-1},a_{n}]} , then for any 0 {\displaystyle 0} -cochain f {\displaystyle f} ∫ σ d f = f ( a n ) − f ( a 0 ) {\displaystyle \int _{\sigma }df=f(a_{n})-f(a_{0})} Fundamental theorem of calculus II: if the graph is a tree, g {\displaystyle g} is a 1 {\displaystyle 1} -cochain, and a function ( 0 {\displaystyle 0} -cochain) is defined on the nodes of the graph by f ( x ) = ∫ σ g {\displaystyle f(x)=\int _{\sigma }g} where a 1 {\displaystyle 1} -chain σ {\displaystyle \sigma } consists of [ a 0 , a 1 ] , [ a 1 , a 2 ] , . . . 
, [ a n − 1 , x ] {\displaystyle [a_{0},a_{1}],[a_{1},a_{2}],...,[a_{n-1},x]} for some fixed a 0 {\displaystyle a_{0}} , then d f = g {\displaystyle df=g} See references. == Chains of simplices and cubes == A simplicial complex S {\displaystyle S} is a set of simplices that satisfies the following conditions: 1. Every face of a simplex from S {\displaystyle S} is also in S {\displaystyle S} . 2. The non-empty intersection of any two simplices σ 1 , σ 2 ∈ S {\displaystyle \sigma _{1},\sigma _{2}\in S} is a face of both σ 1 {\displaystyle \sigma _{1}} and σ 2 {\displaystyle \sigma _{2}} . By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as ( v 0 , . . . , v k ) {\displaystyle (v_{0},...,v_{k})} , with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean. Let S {\displaystyle S} be a simplicial complex. A simplicial k-chain is a finite formal sum ∑ i = 1 N c i σ i , {\displaystyle \sum _{i=1}^{N}c_{i}\sigma _{i},\,} where each ci is an integer and σi is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example, ( v 0 , v 1 ) = − ( v 1 , v 0 ) . {\displaystyle (v_{0},v_{1})=-(v_{1},v_{0}).} The vector space of k-chains on S {\displaystyle S} is written C k {\displaystyle C_{k}} . It has a basis in one-to-one correspondence with the set of k-simplices in S {\displaystyle S} . To define a basis explicitly, one has to choose an orientation of each simplex. 
One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices. Let σ = ( v 0 , . . . , v k ) {\displaystyle \sigma =(v_{0},...,v_{k})} be an oriented k-simplex, viewed as a basis element of C k {\displaystyle C_{k}} . The boundary operator ∂ k : C k → C k − 1 {\displaystyle \partial _{k}:C_{k}\rightarrow C_{k-1}} is the linear operator defined by: ∂ k ( σ ) = ∑ i = 0 k ( − 1 ) i ( v 0 , … , v i ^ , … , v k ) , {\displaystyle \partial _{k}(\sigma )=\sum _{i=0}^{k}(-1)^{i}(v_{0},\dots ,{\widehat {v_{i}}},\dots ,v_{k}),} where the oriented simplex ( v 0 , … , v i ^ , … , v k ) {\displaystyle (v_{0},\dots ,{\widehat {v_{i}}},\dots ,v_{k})} is the i {\displaystyle i} th face of σ {\displaystyle \sigma } , obtained by deleting its i {\displaystyle i} th vertex. In C k {\displaystyle C_{k}} , elements of the subgroup Z k = ker ∂ k {\displaystyle Z_{k}=\ker \partial _{k}} are referred to as cycles, and the subgroup B k = im ∂ k + 1 {\displaystyle B_{k}=\operatorname {im} \partial _{k+1}} is said to consist of boundaries. A direct computation shows that ∂ 2 = 0 {\displaystyle \partial ^{2}=0} . In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces ( C k , ∂ k ) {\displaystyle (C_{k},\partial _{k})} form a chain complex. Another equivalent statement is that B k {\displaystyle B_{k}} is contained in Z k {\displaystyle Z_{k}} . A cubical complex is a set composed of points, line segments, squares, cubes, and their n-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset I ⊂ R {\displaystyle I\subset \mathbf {R} } of the form I = [ ℓ , ℓ + 1 ] or I = [ ℓ , ℓ ] {\displaystyle I=[\ell ,\ell +1]\quad {\text{or}}\quad I=[\ell ,\ell ]} for some ℓ ∈ Z {\displaystyle \ell \in \mathbf {Z} } . 
An elementary cube Q {\displaystyle Q} is the finite product of elementary intervals, i.e. Q = I 1 × I 2 × ⋯ × I d ⊂ R d {\displaystyle Q=I_{1}\times I_{2}\times \cdots \times I_{d}\subset \mathbf {R} ^{d}} where I 1 , I 2 , … , I d {\displaystyle I_{1},I_{2},\ldots ,I_{d}} are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube [ 0 , 1 ] n {\displaystyle [0,1]^{n}} embedded in Euclidean space R d {\displaystyle \mathbf {R} ^{d}} (for some n , d ∈ N ∪ { 0 } {\displaystyle n,d\in \mathbf {N} \cup \{0\}} with n ≤ d {\displaystyle n\leq d} ). A set X ⊆ R d {\displaystyle X\subseteq \mathbf {R} ^{d}} is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes. The boundary operator and the chain complex are defined similarly to those for simplicial complexes. More general are cell complexes. A chain complex ( C ∗ , ∂ ∗ ) {\displaystyle (C_{*},\partial _{*})} is a sequence of vector spaces … , C 0 , C 1 , C 2 , C 3 , C 4 , … {\displaystyle \ldots ,C_{0},C_{1},C_{2},C_{3},C_{4},\ldots } connected by linear operators (called boundary operators) ∂ n : C n → C n − 1 {\displaystyle \partial _{n}:C_{n}\to C_{n-1}} , such that the composition of any two consecutive maps is the zero map. Explicitly, the boundary operators satisfy ∂ n ∘ ∂ n + 1 = 0 {\displaystyle \partial _{n}\circ \partial _{n+1}=0} , or with indices suppressed, ∂ 2 = 0 {\displaystyle \partial ^{2}=0} . The complex may be written out as follows. 
⋯ ← ∂ 0 C 0 ← ∂ 1 C 1 ← ∂ 2 C 2 ← ∂ 3 C 3 ← ∂ 4 C 4 ← ∂ 5 ⋯ {\displaystyle \cdots {\xleftarrow {\partial _{0}}}C_{0}{\xleftarrow {\partial _{1}}}C_{1}{\xleftarrow {\partial _{2}}}C_{2}{\xleftarrow {\partial _{3}}}C_{3}{\xleftarrow {\partial _{4}}}C_{4}{\xleftarrow {\partial _{5}}}\cdots } A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images). A simplicial map f {\displaystyle f} from a simplicial complex S {\displaystyle S} to another T {\displaystyle T} is a function from the vertex set of S {\displaystyle S} to the vertex set of T {\displaystyle T} such that the image of each simplex in S {\displaystyle S} (viewed as a set of vertices) is a simplex in T {\displaystyle T} . It generates a linear map, called a chain map, from the chain complex of S {\displaystyle S} to the chain complex of T {\displaystyle T} . Explicitly, it is given on k {\displaystyle k} -chains by f ( ( v 0 , … , v k ) ) = ( f ( v 0 ) , … , f ( v k ) ) {\displaystyle f((v_{0},\ldots ,v_{k}))=(f(v_{0}),\ldots ,f(v_{k}))} if f ( v 0 ) , . . . , f ( v k ) {\displaystyle f(v_{0}),...,f(v_{k})} are all distinct, and otherwise it is set equal to 0 {\displaystyle 0} . A chain map f {\displaystyle f} between two chain complexes ( A ∗ , d A , ∗ ) {\displaystyle (A_{*},d_{A,*})} and ( B ∗ , d B , ∗ ) {\displaystyle (B_{*},d_{B,*})} is a sequence f ∗ {\displaystyle f_{*}} of homomorphisms f n : A n → B n {\displaystyle f_{n}:A_{n}\rightarrow B_{n}} for each n {\displaystyle n} that commutes with the boundary operators on the two chain complexes, so d B , n ∘ f n = f n − 1 ∘ d A , n {\displaystyle d_{B,n}\circ f_{n}=f_{n-1}\circ d_{A,n}} . This is written out in the following commutative diagram: A chain map sends cycles to cycles and boundaries to boundaries. See references. 
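The identity ∂² = 0 can be verified directly by coding the boundary operator on formal sums of oriented simplices. The representation below (vertex tuples in increasing order mapped to integer coefficients) is an illustrative choice.

```python
# A direct check that the boundary of a boundary is zero.  A chain is
# a dict from vertex tuples (oriented simplices) to integer coefficients.

def boundary(chain):
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]       # delete i-th vertex
            out[face] = out.get(face, 0) + (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

sigma = {(0, 1, 2, 3): 1}            # an oriented 3-simplex
print(boundary(sigma))               # its four oriented 2-faces, alternating signs
print(boundary(boundary(sigma)))     # {} : each edge appears twice with opposite signs
```

Every 1-face of the tetrahedron occurs in exactly two of its 2-faces, with opposite induced orientations, so all terms cancel pairwise.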
== Discrete differential forms: cochains == For each vector space Ci in the chain complex we consider its dual space C i ∗ := H o m ( C i , R ) , {\displaystyle C_{i}^{*}:=\mathrm {Hom} (C_{i},{\bf {R}}),} and d i − 1 = ∂ i ∗ {\displaystyle d^{i-1}=\partial _{i}^{*}} is its dual linear operator d i − 1 : C i − 1 ∗ → C i ∗ . {\displaystyle d^{i-1}:C_{i-1}^{*}\to C_{i}^{*}.} This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex ⋯ ← C i + 1 ∗ ← ∂ i ∗ C i ∗ ← ∂ i − 1 ∗ C i − 1 ∗ ← ⋯ {\displaystyle \cdots \leftarrow C_{i+1}^{*}{\stackrel {\partial _{i}^{*}}{\leftarrow }}\ C_{i}^{*}{\stackrel {\partial _{i-1}^{*}}{\leftarrow }}C_{i-1}^{*}\leftarrow \cdots } The cochain complex ( C ∗ , d ∗ ) {\displaystyle (C^{*},d^{*})} is the dual notion to a chain complex. It consists of a sequence of vector spaces . . . , C 0 , C 1 , C 2 , C 3 , C 4 , . . . {\displaystyle ...,C^{0},C^{1},C^{2},C^{3},C^{4},...} connected by linear operators d n : C n → C n + 1 {\displaystyle d^{n}:C^{n}\to C^{n+1}} satisfying d n + 1 ∘ d n = 0 {\displaystyle d^{n+1}\circ d^{n}=0} . The cochain complex may be written out in a similar fashion to the chain complex. ⋯ → d − 1 C 0 → d 0 C 1 → d 1 C 2 → d 2 C 3 → d 3 C 4 → d 4 ⋯ {\displaystyle \cdots {\xrightarrow {d^{-1}}}C^{0}{\xrightarrow {d^{0}}}C^{1}{\xrightarrow {d^{1}}}C^{2}{\xrightarrow {d^{2}}}C^{3}{\xrightarrow {d^{3}}}C^{4}{\xrightarrow {d^{4}}}\cdots } The index n {\displaystyle n} in either C n {\displaystyle C_{n}} or C n {\displaystyle C^{n}} is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. The elements of the individual vector spaces of a (co)chain complex are called cochains.
The elements in the kernel of d {\displaystyle d} are called cocycles (or closed elements), and the elements in the image of d {\displaystyle d} are called coboundaries (or exact elements). Right from the definition of the differential, all coboundaries are cocycles. The Poincaré lemma states that if B {\displaystyle B} is an open ball in R n {\displaystyle {\bf {R}}^{n}} , any closed p {\displaystyle p} -form ω {\displaystyle \omega } defined on B {\displaystyle B} is exact, for any integer p {\displaystyle p} with 1 ≤ p ≤ n {\displaystyle 1\leq p\leq n} . When we refer to cochains as discrete (differential) forms, we refer to d {\displaystyle d} as the exterior derivative. We also use the calculus notation for the values of the forms: ω ( s ) = ∫ s ω . {\displaystyle \omega (s)=\int _{s}\omega .} Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval: ∑ i = 0 n − 1 Δ F Δ x ( a + i h + h / 2 ) Δ x = F ( b ) − F ( a ) . {\displaystyle \sum _{i=0}^{n-1}{\frac {\Delta F}{\Delta x}}(a+ih+h/2)\,\Delta x=F(b)-F(a).} Stokes' theorem says that the sum of a form ω {\displaystyle \omega } over the boundary of some orientable manifold Ω {\displaystyle \Omega } is equal to the sum of its exterior derivative d ω {\displaystyle d\omega } over the whole of Ω {\displaystyle \Omega } , i.e., ∫ Ω d ω = ∫ ∂ Ω ω . {\displaystyle \int _{\Omega }d\omega =\int _{\partial \Omega }\omega \,.} It is worthwhile to examine the underlying principle by considering an example for d = 2 {\displaystyle d=2} dimensions. The essential idea is that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains. See references.
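The simplest instance of the discrete Stokes' theorem is the telescoping of df over a 1-chain of oriented edges. The graph and the 0-cochain values below are illustrative choices.

```python
# Telescoping check: the "path integral" of df over a 1-chain equals
# the difference of f at the endpoints.

f = {0: 5, 1: 2, 2: 7, 3: 1}            # a 0-cochain on four nodes
path = [(0, 1), (1, 2), (2, 3)]         # a 1-chain of oriented edges

def d(f):
    # exterior derivative: a 1-cochain evaluated on an oriented edge (a, b)
    return lambda e: f[e[1]] - f[e[0]]

integral = sum(d(f)(e) for e in path)   # integral of df over the path
print(integral, f[3] - f[0])            # -4 -4
```

Each interior node contributes once with each sign, so only the boundary values f(3) and f(0) survive, exactly as in the cancellation argument above.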
== The wedge product of forms == In discrete calculus, this is a construction that creates from forms higher order forms: adjoining two cochains of degree p {\displaystyle p} and q {\displaystyle q} to form a composite cochain of degree p + q {\displaystyle p+q} . For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension. For simplicial complexes, the wedge product is implemented as the cup product: if f p {\displaystyle f^{p}} is a p {\displaystyle p} -cochain and g q {\displaystyle g^{q}} is a q {\displaystyle q} -cochain, then ( f p ⌣ g q ) ( σ ) = f p ( σ 0 , 1 , . . . , p ) ⋅ g q ( σ p , p + 1 , . . . , p + q ) {\displaystyle (f^{p}\smile g^{q})(\sigma )=f^{p}(\sigma _{0,1,...,p})\cdot g^{q}(\sigma _{p,p+1,...,p+q})} where σ {\displaystyle \sigma } is a ( p + q ) {\displaystyle (p+q)} -simplex and σ S , S ⊂ { 0 , 1 , . . . , p + q } {\displaystyle \sigma _{S},\ S\subset \{0,1,...,p+q\}} , is the simplex spanned by S {\displaystyle S} into the ( p + q ) {\displaystyle (p+q)} -simplex whose vertices are indexed by { 0 , . . . , p + q } {\displaystyle \{0,...,p+q\}} . So, σ 0 , 1 , . . . , p {\displaystyle \sigma _{0,1,...,p}} is the p {\displaystyle p} -th front face and σ p , p + 1 , . . . , p + q {\displaystyle \sigma _{p,p+1,...,p+q}} is the q {\displaystyle q} -th back face of σ {\displaystyle \sigma } , respectively. The coboundary of the cup product of cochains f p {\displaystyle f^{p}} and g q {\displaystyle g^{q}} is given by d ( f p ⌣ g q ) = d f p ⌣ g q + ( − 1 ) p ( f p ⌣ d g q ) . {\displaystyle d(f^{p}\smile g^{q})=d{f^{p}}\smile g^{q}+(-1)^{p}(f^{p}\smile d{g^{q}}).} The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary. The cup product operation satisfies the identity α p ⌣ β q = ( − 1 ) p q ( β q ⌣ α p ) . 
{\displaystyle \alpha ^{p}\smile \beta ^{q}=(-1)^{pq}(\beta ^{q}\smile \alpha ^{p}).} In other words, the corresponding multiplication is graded-commutative. See references. == Laplace operator == The Laplace operator Δ f {\displaystyle \Delta f} of a function f {\displaystyle f} at a vertex p {\displaystyle p} is (up to a factor) the rate at which the average value of f {\displaystyle f} over a cellular neighborhood of p {\displaystyle p} deviates from f ( p ) {\displaystyle f(p)} . The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena. The codifferential δ : C k → C k − 1 {\displaystyle \delta :C^{k}\to C^{k-1}} is an operator defined on k {\displaystyle k} -forms by: δ = ( − 1 ) n ( k − 1 ) + 1 ⋆ d ⋆ = ( − 1 ) k ⋆ − 1 d ⋆ , {\displaystyle \delta =(-1)^{n(k-1)+1}{\star }d{\star }=(-1)^{k}\,{\star }^{-1}d{\star },} where d {\displaystyle d} is the exterior derivative or differential and ⋆ {\displaystyle \star } is the Hodge star operator. The codifferential is the adjoint of the exterior derivative according to Stokes' theorem: ( η , δ ζ ) = ( d η , ζ ) . {\displaystyle (\eta ,\delta \zeta )=(d\eta ,\zeta ).} Since the differential satisfies d 2 = 0 {\displaystyle d^{2}=0} , the codifferential has the corresponding property δ 2 = ⋆ d ⋆ ⋆ d ⋆ = ( − 1 ) k ( n − k ) ⋆ d 2 ⋆ = 0. {\displaystyle \delta ^{2}={\star }d{\star }{\star }d{\star }=(-1)^{k(n-k)}{\star }d^{2}{\star }=0.} The Laplace operator is defined by: Δ = ( δ + d ) 2 = δ d + d δ . {\displaystyle \Delta =(\delta +d)^{2}=\delta d+d\delta .} See references.
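On 0-forms the codifferential vanishes, so the Laplace operator reduces to δd; with the standard inner products on a graph this is dᵀd, the familiar graph Laplacian (degree matrix minus adjacency matrix). A sketch on a 4-cycle, an illustrative choice of graph:

```python
# Graph Laplacian on 0-forms computed as d^T d from the coboundary matrix.

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # oriented edges of a 4-cycle
n = 4

# coboundary matrix d: one row per oriented edge, (df)(a, b) = f(b) - f(a)
d = [[0] * n for _ in edges]
for r, (a, b) in enumerate(edges):
    d[r][a], d[r][b] = -1, 1

# Laplacian L = d^T d
L = [[sum(row[i] * row[j] for row in d) for j in range(n)] for i in range(n)]
print(L)   # [[2, -1, 0, -1], [-1, 2, -1, 0], [0, -1, 2, -1], [-1, 0, -1, 2]]

# the Laplacian of a constant 0-form is zero, as each row sums to 0
assert all(sum(L[i][j] for j in range(n)) == 0 for i in range(n))
```

The result does not depend on the chosen edge orientations, since each orientation sign appears squared in dᵀd.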
== Related == Discrete element method Divided differences Finite difference coefficient Finite difference method Finite element method Finite volume method Numerical differentiation Numerical integration Numerical methods for ordinary differential equations == See also == Calculus of finite differences Calculus on finite weighted graphs Cellular automaton Discrete differential geometry Discrete Laplace operator Calculus of finite differences, discrete calculus or discrete analysis Discrete Morse theory == References ==
|
Wikipedia:Disjunction property of Wallman#0
|
In mathematics, especially in order theory, a partially ordered set with a unique minimal element 0 has the disjunction property of Wallman when for every pair (a, b) of elements of the poset, either b ≤ a or there exists an element c ≤ b such that c ≠ 0 and c has no nontrivial common predecessor with a. That is, in the latter case, the only x with x ≤ a and x ≤ c is x = 0. A version of this property for lattices was introduced by Wallman (1938), in a paper showing that the homology theory of a topological space could be defined in terms of its distributive lattice of closed sets. He observed that the inclusion order on the closed sets of a T1 space has the disjunction property. The generalization to partial orders was introduced by Wolk (1956). == References ==
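The property can be verified by brute force on a small example. The sketch below uses the power set of {0, 1, 2} ordered by inclusion, an illustrative choice of poset in which the least element 0 is the empty set and "no nontrivial common predecessor with a" means disjointness from a.

```python
from itertools import combinations

# Brute-force check of the disjunction property of Wallman on the
# power set of {0, 1, 2} ordered by inclusion.

P = [frozenset(s) for r in range(4) for s in combinations(range(3), r)]

def has_wallman_property(P):
    for a in P:
        for b in P:
            if b <= a:
                continue                   # first alternative: b <= a
            # otherwise: find a nonzero c <= b whose only common lower
            # bound with a is the empty set, i.e. c is disjoint from a
            if not any(c and c <= b and not (c & a) for c in P):
                return False
    return True

print(has_wallman_property(P))   # True: one may always take c = b - a
```

For any Boolean lattice the witness is explicit: when b is not contained in a, the set difference c = b ∖ a is nonempty, lies below b, and meets a trivially.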
|
Wikipedia:Distance-regular graph#0
|
In the mathematical field of graph theory, a distance-regular graph is a regular graph such that for any two vertices v and w, the number of vertices at distance j from v and at distance k from w depends only upon j, k, and the distance between v and w. Some authors exclude the complete graphs and disconnected graphs from this definition. Every distance-transitive graph is distance regular. Indeed, distance-regular graphs were introduced as a combinatorial generalization of distance-transitive graphs, having the numerical regularity properties of the latter without necessarily having a large automorphism group. == Intersection arrays == The intersection array of a distance-regular graph is the array ( b 0 , b 1 , … , b d − 1 ; c 1 , … , c d ) {\displaystyle (b_{0},b_{1},\ldots ,b_{d-1};c_{1},\ldots ,c_{d})} in which d {\displaystyle d} is the diameter of the graph and for each 1 ≤ j ≤ d {\displaystyle 1\leq j\leq d} , b j {\displaystyle b_{j}} gives the number of neighbours of u {\displaystyle u} at distance j + 1 {\displaystyle j+1} from v {\displaystyle v} and c j {\displaystyle c_{j}} gives the number of neighbours of u {\displaystyle u} at distance j − 1 {\displaystyle j-1} from v {\displaystyle v} for any pair of vertices u {\displaystyle u} and v {\displaystyle v} at distance j {\displaystyle j} . There is also the number a j {\displaystyle a_{j}} that gives the number of neighbours of u {\displaystyle u} at distance j {\displaystyle j} from v {\displaystyle v} . The numbers a j , b j , c j {\displaystyle a_{j},b_{j},c_{j}} are called the intersection numbers of the graph. They satisfy the equation a j + b j + c j = k , {\displaystyle a_{j}+b_{j}+c_{j}=k,} where k = b 0 {\displaystyle k=b_{0}} is the valency, i.e., the number of neighbours, of any vertex. It turns out that a graph G {\displaystyle G} of diameter d {\displaystyle d} is distance regular if and only if it has an intersection array in the preceding sense. 
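The equivalence above invites a direct computation. The sketch below recovers the intersection array of the Petersen graph from its distance function, checking along the way that the counts b_j and c_j depend only on j; the graph construction is the standard outer-cycle/inner-pentagram one, and the expected array is (3, 2; 1, 1).

```python
from collections import deque

# Intersection numbers of the Petersen graph computed from the definition.

adj = {v: set() for v in range(10)}        # outer 5-cycle 0..4, inner pentagram 5..9
for i in range(5):
    for a, b in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
        adj[a].add(b)
        adj[b].add(a)

def distances(v):                          # BFS from v
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

dist = {v: distances(v) for v in adj}
d = max(max(row.values()) for row in dist.values())    # diameter

b = [set() for _ in range(d + 1)]
c = [set() for _ in range(d + 1)]
for v in adj:
    for u in adj:
        j = dist[v][u]
        b[j].add(sum(1 for w in adj[u] if dist[v][w] == j + 1))
        c[j].add(sum(1 for w in adj[u] if dist[v][w] == j - 1))

assert all(len(s) == 1 for s in b + c)     # the counts depend only on j
barr = [next(iter(b[j])) for j in range(d)]
carr = [next(iter(c[j])) for j in range(1, d + 1)]
print(barr, carr)                          # [3, 2] [1, 1]
```

Since the Petersen graph is strongly regular with parameters (10, 3, 0, 1), the array (3, 2; 1, 1) also follows from b_1 = k − a_1 − c_1 = 3 − 0 − 1 and c_2 = μ = 1.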
== Cospectral and disconnected distance-regular graphs == A pair of connected distance-regular graphs are cospectral if their adjacency matrices have the same spectrum. This is equivalent to their having the same intersection array. A distance-regular graph is disconnected if and only if it is a disjoint union of cospectral distance-regular graphs. == Properties == Suppose G {\displaystyle G} is a connected distance-regular graph of valency k {\displaystyle k} with intersection array ( b 0 , b 1 , … , b d − 1 ; c 1 , … , c d ) {\displaystyle (b_{0},b_{1},\ldots ,b_{d-1};c_{1},\ldots ,c_{d})} . For each 0 ≤ j ≤ d , {\displaystyle 0\leq j\leq d,} let k j {\displaystyle k_{j}} denote the number of vertices at distance j {\displaystyle j} from any given vertex and let G j {\displaystyle G_{j}} denote the k j {\displaystyle k_{j}} -regular graph with adjacency matrix A j {\displaystyle A_{j}} formed by relating pairs of vertices on G {\displaystyle G} at distance j {\displaystyle j} . === Graph-theoretic properties === k j + 1 k j = b j c j + 1 {\displaystyle {\frac {k_{j+1}}{k_{j}}}={\frac {b_{j}}{c_{j+1}}}} for all 0 ≤ j < d {\displaystyle 0\leq j<d} . b 0 > b 1 ≥ ⋯ ≥ b d − 1 > 0 {\displaystyle b_{0}>b_{1}\geq \cdots \geq b_{d-1}>0} and 1 = c 1 ≤ ⋯ ≤ c d ≤ b 0 {\displaystyle 1=c_{1}\leq \cdots \leq c_{d}\leq b_{0}} . === Spectral properties === G {\displaystyle G} has d + 1 {\displaystyle d+1} distinct eigenvalues. The only simple eigenvalue of G {\displaystyle G} is k , {\displaystyle k,} or both k {\displaystyle k} and − k {\displaystyle -k} if G {\displaystyle G} is bipartite. k ≤ 1 2 ( m − 1 ) ( m + 2 ) {\displaystyle k\leq {\frac {1}{2}}(m-1)(m+2)} for any eigenvalue multiplicity m > 1 {\displaystyle m>1} of G , {\displaystyle G,} unless G {\displaystyle G} is a complete multipartite graph.
d ≤ 3 m − 4 {\displaystyle d\leq 3m-4} for any eigenvalue multiplicity m > 1 {\displaystyle m>1} of G , {\displaystyle G,} unless G {\displaystyle G} is a cycle graph or a complete multipartite graph. If G {\displaystyle G} is strongly regular, then n ≤ 4 m − 1 {\displaystyle n\leq 4m-1} and k ≤ 2 m − 1 {\displaystyle k\leq 2m-1} . == Examples == Some first examples of distance-regular graphs include: The complete graphs. The cycle graphs. The odd graphs. The Moore graphs. The collinearity graph of a regular near polygon. The Wells graph and the Sylvester graph. Strongly regular graphs are the distance-regular graphs of diameter 2. == Classification of distance-regular graphs == There are only finitely many distinct connected distance-regular graphs of any given valency k > 2 {\displaystyle k>2} . Similarly, there are only finitely many distinct connected distance-regular graphs with any given eigenvalue multiplicity m > 2 {\displaystyle m>2} (with the exception of the complete multipartite graphs). === Cubic distance-regular graphs === The cubic distance-regular graphs have been completely classified. The 13 distinct cubic distance-regular graphs are K4 (or Tetrahedral graph), K3,3, the Petersen graph, the Cubical graph, the Heawood graph, the Pappus graph, the Coxeter graph, the Tutte–Coxeter graph, the Dodecahedral graph, the Desargues graph, Tutte 12-cage, the Biggs–Smith graph, and the Foster graph. == References == == Further reading == Godsil, C. D. (1993). Algebraic Combinatorics. Chapman and Hall Mathematics Series. New York: Chapman and Hall. ISBN 978-0-412-04131-0. MR 1220704.
|
Wikipedia:Distance-transitive graph#0
|
In the mathematical field of graph theory, a distance-transitive graph is a graph such that, given any two vertices v and w at any distance i, and any other two vertices x and y at the same distance, there is an automorphism of the graph that carries v to x and w to y. Distance-transitive graphs were first defined in 1971 by Norman L. Biggs and D. H. Smith. A distance-transitive graph is interesting partly because it has a large automorphism group. Some interesting finite groups are the automorphism groups of distance-transitive graphs, especially of those whose diameter is 2. == Examples == Some first examples of families of distance-transitive graphs include: The Johnson graphs. The Grassmann graphs. The Hamming graphs (including hypercube graphs). The folded cube graphs. The square rook's graphs. The Livingstone graph. == Classification of cubic distance-transitive graphs == After introducing them in 1971, Biggs and Smith showed that there are only 12 finite connected trivalent distance-transitive graphs. These are K4, K3,3, the Petersen graph, the cubical graph, the Heawood graph, the Pappus graph, the Coxeter graph, the Tutte–Coxeter graph, the dodecahedral graph, the Desargues graph, the Biggs–Smith graph, and the Foster graph. == Relation to distance-regular graphs == Every distance-transitive graph is distance-regular, but the converse is not necessarily true. In 1969, before publication of the Biggs–Smith definition, a Russian group led by Georgy Adelson-Velsky showed that there exist graphs that are distance-regular but not distance-transitive. The smallest distance-regular graph that is not distance-transitive is the Shrikhande graph, with 16 vertices and degree 6. The only graph of this type with degree three is the 126-vertex Tutte 12-cage. Complete lists of distance-transitive graphs are known for some degrees larger than three, but the classification of distance-transitive graphs with arbitrarily large vertex degree remains open. == References == Early works Adel'son-Vel'skii, G. M.; Veĭsfeĭler, B. Ju.; Leman, A. A.; Faradžev, I. A. (1969), "An example of a graph which has no transitive group of automorphisms", Doklady Akademii Nauk SSSR, 185: 975–976, MR 0244107.
Biggs, Norman (1971), "Intersection matrices for linear graphs", Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969), London: Academic Press, pp. 15–23, MR 0285421. Biggs, Norman (1971), Finite Groups of Automorphisms, London Mathematical Society Lecture Note Series, vol. 6, London & New York: Cambridge University Press, MR 0327563. Biggs, N. L.; Smith, D. H. (1971), "On trivalent graphs", Bulletin of the London Mathematical Society, 3 (2): 155–158, doi:10.1112/blms/3.2.155, MR 0286693. Smith, D. H. (1971), "Primitive and imprimitive graphs", The Quarterly Journal of Mathematics, Second Series, 22 (4): 551–557, doi:10.1093/qmath/22.4.551, MR 0327584. Surveys Biggs, N. L. (1993), "Distance-Transitive Graphs", Algebraic Graph Theory (2nd ed.), Cambridge University Press, pp. 155–163, chapter 20. Van Bon, John (2007), "Finite primitive distance-transitive graphs", European Journal of Combinatorics, 28 (2): 517–532, doi:10.1016/j.ejc.2005.04.014, MR 2287450. Brouwer, A. E.; Cohen, A. M.; Neumaier, A. (1989), "Distance-Transitive Graphs", Distance-Regular Graphs, New York: Springer-Verlag, pp. 214–234, chapter 7. Cohen, A. M. Cohen (2004), "Distance-transitive graphs", in Beineke, L. W.; Wilson, R. J. (eds.), Topics in Algebraic Graph Theory, Encyclopedia of Mathematics and its Applications, vol. 102, Cambridge University Press, pp. 222–249. Godsil, C.; Royle, G. (2001), "Distance-Transitive Graphs", Algebraic Graph Theory, New York: Springer-Verlag, pp. 66–69, section 4.5. Ivanov, A. A. (1992), "Distance-transitive graphs and their classification", in Faradžev, I. A.; Ivanov, A. A.; Klin, M.; et al. (eds.), The Algebraic Theory of Combinatorial Objects, Math. Appl. (Soviet Series), vol. 84, Dordrecht: Kluwer, pp. 283–378, MR 1321634. == External links == Weisstein, Eric W. "Distance-Transitive Graph". MathWorld.
|
Wikipedia:Distinguished limit#0
|
In mathematics, a distinguished limit is an appropriately chosen scale factor used in the method of matched asymptotic expansions. == External links == Singular perturbation theory, Scholarpedia
|
Wikipedia:Distribution (number theory)#0
|
In algebra and number theory, a distribution is a function on a system of finite sets into an abelian group which is analogous to an integral: it is thus the algebraic analogue of a distribution in the sense of generalised function. The original examples of distributions occur, unnamed, as functions φ on Q/Z satisfying ∑ r = 0 N − 1 ϕ ( x + r N ) = ϕ ( N x ) . {\displaystyle \sum _{r=0}^{N-1}\phi \left(x+{\frac {r}{N}}\right)=\phi (Nx)\ .} Such distributions are called ordinary distributions. They also occur in p-adic integration theory in Iwasawa theory. Let ... → Xn+1 → Xn → ... be a projective system of finite sets with surjections, indexed by the natural numbers, and let X be their projective limit. We give each Xn the discrete topology, so that X is compact. Let φ = (φn) be a family of functions on Xn taking values in an abelian group V and compatible with the projective system: w ( m , n ) ∑ y ↦ x ϕ m ( y ) = ϕ n ( x ) {\displaystyle w(m,n)\sum _{y\mapsto x}\phi _{m}(y)=\phi _{n}(x)} for some weight function w, where y runs over the elements of Xm mapping to x in Xn. The family φ is then a distribution on the projective system X. A function f on X is "locally constant", or a "step function" if it factors through some Xn. We can define an integral of a step function against φ as ∫ f d ϕ = ∑ x ∈ X n f ( x ) ϕ n ( x ) . {\displaystyle \int f\,d\phi =\sum _{x\in X_{n}}f(x)\phi _{n}(x)\ .} The definition extends to more general projective systems, such as those indexed by the positive integers ordered by divisibility. As an important special case consider the projective system Z/nZ indexed by positive integers ordered by divisibility. We identify this with the system (1/n)Z/Z with limit Q/Z. For x in R we let ⟨x⟩ denote the fractional part of x normalised to 0 ≤ ⟨x⟩ < 1, and let {x} denote the fractional part normalised to 0 < {x} ≤ 1.
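A concrete ordinary distribution is φ(x) = ⟨x⟩ − 1/2, the first Bernoulli polynomial evaluated at the fractional part (the k = 1 case of the Bernoulli distributions in the examples below). The defining relation can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Exact check that phi(x) = <x> - 1/2 satisfies the ordinary
# distribution relation  sum_{r=0}^{N-1} phi(x + r/N) = phi(N x)
# over a range of rational x and moduli N.

def frac(x):                     # <x>: fractional part normalised to [0, 1)
    return x - (x.numerator // x.denominator)

def phi(x):
    return frac(x) - Fraction(1, 2)

for N in range(1, 7):
    for q in range(1, 8):
        for p in range(q):
            x = Fraction(p, q)
            assert sum(phi(x + Fraction(r, N)) for r in range(N)) == phi(N * x)
print("ordinary distribution relation verified")
```

The relation here is equivalent to the k = 1 multiplication theorem for Bernoulli polynomials, for which the factor n^{1−k} is 1.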
== Examples == === Hurwitz zeta function === The multiplication theorem for the Hurwitz zeta function ζ ( s , a ) = ∑ n = 0 ∞ ( n + a ) − s {\displaystyle \zeta (s,a)=\sum _{n=0}^{\infty }(n+a)^{-s}} gives a distribution relation ∑ p = 0 q − 1 ζ ( s , a + p / q ) = q s ζ ( s , q a ) . {\displaystyle \sum _{p=0}^{q-1}\zeta (s,a+p/q)=q^{s}\,\zeta (s,qa)\ .} Hence for given s, the map t ↦ ζ ( s , { t } ) {\displaystyle t\mapsto \zeta (s,\{t\})} is a distribution on Q/Z. === Bernoulli distribution === Recall that the Bernoulli polynomials Bn are defined by B n ( x ) = ∑ k = 0 n ( n n − k ) b k x n − k , {\displaystyle B_{n}(x)=\sum _{k=0}^{n}{n \choose n-k}b_{k}x^{n-k}\ ,} for n ≥ 0, where bk are the Bernoulli numbers, with generating function t e x t e t − 1 = ∑ n = 0 ∞ B n ( x ) t n n ! . {\displaystyle {\frac {te^{xt}}{e^{t}-1}}=\sum _{n=0}^{\infty }B_{n}(x){\frac {t^{n}}{n!}}\ .} They satisfy the distribution relation B k ( x ) = n k − 1 ∑ a = 0 n − 1 B k ( x + a n ) . {\displaystyle B_{k}(x)=n^{k-1}\sum _{a=0}^{n-1}B_{k}\left({\frac {x+a}{n}}\right)\ .} Thus the map ϕ n : 1 n Z / Z → Q {\displaystyle \phi _{n}:{\frac {1}{n}}\mathbb {Z} /\mathbb {Z} \rightarrow \mathbb {Q} } defined by ϕ n : x ↦ n k − 1 B k ( ⟨ x ⟩ ) {\displaystyle \phi _{n}:x\mapsto n^{k-1}B_{k}(\langle x\rangle )} is a distribution. === Cyclotomic units === The cyclotomic units satisfy distribution relations. Let a be an element of Q/Z prime to p and let ga denote exp(2πia)−1. Then for a≠ 0 we have ∏ p b = a g b = g a . {\displaystyle \prod _{pb=a}g_{b}=g_{a}\ .} == Universal distribution == One considers the distributions on Z with values in some abelian group V and seeks the "universal" or most general distribution possible. == Stickelberger distributions == Let h be an ordinary distribution on Q/Z taking values in a field F. Let G(N) denote the multiplicative group of Z/NZ, and for any function f on G(N) we extend f to a function on Z/NZ by taking f to be zero off G(N).
Define an element of the group algebra F[G(N)] by g N ( r ) = 1 | G ( N ) | ∑ a ∈ G ( N ) h ( ⟨ r a N ⟩ ) σ a − 1 . {\displaystyle g_{N}(r)={\frac {1}{|G(N)|}}\sum _{a\in G(N)}h\left({\left\langle {\frac {ra}{N}}\right\rangle }\right)\sigma _{a}^{-1}\ .} The group algebras form a projective system with limit X. Then the functions gN form a distribution on Q/Z with values in X, the Stickelberger distribution associated with h. == p-adic measures == Consider the special case when the value group V of a distribution φ on X takes values in a local field K, finite over Qp, or more generally, in a finite-dimensional p-adic Banach space W over K, with valuation |·|. We call φ a measure if |φ| is bounded on compact open subsets of X. Let D be the ring of integers of K and L a lattice in W, that is, a free D-submodule of W with K⊗L = W. Up to scaling a measure may be taken to have values in L. === Hecke operators and measures === Let D be a fixed integer prime to p and consider ZD, the limit of the system Z/pnD. Consider any eigenfunction of the Hecke operator Tp with eigenvalue λp prime to p. We describe a procedure for deriving a measure of ZD. Fix an integer N prime to p and to D. Let F be the D-module of all functions on rational numbers with denominator coprime to N. For any prime l not dividing N we define the Hecke operator Tl by ( T l f ) ( a b ) = f ( l a b ) + ∑ k = 0 l − 1 f ( a + k b l b ) − ∑ k = 0 l − 1 f ( k l ) . {\displaystyle (T_{l}f)\left({\frac {a}{b}}\right)=f\left({\frac {la}{b}}\right)+\sum _{k=0}^{l-1}f\left({\frac {a+kb}{lb}}\right)-\sum _{k=0}^{l-1}f\left({\frac {k}{l}}\right)\ .} Let f be an eigenfunction for Tp with eigenvalue λp in D. The quadratic equation X2 − λpX + p = 0 has roots π1, π2 with π1 a unit and π2 divisible by p. Define a sequence a0 = 2, a1 = π1+π2 = λp and a k + 2 = λ p a k + 1 − p a k , {\displaystyle a_{k+2}=\lambda _{p}a_{k+1}-pa_{k}\ ,} so that a k = π 1 k + π 2 k . 
{\displaystyle a_{k}=\pi _{1}^{k}+\pi _{2}^{k}\ .} == References == Kubert, Daniel S.; Lang, Serge (1981). Modular Units. Grundlehren der Mathematischen Wissenschaften. Vol. 244. Springer-Verlag. ISBN 0-387-90517-0. Zbl 0492.12002. Lang, Serge (1990). Cyclotomic Fields I and II. Graduate Texts in Mathematics. Vol. 121 (second combined ed.). Springer Verlag. ISBN 3-540-96671-4. Zbl 0704.11038. Mazur, B.; Swinnerton-Dyer, P. (1974). "Arithmetic of Weil curves". Inventiones Mathematicae. 25: 1–61. doi:10.1007/BF01389997. Zbl 0281.14016.
|
Wikipedia:Distribution algebra#0
|
In algebra, the distribution algebra D ( G , K ) {\displaystyle D(G,K)} of a p-adic Lie group G is the K-algebra of K-valued distributions on G. (See the reference for a more precise definition.) == References == Schneider, P.; Teitelbaum, J. (May 18, 2001). " U ( g ) {\displaystyle U({\mathfrak {g}})} -finite locally analytic representations" (PDF). Representation Theory. 5: 111–128. doi:10.1090/S1088-4165-01-00109-1. S2CID 15790048.
|
Wikipedia:Distributive homomorphism#0
|
A congruence θ of a join-semilattice S is monomial if the θ-equivalence class of any element of S has a largest element. We say that θ is distributive if it is a join, in the congruence lattice Con S of S, of monomial join-congruences of S. The following definition originates in Schmidt's 1968 work and was subsequently adjusted by Wehrung. Definition (weakly distributive homomorphisms). A homomorphism μ : S → T between join-semilattices S and T is weakly distributive if for all a, b in T and all c in S such that μ(c) ≤ a ∨ b, there are elements x and y of S such that c ≤ x ∨ y, μ(x) ≤ a, and μ(y) ≤ b. Examples: (1) For an algebra B and a reduct A of B (that is, an algebra with the same underlying set as B but whose set of operations is a subset of that of B), the canonical (∨, 0)-homomorphism from Conc A to Conc B is weakly distributive. Here, Conc A denotes the (∨, 0)-semilattice of all compact congruences of A. (2) For a convex sublattice K of a lattice L, the canonical (∨, 0)-homomorphism from Conc K to Conc L is weakly distributive. == References == E.T. Schmidt, Zur Charakterisierung der Kongruenzverbände der Verbände, Mat. Casopis Sloven. Akad. Vied. 18 (1968), 3–20. F. Wehrung, A uniform refinement property for congruence lattices, Proc. Amer. Math. Soc. 127, no. 2 (1999), 363–370. F. Wehrung, A solution to Dilworth's congruence lattice problem, preprint 2006.
|
Wikipedia:Distributive property#0
|
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality x ⋅ ( y + z ) = x ⋅ y + x ⋅ z {\displaystyle x\cdot (y+z)=x\cdot y+x\cdot z} is always true in elementary algebra. For example, in elementary arithmetic, one has 2 ⋅ ( 1 + 3 ) = ( 2 ⋅ 1 ) + ( 2 ⋅ 3 ) . {\displaystyle 2\cdot (1+3)=(2\cdot 1)+(2\cdot 3).} Therefore, one would say that multiplication distributes over addition. This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted ∧ {\displaystyle \,\land \,} ) and the logical or (denoted ∨ {\displaystyle \,\lor \,} ) distributes over the other. == Definition == Given a set S {\displaystyle S} and two binary operators ∗ {\displaystyle \,*\,} and + {\displaystyle \,+\,} on S , {\displaystyle S,} the operation ∗ {\displaystyle \,*\,} is left-distributive over (or with respect to) + {\displaystyle \,+\,} if, given any elements x , y , and z {\displaystyle x,y,{\text{ and }}z} of S , {\displaystyle S,} x ∗ ( y + z ) = ( x ∗ y ) + ( x ∗ z ) ; {\displaystyle x*(y+z)=(x*y)+(x*z);} the operation ∗ {\displaystyle \,*\,} is right-distributive over + {\displaystyle \,+\,} if, given any elements x , y , and z {\displaystyle x,y,{\text{ and }}z} of S , {\displaystyle S,} ( y + z ) ∗ x = ( y ∗ x ) + ( z ∗ x ) ; {\displaystyle (y+z)*x=(y*x)+(z*x);} and the operation ∗ {\displaystyle \,*\,} is distributive over + {\displaystyle \,+\,} if it is left- and right-distributive. When ∗ {\displaystyle \,*\,} is commutative, the three conditions above are logically equivalent. == Meaning == The operators used for examples in this section are those of the usual addition + {\displaystyle \,+\,} and multiplication ⋅ . 
{\displaystyle \,\cdot .\,} If the operation denoted ⋅ {\displaystyle \cdot } is not commutative, there is a distinction between left-distributivity and right-distributivity: a ⋅ ( b ± c ) = a ⋅ b ± a ⋅ c (left-distributive) {\displaystyle a\cdot \left(b\pm c\right)=a\cdot b\pm a\cdot c\qquad {\text{ (left-distributive) }}} ( a ± b ) ⋅ c = a ⋅ c ± b ⋅ c (right-distributive) . {\displaystyle (a\pm b)\cdot c=a\cdot c\pm b\cdot c\qquad {\text{ (right-distributive) }}.} In either case, the distributive property can be described in words as: To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted). If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity. One example of an operation that is "only" right-distributive is division, which is not commutative: ( a ± b ) ÷ c = a ÷ c ± b ÷ c . {\displaystyle (a\pm b)\div c=a\div c\pm b\div c.} In this case, left-distributivity does not apply: a ÷ ( b ± c ) ≠ a ÷ b ± a ÷ c {\displaystyle a\div (b\pm c)\neq a\div b\pm a\div c} The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra. Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products. 
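The right-but-not-left behaviour of division can be verified directly with exact rational arithmetic; a minimal Python sketch (the variable names are ours):

```python
from fractions import Fraction

a, b, c = Fraction(1), Fraction(2), Fraction(3)

# Right-distributivity of division over addition holds:
assert (a + b) / c == a / c + b / c        # both sides equal 1

# Left-distributivity fails in general:
assert a / (b + c) != a / b + a / c        # 1/5 versus 5/6
```

Exact fractions are used here so that the comparison tests the algebraic law itself rather than floating-point rounding.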
== Examples == === Real numbers === In the following examples, the use of the distributive law on the set of real numbers R {\displaystyle \mathbb {R} } is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law. First example (mental and written multiplication): During mental arithmetic, distributivity is often used unconsciously: 6 ⋅ 16 = 6 ⋅ ( 10 + 6 ) = 6 ⋅ 10 + 6 ⋅ 6 = 60 + 36 = 96 {\displaystyle 6\cdot 16=6\cdot (10+6)=6\cdot 10+6\cdot 6=60+36=96} Thus, to calculate 6 ⋅ 16 {\displaystyle 6\cdot 16} in one's head, one first multiplies 6 ⋅ 10 {\displaystyle 6\cdot 10} and 6 ⋅ 6 {\displaystyle 6\cdot 6} and adds the intermediate results. Written multiplication is also based on the distributive law. Second example (with variables): 3 a 2 b ⋅ ( 4 a − 5 b ) = 3 a 2 b ⋅ 4 a − 3 a 2 b ⋅ 5 b = 12 a 3 b − 15 a 2 b 2 {\displaystyle 3a^{2}b\cdot (4a-5b)=3a^{2}b\cdot 4a-3a^{2}b\cdot 5b=12a^{3}b-15a^{2}b^{2}} Third example (with two sums): ( a + b ) ⋅ ( a − b ) = a ⋅ ( a − b ) + b ⋅ ( a − b ) = a 2 − a b + b a − b 2 = a 2 − b 2 = ( a + b ) ⋅ a − ( a + b ) ⋅ b = a 2 + b a − a b − b 2 = a 2 − b 2 {\displaystyle {\begin{aligned}(a+b)\cdot (a-b)&=a\cdot (a-b)+b\cdot (a-b)=a^{2}-ab+ba-b^{2}=a^{2}-b^{2}\\&=(a+b)\cdot a-(a+b)\cdot b=a^{2}+ba-ab-b^{2}=a^{2}-b^{2}\\\end{aligned}}} Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out. Fourth example: Here the distributive law is applied the other way around compared to the previous examples. Consider 12 a 3 b 2 − 30 a 4 b c + 18 a 2 b 3 c 2 . {\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}\,.} Since the factor 6 a 2 b {\displaystyle 6a^{2}b} occurs in all summands, it can be factored out.
That is, due to the distributive law one obtains 12 a 3 b 2 − 30 a 4 b c + 18 a 2 b 3 c 2 = 6 a 2 b ( 2 a b − 5 a 2 c + 3 b 2 c 2 ) . {\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}=6a^{2}b\left(2ab-5a^{2}c+3b^{2}c^{2}\right).} === Matrices === The distributive law is valid for matrix multiplication. More precisely, ( A + B ) ⋅ C = A ⋅ C + B ⋅ C {\displaystyle (A+B)\cdot C=A\cdot C+B\cdot C} for all l × m {\displaystyle l\times m} -matrices A , B {\displaystyle A,B} and m × n {\displaystyle m\times n} -matrices C , {\displaystyle C,} as well as A ⋅ ( B + C ) = A ⋅ B + A ⋅ C {\displaystyle A\cdot (B+C)=A\cdot B+A\cdot C} for all l × m {\displaystyle l\times m} -matrices A {\displaystyle A} and m × n {\displaystyle m\times n} -matrices B , C . {\displaystyle B,C.} Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws. === Other examples === Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive. The cross product is left- and right-distributive over vector addition, though not commutative. The union of sets is distributive over intersection, and intersection is distributive over union. Logical disjunction ("or") is distributive over logical conjunction ("and"), and vice versa. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice versa: max ( a , min ( b , c ) ) = min ( max ( a , b ) , max ( a , c ) ) and min ( a , max ( b , c ) ) = max ( min ( a , b ) , min ( a , c ) ) . 
{\displaystyle \max(a,\min(b,c))=\min(\max(a,b),\max(a,c))\quad {\text{ and }}\quad \min(a,\max(b,c))=\max(\min(a,b),\min(a,c)).} For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd ( a , lcm ( b , c ) ) = lcm ( gcd ( a , b ) , gcd ( a , c ) ) and lcm ( a , gcd ( b , c ) ) = gcd ( lcm ( a , b ) , lcm ( a , c ) ) . {\displaystyle \gcd(a,\operatorname {lcm} (b,c))=\operatorname {lcm} (\gcd(a,b),\gcd(a,c))\quad {\text{ and }}\quad \operatorname {lcm} (a,\gcd(b,c))=\gcd(\operatorname {lcm} (a,b),\operatorname {lcm} (a,c)).} For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max ( b , c ) = max ( a + b , a + c ) and a + min ( b , c ) = min ( a + b , a + c ) . {\displaystyle a+\max(b,c)=\max(a+b,a+c)\quad {\text{ and }}\quad a+\min(b,c)=\min(a+b,a+c).} For binomial multiplication, distribution is sometimes referred to as the FOIL Method (First terms a c , {\displaystyle ac,} Outer a d , {\displaystyle ad,} Inner b c , {\displaystyle bc,} and Last b d {\displaystyle bd} ) such as: ( a + b ) ⋅ ( c + d ) = a c + a d + b c + b d . {\displaystyle (a+b)\cdot (c+d)=ac+ad+bc+bd.} In all semirings, including the complex numbers, the quaternions, polynomials, and matrices, multiplication distributes over addition: u ( v + w ) = u v + u w , ( u + v ) w = u w + v w . {\displaystyle u(v+w)=uv+uw,(u+v)w=uw+vw.} In all algebras over a field, including the octonions and other non-associative algebras, multiplication distributes over addition. == Propositional logic == === Rule of replacement === In standard truth-functional propositional logic, distribution in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. 
The rules are ( P ∧ ( Q ∨ R ) ) ⇔ ( ( P ∧ Q ) ∨ ( P ∧ R ) ) and ( P ∨ ( Q ∧ R ) ) ⇔ ( ( P ∨ Q ) ∧ ( P ∨ R ) ) {\displaystyle (P\land (Q\lor R))\Leftrightarrow ((P\land Q)\lor (P\land R))\qquad {\text{ and }}\qquad (P\lor (Q\land R))\Leftrightarrow ((P\lor Q)\land (P\lor R))} where " ⇔ {\displaystyle \Leftrightarrow } ", also written ≡ , {\displaystyle \,\equiv ,\,} is a metalogical symbol representing "can be replaced in a proof with" or "is logically equivalent to". === Truth functional connectives === Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies. ( P ∧ ( Q ∨ R ) ) ⇔ ( ( P ∧ Q ) ∨ ( P ∧ R ) ) Distribution of conjunction over disjunction ( P ∨ ( Q ∧ R ) ) ⇔ ( ( P ∨ Q ) ∧ ( P ∨ R ) ) Distribution of disjunction over conjunction ( P ∧ ( Q ∧ R ) ) ⇔ ( ( P ∧ Q ) ∧ ( P ∧ R ) ) Distribution of conjunction over conjunction ( P ∨ ( Q ∨ R ) ) ⇔ ( ( P ∨ Q ) ∨ ( P ∨ R ) ) Distribution of disjunction over disjunction ( P → ( Q → R ) ) ⇔ ( ( P → Q ) → ( P → R ) ) Distribution of implication ( P → ( Q ↔ R ) ) ⇔ ( ( P → Q ) ↔ ( P → R ) ) Distribution of implication over equivalence ( P → ( Q ∧ R ) ) ⇔ ( ( P → Q ) ∧ ( P → R ) ) Distribution of implication over conjunction ( P ∨ ( Q ↔ R ) ) ⇔ ( ( P ∨ Q ) ↔ ( P ∨ R ) ) Distribution of disjunction over equivalence {\displaystyle {\begin{alignedat}{13}&(P&&\;\land &&(Q\lor R))&&\;\Leftrightarrow \;&&((P\land Q)&&\;\lor (P\land R))&&\quad {\text{ Distribution of }}&&{\text{ conjunction }}&&{\text{ over }}&&{\text{ disjunction }}\\&(P&&\;\lor &&(Q\land R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\;\land (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\land &&(Q\land R))&&\;\Leftrightarrow \;&&((P\land Q)&&\;\land (P\land R))&&\quad {\text{ Distribution of }}&&{\text{ 
conjunction }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\lor &&(Q\lor R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\;\lor (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ disjunction }}\\&(P&&\to &&(Q\to R))&&\;\Leftrightarrow \;&&((P\to Q)&&\to (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ }}&&{\text{ }}\\&(P&&\to &&(Q\leftrightarrow R))&&\;\Leftrightarrow \;&&((P\to Q)&&\leftrightarrow (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ over }}&&{\text{ equivalence }}\\&(P&&\to &&(Q\land R))&&\;\Leftrightarrow \;&&((P\to Q)&&\;\land (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\lor &&(Q\leftrightarrow R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\leftrightarrow (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ equivalence }}\\\end{alignedat}}} Double distribution ( ( P ∧ Q ) ∨ ( R ∧ S ) ) ⇔ ( ( ( P ∨ R ) ∧ ( P ∨ S ) ) ∧ ( ( Q ∨ R ) ∧ ( Q ∨ S ) ) ) ( ( P ∨ Q ) ∧ ( R ∨ S ) ) ⇔ ( ( ( P ∧ R ) ∨ ( P ∧ S ) ) ∨ ( ( Q ∧ R ) ∨ ( Q ∧ S ) ) ) {\displaystyle {\begin{alignedat}{13}&((P\land Q)&&\;\lor (R\land S))&&\;\Leftrightarrow \;&&(((P\lor R)\land (P\lor S))&&\;\land ((Q\lor R)\land (Q\lor S)))&&\\&((P\lor Q)&&\;\land (R\lor S))&&\;\Leftrightarrow \;&&(((P\land R)\lor (P\land S))&&\;\lor ((Q\land R)\lor (Q\land S)))&&\\\end{alignedat}}} == Distributivity and rounding == In approximate arithmetic, such as floating-point arithmetic, the distributive property of multiplication (and division) over addition may fail because of the limitations of arithmetic precision. For example, the identity 1 / 3 + 1 / 3 + 1 / 3 = ( 1 + 1 + 1 ) / 3 {\displaystyle 1/3+1/3+1/3=(1+1+1)/3} fails in decimal arithmetic, regardless of the number of significant digits. 
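This failure can be reproduced with Python's decimal module; a minimal sketch at four significant digits (the chosen precision is arbitrary — any fixed precision exhibits the same effect):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4            # work with 4 significant decimal digits

one, three = Decimal(1), Decimal(3)

lhs = one / three + one / three + one / three   # 0.3333 + 0.3333 + 0.3333
rhs = (one + one + one) / three                 # 3 / 3

assert lhs != rhs                # distributivity fails under rounding
```

At this precision each 1/3 rounds to 0.3333, so the left-hand side accumulates to 0.9999 while the right-hand side is exactly 1.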
Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable. == In rings and other structures == Distributivity is most commonly found in semirings, notably the particular cases of rings and distributive lattices. A semiring has two binary operations, commonly denoted + {\displaystyle \,+\,} and ∗ , {\displaystyle \,*,} and requires that ∗ {\displaystyle \,*\,} distribute over + . {\displaystyle \,+.} A ring is a semiring with additive inverses. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨ . {\displaystyle \,\land {\text{ and }}\lor .} If either of these operations distributes over the other (say ∧ {\displaystyle \,\land \,} distributes over ∨ {\displaystyle \,\lor } ), then the reverse also holds ( ∨ {\displaystyle \,\lor \,} distributes over ∧ {\displaystyle \,\land \,} ), and the lattice is called distributive. See also Distributivity (order theory). A Boolean algebra can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Near-rings and near-fields are similar structures, analogous to rings and division rings, in which part of the distributivity is dropped: the operations are usually defined to be distributive on the right but not on the left. == Generalizations == In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law, while others are defined in the presence of only one binary operation; the corresponding definitions and their relations are given in the article distributivity (order theory).
This also includes the notion of a completely distributive lattice. In the presence of an ordering relation, one can also weaken the above equalities by replacing = {\displaystyle \,=\,} by either ≤ {\displaystyle \,\leq \,} or ≥ . {\displaystyle \,\geq .} Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic. In category theory, if ( S , μ , ν ) {\displaystyle (S,\mu ,\nu )} and ( S ′ , μ ′ , ν ′ ) {\displaystyle \left(S^{\prime },\mu ^{\prime },\nu ^{\prime }\right)} are monads on a category C , {\displaystyle C,} a distributive law S . S ′ → S ′ . S {\displaystyle S.S^{\prime }\to S^{\prime }.S} is a natural transformation λ : S . S ′ → S ′ . S {\displaystyle \lambda :S.S^{\prime }\to S^{\prime }.S} such that ( S ′ , λ ) {\displaystyle \left(S^{\prime },\lambda \right)} is a lax map of monads S → S {\displaystyle S\to S} and ( S , λ ) {\displaystyle (S,\lambda )} is a colax map of monads S ′ → S ′ . {\displaystyle S^{\prime }\to S^{\prime }.} This is exactly the data needed to define a monad structure on S ′ . S {\displaystyle S^{\prime }.S} : the multiplication map is S ′ μ . μ ′ S 2 . S ′ λ S {\displaystyle S^{\prime }\mu .\mu ^{\prime }S^{2}.S^{\prime }\lambda S} and the unit map is η ′ S . η . {\displaystyle \eta ^{\prime }S.\eta .} See: distributive law between monads. A generalized distributive law has also been proposed in the area of information theory. === Antidistributivity === The ubiquitous identity that relates inverses to the binary operation in any group, namely ( x y ) − 1 = y − 1 x − 1 , {\displaystyle (xy)^{-1}=y^{-1}x^{-1},} which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation). 
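The identity (xy)⁻¹ = y⁻¹x⁻¹ can be checked in a concrete non-commutative group, for instance invertible 2×2 matrices. Below is a sketch using exact rational arithmetic; the helper functions are ours, not from any library:

```python
from fractions import Fraction

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    """Inverse of a 2x2 matrix with nonzero determinant."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

F = Fraction
X = [[F(1), F(2)], [F(3), F(4)]]
Y = [[F(0), F(1)], [F(1), F(1)]]

# (XY)^{-1} equals Y^{-1} X^{-1}, not X^{-1} Y^{-1}, in general:
assert inv(matmul(X, Y)) == matmul(inv(Y), inv(X))
assert inv(matmul(X, Y)) != matmul(inv(X), inv(Y))
```

The reversal of the factors is exactly the "antidistributive" behaviour of inversion described above.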
In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left-nearring (i.e. one in which all elements distribute when multiplied on the left), an antidistributive element a {\displaystyle a} reverses the order of addition when multiplied on the right: ( x + y ) a = y a + x a . {\displaystyle (x+y)a=ya+xa.} In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them: ( a ∨ b ) ⇒ c ≡ ( a ⇒ c ) ∧ ( b ⇒ c ) {\displaystyle (a\lor b)\Rightarrow c\equiv (a\Rightarrow c)\land (b\Rightarrow c)} ( a ∧ b ) ⇒ c ≡ ( a ⇒ c ) ∨ ( b ⇒ c ) . {\displaystyle (a\land b)\Rightarrow c\equiv (a\Rightarrow c)\lor (b\Rightarrow c).} These two tautologies are a direct consequence of the duality in De Morgan's laws. == Notes == == External links == A demonstration of the Distributive Law for integer arithmetic (from cut-the-knot)
|
Wikipedia:Ditkin set#0
|
In mathematics, a Ditkin set, introduced by Ditkin (1939), is a closed subset of the circle such that a function f vanishing on the set can be approximated by functions φnf with φn vanishing in a neighborhood of the set. == References == Ditkin, V. (1939), "On the structure of ideals in certain normed rings", Uchenye Zapiski Moskov. Gos. Univ. Matematika, 30: 83–130, MR 0002012
|
Wikipedia:Divided differences#0
|
In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation. Divided differences is a recursive division process. Given a sequence of data points ( x 0 , y 0 ) , … , ( x n , y n ) {\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})} , the method calculates the coefficients of the interpolation polynomial of these points in the Newton form. It is sometimes denoted by a delta with a bar: △ | {\displaystyle {\text{△}}\!\!\!|\,\,} or ◿ ◺ {\displaystyle {\text{◿}}\!{\text{◺}}} . == Definition == Given n + 1 data points ( x 0 , y 0 ) , … , ( x n , y n ) {\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})} where the x k {\displaystyle x_{k}} are assumed to be pairwise distinct, the forward divided differences are defined as: [ y k ] := y k , k ∈ { 0 , … , n } [ y k , … , y k + j ] := [ y k + 1 , … , y k + j ] − [ y k , … , y k + j − 1 ] x k + j − x k , k ∈ { 0 , … , n − j } , j ∈ { 1 , … , n } . 
{\displaystyle {\begin{aligned}{\mathopen {[}}y_{k}]&:=y_{k},&&k\in \{0,\ldots ,n\}\\{\mathopen {[}}y_{k},\ldots ,y_{k+j}]&:={\frac {[y_{k+1},\ldots ,y_{k+j}]-[y_{k},\ldots ,y_{k+j-1}]}{x_{k+j}-x_{k}}},&&k\in \{0,\ldots ,n-j\},\ j\in \{1,\ldots ,n\}.\end{aligned}}} To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of j above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding x-values: x 0 y 0 = [ y 0 ] [ y 0 , y 1 ] x 1 y 1 = [ y 1 ] [ y 0 , y 1 , y 2 ] [ y 1 , y 2 ] [ y 0 , y 1 , y 2 , y 3 ] x 2 y 2 = [ y 2 ] [ y 1 , y 2 , y 3 ] [ y 2 , y 3 ] x 3 y 3 = [ y 3 ] {\displaystyle {\begin{matrix}x_{0}&y_{0}=[y_{0}]&&&\\&&[y_{0},y_{1}]&&\\x_{1}&y_{1}=[y_{1}]&&[y_{0},y_{1},y_{2}]&\\&&[y_{1},y_{2}]&&[y_{0},y_{1},y_{2},y_{3}]\\x_{2}&y_{2}=[y_{2}]&&[y_{1},y_{2},y_{3}]&\\&&[y_{2},y_{3}]&&\\x_{3}&y_{3}=[y_{3}]&&&\\\end{matrix}}} === Notation === Note that the divided difference [ y k , … , y k + j ] {\displaystyle [y_{k},\ldots ,y_{k+j}]} depends on the values x k , … , x k + j {\displaystyle x_{k},\ldots ,x_{k+j}} and y k , … , y k + j {\displaystyle y_{k},\ldots ,y_{k+j}} , but the notation hides the dependency on the x-values. If the data points are given by a function f, ( x 0 , y 0 ) , … , ( x k , y n ) = ( x 0 , f ( x 0 ) ) , … , ( x n , f ( x n ) ) {\displaystyle (x_{0},y_{0}),\ldots ,(x_{k},y_{n})=(x_{0},f(x_{0})),\ldots ,(x_{n},f(x_{n}))} one sometimes writes the divided difference in the notation f [ x k , … , x k + j ] = def [ f ( x k ) , … , f ( x k + j ) ] = [ y k , … , y k + j ] . 
{\displaystyle f[x_{k},\ldots ,x_{k+j}]\ {\stackrel {\text{def}}{=}}\ [f(x_{k}),\ldots ,f(x_{k+j})]=[y_{k},\ldots ,y_{k+j}].} Other notations for the divided difference of the function ƒ on the nodes x0, ..., xn are: f [ x k , … , x k + j ] = [ x 0 , … , x n ] f = [ x 0 , … , x n ; f ] = D [ x 0 , … , x n ] f . {\displaystyle f[x_{k},\ldots ,x_{k+j}]={\mathopen {[}}x_{0},\ldots ,x_{n}]f={\mathopen {[}}x_{0},\ldots ,x_{n};f]=D[x_{0},\ldots ,x_{n}]f.} == Example == Divided differences for k = 0 {\displaystyle k=0} and the first few values of j {\displaystyle j} : [ y 0 ] = y 0 [ y 0 , y 1 ] = y 1 − y 0 x 1 − x 0 [ y 0 , y 1 , y 2 ] = [ y 1 , y 2 ] − [ y 0 , y 1 ] x 2 − x 0 = y 2 − y 1 x 2 − x 1 − y 1 − y 0 x 1 − x 0 x 2 − x 0 = y 2 − y 1 ( x 2 − x 1 ) ( x 2 − x 0 ) − y 1 − y 0 ( x 1 − x 0 ) ( x 2 − x 0 ) [ y 0 , y 1 , y 2 , y 3 ] = [ y 1 , y 2 , y 3 ] − [ y 0 , y 1 , y 2 ] x 3 − x 0 {\displaystyle {\begin{aligned}{\mathopen {[}}y_{0}]&=y_{0}\\{\mathopen {[}}y_{0},y_{1}]&={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}\\{\mathopen {[}}y_{0},y_{1},y_{2}]&={\frac {{\mathopen {[}}y_{1},y_{2}]-{\mathopen {[}}y_{0},y_{1}]}{x_{2}-x_{0}}}={\frac {{\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}-{\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}}{x_{2}-x_{0}}}={\frac {y_{2}-y_{1}}{(x_{2}-x_{1})(x_{2}-x_{0})}}-{\frac {y_{1}-y_{0}}{(x_{1}-x_{0})(x_{2}-x_{0})}}\\{\mathopen {[}}y_{0},y_{1},y_{2},y_{3}]&={\frac {{\mathopen {[}}y_{1},y_{2},y_{3}]-{\mathopen {[}}y_{0},y_{1},y_{2}]}{x_{3}-x_{0}}}\end{aligned}}} Thus, the table corresponding to these terms upto two columns has the following form: x 0 y 0 y 1 − y 0 x 1 − x 0 x 1 y 1 y 2 − y 1 x 2 − x 1 − y 1 − y 0 x 1 − x 0 x 2 − x 0 y 2 − y 1 x 2 − x 1 x 2 y 2 ⋮ ⋮ ⋮ ⋮ ⋮ x n y n {\displaystyle {\begin{matrix}x_{0}&y_{0}&&\\&&{y_{1}-y_{0} \over x_{1}-x_{0}}&\\x_{1}&y_{1}&&{{y_{2}-y_{1} \over x_{2}-x_{1}}-{y_{1}-y_{0} \over x_{1}-x_{0}} \over x_{2}-x_{0}}\\&&{y_{2}-y_{1} \over x_{2}-x_{1}}&\\x_{2}&y_{2}&&\vdots \\&&\vdots &\\\vdots &&&\vdots \\&&\vdots 
&\\x_{n}&y_{n}&&\\\end{matrix}}} == Properties == Linearity ( f + g ) [ x 0 , … , x n ] = f [ x 0 , … , x n ] + g [ x 0 , … , x n ] ( λ ⋅ f ) [ x 0 , … , x n ] = λ ⋅ f [ x 0 , … , x n ] {\displaystyle {\begin{aligned}(f+g)[x_{0},\dots ,x_{n}]&=f[x_{0},\dots ,x_{n}]+g[x_{0},\dots ,x_{n}]\\(\lambda \cdot f)[x_{0},\dots ,x_{n}]&=\lambda \cdot f[x_{0},\dots ,x_{n}]\end{aligned}}} Leibniz rule ( f ⋅ g ) [ x 0 , … , x n ] = f [ x 0 ] ⋅ g [ x 0 , … , x n ] + f [ x 0 , x 1 ] ⋅ g [ x 1 , … , x n ] + ⋯ + f [ x 0 , … , x n ] ⋅ g [ x n ] = ∑ r = 0 n f [ x 0 , … , x r ] ⋅ g [ x r , … , x n ] {\displaystyle (f\cdot g)[x_{0},\dots ,x_{n}]=f[x_{0}]\cdot g[x_{0},\dots ,x_{n}]+f[x_{0},x_{1}]\cdot g[x_{1},\dots ,x_{n}]+\dots +f[x_{0},\dots ,x_{n}]\cdot g[x_{n}]=\sum _{r=0}^{n}f[x_{0},\ldots ,x_{r}]\cdot g[x_{r},\ldots ,x_{n}]} Divided differences are symmetric: If σ : { 0 , … , n } → { 0 , … , n } {\displaystyle \sigma :\{0,\dots ,n\}\to \{0,\dots ,n\}} is a permutation then f [ x 0 , … , x n ] = f [ x σ ( 0 ) , … , x σ ( n ) ] {\displaystyle f[x_{0},\dots ,x_{n}]=f[x_{\sigma (0)},\dots ,x_{\sigma (n)}]} Polynomial interpolation in the Newton form: if P {\displaystyle P} is a polynomial function of degree ≤ n {\displaystyle \leq n} , and p [ x 0 , … , x n ] {\displaystyle p[x_{0},\dots ,x_{n}]} is the divided difference, then P n − 1 ( x ) = p [ x 0 ] + p [ x 0 , x 1 ] ( x − x 0 ) + p [ x 0 , x 1 , x 2 ] ( x − x 0 ) ( x − x 1 ) + ⋯ + p [ x 0 , … , x n ] ( x − x 0 ) ( x − x 1 ) ⋯ ( x − x n − 1 ) {\displaystyle P_{n-1}(x)=p[x_{0}]+p[x_{0},x_{1}](x-x_{0})+p[x_{0},x_{1},x_{2}](x-x_{0})(x-x_{1})+\cdots +p[x_{0},\ldots ,x_{n}](x-x_{0})(x-x_{1})\cdots (x-x_{n-1})} If p {\displaystyle p} is a polynomial function of degree < n {\displaystyle <n} , then p [ x 0 , … , x n ] = 0. {\displaystyle p[x_{0},\dots ,x_{n}]=0.} Mean value theorem for divided differences: if f {\displaystyle f} is n times differentiable, then f [ x 0 , … , x n ] = f ( n ) ( ξ ) n ! 
{\displaystyle f[x_{0},\dots ,x_{n}]={\frac {f^{(n)}(\xi )}{n!}}} for a number ξ {\displaystyle \xi } in the open interval determined by the smallest and largest of the x k {\displaystyle x_{k}} 's. == Matrix form == The divided difference scheme can be put into an upper triangular matrix: T f ( x 0 , … , x n ) = ( f [ x 0 ] f [ x 0 , x 1 ] f [ x 0 , x 1 , x 2 ] … f [ x 0 , … , x n ] 0 f [ x 1 ] f [ x 1 , x 2 ] … f [ x 1 , … , x n ] 0 0 f [ x 2 ] … f [ x 2 , … , x n ] ⋮ ⋮ ⋱ ⋮ 0 0 0 … f [ x n ] ) . {\displaystyle T_{f}(x_{0},\dots ,x_{n})={\begin{pmatrix}f[x_{0}]&f[x_{0},x_{1}]&f[x_{0},x_{1},x_{2}]&\ldots &f[x_{0},\dots ,x_{n}]\\0&f[x_{1}]&f[x_{1},x_{2}]&\ldots &f[x_{1},\dots ,x_{n}]\\0&0&f[x_{2}]&\ldots &f[x_{2},\dots ,x_{n}]\\\vdots &\vdots &&\ddots &\vdots \\0&0&0&\ldots &f[x_{n}]\end{pmatrix}}.} Then it holds T f + g ( x ) = T f ( x ) + T g ( x ) {\displaystyle T_{f+g}(x)=T_{f}(x)+T_{g}(x)} T λ f ( x ) = λ T f ( x ) {\displaystyle T_{\lambda f}(x)=\lambda T_{f}(x)} if λ {\displaystyle \lambda } is a scalar T f ⋅ g ( x ) = T f ( x ) ⋅ T g ( x ) {\displaystyle T_{f\cdot g}(x)=T_{f}(x)\cdot T_{g}(x)} This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes x form a commutative ring. Since T f ( x ) {\displaystyle T_{f}(x)} is a triangular matrix, its eigenvalues are obviously f ( x 0 ) , … , f ( x n ) {\displaystyle f(x_{0}),\dots ,f(x_{n})} . Let δ ξ {\displaystyle \delta _{\xi }} be a Kronecker delta-like function, that is δ ξ ( t ) = { 1 : t = ξ , 0 : else . {\displaystyle \delta _{\xi }(t)={\begin{cases}1&:t=\xi ,\\0&:{\mbox{else}}.\end{cases}}} Obviously f ⋅ δ ξ = f ( ξ ) ⋅ δ ξ {\displaystyle f\cdot \delta _{\xi }=f(\xi )\cdot \delta _{\xi }} , thus δ ξ {\displaystyle \delta _{\xi }} is an eigenfunction of the pointwise function multiplication. 
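The ring structure of these matrices can be checked numerically. The sketch below (helper names are ours) builds T_f from the divided-difference recursion and verifies T_{f·g}(x) = T_f(x)·T_g(x) = T_g(x)·T_f(x) on a small node set, using exact fractions:

```python
from fractions import Fraction

def divdiff_matrix(f, xs):
    """Upper triangular matrix T_f(x) with entries f[x_i, ..., x_j] for i <= j."""
    n = len(xs)
    dd = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        dd[i][i] = f(xs[i])              # f[x_i] = f(x_i)
    for j in range(1, n):                # j = offset within the table
        for i in range(n - j):           # recursion for f[x_i, ..., x_{i+j}]
            dd[i][i + j] = (dd[i + 1][i + j] - dd[i][i + j - 1]) / (xs[i + j] - xs[i])
    return dd

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

xs = [Fraction(k) for k in (0, 1, 3, 4)]   # pairwise distinct nodes
f = lambda t: t * t                         # f(x) = x^2
g = lambda t: t + 1                         # g(x) = x + 1
fg = lambda t: f(t) * g(t)                  # (f.g)(x) = x^3 + x^2

Tf, Tg, Tfg = (divdiff_matrix(h, xs) for h in (f, g, fg))

# Leibniz rule in matrix form: the matrices represent f.g and commute
assert Tfg == matmul(Tf, Tg) == matmul(Tg, Tf)
```

The equality holds exactly here because the Leibniz rule expresses each entry of T_{f·g}(x) as the corresponding entry of the product of the two triangular matrices.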
That is, T δ x i ( x ) {\displaystyle T_{\delta _{x_{i}}}(x)} is somehow an "eigenmatrix" of T f ( x ) {\displaystyle T_{f}(x)} : T f ( x ) ⋅ T δ x i ( x ) = f ( x i ) ⋅ T δ x i ( x ) {\displaystyle T_{f}(x)\cdot T_{\delta _{x_{i}}}(x)=f(x_{i})\cdot T_{\delta _{x_{i}}}(x)} . However, all columns of T δ x i ( x ) {\displaystyle T_{\delta _{x_{i}}}(x)} are multiples of each other, so the matrix rank of T δ x i ( x ) {\displaystyle T_{\delta _{x_{i}}}(x)} is 1. Thus you can compose the matrix of all eigenvectors of T f ( x ) {\displaystyle T_{f}(x)} from the i {\displaystyle i} -th column of each T δ x i ( x ) {\displaystyle T_{\delta _{x_{i}}}(x)} . Denote the matrix of eigenvectors with U ( x ) {\displaystyle U(x)} . Example: U ( x 0 , x 1 , x 2 , x 3 ) = ( 1 1 ( x 1 − x 0 ) 1 ( x 2 − x 0 ) ( x 2 − x 1 ) 1 ( x 3 − x 0 ) ( x 3 − x 1 ) ( x 3 − x 2 ) 0 1 1 ( x 2 − x 1 ) 1 ( x 3 − x 1 ) ( x 3 − x 2 ) 0 0 1 1 ( x 3 − x 2 ) 0 0 0 1 ) {\displaystyle U(x_{0},x_{1},x_{2},x_{3})={\begin{pmatrix}1&{\frac {1}{(x_{1}-x_{0})}}&{\frac {1}{(x_{2}-x_{0})(x_{2}-x_{1})}}&{\frac {1}{(x_{3}-x_{0})(x_{3}-x_{1})(x_{3}-x_{2})}}\\0&1&{\frac {1}{(x_{2}-x_{1})}}&{\frac {1}{(x_{3}-x_{1})(x_{3}-x_{2})}}\\0&0&1&{\frac {1}{(x_{3}-x_{2})}}\\0&0&0&1\end{pmatrix}}} The diagonalization of T f ( x ) {\displaystyle T_{f}(x)} can be written as U ( x ) ⋅ diag ( f ( x 0 ) , … , f ( x n ) ) = T f ( x ) ⋅ U ( x ) .
{\displaystyle U(x)\cdot \operatorname {diag} (f(x_{0}),\dots ,f(x_{n}))=T_{f}(x)\cdot U(x).} === Polynomials and power series === The matrix J = ( x 0 1 0 0 ⋯ 0 0 x 1 1 0 ⋯ 0 0 0 x 2 1 0 ⋮ ⋮ ⋱ ⋱ 0 0 0 0 ⋱ 1 0 0 0 0 x n ) {\displaystyle J={\begin{pmatrix}x_{0}&1&0&0&\cdots &0\\0&x_{1}&1&0&\cdots &0\\0&0&x_{2}&1&&0\\\vdots &\vdots &&\ddots &\ddots &\\0&0&0&0&\;\ddots &1\\0&0&0&0&&x_{n}\end{pmatrix}}} contains the divided difference scheme for the identity function with respect to the nodes x 0 , … , x n {\displaystyle x_{0},\dots ,x_{n}} , thus J m {\displaystyle J^{m}} contains the divided differences for the power function with exponent m {\displaystyle m} . Consequently, you can obtain the divided differences for a polynomial function p {\displaystyle p} by applying p {\displaystyle p} to the matrix J {\displaystyle J} : If p ( ξ ) = a 0 + a 1 ⋅ ξ + ⋯ + a m ⋅ ξ m {\displaystyle p(\xi )=a_{0}+a_{1}\cdot \xi +\dots +a_{m}\cdot \xi ^{m}} and p ( J ) = a 0 + a 1 ⋅ J + ⋯ + a m ⋅ J m {\displaystyle p(J)=a_{0}+a_{1}\cdot J+\dots +a_{m}\cdot J^{m}} then T p ( x ) = p ( J ) . {\displaystyle T_{p}(x)=p(J).} This is known as Opitz' formula. Now consider increasing the degree of p {\displaystyle p} to infinity, i.e. turn the Taylor polynomial into a Taylor series. Let f {\displaystyle f} be a function which corresponds to a power series. You can compute the divided difference scheme for f {\displaystyle f} by applying the corresponding matrix series to J {\displaystyle J} : If f ( ξ ) = ∑ k = 0 ∞ a k ξ k {\displaystyle f(\xi )=\sum _{k=0}^{\infty }a_{k}\xi ^{k}} and f ( J ) = ∑ k = 0 ∞ a k J k {\displaystyle f(J)=\sum _{k=0}^{\infty }a_{k}J^{k}} then T f ( x ) = f ( J ) . 
{\displaystyle T_{f}(x)=f(J).} == Alternative characterizations == === Expanded form === f [ x 0 ] = f ( x 0 ) f [ x 0 , x 1 ] = f ( x 0 ) ( x 0 − x 1 ) + f ( x 1 ) ( x 1 − x 0 ) f [ x 0 , x 1 , x 2 ] = f ( x 0 ) ( x 0 − x 1 ) ⋅ ( x 0 − x 2 ) + f ( x 1 ) ( x 1 − x 0 ) ⋅ ( x 1 − x 2 ) + f ( x 2 ) ( x 2 − x 0 ) ⋅ ( x 2 − x 1 ) f [ x 0 , x 1 , x 2 , x 3 ] = f ( x 0 ) ( x 0 − x 1 ) ⋅ ( x 0 − x 2 ) ⋅ ( x 0 − x 3 ) + f ( x 1 ) ( x 1 − x 0 ) ⋅ ( x 1 − x 2 ) ⋅ ( x 1 − x 3 ) + f ( x 2 ) ( x 2 − x 0 ) ⋅ ( x 2 − x 1 ) ⋅ ( x 2 − x 3 ) + f ( x 3 ) ( x 3 − x 0 ) ⋅ ( x 3 − x 1 ) ⋅ ( x 3 − x 2 ) f [ x 0 , … , x n ] = ∑ j = 0 n f ( x j ) ∏ k ∈ { 0 , … , n } ∖ { j } ( x j − x k ) {\displaystyle {\begin{aligned}f[x_{0}]&=f(x_{0})\\f[x_{0},x_{1}]&={\frac {f(x_{0})}{(x_{0}-x_{1})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})}}\\f[x_{0},x_{1},x_{2}]&={\frac {f(x_{0})}{(x_{0}-x_{1})\cdot (x_{0}-x_{2})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})\cdot (x_{1}-x_{2})}}+{\frac {f(x_{2})}{(x_{2}-x_{0})\cdot (x_{2}-x_{1})}}\\f[x_{0},x_{1},x_{2},x_{3}]&={\frac {f(x_{0})}{(x_{0}-x_{1})\cdot (x_{0}-x_{2})\cdot (x_{0}-x_{3})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})\cdot (x_{1}-x_{2})\cdot (x_{1}-x_{3})}}+\\&\quad \quad {\frac {f(x_{2})}{(x_{2}-x_{0})\cdot (x_{2}-x_{1})\cdot (x_{2}-x_{3})}}+{\frac {f(x_{3})}{(x_{3}-x_{0})\cdot (x_{3}-x_{1})\cdot (x_{3}-x_{2})}}\\f[x_{0},\dots ,x_{n}]&=\sum _{j=0}^{n}{\frac {f(x_{j})}{\prod _{k\in \{0,\dots ,n\}\setminus \{j\}}(x_{j}-x_{k})}}\end{aligned}}} With the help of the polynomial function ω ( ξ ) = ( ξ − x 0 ) ⋯ ( ξ − x n ) {\displaystyle \omega (\xi )=(\xi -x_{0})\cdots (\xi -x_{n})} this can be written as f [ x 0 , … , x n ] = ∑ j = 0 n f ( x j ) ω ′ ( x j ) . {\displaystyle f[x_{0},\dots ,x_{n}]=\sum _{j=0}^{n}{\frac {f(x_{j})}{\omega '(x_{j})}}.} === Peano form === If x 0 < x 1 < ⋯ < x n {\displaystyle x_{0}<x_{1}<\cdots <x_{n}} and n ≥ 1 {\displaystyle n\geq 1} , the divided differences can be expressed as f [ x 0 , … , x n ] = 1 ( n − 1 ) ! 
∫ x 0 x n f ( n ) ( t ) B n − 1 ( t ) d t {\displaystyle f[x_{0},\ldots ,x_{n}]={\frac {1}{(n-1)!}}\int _{x_{0}}^{x_{n}}f^{(n)}(t)\;B_{n-1}(t)\,dt} where f ( n ) {\displaystyle f^{(n)}} is the n {\displaystyle n} -th derivative of the function f {\displaystyle f} and B n − 1 {\displaystyle B_{n-1}} is a certain B-spline of degree n − 1 {\displaystyle n-1} for the data points x 0 , … , x n {\displaystyle x_{0},\dots ,x_{n}} , given by the formula B n − 1 ( t ) = ∑ k = 0 n ( max ( 0 , x k − t ) ) n − 1 ω ′ ( x k ) {\displaystyle B_{n-1}(t)=\sum _{k=0}^{n}{\frac {(\max(0,x_{k}-t))^{n-1}}{\omega '(x_{k})}}} This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences and B n − 1 {\displaystyle B_{n-1}} is the Peano kernel for the divided differences, all named after Giuseppe Peano. === Forward and backward differences === When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences. Given n+1 data points ( x 0 , y 0 ) , … , ( x n , y n ) {\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})} with x k = x 0 + k h , for k = 0 , … , n and fixed h > 0 {\displaystyle x_{k}=x_{0}+kh,\ {\text{ for }}\ k=0,\ldots ,n{\text{ and fixed }}h>0} the forward differences are defined as Δ ( 0 ) y k := y k , k = 0 , … , n Δ ( j ) y k := Δ ( j − 1 ) y k + 1 − Δ ( j − 1 ) y k , k = 0 , … , n − j , j = 1 , … , n . {\displaystyle {\begin{aligned}\Delta ^{(0)}y_{k}&:=y_{k},\qquad k=0,\ldots ,n\\\Delta ^{(j)}y_{k}&:=\Delta ^{(j-1)}y_{k+1}-\Delta ^{(j-1)}y_{k},\qquad k=0,\ldots ,n-j,\ j=1,\dots ,n.\end{aligned}}} whereas the backward differences are defined as: ∇ ( 0 ) y k := y k , k = 0 , … , n ∇ ( j ) y k := ∇ ( j − 1 ) y k − ∇ ( j − 1 ) y k − 1 , k = 0 , … , n − j , j = 1 , … , n . 
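For equidistant nodes, the conversion between forward differences and divided differences can be sketched as follows (illustrative code; the function name is not from the article):

```python
import math
import numpy as np

def forward_difference_table(y):
    """Row j holds Delta^(j) y_0, Delta^(j) y_1, ..., Delta^(j) y_{n-j}."""
    rows = [np.asarray(y, dtype=float)]
    while len(rows[-1]) > 1:
        rows.append(np.diff(rows[-1]))   # Delta^(j) y_k = Delta^(j-1) y_{k+1} - Delta^(j-1) y_k
    return rows

# Equidistant nodes x_k = x_0 + k*h (sample values, illustrative)
x0, h, n = 0.0, 0.5, 4
x = x0 + h * np.arange(n + 1)
y = np.sin(x)

rows = forward_difference_table(y)
# Divided difference [y_0, ..., y_k] = Delta^(k) y_0 / (k! * h^k)
divided = [rows[k][0] / (math.factorial(k) * h**k) for k in range(n + 1)]
```

The values in `divided` agree with the divided differences computed directly from the recursive definition, but require only subtractions and one final scaling.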
{\displaystyle {\begin{aligned}\nabla ^{(0)}y_{k}&:=y_{k},\qquad k=0,\ldots ,n\\\nabla ^{(j)}y_{k}&:=\nabla ^{(j-1)}y_{k}-\nabla ^{(j-1)}y_{k-1},\qquad k=0,\ldots ,n-j,\ j=1,\dots ,n.\end{aligned}}} Thus the forward difference table is written as: y 0 Δ y 0 y 1 Δ 2 y 0 Δ y 1 Δ 3 y 0 y 2 Δ 2 y 1 Δ y 2 y 3 {\displaystyle {\begin{matrix}y_{0}&&&\\&\Delta y_{0}&&\\y_{1}&&\Delta ^{2}y_{0}&\\&\Delta y_{1}&&\Delta ^{3}y_{0}\\y_{2}&&\Delta ^{2}y_{1}&\\&\Delta y_{2}&&\\y_{3}&&&\\\end{matrix}}} whereas the backwards difference table is written as: y 0 ∇ y 1 y 1 ∇ 2 y 2 ∇ y 2 ∇ 3 y 3 y 2 ∇ 2 y 3 ∇ y 3 y 3 {\displaystyle {\begin{matrix}y_{0}&&&\\&\nabla y_{1}&&\\y_{1}&&\nabla ^{2}y_{2}&\\&\nabla y_{2}&&\nabla ^{3}y_{3}\\y_{2}&&\nabla ^{2}y_{3}&\\&\nabla y_{3}&&\\y_{3}&&&\\\end{matrix}}} The relationship between divided differences and forward differences is [ y j , y j + 1 , … , y j + k ] = 1 k ! h k Δ ( k ) y j , {\displaystyle [y_{j},y_{j+1},\ldots ,y_{j+k}]={\frac {1}{k!h^{k}}}\Delta ^{(k)}y_{j},} whereas for backward differences: [ y j , y j − 1 , … , y j − k ] = 1 k ! h k ∇ ( k ) y j . {\displaystyle [{y}_{j},y_{j-1},\ldots ,{y}_{j-k}]={\frac {1}{k!h^{k}}}\nabla ^{(k)}y_{j}.} == See also == Difference quotient Neville's algorithm Polynomial interpolation Mean value theorem for divided differences Nörlund–Rice integral Pascal's triangle == References == Louis Melville Milne-Thomson (2000) [1933]. The Calculus of Finite Differences. American Mathematical Soc. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7. Myron B. Allen; Eli L. Isaacson (1998). Numerical Analysis for Applied Science. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1. Ron Goldman (2002). Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling. Morgan Kaufmann. Chapter 4:Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2. == External links == An implementation in Haskell.