Wikipedia:Kseniya Garaschuk#0
Kseniya Garaschuk (born 1982) is a Soviet-born Canadian mathematician and mathematics educator. She is an associate professor of mathematics and statistics at the University of the Fraser Valley, and the editor-in-chief of the mathematics journal Crux Mathematicorum. == Education and career == Garaschuk was born to a family of mathematicians in Minsk, Belarus, at a time when it was part of the Soviet Union. She began studying mathematics and computer science at the Belarusian State University but after a year, when she was 18, moved with her parents to Canada. She took a gap year to improve her English and then completed her undergraduate studies at Simon Fraser University, staying at Simon Fraser for an additional year to earn a master's degree for work on exponential sums in 2008. Next, she went to the University of Victoria for doctoral research in mathematics, in combinatorial design theory. She completed her PhD in 2014; her dissertation, Linear methods for rational triangle decompositions, was supervised by Peter Dukes. Finding herself isolated in her research work and more energized by teaching, Garaschuk took a postdoctoral fellowship in science education at the University of British Columbia, under the university's Carl Wieman Science Education Initiative, before joining the faculty at the University of the Fraser Valley in 2016. Her current research interests include examining the effectiveness of various classroom and assessment practices in undergraduate mathematics. As well as her editorial work with Crux Mathematicorum, Garaschuk has been active in service to the Canadian Mathematical Society (CMS) since 2008, as student committee chair, as a member of the board of directors, and in running mathematics camps and community mathematics events. She is a member of the CMS Education Committee and is a contributing editor of the CMS Education Notes.
== Book == With Andy Liu, Garaschuk is coauthor of the book Grade Five Competition from the Leningrad Mathematical Olympiad, 1979–1992 (Springer, 2020). == Recognition == In 2021, the Canadian Mathematical Society gave Garaschuk its Graham Wright Award for Distinguished Service and named her a fellow of the society. In 2018, Garaschuk won the University of the Fraser Valley Faculty of Science Teaching Award. In 2020, she was awarded the University of the Fraser Valley Faculty of Science Achievement Award for overall excellence in academic endeavours. == References ==
Wikipedia:Kunizo Yoneyama#0
Kunizō Yoneyama (米山 国蔵, 1877–1968) was a Japanese mathematician at Kyoto University working in topology. In 1917, he published the construction of the Lakes of Wada, which he named after his teacher Takeo Wada, to whom he credited the discovery. == Publications == Yoneyama, Kunizô (1917), "Theory of Continuous Set of Points (not finished)", Tôhoku Mathematical Journal, 12: 43–158 Yoneyama, Kunizô (1918), "Theory of Continuous Set of Points", Tôhoku Mathematical Journal, 13: 33–157 Yoneyama, Kunizô (1920), "On Continuous Set of Points, II", Tôhoku Mathematical Journal, 18: 134–186 == References == Mimura, Mamoru (1999), "The Japanese school of topology", in James, I. M. (ed.), History of topology, Amsterdam: North-Holland, pp. 863–882, doi:10.1016/B978-044482375-5/50032-8, ISBN 978-0-444-82375-5, MR 1721126 Neoi, Makoto (2004), A Study on Educational Viewpoints of a Mathematician Kunizo Yoneyama (in Japanese), Tokyo: Tokai University, p. 12
Wikipedia:Kurt Johansson (mathematician)#0
Kurt Johansson (born 1960) is a Swedish mathematician, specializing in probability theory. Johansson received his PhD in 1988 from Uppsala University under the supervision of Lennart Carleson and is a professor in mathematics at KTH Royal Institute of Technology. In 2000 Johansson was awarded the Rollo Davidson Prize in Probability theory. In 2002 he was an invited speaker of the International Congress of Mathematicians in Beijing and was awarded the Göran Gustafsson Prize. In 2006 he was elected a member of the Royal Swedish Academy of Sciences. In 2012 he was elected a fellow of the American Mathematical Society. == Selected publications == Johansson, Kurt (1997). "On Random Matrices from the Compact Classical Groups". The Annals of Mathematics. 145 (3): 519–545. doi:10.2307/2951843. JSTOR 2951843. Johansson, Kurt (1998). "On fluctuations of eigenvalues of random Hermitian matrices". Duke Mathematical Journal. 91 (1): 151–204. doi:10.1215/S0012-7094-98-09108-6. ISSN 0012-7094. Baik, Jinho; Deift, Percy; Johansson, Kurt (1999). "On the distribution of the length of the longest increasing subsequence of random permutations". Journal of the American Mathematical Society. 12 (4): 1119–1179. doi:10.1090/S0894-0347-99-00307-0. Johansson, Kurt (2000). "Transversal fluctuations for increasing subsequences on the plane". Probability Theory and Related Fields. 116 (4): 445–456. doi:10.1007/s004400050258. hdl:2027.42/142448. S2CID 16313314. Johansson, Kurt (2000). "Shape Fluctuations and Random Matrices". Communications in Mathematical Physics. 209 (2): 437–476. arXiv:math/9903134. Bibcode:2000CMaPh.209..437J. doi:10.1007/s002200050027. ISSN 0010-3616. S2CID 16291076. Johansson, Kurt (2001). "Random Growth and Random Matrices". European Congress of Mathematics. Progress in Mathematics, vol. 201. pp. 445–456. doi:10.1007/978-3-0348-8268-2_25. ISBN 978-3-0348-9497-5. Johansson, Kurt (2001). "Discrete orthogonal polynomial ensembles and the Plancherel measure" (PDF). 
Annals of Mathematics. 153 (1): 259–296. arXiv:math/9906120. doi:10.2307/2661375. JSTOR 2661375. S2CID 14120881. Johansson, Kurt (2002). "Non-intersecting paths, random tilings and random matrices". Probability Theory and Related Fields. 123 (2): 225–280. arXiv:math/0011250. doi:10.1007/s004400100187. S2CID 17994807. Johansson, Kurt (2005). "Non-intersecting, simple, symmetric random walks and the extended Hahn kernel". Annales de l'Institut Fourier. 55 (6): 2129–2145. arXiv:math/0409013. doi:10.5802/aif.2155. ISSN 0373-0956. S2CID 8434266. Johansson, K. (2007). "From Gumbel to Tracy-Widom". Probability Theory and Related Fields. 138 (1–2): 75–112. doi:10.1007/s00440-006-0012-7. S2CID 15410267. Adler, Mark; Johansson, Kurt; Van Moerbeke, Pierre (2014). "Double Aztec diamonds and the tacnode process". Advances in Mathematics. 252: 518–571. arXiv:1112.5532. doi:10.1016/j.aim.2013.10.012. Adler, Mark; Chhita, Sunil; Johansson, Kurt; Van Moerbeke, Pierre (2015). "Tacnode GUE-minor processes and double Aztec diamonds" (PDF). Probability Theory and Related Fields. 162 (1–2): 275–325. doi:10.1007/s00440-014-0573-9. S2CID 119126886. Johansson, Kurt (2019). "The two-time distribution in geometric last-passage percolation". Probability Theory and Related Fields. 175 (3–4): 849–895. arXiv:1802.00729. doi:10.1007/s00440-019-00901-9. == References ==
Wikipedia:Kuṭṭaka#0
Kuṭṭaka is an algorithm for finding integer solutions of linear Diophantine equations. A linear Diophantine equation is an equation of the form ax + by = c where x and y are unknown quantities and a, b, and c are known quantities with integer values. The algorithm was originally invented by the Indian astronomer-mathematician Āryabhaṭa (476–550 CE) and is described very briefly in his Āryabhaṭīya. Āryabhaṭa did not give the algorithm the name Kuṭṭaka, and his description of the method was mostly obscure and incomprehensible. It was Bhāskara I (c. 600 – c. 680) who gave a detailed description of the algorithm, with several examples from astronomy, in his Āryabhatiyabhāṣya, and who gave the algorithm the name Kuṭṭaka. In Sanskrit, the word Kuṭṭaka means pulverization (reducing to powder), and it indicates the nature of the algorithm: in essence, the coefficients of a given linear Diophantine equation are broken up into smaller numbers to obtain a linear Diophantine equation with smaller coefficients. In general, it is easy to find integer solutions of linear Diophantine equations with small coefficients. From a solution to the reduced equation, a solution to the original equation can then be determined. Many Indian mathematicians after Aryabhaṭa discussed the Kuṭṭaka method with variations and refinements. The Kuṭṭaka method was considered so important that the entire subject of algebra used to be called Kuṭṭaka-ganita or simply Kuṭṭaka. Sometimes the subject of solving linear Diophantine equations is also called Kuṭṭaka. In the literature, there are several other names for the Kuṭṭaka algorithm, such as Kuṭṭa, Kuṭṭakāra and Kuṭṭikāra. There is also a treatise devoted exclusively to a discussion of Kuṭṭaka. Such specialized treatises are very rare in the mathematical literature of ancient India. The treatise, written in Sanskrit, is titled Kuṭṭākāra Śirōmaṇi and is authored by one Devaraja.
The Kuṭṭaka algorithm has much in common with, and can be considered a precursor of, the modern-day extended Euclidean algorithm. The latter algorithm is a procedure for finding integers x and y satisfying the condition ax + by = gcd(a, b). == Aryabhaṭa's formulation of the problem == Aryabhaṭa did not formulate the problem solved by the Kuṭṭaka method as one of solving a linear Diophantine equation. Instead, he considered the following problems, all of which are equivalent to the problem of solving a linear Diophantine equation: Find an integer which when divided by two given integers leaves two given remainders. This problem may be formulated in two different ways: Let the integer to be found be N, the divisors be a and b, and the remainders be R1 and R2. Then the problem is to find N such that N ≡ R1 (mod a) and N ≡ R2 (mod b). Alternatively, the problem is to find N such that there are integers x and y with N = ax + R1 and N = by + R2; this is equivalent to ax − by = c where c = R2 − R1. Find an integer whose product with a given integer, increased or decreased by another given integer and then divided by a third integer, leaves no remainder. Letting the integer to be determined be x and the three integers be a, b and c, the problem is to find x such that (ax ± b)/c is an integer y. This is equivalent to finding integers x and y such that ax ± b = cy, that is, to finding integer solutions of the linear Diophantine equation ax − cy = ∓b. == Reduction of the problem == Aryabhata and other Indian writers had noted the following property of linear Diophantine equations: "The linear Diophantine equation ax + by = c has a solution if and only if gcd(a, b) is a divisor of c."
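The divisibility criterion just quoted gives the first mechanical step for any such problem. The following is a minimal Python sketch of that check (the function name reduce_equation is illustrative, not from the source): it cancels the common factor gcd(a, b), or reports that no integer solution exists.

```python
from math import gcd

def reduce_equation(a, b, c):
    """Reduce a*x + b*y = c by the common factor gcd(a, b).

    Returns the reduced coefficients (a', b', c') with gcd(a', b') == 1,
    or None when gcd(a, b) does not divide c (no integer solutions).
    """
    g = gcd(a, b)
    if c % g != 0:
        return None  # gcd(a, b) does not divide c: unsolvable
    return a // g, b // g, c // g
```

For example, 6x + 9y = 21 reduces to 2x + 3y = 7, while 6x + 9y = 20 has no integer solutions since gcd(6, 9) = 3 does not divide 20.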
So the first stage in the pulverization process is to cancel out the common factor gcd(a, b) from a, b and c, and obtain an equation with smaller coefficients in which the coefficients of x and y are relatively prime. For example, Bhāskara I observes: "The dividend and the divisor shall become prime to each other, on being divided by the residue of their mutual division. The operation of the pulveriser should be considered in relation to them." == Aryabhata's algorithm == Aryabhata gave the algorithm for solving the linear Diophantine equation in verses 32–33 of the Ganitapada of the Aryabhatiya. Taking Bhāskara I's explanation of these verses also into consideration, Bibhutibhusan Datta has given the following translation of these verses: "Divide the divisor corresponding to the greater remainder by the divisor corresponding to the smaller remainder. The residue (and the divisor corresponding to the smaller remainder) being mutually divided (until the remainder becomes zero), the last quotient should be multiplied by an optional integer and then added (in case the number of quotients of the mutual division is even) or subtracted (in case the number of quotients is odd) by the difference of the remainders. (Place the other quotients of the mutual division successively one below the other in a column; below them the result just obtained and underneath it the optional integer.) Any number below (that is, the penultimate) is multiplied by the one just above it and added by that just below it. Divide the last number (obtained so doing repeatedly) by the divisor corresponding to the smaller remainder; then multiply the residue by the divisor corresponding to the greater remainder and add the greater remainder. (The result will be) the number corresponding to the two divisors." Some comments are in order. The algorithm yields the smallest positive integer which gives specified remainders when divided by given numbers.
The validity of the algorithm can be established by translating the process into modern mathematical notation. Subsequent Indian mathematicians including Brahmagupta (628 AD), Mahavira (850), Aryabhata II (950), Sripati (1039), Bhāskara II (1150) and Narayana (1350) developed several variants of this algorithm and also discussed several special cases of it. == Elaboration of Aryabhata's Kuttaka == Without loss of generality, let ax − by = c be our Diophantine equation, where a and b are positive integers and c is an integer. If c is not divisible by gcd(a, b) then there are no integer solutions to this equation. Otherwise, divide both sides of the equation by gcd(a, b) to obtain the equation a′x − b′y = c′; a solution to this reduced equation is also a solution to ax − by = c. Without loss of generality, consider a > b. Using Euclidean division, follow these recursive steps: a′ = a1b′ + r1, b′ = a2r1 + r2, r1 = a3r2 + r3, ..., rn−2 = anrn−1 + rn, where rn = 1. Now define quantities xn+2, xn+1, xn, ... by backward induction as follows: if n is odd, take xn+2 = 0 and xn+1 = 1; if n is even, take xn+2 = 1 and xn+1 = rn−1 − 1. Then calculate all xm (n ≥ m ≥ 1) by xm = amxm+1 + xm+2. Then y = c′x1 and x = c′x2. == Example == === Problem statement === Consider the following problem: "Find an integer such that it leaves a remainder of 15 when divided by 29 and a remainder of 19 when divided by 45."
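The mutual division in the elaboration above is, in modern terms, the extended Euclidean algorithm mentioned earlier. As a hedged sketch (this is the equivalent modern computation, not Aryabhata's tabular procedure; the function names are illustrative), the remainder problem just posed can be solved as follows:

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def smallest_with_remainders(a, r1, b, r2):
    """Smallest non-negative N with N % a == r1 and N % b == r2,
    assuming gcd(a, b) == 1 and 0 <= r1 < a."""
    g, x, _ = extended_gcd(a, b)
    assert g == 1
    # N = r1 + a*t, where a*t ≡ r2 - r1 (mod b); x is a^(-1) mod b.
    t = (x * (r2 - r1)) % b
    return r1 + a * t

print(smallest_with_remainders(29, 15, 45, 19))  # 334
```

This reproduces the answer 334 worked out by hand in the example that follows.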
=== Data === Remainders = 15, 19. Greater remainder = 19; divisor corresponding to greater remainder = 45. Smaller remainder = 15; divisor corresponding to smaller remainder = 29. Difference of remainders = 19 − 15 = 4. === Step 1: Mutual divisions === Divide 45 by 29 to get quotient 1 and remainder 16. Divide 29 by 16 to get quotient 1 and remainder 13. Divide 16 by 13 to get quotient 1 and remainder 3. Divide 13 by 3 to get quotient 4 and remainder 1. Divide 3 by 1 to get quotient 3 and remainder 0; the process of mutual division stops here. === Step 2: Choosing an optional integer === Quotients = 1, 1, 1, 4, 3. Number of quotients = 4 (an even integer), excluding the first quotient. Choose an optional integer k = 2. The last quotient = 3. Multiply the optional integer by the last quotient: 2 × 3 = 6. Add the above product to the difference of remainders: 6 + 4 = 10 (= 3 × k + 4). === Step 3: Computation of successive numbers === Write the elements of the 1st column: 1, 1, 4, 3, 2, 4 (the four quotients after the first, the optional integer 2, and the difference of remainders 4). At each step, the penultimate number is multiplied by the one just above it and added to the one just below it. Compute the 2nd column: 1, 1, 4, 10, 2 (10 = 3 × 2 + 4). Compute the 3rd column: 1, 1, 42, 10 (42 = 4 × 10 + 2). Compute the 4th column: 1, 52, 42 (52 = 1 × 42 + 10). Compute the 5th column: 94, 52 (94 = 1 × 52 + 42). === Step 4: Computation of solution === The last number obtained = 94. The residue when 94 is divided by the divisor corresponding to the smaller remainder (29) = 7. Multiply this residue by the divisor corresponding to the greater remainder: 7 × 45 = 315. Add the greater remainder: 315 + 19 = 334. === Solution === The required number is 334. === Verification of solution === 334 = 11 × 29 + 15.
So, 334 leaves a remainder of 15 when divided by 29. 334 = 7 × 45 + 19. So, 334 leaves a remainder of 19 when divided by 45. === Remarks === The number 334 is the smallest integer which leaves remainders 15 and 19 when divided by 29 and 45 respectively. == An example from Laghubhāskarīya == The following example, taken from the Laghubhāskarīya of Bhāskara I, illustrates how the Kuttaka algorithm was used in astronomical calculations in India. === Problem statement === The sum, the difference and the product increased by unity, of the residues of the revolutions of Saturn and Mars – each is a perfect square. Taking the equations furnished by the above and applying the methods of such quadratics, obtain the (simplest) solution by the substitution of 2, 3, etc. successively (in the general solution). Then calculate the ahargana and the revolutions performed by Saturn and Mars in that time, together with the number of solar years elapsed. === Some background information === In the Indian astronomical tradition, a Yuga is a period consisting of 1,577,917,500 civil days. Saturn makes 146,564 revolutions and Mars makes 2,296,824 revolutions in a Yuga. So Saturn makes 146,564/1,577,917,500 = 36,641/394,479,375 revolutions in a day. By saying that the residue of the revolution of Saturn is x, what is meant is that the fractional number of revolutions is x/394,479,375. Similarly, Mars makes 2,296,824/1,577,917,500 = 190,412/131,493,125 revolutions in a day. By saying that the residue of the revolution of Mars is y, what is meant is that the fractional number of revolutions is y/131,493,125. === Computation of the residues === Let x and y denote the residues of the revolutions of Saturn and Mars respectively satisfying the conditions stated in the problem. They must be such that each of x + y, x − y and xy + 1 is a perfect square. Setting x + y = 4p2, x − y = 4q2 one obtains x = 2(p2 + q2), y = 2(p2 − q2) and so xy + 1 = (2p2 − 1)2 + 4(p2 − q4).
For xy + 1 also to be a perfect square we must have p2 − q4 = 0, that is p2 = q4. Thus the following general solution is obtained: x = 2(q4 + q2), y = 2(q4 − q2). The value q = 2 yields the special solution x = 40, y = 24. === Computation of the aharganas and the numbers of revolutions === The ahargana is the number of days elapsed since the beginning of the Yuga. ==== Saturn ==== Let u be the value of the ahargana corresponding to the residue 24 for Saturn. During u days, Saturn would have completed (36,641/394,479,375) × u revolutions. Since there is a residue of 24, this number would include the fractional number 24/394,479,375 of revolutions also. Hence during the ahargana u, the number of complete revolutions would be (36,641/394,479,375) × u − 24/394,479,375 = (36,641 × u − 24)/394,479,375, which must be an integer. Denoting this integer by v, the problem reduces to solving the linear Diophantine equation (36,641 × u − 24)/394,479,375 = v. Kuttaka may be applied to solve this equation. The smallest solution is u = 346,688,814 and v = 32,202. ==== Mars ==== Let u be the value of the ahargana corresponding to the residue 40 for Mars. During u days, Mars would have completed (190,412/131,493,125) × u revolutions. Since there is a residue of 40, this number would include the fractional number 40/131,493,125 of revolutions also. Hence during the ahargana u, the number of complete revolutions would be (190,412/131,493,125) × u − 40/131,493,125 = (190,412 × u − 40)/131,493,125, which must be an integer. Denoting this integer by v, the problem reduces to solving the linear Diophantine equation (190,412 × u − 40)/131,493,125 = v. Kuttaka may be applied to solve this equation. The smallest solution is u = 118,076,020 and v = 171,872. == References == == Further reading == For a comparison of Indian and Chinese methods for solving linear Diophantine equations: A. K. Bag and K. S. Shen (1984). "Kuttaka and Qiuvishu" (PDF).
Indian Journal of History of Science. 19 (4): 397–405. Archived from the original (PDF) on 5 July 2015. Retrieved 1 March 2016. For a comparison of the complexity of the Aryabhata algorithm with the complexities of the Euclidean algorithm, the Chinese remainder theorem and Garner's algorithm: T. R. N. Rao and Chung-Huang Yang (2006). "Aryabhata Remainder Theorem: Relevance to Public Key Crypto-systems" (PDF). Circuits, Systems, and Signal Processing. 25 (1): 1–15. Retrieved 1 March 2016. For a popular readable account of the Kuttaka: Amartya Kumar Dutta (October 2002). "Mathematics in Ancient India 2. Diophantine Equations: The Kuttaka" (PDF). Resonance. 7 (10): 6–22. Retrieved 1 March 2016. For an application of Kuttaka in computing full moon days: Robert Cooke. "Euclid's Algorithm" (PDF). Archived from the original (PDF) on 15 June 2016. Retrieved 1 March 2016. For a discussion of the computational aspects of the Aryabhata algorithm: Subhash Kak (1986). "Computational Aspects of Aryabhata Algorithm" (PDF). Indian Journal of History of Science. 21 (1): 62–71. Retrieved 1 March 2016. For the interpretation of Aryabhata's original formulation of the algorithm: Bibhutibhusan Datta (1932). "Elder Aryabhata's Rule for the Solution of Indeterminate Equations of the First Degree". Bulletin of the Calcutta Mathematical Society. 24 (1): 19–36. For a detailed exposition of the Kuttaka algorithm as given by Sankaranarayana in his commentary on the Laghubhaskariya: Bhaskaracharya I (translated by K. S. Shukla) (1963). Laghu-Bhāskarīya. Lucknow University. pp. 103–114. Retrieved 7 March 2016.
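As a cross-check on the Laghubhāskarīya computation above, the Saturn equation (36,641 × u − 24)/394,479,375 = v can also be solved with a modern modular inverse rather than the Kuttaka itself. This is a hedged sketch, assuming Python 3.8+ for the three-argument pow with exponent −1; the stated smallest solution in the text is u = 346,688,814, v = 32,202.

```python
# Saturn: find the smallest non-negative u with 36641*u ≡ 24 (mod 394479375),
# i.e. the smallest u making (36641*u - 24)/394479375 an integer v.
M = 394_479_375
u = (24 * pow(36_641, -1, M)) % M  # gcd(36641, M) == 1, so the inverse exists
v = (36_641 * u - 24) // M
print(u, v)
```

The same computation with 190,412, 40 and 131,493,125 handles the Mars equation.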
Wikipedia:Kuṭṭākāra Śirōmaṇi#0
The Kuṭṭākāra Śirōmaṇi is a medieval Indian treatise in Sanskrit devoted exclusively to the study of the Kuṭṭākāra, or Kuṭṭaka, an algorithm for solving linear Diophantine equations. It is authored by one Dēvarāja, about whom little is known. From statements given by the author at the end of the book, one can infer that the name of Dēvarāja's father was Varadarājācārya, then famously known as Siddhāntavallabha. Since the book contains a few verses from the Lilavati, it must have been composed after the Lilavati was composed, that is, after 1150 CE. Treatises such as the Kuṭṭākāra Śirōmaṇi devoted exclusively to specialized topics are very rare in Indian mathematical literature. The algorithm was first formulated by Aryabhata I and given in verses in the Ganitapada of his Aryabhatiya. Aryabhata's description of the algorithm was brief and hence obscure and incomprehensible. However, from the interpretations of the verses by later Indian mathematicians we now have a fairly clear understanding of the original formulation of the algorithm. The Kuṭṭākāra Śirōmaṇi is one of the most comprehensive treatments of the algorithm. Dēvarāja also wrote a self-commentary, the Maha Laksami Muktavali, on the Kuttakara Siromani to further explain the method. The Kuṭṭākāra Śirōmaṇi is divided into three chapters, or Paricchedas. The first chapter of the book is on the Sāgra Kuṭṭākāra; the second chapter deals with the Niragra Kuṭṭākāra and also contains descriptions of the Samśliṣṭa Kuṭṭākāra. The third and last chapter is on the Miśra-Śreṇi-Miśra-Kuṭṭākāra. The book also discusses the Vallikakuṭṭākāra and Sthitakuṭṭākāra. The methods are explained in detail with the help of illustrations, along with their important applications in astronomy. == See also == Kuṭṭaka == References ==
Wikipedia:Kyne (drag queen)#0
Kyne Santos (born April 5, 1998), often mononymously billed as Kyne, is a Canadian drag queen best known for competing in the first season of Canada's Drag Race. == Early life == Santos was born in Manila in the Philippines and is of Filipino descent. He moved to Kitchener, Ontario, Canada with his parents when he was 5. He studied math at the University of Waterloo. == Career == Prior to competing in Canada's Drag Race, Kyne had built a YouTube following among drag enthusiasts with a series of tutorials on sewing and wig styling. Kyne uses she/her pronouns when in drag, and he/him pronouns when not in drag. Although Kyne was eliminated from Canada's Drag Race in the second episode, and got the "villain edit" because his self-confidence was perceived by some viewers as lapsing into cockiness, he subsequently became a popular figure on social media, attracting over 800,000 followers on TikTok with a popular series of math tutorials presented in drag. The math videos have included straightforward presentations of general mathematical concepts such as pi and googol, math riddles and memes, and in-depth analysis of the use of mathematics in the news, such as demonstrating the numerical flaws in bad reporting on issues such as the COVID-19 pandemic and race-based crime statistics. Santos has described his math tutorials as inspired by a desire to present math in a fun and entertaining way, and by a desire to break down barriers, including countering common stereotypes that LGBTQ people cannot succeed in math and science, and presenting a counterexample to the widespread belief that people can be analytical or creative but not both. In January 2021, Santos shared his coming-out story in a new video for the ongoing It Gets Better Project. In 2023, Kyne was named to the year's Forbes 30 Under 30 Local: Toronto list. == References ==
Wikipedia:Kōsaku Yosida#0
Kōsaku Yosida (吉田 耕作, Yosida Kōsaku, 7 February 1909, Hiroshima – 20 June 1990) was a Japanese mathematician who worked in the field of functional analysis. He is known for the Hille–Yosida theorem concerning C0-semigroups. Yosida studied mathematics at the University of Tokyo, and held posts at Osaka and Nagoya Universities. In 1955, Yosida returned to the University of Tokyo. == See also == Einar Carl Hille Functional analysis == References == Kôsaku Yosida: Functional Analysis. Grundlehren der mathematischen Wissenschaften 123, Springer-Verlag, 1971 (3rd ed.), 1974 (4th ed.), 1978 (5th ed.), 1980 (6th ed.) == External links == Photo Archived 30 June 2013 at the Wayback Machine Kosaku Yosida / School of Mathematics and Statistics, University of St Andrews, Scotland 94. Normed Rings and Spectral Theorems, II. By Kôsaku Yosida. Mathematical Institute, Nagoya Imperial University. (Comm. by T. Takagi, M.I.A., Oct. 12, 1943.) Kōsaku Yosida at the Mathematics Genealogy Project Kosaku Yosida (1909–1990) – Biography – MacTutor
Wikipedia:Kṛṣṇa Daivajña#0
Kṛṣṇa Daivajña was a 16th–17th century Indian astrologer-astronomer-mathematician from Varanasi patronized by the Mughal Emperor Jahangir. As a mathematician, Kṛṣṇa Daivajña is best known for his elaborate commentary on Bhaskara II's (c. 1114–1185) Bījagaṇita; as an astrologer, his fame rested on his commentary on Śrīpati's (c. 1019 – 1066) Jātakapaddhati. These commentaries contain not only detailed explanations of the text being commented upon, but also the rationales of the various rules and often additional original material. He also composed an original work, the Chādakanirṇaya, dealing with eclipses. Kṛṣṇa Daivajña's family originally lived in Dadhigrama in the Vidarbha region; his father moved the family to Varanasi and took up residence there. Kṛṣṇa Daivajña's father was Ballāla and his grandfather was Trimalla. He had five brothers, of whom Ranganātha was known for his commentary Guḍharthaprāśikā on the Suryasiddhanta. Several of his nephews, including Munīśvara, Gadādhara and Nārāyaṇa, composed reputed works on astrology and astronomy. He studied under Viṣṇu, a pupil of Nṛsiṃha, who was a pupil and nephew of Gaṇeśa Daivajna, the author of the Grahalāghava. Kṛṣṇa Daivajña was associated with the Mughal court. In his commentary on the Jātakapaddhati, he used the birth date of Abdur Rahīm Khān-i Khānān, an influential courtier of the third Mughal emperor Akbar, to illustrate some of his astrological computations and observations. This points to his close connections to the Mughal court. Later, he came into the service of Jehangir, from whom he received honor and emoluments. This is attested by his nephews Munīśvara, Gadādhara and Nārāyaṇa in their writings. Subsequently, Munīśvara came under the patronage of Shah Jehan and, perhaps emulating his uncle Kṛṣṇa Daivajña, used the emperor's date of accession as an example of a particular astrological practice in his own astrological work.
== Works == === Bījapallava: Commentary on Bhāskara II's Bījagaṇita === Among Kṛṣṇa Daivajña's various commentaries, the most widely known and studied is his commentary called Bījapallava (also called Kalpālatāvatāra, Bījānkura and Nāvāakura) on the Bījagaṇita. The commentary is in Sanskrit prose and contains more detail than is generally given in conventional commentaries. T. Hayashi, a Japanese historian of Indian mathematics, writes in his foreword to the critical edition of the Bījapallava: ". . . he [Kṛṣṇa Daivajña] goes on to discuss the mathematical contents in great detail, giving proofs (upapattis) for the rules and step-by-step solutions for the examples; but when the solution is easy, he merely refers to Bhaskara's auto-commentary. His discussions, often in the form of disputations between an imaginary opponent and himself, go deep into the nature of important mathematical concepts such as negative quantity, zero and unknown quantity, into the raison d'être of particular steps of the algorithms, and into various conditions for solubility of the mathematical problems treated in the Bijaganita." === Jātakapaddhati-udāharaṇa: Commentary on Śrīpati's Jātakapaddhati === Śrīpati's Jātakapaddhati is a standard work on nativities, or birth charts. As already pointed out, in this work, to illustrate his arguments, Kṛṣṇa Daivajña took the birth date of Abdur Rahīm Khān-i Khānān, a prominent courtier of the third Mughal emperor Akbar. In this work he also praised lavishly both Akbar and Khān-i Khānān. === Chadākanirṇaya === This is a work which deals with eclipses. === Janipaddhativṛtti === This work has been cited along with the Chadakanirṇaya by Muniśvara in his commentary on the Goladhyāya.
== An image of Kṛṣṇa Daivajña == There is a Mughal painting titled "Birth of a Prince", now preserved in the Museum of Fine Arts, Boston, which depicts the birth scene of Jehangir and shows a group of four astrologers casting the horoscope of the newborn prince. Analyzing the image, S. R. Sarma has come to the conclusion that one of the four astrologers, the one depicted drawing the birth chart, must be Kṛṣṇa Daivajña. == See also == Bījapallava == Additional reading == Full text of Bījapallavaṃ, Kṛṣṇa Daivajña's commentary on the Bījagaṇita of Bhāskara II: Kṛṣṇa Daivajña (1958). Bijapallavam, edited with Introduction by T. V. Radhakrishna Sastri. T. M. S. S. M. Library, Tanjore: S. Gopalan. Retrieved 22 June 2024. Full text of a critical study of the Bījapallavaṃ: Sita Sundar Ram (2012). Bijapallava of Kṛṣṇa Daivajña: Algebra in Sixteenth Century India, a Critical Study. Chennai: The Kuppuswami Sastri Research Institute. Retrieved 22 June 2024. == References ==
Wikipedia:L-notation#0
L-notation is an asymptotic notation analogous to big-O notation, denoted as Ln[α, c] for a bound variable n tending to infinity. Like big-O notation, it is usually used to roughly convey the rate of growth of a function, such as the computational complexity of a particular algorithm. == Definition == It is defined as Ln[α, c] = e^((c + o(1)) (ln n)^α (ln ln n)^(1−α)), where c is a positive constant and α is a constant with 0 ≤ α ≤ 1. L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms. The factor e^(c (ln n)^α (ln ln n)^(1−α)) expresses the dominant term, and the factor e^(o(1) (ln n)^α (ln ln n)^(1−α)) takes care of everything smaller. When α = 0, Ln[0, c] = e^((c + o(1)) ln ln n) = (ln n)^(c + o(1)) is a polylogarithmic function (a polynomial function of ln n); when α = 1, Ln[1, c] = e^((c + o(1)) ln n) = n^(c + o(1)) is a fully exponential function of ln n (and thereby polynomial in n). If α is between 0 and 1, the function is subexponential in ln n (and superpolynomial).
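The boundary cases of the definition can be checked numerically by evaluating the dominant term with the o(1) dropped. The sketch below is illustrative (the function name L_approx is an assumption, not standard): α = 0 should reproduce (ln n)^c, and α = 1 should reproduce n^c.

```python
import math

def L_approx(n, alpha, c):
    """Dominant term of Ln[alpha, c], ignoring the o(1) in the exponent."""
    ln_n = math.log(n)
    return math.exp(c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha))

n = 2 ** 64  # an arbitrary large modulus, for illustration
poly_log = L_approx(n, 0, 2)   # equals (ln n)^2 up to floating-point error
fully_exp = L_approx(n, 1, 2)  # equals n^2 up to floating-point error
```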
== Examples == Many general-purpose integer factorization algorithms have subexponential time complexities. The best is the general number field sieve, which has an expected running time of L n [ 1 / 3 , c ] = e ( c + o ( 1 ) ) ( ln ⁡ n ) 1 / 3 ( ln ⁡ ln ⁡ n ) 2 / 3 {\displaystyle L_{n}[1/3,c]=e^{(c+o(1))(\ln n)^{1/3}(\ln \ln n)^{2/3}}} for c = ( 64 / 9 ) 1 / 3 ≈ 1.923 {\displaystyle c=(64/9)^{1/3}\approx 1.923} . The best such algorithm prior to the number field sieve was the quadratic sieve which has running time L n [ 1 / 2 , 1 ] = e ( 1 + o ( 1 ) ) ( ln ⁡ n ) 1 / 2 ( ln ⁡ ln ⁡ n ) 1 / 2 . {\displaystyle L_{n}[1/2,1]=e^{(1+o(1))(\ln n)^{1/2}(\ln \ln n)^{1/2}}.\,} For the elliptic curve discrete logarithm problem, the fastest general purpose algorithm is the baby-step giant-step algorithm, which has a running time on the order of the square-root of the group order n. In L-notation this would be L n [ 1 , 1 / 2 ] = n 1 / 2 + o ( 1 ) . {\displaystyle L_{n}[1,1/2]=n^{1/2+o(1)}.\,} The existence of the AKS primality test, which runs in polynomial time, means that the time complexity for primality testing is known to be at most L n [ 0 , c ] = ( ln ⁡ n ) c + o ( 1 ) {\displaystyle L_{n}[0,c]=(\ln n)^{c+o(1)}\,} where c has been proven to be at most 6. == History == L-notation has been defined in various forms throughout the literature. The first use of it came from Carl Pomerance in his paper "Analysis and comparison of some integer factoring algorithms". This form had only the c {\displaystyle c} parameter: the α {\displaystyle \alpha } in the formula was 1 / 2 {\displaystyle 1/2} for the algorithms he was analyzing. Pomerance had been using the letter L {\displaystyle L} (or lower case l {\displaystyle l} ) in this and previous papers for formulae that involved many logarithms. The formula above involving two parameters was introduced by Arjen Lenstra and Hendrik Lenstra in their article on "Algorithms in Number Theory". 
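Dropping the o(1) terms gives a crude sense of scale for the two sieve running times quoted above. For a 1024-bit n, the number field sieve's heuristic exponent is noticeably smaller than the quadratic sieve's (the constants come from the article; the arithmetic is only an order-of-magnitude sketch):

```python
import math

ln_n = 1024 * math.log(2)  # a 1024-bit integer n
# ln of L_n[1/2, 1] (quadratic sieve) and L_n[1/3, (64/9)^(1/3)] (number field sieve)
ln_qs = (ln_n * math.log(ln_n)) ** 0.5
ln_gnfs = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
print(f"quadratic sieve    ~ 2^{ln_qs / math.log(2):.0f} operations")
print(f"number field sieve ~ 2^{ln_gnfs / math.log(2):.0f} operations")
assert ln_gnfs < ln_qs  # the number field sieve wins for numbers of this size
```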
It was introduced in their analysis of a discrete logarithm algorithm of Coppersmith. This is the most commonly used form in the literature today. The Handbook of Applied Cryptography defines the L-notation with a big O {\displaystyle O} around the formula presented in this article. This is not the standard definition. The big O {\displaystyle O} would suggest that the running time is an upper bound. However, for the integer factoring and discrete logarithm algorithms that L-notation is commonly used for, the running time is not an upper bound, so this definition is not preferred. == References ==
Wikipedia:L. E. J. Brouwer#0
Luitzen Egbertus Jan "Bertus" Brouwer (27 February 1881 – 2 December 1966) was a Dutch mathematician and philosopher who worked in topology, set theory, measure theory and complex analysis. Regarded as one of the greatest mathematicians of the 20th century, he is known as one of the founders of modern topology, particularly for establishing his fixed-point theorem and the topological invariance of dimension. Brouwer also became a major figure in the philosophy of intuitionism, a constructivist school of mathematics which argues that math is a cognitive construct rather than a type of objective truth. This position led to the Brouwer–Hilbert controversy, in which Brouwer sparred with his formalist colleague David Hilbert. Brouwer's ideas were subsequently taken up by his student Arend Heyting and Hilbert's former student Hermann Weyl. In addition to his mathematical work, Brouwer also published the short philosophical tract Life, Art, and Mysticism (1905). == Biography == Brouwer was born to Dutch Protestant parents. Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The most important were his fixed point theorem, the topological invariance of degree, and the topological invariance of dimension. Among mathematicians generally, the best known is the first one, usually referred to now as the Brouwer fixed point theorem. It is a corollary to the second, concerning the topological invariance of degree, which is the best known among algebraic topologists. The third theorem is perhaps the hardest. Brouwer also proved the simplicial approximation theorem in the foundations of algebraic topology, which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes, of the treatment of general continuous mappings. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. He was an Invited Speaker of the ICM in 1908 at Rome and in 1912 at Cambridge, UK. 
He was elected to the American Philosophical Society in 1943. Brouwer founded intuitionism, a philosophy of mathematics that challenged the then-prevailing formalism of David Hilbert and his collaborators, who included Paul Bernays, Wilhelm Ackermann, and John von Neumann (cf. Kleene (1952), p. 46–59). A variety of constructive mathematics, intuitionism is a philosophy of the foundations of mathematics. It is sometimes (simplistically) characterized by saying that its adherents do not admit the law of excluded middle as a general axiom in mathematical reasoning, although it may be proven as a theorem in some special cases. Brouwer was a member of the Significs Group. It formed part of the early history of semiotics—the study of symbols—around Victoria, Lady Welby in particular. The original meaning of his intuitionism probably cannot be completely disentangled from the intellectual milieu of that group. In 1905, at the age of 24, Brouwer expressed his philosophy of life in a short tract Life, Art and Mysticism, which has been described by the mathematician Martin Davis as "drenched in romantic pessimism" (Davis (2002), p. 94). Arthur Schopenhauer had a formative influence on Brouwer, not least because he insisted that all concepts be fundamentally based on sense intuitions. Brouwer then "embarked on a self-righteous campaign to reconstruct mathematical practice from the ground up so as to satisfy his philosophical convictions"; indeed his thesis advisor refused to accept his Chapter II "as it stands, ... all interwoven with some kind of pessimism and mystical attitude to life which is not mathematics, nor has anything to do with the foundations of mathematics" (Davis, p. 94 quoting van Stigt, p. 41). Nevertheless, in 1908: "... Brouwer, in a paper titled 'The untrustworthiness of the principles of logic', challenged the belief that the rules of the classical logic, which have come down to us essentially from Aristotle (384–322 B.C.)
have an absolute validity, independent of the subject matter to which they are applied" (Kleene (1952), p. 46). "After completing his dissertation, Brouwer made a conscious decision to temporarily keep his contentious ideas under wraps and to concentrate on demonstrating his mathematical prowess" (Davis (2000), p. 95); by 1910 he had published a number of important papers, in particular the Fixed Point Theorem. Hilbert—the formalist with whom the intuitionist Brouwer would ultimately spend years in conflict—admired the young man and helped him receive a regular academic appointment (1912) at the University of Amsterdam (Davis, p. 96). It was then that "Brouwer felt free to return to his revolutionary project which he was now calling intuitionism " (ibid). He was combative as a young man. According to Mark van Atten, this pugnacity reflected his combination of independence, brilliance, high moral standards and extreme sensitivity to issues of justice. He was involved in a very public and eventually demeaning controversy with Hilbert in the late 1920s over editorial policy at Mathematische Annalen, at the time a leading journal. According to Abraham Fraenkel, Brouwer espoused Germanic Aryanness and Hilbert removed him from the editorial board of Mathematische Annalen after Brouwer objected to contributions from Ostjuden. In later years Brouwer became relatively isolated; the development of intuitionism at its source was taken up by his student Arend Heyting. Dutch mathematician and historian of mathematics Bartel Leendert van der Waerden attended lectures given by Brouwer in later years, and commented: "Even though his most important research contributions were in topology, Brouwer never gave courses in topology, but always on — and only on — the foundations of his intuitionism. 
It seemed that he was no longer convinced of his results in topology because they were not correct from the point of view of intuitionism, and he judged everything he had done before, his greatest output, false according to his philosophy." About his last years, Davis (2002) remarks: "...he felt more and more isolated, and spent his last years under the spell of 'totally unfounded financial worries and a paranoid fear of bankruptcy, persecution and illness.' He was killed in 1966 at the age of 85, struck by a vehicle while crossing the street in front of his house." (Davis, p. 100 quoting van Stigt. p. 110.) == Bibliography == === In English translation === Jean van Heijenoort, 1967 3rd printing 1976 with corrections, A Source Book in Mathematical Logic, 1879-1931. Harvard University Press, Cambridge MA, ISBN 0-674-32449-8 pbk. The original papers are prefaced with valuable commentary. 1923. L. E. J. Brouwer: "On the significance of the principle of excluded middle in mathematics, especially in function theory." With two Addenda and corrigenda, 334-45. Brouwer gives brief synopsis of his belief that the law of excluded middle cannot be "applied without reservation even in the mathematics of infinite systems" and gives two examples of failures to illustrate his assertion. 1925. A. N. Kolmogorov: "On the principle of excluded middle", pp. 414–437. Kolmogorov supports most of Brouwer's results but disputes a few; he discusses the ramifications of intuitionism with respect to "transfinite judgements", e.g. transfinite induction. 1927. L. E. J. Brouwer: "On the domains of definition of functions". Brouwer's intuitionistic treatment of the continuum, with an extended commentary. 1927. David Hilbert: "The foundations of mathematics," 464-80 1927. L. E. J. Brouwer: "Intuitionistic reflections on formalism," 490-92. Brouwer lists four topics on which intuitionism and formalism might "enter into a dialogue." Three of the topics involve the law of excluded middle. 1927. 
Hermann Weyl: "Comments on Hilbert's second lecture on the foundations of mathematics," 480-484. In 1920 Weyl, Hilbert's prize pupil, sided with Brouwer against Hilbert. But in this address Weyl "while defending Brouwer against some of Hilbert's criticisms...attempts to bring out the significance of Hilbert's approach to the problems of the foundations of mathematics." Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Univ. Press. 1928. "Mathematics, science, and language," 1170-85. 1928. "The structure of the continuum," 1186-96. 1952. "Historical background, principles, and methods of intuitionism," 1197-1207. Brouwer, L. E. J., Collected Works, Vol. I, Amsterdam: North-Holland, 1975. Brouwer, L. E. J., Collected Works, Vol. II, Amsterdam: North-Holland, 1976. Brouwer, L. E. J., "Life, Art, and Mysticism," Notre Dame Journal of Formal Logic, vol. 37 (1996), pp. 389–429. Translated by W. P. van Stigt with an introduction by the translator, pp. 381–87. Davis quotes from this work, "a short book... drenched in romantic pessimism" (p. 94). W. P. van Stigt, 1990, Brouwer's Intuitionism, Amsterdam: North-Holland, 1990 == See also == Gerrit Mannoury George F. C. Griss Bar induction Constructivist epistemology == Notes == == References == == Further reading == Dirk van Dalen, Mystic, Geometer, and Intuitionist: The Life of L. E. J. Brouwer. Oxford Univ. Press. 1999. Volume 1: The Dawning Revolution. 2005. Volume 2: Hope and Disillusion. 2013. L. E. J. Brouwer: Topologist, Intuitionist, Philosopher. How Mathematics is Rooted in Life. London: Springer (based on previous work). Martin Davis, 2000. The Engines of Logic, W. W. Norton, London, ISBN 0-393-32229-7 pbk. Cf. Chapter Five: "Hilbert to the Rescue" wherein Davis discusses Brouwer and his relationship with Hilbert and Weyl with brief biographical information of Brouwer. 
Davis's references include: Stephen Kleene, 1952 with corrections 1971, 10th reprint 1991, Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam Netherlands, ISBN 0-7204-2103-9. Cf. in particular Chapter III: A Critique of Mathematical Reasoning, §13 "Intuitionism" and §14 "Formalism". Koetsier, Teun, Editor, Mathematics and the Divine: A Historical Study, Amsterdam: Elsevier Science and Technology, 2004, ISBN 0-444-50328-5. Pambuccian, Victor, 2022, Brouwer’s Intuitionism: Mathematics in the Being Mode of Existence, Published in: Sriraman, B. (ed) Handbook of the History and Philosophy of Mathematical Practice. Springer, Cham. doi:10.1007/978-3-031-40846-5_103 == External links == Media related to L. E. J. Brouwer (mathematician) at Wikimedia Commons Works by or about L. E. J. Brouwer at the Internet Archive Life, Art and Mysticism written by L.E.J. Brouwer Luitzen Egbertus Jan Brouwer entry in Stanford Encyclopedia of Philosophy
Wikipedia:LLT polynomial#0
In mathematics, an LLT polynomial is one of a family of symmetric functions introduced as q-analogues of products of Schur functions. J. Haglund, M. Haiman, and N. Loehr showed how to expand Macdonald polynomials in terms of LLT polynomials. Ian Grojnowski and Mark Haiman proved a positivity conjecture for LLT polynomials that combined with the previous result implies the Macdonald positivity conjecture for Macdonald polynomials, and extended the definition of LLT polynomials to arbitrary finite root systems. == References == I. Grojnowski, M. Haiman, Affine algebras and positivity (preprint available here) J. Haglund, M. Haiman, N. Loehr A Combinatorial Formula for Macdonald PolynomialsMR2138143 J. Amer. Math. Soc. 18 (2005), no. 3, 735–761 Alain Lascoux, Bernard Leclerc, and Jean-Yves Thibon Ribbon Tableaux, Hall-Littlewood Functions, Quantum Affine Algebras and Unipotent Varieties MR1434225 J. Math. Phys. 38 (1997), no. 2, 1041–1068.
Wikipedia:LMS Journal of Computation and Mathematics#0
LMS Journal of Computation and Mathematics was a peer-reviewed online mathematics journal covering computational aspects of mathematics published by the London Mathematical Society. The journal published its first article in 1998 and ceased operation in 2017. An open access archive of the journal is maintained by Cambridge University Press. == Abstracting and indexing == The journal is abstracted and indexed in MathSciNet, Scopus, and Zentralblatt MATH. == References == == External links == Official website
Wikipedia:Ladislav Rieger#0
Ladislav Svante Rieger (1916–1963) was a Czechoslovak mathematician who worked in the areas of algebra, mathematical logic, and axiomatic set theory. He is considered to be the founder of mathematical logic in Czechoslovakia, having begun his work around 1957. == Notes == == References == == Further reading == == External links == Ladislav Rieger at the Mathematics Genealogy Project
Wikipedia:Ladislav Skula#0
Ladislav "Ladja" Skula (born June 30, 1937) is a Czech mathematician. His work spans across topology, algebraic number theory, and the theory of ordered sets. He has published over 80 papers and notable results on the Fermat quotient. He obtained his Dr.Sc. degree from Charles University in Prague with a thesis on "obor Algebra a teorie čísel" (On Algebra and Number Theory). In 1991, he was appointed professor at the Masaryk University in Brno, where he is now emeritus professor. == Selected publications == Agoh, Takashi; Dilcher, Karl; Skula, Ladislav (1997). "Fermat quotients for composite moduli". Journal of Number Theory. 66: 29–50. doi:10.1006/jnth.1997.2162. Agoh, Takashi; Dilcher, Karl; Skula, Ladislav (1998). "Wilson quotients for composite moduli". Mathematics of Computation. 67 (222). AMS: 843–861. Bibcode:1998MaCom..67..843A. doi:10.1090/S0025-5718-98-00951-X. Kureš, Miroslav; Skula, Ladislav (2011). "Reduction of matrices over orders of imaginary quadratic field". Linear Algebra and Its Applications. 435 (6). AMS: 1903–1919. doi:10.1016/j.laa.2011.03.037. Agoh, Takashi; Skula, Ladislav (1996). "Kummer type congruences and Stickelberger subideals" (PDF). Acta Arithmetica. 75 (3). Institute of Mathematics Polish Academy of Sciences: 235–250. doi:10.4064/aa-75-3-235-250. Retrieved November 7, 2014. Skula, Ladislav (1996). "On a special ideal contained in the Stickelberger ideal". Journal of Number Theory. 8: 173–195. doi:10.1006/jnth.1996.0073. Skula, Ladislav (1998). "Involutions for matrices and generalized inverses". Linear Algebra and Its Applications. 271 (1–3): 283–308. doi:10.1016/S0024-3795(97)00280-2. == External links == Skula's homepage at Masaryk University
Wikipedia:Lagrange reversion theorem#0
In mathematics, the Lagrange reversion theorem gives series or formal power series expansions of certain implicitly defined functions; indeed, of compositions with such functions. Let v be a function of x and y in terms of another function f such that v = x + y f ( v ) {\displaystyle v=x+yf(v)} Then for any function g, for small enough y: g ( v ) = g ( x ) + ∑ k = 1 ∞ y k k ! ( ∂ ∂ x ) k − 1 ( f ( x ) k g ′ ( x ) ) . {\displaystyle g(v)=g(x)+\sum _{k=1}^{\infty }{\frac {y^{k}}{k!}}\left({\frac {\partial }{\partial x}}\right)^{k-1}\left(f(x)^{k}g'(x)\right).} If g is the identity, this becomes v = x + ∑ k = 1 ∞ y k k ! ( ∂ ∂ x ) k − 1 ( f ( x ) k ) {\displaystyle v=x+\sum _{k=1}^{\infty }{\frac {y^{k}}{k!}}\left({\frac {\partial }{\partial x}}\right)^{k-1}\left(f(x)^{k}\right)} in which case the series can also be derived using perturbation theory. In 1770, Joseph Louis Lagrange (1736–1813) published his power series solution of the implicit equation for v mentioned above. However, his solution used cumbersome series expansions of logarithms. In 1780, Pierre-Simon Laplace (1749–1827) published a simpler proof of the theorem, which was based on relations between partial derivatives with respect to the variable x and the parameter y. Charles Hermite (1822–1901) presented the most straightforward proof of the theorem by using contour integration. Lagrange's reversion theorem is used to obtain numerical solutions to Kepler's equation. == Simple proof == We start by writing: g ( v ) = ∫ δ ( y f ( z ) − z + x ) g ( z ) ( 1 − y f ′ ( z ) ) d z {\displaystyle g(v)=\int \delta (yf(z)-z+x)g(z)(1-yf'(z))\,dz} Writing the delta-function as an integral we have: g ( v ) = ∬ exp ⁡ ( i k [ y f ( z ) − z + x ] ) g ( z ) ( 1 − y f ′ ( z ) ) d k 2 π d z = ∑ n = 0 ∞ ∬ ( i k y f ( z ) ) n n ! g ( z ) ( 1 − y f ′ ( z ) ) e i k ( x − z ) d k 2 π d z = ∑ n = 0 ∞ ( ∂ ∂ x ) n ∬ ( y f ( z ) ) n n !
g ( z ) ( 1 − y f ′ ( z ) ) e i k ( x − z ) d k 2 π d z {\displaystyle {\begin{aligned}g(v)&=\iint \exp(ik[yf(z)-z+x])g(z)(1-yf'(z))\,{\frac {dk}{2\pi }}\,dz\\[10pt]&=\sum _{n=0}^{\infty }\iint {\frac {(ikyf(z))^{n}}{n!}}g(z)(1-yf'(z))e^{ik(x-z)}\,{\frac {dk}{2\pi }}\,dz\\[10pt]&=\sum _{n=0}^{\infty }\left({\frac {\partial }{\partial x}}\right)^{n}\iint {\frac {(yf(z))^{n}}{n!}}g(z)(1-yf'(z))e^{ik(x-z)}\,{\frac {dk}{2\pi }}\,dz\end{aligned}}} The integral over k then gives δ ( x − z ) {\displaystyle \delta (x-z)} and we have: g ( v ) = ∑ n = 0 ∞ ( ∂ ∂ x ) n [ ( y f ( x ) ) n n ! g ( x ) ( 1 − y f ′ ( x ) ) ] = ∑ n = 0 ∞ ( ∂ ∂ x ) n [ y n f ( x ) n g ( x ) n ! − y n + 1 ( n + 1 ) ! { ( g ( x ) f ( x ) n + 1 ) ′ − g ′ ( x ) f ( x ) n + 1 } ] {\displaystyle {\begin{aligned}g(v)&=\sum _{n=0}^{\infty }\left({\frac {\partial }{\partial x}}\right)^{n}\left[{\frac {(yf(x))^{n}}{n!}}g(x)(1-yf'(x))\right]\\[10pt]&=\sum _{n=0}^{\infty }\left({\frac {\partial }{\partial x}}\right)^{n}\left[{\frac {y^{n}f(x)^{n}g(x)}{n!}}-{\frac {y^{n+1}}{(n+1)!}}\left\{(g(x)f(x)^{n+1})'-g'(x)f(x)^{n+1}\right\}\right]\end{aligned}}} Rearranging the sum and cancelling then gives the result: g ( v ) = g ( x ) + ∑ k = 1 ∞ y k k ! ( ∂ ∂ x ) k − 1 ( f ( x ) k g ′ ( x ) ) {\displaystyle g(v)=g(x)+\sum _{k=1}^{\infty }{\frac {y^{k}}{k!}}\left({\frac {\partial }{\partial x}}\right)^{k-1}\left(f(x)^{k}g'(x)\right)} == References == == External links == Lagrange Inversion [Reversion] Theorem on MathWorld Cornish–Fisher expansion, an application of the theorem Article on equation of time contains an application to Kepler's equation.
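The theorem is easy to check on a worked example (an illustrative choice, not from the article): with f(v) = v² and g the identity, (∂/∂x)^(k−1) x^(2k) = ((2k)!/(k+1)!) x^(k+1), so the series becomes v = Σ_k C_k x^(k+1) y^k with C_k the Catalan numbers, and this must agree with the closed-form small-y root of v = x + yv²:

```python
import math

x, y = 0.1, 0.2
# Root of v = x + y v^2 that tends to x as y -> 0
closed = (1 - math.sqrt(1 - 4 * x * y)) / (2 * y)
# Series from the theorem with g = id and f(v) = v^2:
# each term y^k/k! * d^{k-1}/dx^{k-1}(x^{2k}) equals C_k x^{k+1} y^k
series = sum(math.comb(2 * k, k) // (k + 1) * x**(k + 1) * y**k for k in range(12))
assert abs(series - closed) < 1e-9
```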
Wikipedia:Lagrange's identity (boundary value problem)#0
In the study of ordinary differential equations and their associated boundary value problems in mathematics, Lagrange's identity, named after Joseph Louis Lagrange, gives the boundary terms arising from integration by parts of a self-adjoint linear differential operator. Lagrange's identity is fundamental in Sturm–Liouville theory. In more than one independent variable, Lagrange's identity is generalized by Green's second identity. == Statement == In general terms, Lagrange's identity for any pair of functions u and v in function space C2 (that is, twice differentiable) in n dimensions is: v L [ u ] − u L ∗ [ v ] = ∇ ⋅ M , {\displaystyle vL[u]-uL^{*}[v]=\nabla \cdot {\boldsymbol {M}},} where: M i = ∑ j = 1 n a i j ( v ∂ u ∂ x j − u ∂ v ∂ x j ) + u v ( b i − ∑ j = 1 n ∂ a i j ∂ x j ) , {\displaystyle M_{i}=\sum _{j=1}^{n}a_{ij}\left(v{\frac {\partial u}{\partial x_{j}}}-u{\frac {\partial v}{\partial x_{j}}}\right)+uv\left(b_{i}-\sum _{j=1}^{n}{\frac {\partial a_{ij}}{\partial x_{j}}}\right),} and ∇ ⋅ M = ∑ i = 1 n ∂ ∂ x i M i , {\displaystyle \nabla \cdot {\boldsymbol {M}}=\sum _{i=1}^{n}{\frac {\partial }{\partial x_{i}}}M_{i},} The operator L and its adjoint operator L* are given by: L [ u ] = ∑ i , j = 1 n a i , j ∂ 2 u ∂ x i ∂ x j + ∑ i = 1 n b i ∂ u ∂ x i + c u {\displaystyle L[u]=\sum _{i,\ j=1}^{n}a_{i,j}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}+cu} and L ∗ [ v ] = ∑ i , j = 1 n ∂ 2 ( a i , j v ) ∂ x i ∂ x j − ∑ i = 1 n ∂ ( b i v ) ∂ x i + c v . 
{\displaystyle L^{*}[v]=\sum _{i,\ j=1}^{n}{\frac {\partial ^{2}(a_{i,j}v)}{\partial x_{i}\partial x_{j}}}-\sum _{i=1}^{n}{\frac {\partial (b_{i}v)}{\partial x_{i}}}+cv.} If Lagrange's identity is integrated over a bounded region, then the divergence theorem can be used to form Green's second identity in the form: ∫ Ω v L [ u ] d Ω = ∫ Ω u L ∗ [ v ] d Ω + ∫ S M ⋅ n d S , {\displaystyle \int _{\Omega }vL[u]\,d\Omega =\int _{\Omega }uL^{*}[v]\ d\Omega +\int _{S}{\boldsymbol {M\cdot n}}\,dS,} where S is the surface bounding the volume Ω and n is the unit outward normal to the surface S. === Ordinary differential equations === Any second order ordinary differential equation of the form: a ( x ) d 2 y d x 2 + b ( x ) d y d x + c ( x ) y + λ w ( x ) y = 0 , {\displaystyle a(x){\frac {d^{2}y}{dx^{2}}}+b(x){\frac {dy}{dx}}+c(x)y+\lambda w(x)y=0,} can be put in the form: d d x ( p ( x ) d y d x ) + ( q ( x ) + λ w ( x ) ) y ( x ) = 0. {\displaystyle {\frac {d}{dx}}\left(p(x){\frac {dy}{dx}}\right)+\left(q(x)+\lambda w(x)\right)y(x)=0.} This general form motivates introduction of the Sturm–Liouville operator L, defined as an operation upon a function f such that: L f = d d x ( p ( x ) d f d x ) + q ( x ) f . {\displaystyle Lf={\frac {d}{dx}}\left(p(x){\frac {df}{dx}}\right)+q(x)f.} It can be shown that for any u and v for which the various derivatives exist, Lagrange's identity for ordinary differential equations holds: u L v − v L u = − d d x [ p ( x ) ( v d u d x − u d v d x ) ] . 
{\displaystyle uLv-vLu=-{\frac {d}{dx}}\left[p(x)\left(v{\frac {du}{dx}}-u{\frac {dv}{dx}}\right)\right].} For ordinary differential equations defined in the interval [0, 1], Lagrange's identity can be integrated to obtain an integral form (also known as Green's formula): ∫ 0 1 d x ( u L v − v L u ) = [ p ( x ) ( u d v d x − v d u d x ) ] 0 1 , {\displaystyle \int _{0}^{1}dx\ (uLv-vLu)=\left[p(x)\left(u{\frac {dv}{dx}}-v{\frac {du}{dx}}\right)\right]_{0}^{1},} where p = P ( x ) {\displaystyle p=P(x)} , q = Q ( x ) {\displaystyle q=Q(x)} , u = U ( x ) {\displaystyle u=U(x)} and v = V ( x ) {\displaystyle v=V(x)} are functions of x {\displaystyle x} , with u {\displaystyle u} and v {\displaystyle v} having continuous second derivatives on the interval [ 0 , 1 ] {\displaystyle [0,1]} . === Proof of form for ordinary differential equations === We have: u L v = u [ d d x ( p ( x ) d v d x ) + q ( x ) v ] , {\displaystyle uLv=u\left[{\frac {d}{dx}}\left(p(x){\frac {dv}{dx}}\right)+q(x)v\right],} and v L u = v [ d d x ( p ( x ) d u d x ) + q ( x ) u ] . {\displaystyle vLu=v\left[{\frac {d}{dx}}\left(p(x){\frac {du}{dx}}\right)+q(x)u\right].} Subtracting: u L v − v L u = u d d x ( p ( x ) d v d x ) − v d d x ( p ( x ) d u d x ) . {\displaystyle uLv-vLu=u{\frac {d}{dx}}\left(p(x){\frac {dv}{dx}}\right)-v{\frac {d}{dx}}\left(p(x){\frac {du}{dx}}\right).} The factors u and v can be moved inside the differentiations, because the extra term this produces, p(x)u′v′ in each case, is the same in the two subtracted expressions and simply cancels. Thus, u L v − v L u = d d x ( p ( x ) u d v d x ) − d d x ( v p ( x ) d u d x ) , = d d x [ p ( x ) ( u d v d x − v d u d x ) ] , {\displaystyle {\begin{aligned}uLv-vLu&={\frac {d}{dx}}\left(p(x)u{\frac {dv}{dx}}\right)-{\frac {d}{dx}}\left(vp(x){\frac {du}{dx}}\right),\\&={\frac {d}{dx}}\left[p(x)\left(u{\frac {dv}{dx}}-v{\frac {du}{dx}}\right)\right],\end{aligned}}} which is Lagrange's identity.
Integrating from zero to one: ∫ 0 1 d x ( u L v − v L u ) = [ p ( x ) ( u d v d x − v d u d x ) ] 0 1 , {\displaystyle \int _{0}^{1}dx\ (uLv-vLu)=\left[p(x)\left(u{\frac {dv}{dx}}-v{\frac {du}{dx}}\right)\right]_{0}^{1},} as was to be shown. == References ==
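The integrated identity lends itself to a direct numerical check. The following sketch (with illustrative choices of p, q, u, v, not taken from the article) builds the Sturm–Liouville operator by central differences and compares the two sides of Green's formula over [0, 1]:

```python
import math

p = lambda x: 1.0 + x       # arbitrary smooth coefficients for the check
q = lambda x: x
u = math.sin                # test functions with continuous second derivatives
v = lambda x: x * x

def d(f, x, h=1e-5):        # central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def L(f):                   # Sturm-Liouville operator Lf = (p f')' + q f
    return lambda x: d(lambda t: p(t) * d(f, t), x) + q(x) * f(x)

def simpson(g, a, b, n=400):  # composite Simpson quadrature
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

lhs = simpson(lambda x: u(x) * L(v)(x) - v(x) * L(u)(x), 0.0, 1.0)
boundary = lambda x: p(x) * (u(x) * d(v, x) - v(x) * d(u, x))
rhs = boundary(1.0) - boundary(0.0)
assert abs(lhs - rhs) < 1e-4   # both sides of Green's formula agree
```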
Wikipedia:Lagrangian system#0
In mathematics, a Lagrangian system is a pair (Y, L), consisting of a smooth fiber bundle Y → X and a Lagrangian density L, which yields the Euler–Lagrange differential operator acting on sections of Y → X. In classical mechanics, many dynamical systems are Lagrangian systems. The configuration space of such a Lagrangian system is a fiber bundle Q → R {\displaystyle Q\rightarrow \mathbb {R} } over the time axis R {\displaystyle \mathbb {R} } . In particular, Q = R × M {\displaystyle Q=\mathbb {R} \times M} if a reference frame is fixed. In classical field theory, all field systems are the Lagrangian ones. == Lagrangians and Euler–Lagrange operators == A Lagrangian density L (or, simply, a Lagrangian) of order r is defined as an n-form, n = dim X, on the r-order jet manifold JrY of Y. A Lagrangian L can be introduced as an element of the variational bicomplex of the differential graded algebra O∗∞(Y) of exterior forms on jet manifolds of Y → X. The coboundary operator of this bicomplex contains the variational operator δ which, acting on L, defines the associated Euler–Lagrange operator δL. === In coordinates === Given bundle coordinates xλ, yi on a fiber bundle Y and the adapted coordinates xλ, yi, yiΛ, (Λ = (λ1, ...,λk), |Λ| = k ≤ r) on jet manifolds JrY, a Lagrangian L and its Euler–Lagrange operator read L = L ( x λ , y i , y Λ i ) d n x , {\displaystyle L={\mathcal {L}}(x^{\lambda },y^{i},y_{\Lambda }^{i})\,d^{n}x,} δ L = δ i L d y i ∧ d n x , δ i L = ∂ i L + ∑ | Λ | ( − 1 ) | Λ | d Λ ∂ i Λ L , {\displaystyle \delta L=\delta _{i}{\mathcal {L}}\,dy^{i}\wedge d^{n}x,\qquad \delta _{i}{\mathcal {L}}=\partial _{i}{\mathcal {L}}+\sum _{|\Lambda |}(-1)^{|\Lambda |}\,d_{\Lambda }\,\partial _{i}^{\Lambda }{\mathcal {L}},} where d Λ = d λ 1 ⋯ d λ k , d λ = ∂ λ + y λ i ∂ i + ⋯ , {\displaystyle d_{\Lambda }=d_{\lambda _{1}}\cdots d_{\lambda _{k}},\qquad d_{\lambda }=\partial _{\lambda }+y_{\lambda }^{i}\partial _{i}+\cdots ,} denote the total derivatives. 
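In the simplest mechanical case, with one independent variable t and a first-order Lagrangian, the Euler–Lagrange operator above reduces to ∂L/∂y − d_t(∂L/∂ẏ). A minimal numerical sketch (an illustrative example, not from the article) checks that y(t) = cos t lies in the kernel of this operator for the harmonic-oscillator Lagrangian L = ½ẏ² − ½y²:

```python
import math

# Harmonic-oscillator Lagrangian L(t, y, ydot) = 1/2 ydot^2 - 1/2 y^2
Lag = lambda t, y, yd: 0.5 * yd**2 - 0.5 * y**2
y = math.cos                    # candidate motion y(t) = cos t
yd = lambda t: -math.sin(t)     # its time derivative

def el_residual(t, h=1e-4):
    """Euler-Lagrange expression dL/dy - d/dt(dL/dydot), by central differences."""
    dLdy = (Lag(t, y(t) + h, yd(t)) - Lag(t, y(t) - h, yd(t))) / (2 * h)
    dLdyd = lambda s: (Lag(s, y(s), yd(s) + h) - Lag(s, y(s), yd(s) - h)) / (2 * h)
    return dLdy - (dLdyd(t + h) - dLdyd(t - h)) / (2 * h)

# The residual vanishes along the motion, since cos t solves y'' = -y
assert all(abs(el_residual(0.1 * k)) < 1e-6 for k in range(1, 50))
```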
For instance, a first-order Lagrangian and its second-order Euler–Lagrange operator take the form L = L ( x λ , y i , y λ i ) d n x , δ i L = ∂ i L − d λ ∂ i λ L . {\displaystyle L={\mathcal {L}}(x^{\lambda },y^{i},y_{\lambda }^{i})\,d^{n}x,\qquad \delta _{i}L=\partial _{i}{\mathcal {L}}-d_{\lambda }\partial _{i}^{\lambda }{\mathcal {L}}.} === Euler–Lagrange equations === The kernel of an Euler–Lagrange operator provides the Euler–Lagrange equations δL = 0. == Cohomology and Noether's theorems == Cohomology of the variational bicomplex leads to the so-called variational formula d L = δ L + d H Θ L , {\displaystyle dL=\delta L+d_{H}\Theta _{L},} where d H Θ L = d x λ ∧ d λ ϕ , ϕ ∈ O ∞ ∗ ( Y ) {\displaystyle d_{H}\Theta _{L}=dx^{\lambda }\wedge d_{\lambda }\phi ,\qquad \phi \in O_{\infty }^{*}(Y)} is the total differential and θL is a Lepage equivalent of L. Noether's first theorem and Noether's second theorem are corollaries of this variational formula. == Graded manifolds == Extended to graded manifolds, the variational bicomplex provides description of graded Lagrangian systems of even and odd variables. == Alternative formulations == In a different way, Lagrangians, Euler–Lagrange operators and Euler–Lagrange equations are introduced in the framework of the calculus of variations. == Classical mechanics == In classical mechanics equations of motion are first and second order differential equations on a manifold M or various fiber bundles Q over R {\displaystyle \mathbb {R} } . A solution of the equations of motion is called a motion. == See also == Lagrangian mechanics Calculus of variations Noether's theorem Noether identities Jet bundle Jet (mathematics) Variational bicomplex == References == Arnold, V. I. (1989). Mathematical Methods of Classical Mechanics. Graduate Texts in Mathematics. Vol. 60 (second ed.). Springer-Verlag. ISBN 0-387-96890-3. Giachetta, G.; Mangiarotti, L.; Sardanashvily, G. (1997). New Lagrangian and Hamiltonian Methods in Field Theory. 
World Scientific. ISBN 981-02-1587-8. Giachetta, G.; Mangiarotti, L.; Sardanashvily, G. (2011). Geometric formulation of classical and quantum mechanics. World Scientific. doi:10.1142/7816. hdl:11581/203967. ISBN 978-981-4313-72-8. Olver, P. (1993). Applications of Lie Groups to Differential Equations (2 ed.). Springer-Verlag. ISBN 0-387-94007-3. Sardanashvily, G. (2013). "Graded Lagrangian formalism". Int. J. Geom. Methods Mod. Phys. 10 (5). World Scientific: 1350016. arXiv:1206.2508. doi:10.1142/S0219887813500163. ISSN 0219-8878. == External links == Sardanashvily, G. (2009). Fibre Bundles, Jet Manifolds and Lagrangian Theory. Lectures for Theoreticians (Lecture notes). arXiv:0908.1886.
Wikipedia:Laguerre transformations#0
The Laguerre transformations or axial homographies are an analogue of Möbius transformations over the dual numbers. When studying these transformations, the dual numbers are often interpreted as representing oriented lines on the plane. The Laguerre transformations map lines to lines, and include in particular all isometries of the plane. Strictly speaking, these transformations act on the dual number projective line, which adjoins to the dual numbers a set of points at infinity. Topologically, this projective line is equivalent to a cylinder. Points on this cylinder are in a natural one-to-one correspondence with oriented lines on the plane. == Definition == A Laguerre transformation is a linear fractional transformation z ↦ a z + b c z + d {\displaystyle z\mapsto {\frac {az+b}{cz+d}}} where a , b , c , d {\displaystyle a,b,c,d} are all dual numbers, z {\displaystyle z} lies on the dual number projective line, and a d − b c {\displaystyle ad-bc} is not a zero divisor. A dual number is a hypercomplex number of the form x + y ε {\displaystyle x+y\varepsilon } where ε 2 = 0 {\displaystyle \varepsilon ^{2}=0} but ε ≠ 0 {\displaystyle \varepsilon \neq 0} . This can be compared to the complex numbers which are of the form x + y i {\displaystyle x+yi} where i 2 = − 1 {\displaystyle i^{2}=-1} . The points of the dual number projective line can be defined equivalently in two ways: The usual set of dual numbers, but with some additional "points at infinity". Formally, the set is { x + y ε ∣ x ∈ R , y ∈ R } ∪ { 1 x ε ∣ x ∈ R } {\displaystyle \{x+y\varepsilon \mid x\in \mathbb {R} ,y\in \mathbb {R} \}\cup \left\{{\frac {1}{x\varepsilon }}\mid x\in \mathbb {R} \right\}} . The points at infinity can be expressed as 1 x ε {\displaystyle {\frac {1}{x\varepsilon }}} where x {\displaystyle x} is an arbitrary real number. Different values of x {\displaystyle x} correspond to different points at infinity. 
These points are infinite because ε {\displaystyle \varepsilon } is often understood as being an infinitesimal number, and so 1 / ε {\displaystyle 1/\varepsilon } is therefore infinite. The homogeneous coordinates [x : y] with x and y dual numbers such that the ideal that they generate is the whole ring of dual numbers. The ring is viewed through the injection x ↦ [x : 1]. The projective line includes points [1 : yε]. == Line coordinates == A line which makes an angle θ {\displaystyle \theta } with the x-axis, and whose x-intercept is denoted s {\displaystyle s} , is represented by the dual number z = tan ⁡ ( θ / 2 ) ( 1 + ε s ) . {\displaystyle z=\tan(\theta /2)(1+\varepsilon s).} The above doesn't make sense when the line is parallel to the x-axis. In that case, if θ = π {\displaystyle \theta =\pi } then set z = − 2 ε R {\displaystyle z={\frac {-2}{\varepsilon R}}} where R {\displaystyle R} is the y-intercept of the line. This may not appear to be valid, as one is dividing by a zero divisor, but this is a valid point on the projective dual line. If θ = 2 π {\displaystyle \theta =2\pi } then set z = 1 2 ε R {\displaystyle z={\frac {1}{2}}\varepsilon R} . Finally, observe that these coordinates represent oriented lines. An oriented line is an ordinary line with one of two possible orientations attached to it. This can be seen from the fact that if θ {\displaystyle \theta } is increased by π {\displaystyle \pi } then the resulting dual number representative is not the same. == Matrix representations == It's possible to express the above line coordinates as homogeneous coordinates z = [ sin ⁡ ( θ + ε R 2 ) : cos ⁡ ( θ + ε R 2 ) ] {\displaystyle z=\left[\sin \left({\frac {\theta +\varepsilon R}{2}}\right):\cos \left({\frac {\theta +\varepsilon R}{2}}\right)\right]} where R {\displaystyle R} is the perpendicular distance of the line from the origin. 
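The homogeneous form is consistent with the affine coordinate tan(θ/2)(1 + εs): writing R = s sin θ for the signed perpendicular distance of the line from the origin, the dual Taylor expansion tan(x + εy) = tan x + εy/cos²x shows the two agree. A quick numerical check (the helper is ours; this is a sketch under that identification, not library code):

```python
import math

def dtan(x, y):
    """Tangent of a dual number: tan(x + eps*y) = tan(x) + eps * y / cos(x)**2."""
    return (math.tan(x), y / math.cos(x) ** 2)

theta, s = 1.1, 0.4          # an oriented line: angle theta, x-intercept s
R = s * math.sin(theta)      # its signed perpendicular distance from the origin

affine = (math.tan(theta / 2), math.tan(theta / 2) * s)   # tan(theta/2)(1 + eps*s)
homog = dtan(theta / 2, R / 2)                            # tan((theta + eps*R)/2)

print(all(math.isclose(u, v) for u, v in zip(affine, homog)))   # True
```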
This representation has numerous advantages: One advantage is that there is no need to break into different cases, such as parallel to the x {\displaystyle x} -axis and non-parallel. The other advantage is that these homogeneous coordinates can be interpreted as vectors, allowing us to multiply them by matrices. Every Laguerre transformation can be represented as a 2×2 matrix whose entries are dual numbers. The matrix representation of z ↦ p z + q r z + s {\displaystyle z\mapsto {\frac {pz+q}{rz+s}}} is ( p q r s ) {\displaystyle {\begin{pmatrix}p&q\\r&s\end{pmatrix}}} (but notice that any non-nilpotent scalar multiple of this matrix represents the same Laguerre transformation). Additionally, as long as the determinant of a 2×2 matrix with dual-number entries is not nilpotent, then it represents a Laguerre transformation. (Note that in the above, we represent the homogeneous vector [ z : w ] {\displaystyle [z:w]} as a column vector in the obvious way, instead of as a row vector.) == Points, oriented lines and oriented circles == Laguerre transformations do not act on points. This is because if three oriented lines pass through the same point, their images under a Laguerre transformation do not have to meet at one point. Laguerre transformations can be seen as acting on oriented lines as well as on oriented circles. An oriented circle is an ordinary circle with an orientation represented by a binary value attached to it, which is either 1 {\displaystyle 1} or − 1 {\displaystyle -1} . The only exception is a circle of radius zero, which has orientation equal to 0 {\displaystyle 0} . A point is defined to be an oriented circle of radius zero. If an oriented circle has orientation equal to 1 {\displaystyle 1} , then the circle is said to be "anti-clockwise" oriented; if it has orientation equal to − 1 {\displaystyle -1} then it is "clockwise" oriented. 
The radius of an oriented circle is defined to be the radius r {\displaystyle r} of the underlying unoriented circle multiplied by the orientation. The image of an oriented circle under a Laguerre transformation is another oriented circle. If two oriented figures – either circles or lines – are tangent to each other then their images under a Laguerre transformation are also tangent. Two oriented circles are defined to be tangent if their underlying circles are tangent and their orientations are equal at the point of contact. Tangency between lines and circles is defined similarly. A Laguerre transformation might map a point to an oriented circle which is no longer a point. An oriented circle can never be mapped to an oriented line. Likewise, an oriented line can never be mapped to an oriented circle. This is opposite to Möbius geometry, where lines and circles can be mapped to each other, but neither can be mapped to points. Both Möbius geometry and Laguerre geometry are subgeometries of Lie sphere geometry, where points and oriented lines can be mapped to each other, but tangency remains preserved. The matrix representations of oriented circles (which include points but not lines) are precisely the invertible 2 × 2 {\displaystyle 2\times 2} skew-Hermitian dual number matrices. These are all of the form H = ( ε a b + c ε − b + c ε ε d ) {\displaystyle H={\begin{pmatrix}\varepsilon a&b+c\varepsilon \\-b+c\varepsilon &\varepsilon d\end{pmatrix}}} (where all the variables are real, and b ≠ 0 {\displaystyle b\neq 0} ). The set of oriented lines tangent to an oriented circle is given by { v ∈ D P 1 ∣ v ∗ H v = 0 } {\displaystyle \{v\in \mathbb {DP} ^{1}\mid v^{*}Hv=0\}} where D P 1 {\displaystyle \mathbb {DP} ^{1}} denotes the projective line over the dual numbers D {\displaystyle \mathbb {D} } . 
Applying a Laguerre transformation represented by M {\displaystyle M} to the oriented circle represented by H {\displaystyle H} gives the oriented circle represented by ( M − 1 ) ∗ H M − 1 {\displaystyle (M^{-1})^{*}HM^{-1}} . The radius of an oriented circle is equal to half the trace. The orientation is then the sign of the trace. == Profile == Note that the animated figures below show some oriented lines, but without any visual indication of a line's orientation (so two lines that differ only in orientation are displayed in the same way); oriented circles are shown as a set of oriented tangent lines, which results in a certain visual effect. The following can be found in Isaak Yaglom's Complex numbers in geometry and a paper by Gutin entitled Generalizations of singular value decomposition to dual-numbered matrices. === Unitary matrices === Mappings of the form z ↦ p z − q q ¯ z + p ¯ {\displaystyle z\mapsto {\frac {pz-q}{{\bar {q}}z+{\bar {p}}}}} express rigid body motions (sometimes called direct Euclidean isometries). The matrix representations of these transformations span a subalgebra isomorphic to the planar quaternions. The mapping z ↦ − z {\displaystyle z\mapsto -z} represents a reflection about the x-axis. The transformation z ↦ 1 / z {\displaystyle z\mapsto 1/z} expresses a reflection about the y-axis. Observe that if U {\displaystyle U} is the matrix representation of any combination of the above three transformations, but normalised so as to have determinant 1 {\displaystyle 1} , then U {\displaystyle U} satisfies U U ∗ = U ∗ U = I {\displaystyle UU^{*}=U^{*}U=I} where U ∗ {\displaystyle U^{*}} means U ¯ T {\displaystyle {\overline {U}}^{\mathrm {T} }} . We will call these unitary matrices. Notice though that these are unitary in the sense of the dual numbers and not the complex numbers. The unitary matrices express precisely the Euclidean isometries.
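One can verify numerically that a matrix of the form ( p −q ; q̄ p̄ ) satisfies UU∗ = (pp̄ + qq̄)I over the dual numbers, so that normalising the determinant to 1 yields UU∗ = U∗U = I. A sketch, representing dual numbers as (real, dual) pairs (the helper functions are ours):

```python
# Dual numbers represented as (real, dual) pairs; eps**2 = 0.
def dmul(x, y):
    return (x[0] * y[0], x[0] * y[1] + x[1] * y[0])

def dadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def conj(x):
    # Conjugation: a + b*eps -> a - b*eps
    return (x[0], -x[1])

def matmul(A, B):
    # 2x2 matrix product over the dual numbers
    return [[dadd(dmul(A[i][0], B[0][j]), dmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def conj_transpose(A):
    return [[conj(A[j][i]) for j in range(2)] for i in range(2)]

p, q = (0.6, 1.3), (0.8, -0.4)            # two arbitrary dual numbers
neg_q = (-q[0], -q[1])
U = [[p, neg_q], [conj(q), conj(p)]]
UUstar = matmul(U, conj_transpose(U))
scale = dadd(dmul(p, conj(p)), dmul(q, conj(q)))   # p*conj(p) + q*conj(q)

# Off-diagonal entries vanish and diagonal entries equal the scale factor:
ok = all(abs(UUstar[i][j][k] - (scale[k] if i == j else 0.0)) < 1e-12
         for i in range(2) for j in range(2) for k in range(2))
print(ok)   # True
```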
=== Axial dilation matrices === An axial dilation by t {\displaystyle t} units is a transformation of the form z + ( ε t / 2 ) ( − ε t / 2 ) z + 1 {\displaystyle {\frac {z+(\varepsilon t/2)}{(-\varepsilon t/2)z+1}}} . An axial dilation by t {\displaystyle t} units increases the radius of all oriented circles by t {\displaystyle t} units while preserving their centres. If a circle has negative orientation, then its radius is considered negative, and therefore for some positive values of t {\displaystyle t} the circle actually shrinks. An axial dilation is depicted in Figure 1, in which two circles of opposite orientations undergo the same axial dilation. On lines, an axial dilation by t {\displaystyle t} units maps any line z {\displaystyle z} to a line z ′ {\displaystyle z'} such that z {\displaystyle z} and z ′ {\displaystyle z'} are parallel, and the perpendicular distance between z {\displaystyle z} and z ′ {\displaystyle z'} is t {\displaystyle t} . Lines that are parallel but have opposite orientations move in opposite directions. === Real diagonal matrices === The transformation z ↦ k z {\displaystyle z\mapsto kz} for a value of k {\displaystyle k} that's real preserves the x-intercept of a line, while changing its angle to the x-axis. See Figure 2 to observe the effect on a grid of lines (including the x axis in the middle) and Figure 3 to observe the effect on two circles that differ initially only in orientation (to see that the outcome is sensitive to orientation). 
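The effect of the axial dilation described above on line coordinates can be checked numerically: in the homogeneous parametrisation tan((θ + εR)/2), a dilation by t should send R to R + t while fixing θ. A sketch using first-order dual arithmetic (the helpers are ours, not a library API):

```python
import math

def dtan(x, y):
    # tan(x + eps*y) = tan(x) + eps * y / cos(x)**2 (exact for dual numbers)
    return (math.tan(x), y / math.cos(x) ** 2)

def dmul(u, v):
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])

def ddiv(u, v):
    # Division by v = (c, d) requires c != 0: 1/v = 1/c - (d/c**2) eps
    return dmul(u, (1 / v[0], -v[1] / v[0] ** 2))

def axial_dilation(z, t):
    # z -> (z + eps*t/2) / ((-eps*t/2) z + 1)
    num = (z[0], z[1] + t / 2)
    den = (1.0, -(t / 2) * z[0])
    return ddiv(num, den)

theta, R, t = 0.7, 0.3, 0.5
z = dtan(theta / 2, R / 2)                  # line with angle theta, distance R
z2 = axial_dilation(z, t)
expected = dtan(theta / 2, (R + t) / 2)     # same angle, distance R + t
print(all(math.isclose(a, b) for a, b in zip(z2, expected)))   # True
```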
=== A general decomposition === Putting it all together, a general Laguerre transformation in matrix form can be expressed as U S V ∗ {\displaystyle USV^{*}} where U {\displaystyle U} and V {\displaystyle V} are unitary, and S {\displaystyle S} is a matrix either of the form ( a 0 0 b ) {\displaystyle {\begin{pmatrix}a&0\\0&b\end{pmatrix}}} or ( a − b ε b ε a ) {\displaystyle {\begin{pmatrix}a&-b\varepsilon \\b\varepsilon &a\end{pmatrix}}} where a {\displaystyle a} and b {\displaystyle b} are real numbers. The matrices U {\displaystyle U} and V {\displaystyle V} express Euclidean isometries. The matrix S {\displaystyle S} either represents a transformation of the form z ↦ k z {\displaystyle z\mapsto kz} or an axial dilation. The resemblance to Singular Value Decomposition should be clear. Note: In the event that S {\displaystyle S} is an axial dilation, the factor V {\displaystyle V} can be set to the identity matrix. This follows from the fact that if V {\displaystyle V} is unitary and S {\displaystyle S} is an axial dilation, then it can be seen that S V = { V S , det ( V ) = + 1 V S T , det ( V ) = − 1 {\displaystyle SV={\begin{cases}VS,&\det(V)=+1\\VS^{\mathrm {T} },&\det(V)=-1\end{cases}}} , where S T {\displaystyle S^{\mathrm {T} }} denotes the transpose of S {\displaystyle S} . So U S V ∗ = { ( U V ∗ ) S , det ( V ) = + 1 ( U V ∗ ) S T , det ( V ) = − 1 {\displaystyle USV^{*}={\begin{cases}(UV^{*})S,&\det(V)=+1\\(UV^{*})S^{\mathrm {T} },&\det(V)=-1\end{cases}}} . == Other number systems and the parallel postulate == === Complex numbers and elliptic geometry === A question arises: What happens if the role of the dual numbers above is changed to the complex numbers? In that case, the complex numbers represent oriented lines in the elliptic plane (the plane which elliptic geometry takes place over). This is in contrast to the dual numbers, which represent oriented lines in the Euclidean plane.
The elliptic plane is essentially a sphere (but where antipodal points are identified), and the lines are thus great circles. We can choose an arbitrary great circle to be the equator. The oriented great circle which intersects the equator at longitude s {\displaystyle s} , and makes an angle θ {\displaystyle \theta } with the equator at the point of intersection, can be represented by the complex number tan ⁡ ( θ / 2 ) ( cos ⁡ ( s ) + i sin ⁡ ( s ) ) {\displaystyle \tan(\theta /2)(\cos(s)+i\sin(s))} . In the case where θ = π {\displaystyle \theta =\pi } (where the line is literally the same as the equator, but oriented in the opposite direction as when θ = 0 {\displaystyle \theta =0} ) the oriented line is represented as ∞ {\displaystyle \infty } . Similar to the case of the dual numbers, the unitary matrices act as isometries of the elliptic plane. The set of "elliptic Laguerre transformations" (which are the analogues of the Laguerre transformations in this setting) can be decomposed using Singular Value Decomposition of complex matrices, in a similar way to how we decomposed Euclidean Laguerre transformations using an analogue of Singular Value Decomposition for dual-number matrices. === Split-complex numbers and hyperbolic geometry === If the role of the dual numbers or complex numbers is changed to the split-complex numbers, then a similar formalism can be developed for representing oriented lines on the hyperbolic plane instead of the Euclidean or elliptic planes: A split-complex number can be written in the form ( a , − b − 1 ) {\displaystyle (a,-b^{-1})} because the algebra in question is isomorphic to R ⊕ R {\displaystyle \mathbb {R} \oplus \mathbb {R} } . (Notice though that as a *-algebra, as opposed to a mere algebra, the split-complex numbers are not decomposable in this way). 
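The isomorphism with R ⊕ R can be made concrete: in the basis u = x + y, v = x − y, split-complex multiplication becomes componentwise multiplication. A small numeric check (the helper names are ours):

```python
def sc_mul(z, w):
    """Split-complex multiplication: (x + y j)(x' + y' j) with j**2 = +1."""
    (x, y), (u, v) = z, w
    return (x * u + y * v, x * v + y * u)

def to_pair(z):
    """Isomorphism to R (+) R: x + y j -> (x + y, x - y)."""
    x, y = z
    return (x + y, x - y)

z, w = (2.0, 3.0), (-1.0, 4.0)
a = to_pair(sc_mul(z, w))
b = tuple(p * q for p, q in zip(to_pair(z), to_pair(w)))
print(a == b)   # True: multiplication is componentwise in the new basis
```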
The terms a {\displaystyle a} and b {\displaystyle b} in ( a , − b − 1 ) {\displaystyle (a,-b^{-1})} represent points on the boundary of the hyperbolic plane; they are respectively the starting and ending points of an oriented line. Since the boundary of the hyperbolic plane is homeomorphic to the projective line R P 1 {\displaystyle \mathbb {RP} ^{1}} , we need a {\displaystyle a} and b {\displaystyle b} to belong to the projective line R P 1 {\displaystyle \mathbb {RP} ^{1}} instead of the affine line R 1 {\displaystyle \mathbb {R} ^{1}} . Indeed, this hints that ( R ⊕ R ) P 1 ≅ R P 1 ⊕ R P 1 {\displaystyle (\mathbb {R} \oplus \mathbb {R} )\mathbb {P} ^{1}\cong \mathbb {R} \mathbb {P} ^{1}\oplus \mathbb {R} \mathbb {P} ^{1}} . The analogues of unitary matrices over the split-complex numbers are the isometries of the hyperbolic plane. This is shown by Yaglom. Furthermore, the set of linear fractional transformations can be decomposed in a way that resembles Singular Value Decomposition, but which also unifies it with the Jordan decomposition. === Summary === We therefore have a correspondence between the three planar number systems (complex, dual and split-complex numbers) and the three non-Euclidean geometries. The number system that corresponds to Euclidean geometry is the dual numbers. == In higher dimensions == === Euclidean === n-dimensional Laguerre space is isomorphic to (n + 1)-dimensional Minkowski space. To associate a point P = ( x 1 , x 2 , … , x n , r ) {\displaystyle P=(x_{1},x_{2},\dotsc ,x_{n},r)} in Minkowski space to an oriented hypersphere, intersect the light cone centred at P {\displaystyle P} with the t = 0 {\displaystyle t=0} hyperplane. The group of Laguerre transformations is isomorphic then to the Poincaré group R n , 1 ⋊ O ⁡ ( n , 1 ) {\displaystyle \mathbb {R} ^{n,1}\rtimes \operatorname {O} (n,1)} . These transformations are exactly those which preserve a kind of squared distance between oriented circles called their Darboux product.
The direct Laguerre transformations are defined as the subgroup R n , 1 ⋊ O + ⁡ ( n , 1 ) {\displaystyle \mathbb {R} ^{n,1}\rtimes \operatorname {O} ^{+}(n,1)} . In 2 dimensions, the direct Laguerre transformations can be represented by 2×2 dual number matrices. If the 2×2 dual number matrices are understood as constituting the Clifford algebra Cl 2 , 0 , 1 ⁡ ( R ) {\displaystyle \operatorname {Cl} _{2,0,1}(\mathbb {R} )} , then analogous Clifford algebraic representations are possible in higher dimensions. If we embed Minkowski space R n , 1 {\displaystyle \mathbb {R} ^{n,1}} in the projective space R P n + 1 {\displaystyle \mathbb {RP} ^{n+1}} while keeping the transformation group the same, then the points at infinity are oriented flats. We call them "flats" because their shape is flat. In 2 dimensions, these are the oriented lines. As an aside, there are two non-equivalent definitions of a Laguerre transformation: Either as a Lie sphere transformation that preserves oriented flats, or as a Lie sphere transformation that preserves the Darboux product. We use the latter convention in this article. Note that even in 2 dimensions, the former transformation group is more general than the latter: A homothety for example maps oriented lines to oriented lines, but does not in general preserve the Darboux product. This can be demonstrated using the homothety centred at ( 0 , 0 ) {\displaystyle (0,0)} by t {\displaystyle t} units. Now consider the action of this transformation on two circles: One simply being the point ( 0 , 0 ) {\displaystyle (0,0)} , and the other being a circle of radius 1 {\displaystyle 1} centred at ( 0 , 0 ) {\displaystyle (0,0)} . These two circles have a Darboux product equal to − 1 {\displaystyle -1} . Their images under the homothety have a Darboux product equal to − t 2 {\displaystyle -t^{2}} . This therefore only gives a Laguerre transformation when t 2 = 1 {\displaystyle t^{2}=1} .
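The homothety computation can be reproduced numerically if the Darboux product of two oriented circles with centres p₁, p₂ and signed radii r₁, r₂ is taken to be |p₁ − p₂|² − (r₁ − r₂)². This is one common normalisation; the article does not fix a formula, so this choice is an assumption made for illustration:

```python
def darboux(c1, c2):
    """Darboux product of oriented circles ((x, y), signed_radius),
    taken here (an assumed normalisation) as |p1 - p2|**2 - (r1 - r2)**2."""
    (p1, r1), (p2, r2) = c1, c2
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    return dx * dx + dy * dy - (r1 - r2) ** 2

def homothety(circle, t):
    """Homothety centred at the origin by t units: scales centres and radii."""
    (x, y), r = circle
    return ((t * x, t * y), t * r)

point = ((0.0, 0.0), 0.0)            # a point is a circle of radius 0
circle = ((0.0, 0.0), 1.0)
t = 3.0
print(darboux(point, circle))                              # -1.0
print(darboux(homothety(point, t), homothety(circle, t)))  # -9.0 (= -t**2)
```

With this convention, the homothety scales the Darboux product by t², matching the text's conclusion that it is a Laguerre transformation only when t² = 1.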
== Conformal interpretation == In this section, we interpret Laguerre transformations differently from the rest of the article. When acting on line coordinates, Laguerre transformations are not understood to be conformal in the sense described here. This is clearly demonstrated in Figure 2. The Laguerre transformations preserve angles when the proper angle for the dual number plane is identified. When a ray y = mx, x ≥ 0, and the positive x-axis are taken for sides of an angle, the slope m is the magnitude of this angle. This number m corresponds to the signed area of the right triangle with base on the interval [(√2,0), (√2, m √2)]. The line {1 + aε: a ∈ ℝ}, with the dual number multiplication, forms a subgroup of the unit dual numbers, each element being a shear mapping when acting on the dual number plane. Other angles in the plane are generated by such action, and since shear mapping preserves area, the size of these angles is the same as the original. Note that the inversion z to 1/z leaves angle size invariant. As the general Laguerre transformation is generated by translations, dilations, shears, and inversions, and all of these leave angle invariant, the general Laguerre transformation is conformal in the sense of these angles. == See also == Edmond Laguerre Laguerre plane Isaak Yaglom Line coordinates == References == == External links == "Oriented circles and 3D relativistic geometry" An elementary video introducing concepts in Laguerre geometry. The video is presented from the rational trigonometry perspective.
Wikipedia:Lahun Mathematical Papyri#0
The Kahun Papyri (KP; also Petrie Papyri or Lahun Papyri) are a collection of ancient Egyptian texts discussing administrative, mathematical and medical topics. Its many fragments were discovered by Flinders Petrie in 1889 and are kept at University College London. This collection of papyri is one of the largest ever found. Most of the texts are dated to ca. 1825 BC, to the reign of Amenemhat III. In general the collection spans the Middle Kingdom of Egypt. The texts span a variety of topics: Business papers of the cult of Senusret II. Hymns to king Senusret III. The Kahun Gynaecological Papyrus, which deals with gynaecological illnesses and conditions. The Lahun Mathematical Papyri, a collection of mathematical texts. A veterinary papyrus. A late Middle Kingdom account, listing festivals. == See also == List of ancient Egyptian papyri == References == == External links == A Kahun Mathematical Fragment Archived 2012-09-03 at archive.today, a paper by John A.R. Legon PlanetMath: Kahun Papyrus and Arithmetic Progressions Archived 2016-01-19 at the Wayback Machine
Wikipedia:Lakes of Wada#0
In mathematics, the lakes of Wada (和田の湖, Wada no mizuumi) are three disjoint connected open sets of the plane or open unit square with the counterintuitive property that they all have the same boundary. In other words, for any point selected on the boundary of one of the lakes, the other two lakes' boundaries also contain that point. More than two sets with the same boundary are said to have the Wada property; examples include Wada basins in dynamical systems. This property is rare in real-world systems. The lakes of Wada were introduced by Kunizō Yoneyama (1917, page 60), who credited the discovery to Takeo Wada. His construction is similar to the construction by Brouwer (1910) of an indecomposable continuum, and in fact it is possible for the common boundary of the three sets to be an indecomposable continuum. == Construction of the lakes of Wada == The Lakes of Wada are formed by starting with a closed unit square of dry land, and then digging 3 lakes according to the following rule: On day n = 1, 2, 3,... extend lake n mod 3 (= 0, 1, 2) so that it is open and connected and passes within a distance 1/n of all remaining dry land. This should be done so that the remaining dry land remains homeomorphic to a closed unit square. After an infinite number of days, the three lakes are still disjoint connected open sets, and the remaining dry land is the boundary of each of the 3 lakes. For example, the first five days might be (see the image on the right): Dig a blue lake of width 1/3 passing within √2/3 of all dry land. Dig a red lake of width 1/32 passing within √2/32 of all dry land. Dig a green lake of width 1/33 passing within √2/33 of all dry land. Extend the blue lake by a channel of width 1/34 passing within √2/34 of all dry land. (The small channel connects the thin blue lake to the thick one, near the middle of the image.) Extend the red lake by a channel of width 1/35 passing within √2/35 of all dry land. 
(The tiny channel connects the thin red lake to the thick one, near the top left of the image.) A variation of this construction can produce a countably infinite number of connected lakes with the same boundary: instead of extending the lakes in the order 1, 2, 0, 1, 2, 0, 1, 2, 0, ..., extend them in the order 0, 0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4, ... and so on. == Wada basins == Wada basins are certain special basins of attraction studied in the mathematics of non-linear systems. A basin having the property that every neighborhood of every point on the boundary of that basin intersects at least three basins is called a Wada basin, or said to have the Wada property. Unlike the Lakes of Wada, Wada basins are often disconnected. An example of Wada basins is given by the Newton fractal describing the basins of attraction of the Newton–Raphson method for finding the roots of a cubic polynomial with distinct roots, such as z3 − 1; see the picture. == Wada basins in chaos theory == In chaos theory, Wada basins arise very frequently. Usually, the Wada property can be seen in the basin of attraction of dissipative dynamical systems. But the exit basins of Hamiltonian systems can also show the Wada property. In the context of the chaotic scattering of systems with multiple exits, basins of exits show the Wada property. M. A. F. Sanjuán et al. have shown that in the Hénon–Heiles system the exit basins have this Wada property. == See also == List of topologies – List of concrete topologies and topological spaces == References == Brouwer, L. E. J. (1910), "Zur Analysis Situs", Mathematische Annalen, 68 (3): 422–434, doi:10.1007/BF01475781 Yoneyama, Kunizô (1917), "Theory of Continuous Set of Points", Tôhoku Mathematical Journal, 12: 43–158 == Further reading == Breban, Romulus; Nusse, H E.
(2005), "On the creation of Wada basins in interval maps through fixed point tangent bifurcation", Physica D, 207 (1–2): 52–63, Bibcode:2005PhyD..207...52B, doi:10.1016/j.physd.2005.05.012 Coudene, Yves (2006), "Pictures of hyperbolic dynamical systems" (PDF), Notices of the American Mathematical Society, 53 (1): 8–13, ISSN 0002-9920, MR 2189945 Gelbaum, Bernard R.; Olmsted, John M. H. (2003), Counterexamples in analysis, Mineola, N.Y.: Dover Publications, ISBN 0-486-42875-3 example 10.13 Hocking, J. G.; Young, G. S. (1988), Topology, New York: Dover Publications, p. 144, ISBN 0-486-65676-4 Kennedy, J; Yorke, J.A. (1991), "Basins of Wada", Physica D, 51 (1–3): 213–225, Bibcode:1991PhyD...51..213K, doi:10.1016/0167-2789(91)90234-Z Sweet, D.; Ott, E.; Yorke, J. A. (1999), "Complex topology in Chaotic scattering: A Laboratory Observation", Nature, 399 (6734): 315, Bibcode:1999Natur.399..315S, doi:10.1038/20573 == External links == An experimental realization of Wada basins (with photographs), andamooka.org An introduction to Wada basins and the Wada property www-chaos.umd.edu Reflective Spheres of Infinity: Wada Basin Fractals, miqel.com Wada basins: Rendering chaotic scattering, astronomy.swin.edu.au
Wikipedia:Lam Lay Yong#0
Lam Lay Yong (maiden name Oon Lay Yong, Chinese: 蓝丽蓉; pinyin: Lán Lìróng; born 1936) is a retired Professor of Mathematics. == Academic career == From 1988 to 1996 she was Professor at the Department of Mathematics, National University of Singapore (NUS). She graduated from the University of Malaya (later the University of Singapore) in 1957 and pursued graduate study at Cambridge University, obtaining her Ph.D. degree from the University of Singapore in 1966 and becoming a lecturer at the University of Singapore. She was promoted to full professor in 1988, taught at NUS for 35 years, and retired in 1996. From 1974 to 1990, Lam Lay Yong was the associate editor of Historia Mathematica. Lam was a member of the Académie Internationale d'Histoire des Sciences. In 2001, Lam Lay Yong was awarded the Kenneth O. May Prize jointly with Ubiratan D'Ambrosio. Lam was the first Asian and the first woman to receive this award. Her reception speech was Ancient Chinese Mathematics and its Influence on World Mathematics. Lam Lay Yong also won the 2005 Outstanding Science Alumni Award from NUS. She is the granddaughter of Tan Kah Kee and niece of Lee Kong Chian. == Chinese origins of Hindu-Arabic Numerals Hypothesis == Lam Lay Yong has hypothesised that the Hindu–Arabic numeral system originated in China. This is based on her comparative studies of the Chinese counting rod system. She states that the rod numerals and the Hindu numerals have several features in common: nine signs, a concept of zero, a place value system, and a decimal base. She claims that, "While no one knows how the Hindu-Arabic system originates in India, on the other hand, there is strong evidence of a transmission of the concept of the rod system to India." She further claims that there is no unquestionable evidence that the system originated in India, citing two factors in support of this position.
One factor is testimony from mathematicians, for example a comment by Severus Sebokht on Indian ingenuity, and Al-Khwarizmi's book on Hindu calculation. The other factor is the presence of Brahmi numerals. However, Michel Danino criticised this position, saying that Lam Lay Yong's case was neither evidence-based nor rigorous, and that she was ill-qualified for cross-cultural studies. According to Danino, her thesis has not been accepted; thus the Chinese origin of the Hindu-Arabic numerals remains hypothetical and not widely accepted. All of this seems to contradict Lam's claims that there is strong evidence of rod numerals in India. == Publication == Jiu Zhang Suanshu (1994) "(Nine Chapters on the Mathematical Art): An Overview", Archive for History of Exact Sciences, vol. 47: pp. 1–51. Zhang Qiujian Suanjing (1997) "(The Mathematical Classic of Zhang Qiujian): An Overview", Archive for History of Exact Sciences, vol. 50: pp. 201–240. Lam Lay Yong, Ang Tian Se (2004) Fleeting Footsteps. Tracing the Conception of Arithmetic and Algebra in Ancient China, Revised Edition, World Scientific, Singapore. Lam Lay Yong (1977) A Critical Study of the Yang Hui suan fa, NUS Press. Lam Lay Yong, "A Chinese Genesis, Rewriting the history of our numeral system", Archive for History of Exact Sciences 38: 101–108. Lam Lay Yong (1966) "On the Chinese Origin of the Galley Method of Arithmetical Division", The British Journal for the History of Science 3: 66–69, Cambridge University Press. Lam Lay Yong (1996) "The Development of Hindu-Arabic and Traditional Chinese Arithmetic", Chinese Science 13: 35–54. Oon Lay Yong (2009) Arithmetic in Ancient China, October 2009. Lam Lay-Yong and Shen Kangshen (沈康身) (1989) "Methods of solving linear equations in traditional China", Historia Mathematica, Volume 16, Issue 2, Pages 107–122.
== References == == External links == Faculty of Science, NUS, Lam Lay Yong Chinese Invented Number System: Singapore Researcher An Interview with Lam Lay Yong - Singapore Mathematical Society Views on Mathematics Education in Singapore
Wikipedia:Lan-Hsuan Huang#0
Lan-Hsuan Huang (Chinese: 黃籃萱) is a Taiwanese-American mathematician and mathematical physicist specializing in differential geometry, geometric analysis, and their applications in the theory of relativity. She is a professor of mathematics at the University of Connecticut. Huang serves on the editorial board of the Journal of Mathematical Physics. == Education and career == Huang majored in mathematics at National Taiwan University, graduating in 2004. She went to Stanford University for graduate study in mathematics, and completed her Ph.D. there in 2009. Her doctoral dissertation, Center of Mass and Constant Mean Curvature Foliations for Isolated Systems, was supervised by Richard Schoen. After three years at Columbia University as Joseph F. Ritt Assistant Professor of Mathematics, she joined the Department of Mathematics at the University of Connecticut in 2012 as a tenure-track assistant professor. She was promoted to associate professor in 2016, and to full professor in 2020. == Recognition == In 2018, Huang was named a Simons Fellow in mathematics and a von Neumann Fellow at the Institute for Advanced Study. She was elected a Fellow of the American Mathematical Society in the 2024 class of fellows. She was awarded a Simons Fellowship again in 2025. == References == == External links == Home page Lan-Hsuan Huang publications indexed by Google Scholar
Wikipedia:Lancaster University School of Mathematics#0
Lancaster University School of Mathematics, also known as LUSoM, is a maths school located in Preston, Lancashire, England. As a maths school, it is a specialist mathematics free school sixth form college. The school was established by the Rigby Education Trust, a single-academy trust set up in partnership between Lancaster University and Cardinal Newman College for the purpose of opening and operating the school. It opened to students in September 2022 and is located in an £8.5 million school building on London Road, Preston; it is the first purpose-built specialist maths school in the UK. The school is highly selective, with prospective students expected to have a GCSE mathematics qualification at grade 8 or 9 and required to sit an admissions assessment. The course structure at LUSoM requires all students to study A-level Mathematics and Further Mathematics and a third A-level from either Physics, Chemistry or Computer Science. In addition, students may select a fourth subject from those three, or choose any other A-level subject to be taught at Cardinal Newman College, which is located less than half a mile from the school site. == References ==
Wikipedia:Landau kernel#0
The Landau kernel is named after the German number theorist Edmund Landau. The kernel is a summability kernel defined as: L n ( t ) = { ( 1 − t 2 ) n c n if − 1 ≤ t ≤ 1 0 otherwise {\displaystyle L_{n}(t)={\begin{cases}{\frac {(1-t^{2})^{n}}{c_{n}}}&{\text{if }}{-1}\leq t\leq 1\\0&{\text{otherwise}}\end{cases}}} where the coefficients c n {\displaystyle c_{n}} are defined as follows: c n = ∫ − 1 1 ( 1 − t 2 ) n d t . {\displaystyle c_{n}=\int _{-1}^{1}(1-t^{2})^{n}\,dt.} == Visualisation == Using integration by parts, one can show that: c n = ( n ! ) 2 2 2 n + 1 ( 2 n ) ! ( 2 n + 1 ) . {\displaystyle c_{n}={\frac {(n!)^{2}\,2^{2n+1}}{(2n)!(2n+1)}}.} Hence the Landau kernel can equivalently be written as: L n ( t ) = { ( 1 − t 2 ) n ( 2 n ) ! ( 2 n + 1 ) ( n ! ) 2 2 2 n + 1 for t ∈ [ − 1 , 1 ] 0 elsewhere {\displaystyle L_{n}(t)={\begin{cases}(1-t^{2})^{n}{\frac {(2n)!(2n+1)}{(n!)^{2}\,2^{2n+1}}}&{\text{for }}t\in [-1,1]\\0&{\text{elsewhere}}\end{cases}}} Plotting this function for different values of n reveals that as n goes to infinity, L n ( t ) {\displaystyle L_{n}(t)} approaches the Dirac delta function, as seen in the image. == Properties == Some general properties of the Landau kernel are that it is nonnegative and continuous on R {\displaystyle \mathbb {R} } . These properties are made more concrete in the following section. === Dirac sequences === The third bullet point means that the area under the graph of the function y = K n ( t ) {\displaystyle y=K_{n}(t)} becomes increasingly concentrated close to the origin as n approaches infinity. This definition leads us to the following theorem. Proof: We prove the third property only.
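As an aside before the proof, the closed form for c_n can be checked against direct numerical integration, and the lower bound c_n ≥ 2/(n + 1) can be confirmed for small n. A sketch (the function names are ours):

```python
import math

def c_closed(n):
    # c_n = (n!)^2 * 2^(2n+1) / ((2n)! * (2n+1))
    return math.factorial(n) ** 2 * 2 ** (2 * n + 1) / (math.factorial(2 * n) * (2 * n + 1))

def c_numeric(n, steps=100_000):
    # Midpoint-rule approximation of the integral of (1 - t^2)^n over [-1, 1]
    h = 2.0 / steps
    return sum((1 - (-1 + (k + 0.5) * h) ** 2) ** n for k in range(steps)) * h

for n in (1, 2, 5):
    assert abs(c_closed(n) - c_numeric(n)) < 1e-6
    assert c_closed(n) >= 2 / (n + 1)   # the normalising constant is not too small
print(c_closed(1))   # 4/3, the exact integral of 1 - t^2 over [-1, 1]
```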
In order to do so, we introduce the following lemma: Proof of the Lemma: Using the definition of the coefficients above and the fact that the integrand is even, we may write c n 2 = ∫ 0 1 ( 1 − t 2 ) n d t = ∫ 0 1 ( 1 − t ) n ( 1 + t ) n d t ≥ ∫ 0 1 ( 1 − t ) n d t = 1 1 + n {\displaystyle {\frac {c_{n}}{2}}=\int _{0}^{1}(1-t^{2})^{n}\,dt=\int _{0}^{1}(1-t)^{n}(1+t)^{n}\,dt\geq \int _{0}^{1}(1-t)^{n}\,dt={\frac {1}{1+n}}} completing the proof of the lemma. A corollary of this lemma is the following: == See also == Poisson kernel Fejér kernel Dirichlet kernel == References ==
Wikipedia:Landau's algorithm#0
In algebra, a nested radical is a radical expression (one containing a square root sign, cube root sign, etc.) that contains (nests) another radical expression. Examples include 5 − 2 5 , {\displaystyle {\sqrt {5-2{\sqrt {5}}\ }},} which arises in discussing the regular pentagon, and more complicated ones such as 2 + 3 + 4 3 3 . {\displaystyle {\sqrt[{3}]{2+{\sqrt {3}}+{\sqrt[{3}]{4}}\ }}.} == Denesting == Some nested radicals can be rewritten in a form that is not nested. For example, 3 + 2 2 = 1 + 2 , {\displaystyle {\sqrt {3+2{\sqrt {2}}}}=1+{\sqrt {2}}\,,} 2 3 − 1 3 = 1 − 2 3 + 4 3 9 3 . {\displaystyle {\sqrt[{3}]{{\sqrt[{3}]{2}}-1}}={\frac {1-{\sqrt[{3}]{2}}+{\sqrt[{3}]{4}}}{\sqrt[{3}]{9}}}\,.} Another simple example is 2 3 = 2 6 . {\displaystyle {\sqrt[{3}]{\sqrt {2}}}={\sqrt[{6}]{2}}.} Rewriting a nested radical in this way is called denesting. This is not always possible, and, even when possible, it is often difficult. == Two nested square roots == In the case of two nested square roots, the following theorem completely solves the problem of denesting. If a and c are rational numbers and c is not the square of a rational number, there are two rational numbers x and y such that a + c = x ± y {\displaystyle {\sqrt {a+{\sqrt {c}}}}={\sqrt {x}}\pm {\sqrt {y}}} if and only if a 2 − c {\displaystyle a^{2}-c~} is the square of a rational number d. If the nested radical is real, x and y are the two numbers a + d 2 {\displaystyle {\frac {a+d}{2}}~} and a − d 2 , {\displaystyle ~{\frac {a-d}{2}}~,~} where d = a 2 − c {\displaystyle ~d={\sqrt {a^{2}-c}}~} is a rational number. In particular, if a and c are integers, then 2x and 2y are integers. This result includes denestings of the form a + c = z ± y , {\displaystyle {\sqrt {a+{\sqrt {c}}}}=z\pm {\sqrt {y}}~,} as z may always be written z = ± z 2 , {\displaystyle z=\pm {\sqrt {z^{2}}},} and at least one of the terms must be positive (because the left-hand side of the equation is positive).
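The theorem above translates directly into a denesting procedure: compute d² = a² − c, test whether it is the square of a rational, and if so read off x and y. A sketch in Python (the function name and interface are illustrative; c is passed as the radicand, so sqrt(3 + 2√2) is entered as denest(3, 8)):

```python
from fractions import Fraction
from math import isqrt, sqrt

def denest(a, c):
    # Try to denest sqrt(a + sqrt(c)) as sqrt(x) + sqrt(y) with x, y rational.
    # Returns (x, y), or None when a^2 - c is not the square of a rational.
    a, c = Fraction(a), Fraction(c)
    d2 = a * a - c
    if d2 < 0:
        return None
    p, q = d2.numerator, d2.denominator
    rp, rq = isqrt(p), isqrt(q)
    if rp * rp != p or rq * rq != q:
        return None  # d is irrational: no denesting of this form exists
    d = Fraction(rp, rq)
    return (a + d) / 2, (a - d) / 2

# sqrt(3 + sqrt(8)) = sqrt(3 + 2*sqrt(2)) = sqrt(2) + sqrt(1) = 1 + sqrt(2)
x, y = denest(3, 8)
print(x, y)  # 2 1
assert abs(sqrt(3 + sqrt(8)) - (sqrt(x) + sqrt(y))) < 1e-12
```

In agreement with the theorem, denest(1, 2) returns None, since 1² − 2 is negative and sqrt(1 + √2) cannot be denested this way.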
A more general denesting formula could have the form a + c = α + β x + γ y + δ x y . {\displaystyle {\sqrt {a+{\sqrt {c}}}}=\alpha +\beta {\sqrt {x}}+\gamma {\sqrt {y}}+\delta {\sqrt {x}}{\sqrt {y}}~.} However, Galois theory implies that either the left-hand side belongs to Q ( c ) , {\displaystyle \mathbb {Q} ({\sqrt {c}}),} or it must be obtained by changing the sign of either x , {\displaystyle {\sqrt {x}},} y , {\displaystyle {\sqrt {y}},} or both. In the first case, this means that one can take x = c and γ = δ = 0. {\displaystyle \gamma =\delta =0.} In the second case, α {\displaystyle \alpha } and another coefficient must be zero. If β = 0 , {\displaystyle \beta =0,} one may rename xy as x to get δ = 0. {\displaystyle \delta =0.} Proceeding similarly if α = 0 , {\displaystyle \alpha =0,} it follows that one can assume α = δ = 0. {\displaystyle \alpha =\delta =0.} This shows that the apparently more general denesting can always be reduced to the above one. Proof: By squaring, the equation a + c = x ± y {\displaystyle {\sqrt {a+{\sqrt {c}}}}={\sqrt {x}}\pm {\sqrt {y}}} is equivalent to a + c = x + y ± 2 x y , {\displaystyle a+{\sqrt {c}}=x+y\pm 2{\sqrt {xy}},} and, in the case of a minus in the right-hand side, one must have x ≥ y (square roots are nonnegative by definition of the notation). As the inequality may always be satisfied by possibly exchanging x and y, solving the first equation in x and y is equivalent to solving a + c = x + y ± 2 x y . {\displaystyle a+{\sqrt {c}}=x+y\pm 2{\sqrt {xy}}.} This equality implies that x y {\displaystyle {\sqrt {xy}}} belongs to the quadratic field Q ( c ) . {\displaystyle \mathbb {Q} ({\sqrt {c}}).} In this field every element may be uniquely written α + β c , {\displaystyle \alpha +\beta {\sqrt {c}},} with α {\displaystyle \alpha } and β {\displaystyle \beta } being rational numbers.
This implies that ± 2 x y {\displaystyle \pm 2{\sqrt {xy}}} is not rational (otherwise the right-hand side of the equation would be rational; but the left-hand side is irrational). As x and y must be rational, the square of ± 2 x y {\displaystyle \pm 2{\sqrt {xy}}} must be rational. This implies that α = 0 {\displaystyle \alpha =0} in the expression of ± 2 x y {\displaystyle \pm 2{\sqrt {xy}}} as α + β c . {\displaystyle \alpha +\beta {\sqrt {c}}.} Thus a + c = x + y + β c {\displaystyle a+{\sqrt {c}}=x+y+\beta {\sqrt {c}}} for some rational number β . {\displaystyle \beta .} The uniqueness of the decomposition over 1 and c {\displaystyle {\sqrt {c}}} thus implies that the considered equation is equivalent to a = x + y and ± 2 x y = c . {\displaystyle a=x+y\quad {\text{and}}\quad \pm 2{\sqrt {xy}}={\sqrt {c}}.} It follows by Vieta's formulas that x and y must be roots of the quadratic equation z 2 − a z + c 4 = 0 ; {\displaystyle z^{2}-az+{\frac {c}{4}}=0~;} its discriminant is Δ = a 2 − c = d 2 > 0 {\displaystyle ~\Delta =a^{2}-c=d^{2}>0~} (≠ 0, otherwise c would be the square of a), hence x and y must be a + a 2 − c 2 {\displaystyle {\frac {a+{\sqrt {a^{2}-c}}}{2}}~} and a − a 2 − c 2 . {\displaystyle ~{\frac {a-{\sqrt {a^{2}-c}}}{2}}~.} Thus x and y are rational if and only if d = a 2 − c {\displaystyle d={\sqrt {a^{2}-c}}~} is a rational number. For explicitly choosing the various signs, one must consider only positive real square roots, and thus assume c > 0. The equation a 2 = c + d 2 {\displaystyle a^{2}=c+d^{2}} shows that |a| > √c. Thus, if the nested radical is real, and if denesting is possible, then a > 0. Then the solution is a + c = a + d 2 + a − d 2 , a − c = a + d 2 − a − d 2 .
{\displaystyle {\begin{aligned}{\sqrt {a+{\sqrt {c}}}}&={\sqrt {\frac {a+d}{2}}}+{\sqrt {\frac {a-d}{2}}},\\[6pt]{\sqrt {a-{\sqrt {c}}}}&={\sqrt {\frac {a+d}{2}}}-{\sqrt {\frac {a-d}{2}}}.\end{aligned}}} == Some identities of Ramanujan == Srinivasa Ramanujan demonstrated a number of curious identities involving nested radicals. Among them are the following: 3 + 2 5 4 3 − 2 5 4 4 = 5 4 + 1 5 4 − 1 = 1 2 ( 3 + 5 4 + 5 + 125 4 ) , {\displaystyle {\sqrt[{4}]{\frac {3+2{\sqrt[{4}]{5}}}{3-2{\sqrt[{4}]{5}}}}}={\frac {{\sqrt[{4}]{5}}+1}{{\sqrt[{4}]{5}}-1}}={\tfrac {1}{2}}\left(3+{\sqrt[{4}]{5}}+{\sqrt {5}}+{\sqrt[{4}]{125}}\right),} 28 3 − 27 3 = 1 3 ( 98 3 − 28 3 − 1 ) , {\displaystyle {\sqrt {{\sqrt[{3}]{28}}-{\sqrt[{3}]{27}}}}={\tfrac {1}{3}}\left({\sqrt[{3}]{98}}-{\sqrt[{3}]{28}}-1\right),} 32 5 5 − 27 5 5 3 = 1 25 5 + 3 25 5 − 9 25 5 , {\displaystyle {\sqrt[{3}]{{\sqrt[{5}]{\frac {32}{5}}}-{\sqrt[{5}]{\frac {27}{5}}}}}={\sqrt[{5}]{\frac {1}{25}}}+{\sqrt[{5}]{\frac {3}{25}}}-{\sqrt[{5}]{\frac {9}{25}}},} and == Landau's algorithm == In 1989 Susan Landau introduced the first algorithm for deciding which nested radicals can be denested. Earlier algorithms worked in some cases but not others. Landau's algorithm involves complex roots of unity and runs in exponential time with respect to the depth of the nested radical. == In trigonometry == In trigonometry, the sines and cosines of many angles can be expressed in terms of nested radicals. For example, sin ⁡ π 60 = sin ⁡ 3 ∘ = 1 16 [ 2 ( 1 − 3 ) 5 + 5 + 2 ( 5 − 1 ) ( 3 + 1 ) ] {\displaystyle \sin {\frac {\pi }{60}}=\sin 3^{\circ }={\frac {1}{16}}\left[2(1-{\sqrt {3}}){\sqrt {5+{\sqrt {5}}}}+{\sqrt {2}}({\sqrt {5}}-1)({\sqrt {3}}+1)\right]} and sin ⁡ π 24 = sin ⁡ 7.5 ∘ = 1 2 2 − 2 + 3 = 1 2 2 − 1 + 3 2 . 
{\displaystyle \sin {\frac {\pi }{24}}=\sin 7.5^{\circ }={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {3}}}}}}={\frac {1}{2}}{\sqrt {2-{\frac {1+{\sqrt {3}}}{\sqrt {2}}}}}.} The last equality results directly from the results of § Two nested square roots. == In the solution of the cubic equation == Nested radicals appear in the algebraic solution of the cubic equation. Any cubic equation can be written in simplified form without a quadratic term, as x 3 + p x + q = 0 , {\displaystyle x^{3}+px+q=0,} whose general solution for one of the roots is x = − q 2 + q 2 4 + p 3 27 3 + − q 2 − q 2 4 + p 3 27 3 . {\displaystyle x={\sqrt[{3}]{-{q \over 2}+{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}+{\sqrt[{3}]{-{q \over 2}-{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}.} In the case in which the cubic has only one real root, the real root is given by this expression with the radicands of the cube roots being real and with the cube roots being the real cube roots. In the case of three real roots, the square root expression is an imaginary number; here any real root is expressed by defining the first cube root to be any specific complex cube root of the complex radicand, and by defining the second cube root to be the complex conjugate of the first one. The nested radicals in this solution cannot in general be simplified unless the cubic equation has at least one rational solution. Indeed, if the cubic has three irrational but real solutions, we have the casus irreducibilis, in which all three real solutions are written in terms of cube roots of complex numbers. On the other hand, consider the equation x 3 − 7 x + 6 = 0 , {\displaystyle x^{3}-7x+6=0,} which has the rational solutions 1, 2, and −3. The general solution formula given above gives the solutions x = − 3 + 10 3 i 9 3 + − 3 − 10 3 i 9 3 . 
{\displaystyle x={\sqrt[{3}]{-3+{\frac {10{\sqrt {3}}i}{9}}}}+{\sqrt[{3}]{-3-{\frac {10{\sqrt {3}}i}{9}}}}.} For any given choice of cube root and its conjugate, this contains nested radicals involving complex numbers, yet it is reducible (even though not obviously so) to one of the solutions 1, 2, or −3. == Infinitely nested radicals == === Square roots === Under certain conditions infinitely nested square roots such as x = 2 + 2 + 2 + 2 + ⋯ {\displaystyle x={\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+\cdots }}}}}}}}} represent rational numbers. This rational number can be found by realizing that x also appears under the radical sign, which gives the equation x = 2 + x . {\displaystyle x={\sqrt {2+x}}.} If we solve this equation, we find that x = 2 (the second solution x = −1 does not apply, under the convention that the positive square root is meant). This approach can also be used to show that generally, if n > 0, then n + n + n + n + ⋯ = 1 2 ( 1 + 1 + 4 n ) {\displaystyle {\sqrt {n+{\sqrt {n+{\sqrt {n+{\sqrt {n+\cdots }}}}}}}}={\tfrac {1}{2}}\left(1+{\sqrt {1+4n}}\right)} which is the positive root of the equation x2 − x − n = 0. For n = 1, this root is the golden ratio φ, approximately equal to 1.618. The same procedure also works to obtain, if n > 0, n − n − n − n − ⋯ = 1 2 ( − 1 + 1 + 4 n ) , {\displaystyle {\sqrt {n-{\sqrt {n-{\sqrt {n-{\sqrt {n-\cdots }}}}}}}}={\tfrac {1}{2}}\left(-1+{\sqrt {1+4n}}\right),} which is the positive root of the equation x2 + x − n = 0. ==== Nested square roots of 2 ==== The nested square roots of 2 are a special case of the wide class of infinitely nested radicals. There are many known results that relate them to sines and cosines.
For example, it has been shown that nested square roots of 2 as R ( b k , … , b 1 ) = b k 2 2 + b k − 1 2 + b k − 2 2 + ⋯ + b 2 2 + x {\displaystyle R(b_{k},\ldots ,b_{1})={\frac {b_{k}}{2}}{\sqrt {2+b_{k-1}{\sqrt {2+b_{k-2}{\sqrt {2+\cdots +b_{2}{\sqrt {2+x}}}}}}}}} where x = 2 sin ⁡ ( π b 1 / 4 ) {\displaystyle x=2\sin(\pi b_{1}/4)} with b 1 {\displaystyle b_{1}} in [−2,2] and b i ∈ { − 1 , 0 , 1 } {\displaystyle b_{i}\in \{-1,0,1\}} for i ≠ 1 {\displaystyle i\neq 1} , are such that R ( b k , … , b 1 ) = cos ⁡ θ {\displaystyle R(b_{k},\ldots ,b_{1})=\cos \theta } for θ = ( 1 2 − b k 4 − b k b k − 1 8 − b k b k − 1 b k − 2 16 − ⋯ − b k b k − 1 ⋯ b 1 2 k + 1 ) π . {\displaystyle \theta =\left({\frac {1}{2}}-{\frac {b_{k}}{4}}-{\frac {b_{k}b_{k-1}}{8}}-{\frac {b_{k}b_{k-1}b_{k-2}}{16}}-\cdots -{\frac {b_{k}b_{k-1}\cdots b_{1}}{2^{k+1}}}\right)\pi .} This result allows one to deduce for any x ∈ [ − 2 , 2 ] {\displaystyle x\in [-2,2]} the value of the following nested radicals consisting of k nested roots as R k ( x ) = 2 + 2 + ⋯ + 2 + x . {\displaystyle R_{k}(x)={\sqrt {2+{\sqrt {2+\cdots +{\sqrt {2+x}}}}}}.} If x ≥ 2 {\displaystyle x\geq 2} , then R k ( x ) = 2 + 2 + ⋯ + 2 + x = ( x + x 2 − 4 2 ) 1 / 2 k + ( x + x 2 − 4 2 ) − 1 / 2 k {\displaystyle {\begin{aligned}R_{k}(x)&={\sqrt {2+{\sqrt {2+\cdots +{\sqrt {2+x}}}}}}\\&=\left({\frac {x+{\sqrt {x^{2}-4}}}{2}}\right)^{1/2^{k}}+\left({\frac {x+{\sqrt {x^{2}-4}}}{2}}\right)^{-1/2^{k}}\end{aligned}}} These results can be used to obtain some nested square roots representations of π {\displaystyle \pi } . Let us consider the term R ( b k , … , b 1 ) {\displaystyle R\left(b_{k},\ldots ,b_{1}\right)} defined above. Then π = lim k → ∞ [ 2 k + 1 2 − b 1 R ( 1 , − 1 , 1 , 1 , … , 1 , 1 , b 1 ⏟ k terms ) ] {\displaystyle \pi =\lim _{k\rightarrow \infty }\left[{\frac {2^{k+1}}{2-b_{1}}}R(\underbrace {1,-1,1,1,\ldots ,1,1,b_{1}} _{k{\text{ terms }}})\right]} where b 1 ≠ 2 {\displaystyle b_{1}\neq 2} .
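The simplest instance of such π representations is the classical half-angle recursion: since a_k = 2 cos(π/2^(k+1)) satisfies a_(k+1) = √(2 + a_k), the quantity 2^(k+1)·√(2 − a_k) tends to π. A brief numerical sketch (the loop bound is an arbitrary illustrative choice; taking it much larger would hit floating-point cancellation in 2 − a_k):

```python
import math

# a_k = 2*cos(pi / 2**(k+1)), generated by a_{k+1} = sqrt(2 + a_k), a_1 = sqrt(2).
# Then 2**(k+1) * sqrt(2 - a_k) = 2**(k+2) * sin(pi / 2**(k+2)) -> pi.
a = math.sqrt(2.0)
for k in range(1, 13):
    approx = 2 ** (k + 1) * math.sqrt(2.0 - a)
    a = math.sqrt(2.0 + a)
print(approx)  # ≈ 3.14159...
```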
==== Ramanujan's infinite radicals ==== Ramanujan posed the following problem to the Journal of the Indian Mathematical Society: ? = 1 + 2 1 + 3 1 + ⋯ . {\displaystyle ?={\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}.} This can be solved by noting a more general formulation: ? = a x + ( n + a ) 2 + x a ( x + n ) + ( n + a ) 2 + ( x + n ) ⋯ . {\displaystyle ?={\sqrt {ax+(n+a)^{2}+x{\sqrt {a(x+n)+(n+a)^{2}+(x+n){\sqrt {\mathrm {\cdots } }}}}}}.} Setting this to F(x) and squaring both sides gives us F ( x ) 2 = a x + ( n + a ) 2 + x a ( x + n ) + ( n + a ) 2 + ( x + n ) ⋯ , {\displaystyle F(x)^{2}=ax+(n+a)^{2}+x{\sqrt {a(x+n)+(n+a)^{2}+(x+n){\sqrt {\mathrm {\cdots } }}}},} which can be simplified to F ( x ) 2 = a x + ( n + a ) 2 + x F ( x + n ) . {\displaystyle F(x)^{2}=ax+(n+a)^{2}+xF(x+n).} It can be shown that F ( x ) = x + n + a {\displaystyle F(x)={x+n+a}} satisfies this functional equation; a complete proof would also require showing that it is the relevant solution. Setting a = 0, n = 1, and x = 2, we have 3 = 1 + 2 1 + 3 1 + ⋯ . {\displaystyle 3={\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}.} Ramanujan stated the following infinite radical denesting in his lost notebook: 5 + 5 + 5 − 5 + 5 + 5 + 5 − ⋯ = 2 + 5 + 15 − 6 5 2 . {\displaystyle {\sqrt {5+{\sqrt {5+{\sqrt {5-{\sqrt {5+{\sqrt {5+{\sqrt {5+{\sqrt {5-\cdots }}}}}}}}}}}}}}={\frac {2+{\sqrt {5}}+{\sqrt {15-6{\sqrt {5}}}}}{2}}.} The repeating pattern of the signs is ( + , + , − , + ) . {\displaystyle (+,+,-,+).} ==== Viète's expression for π ==== Viète's formula for π, the ratio of a circle's circumference to its diameter, is 2 π = 2 2 ⋅ 2 + 2 2 ⋅ 2 + 2 + 2 2 ⋯ .
{\displaystyle {\frac {2}{\pi }}={\frac {\sqrt {2}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2}}}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}{2}}\cdots .} === Cube roots === In certain cases, infinitely nested cube roots such as x = 6 + 6 + 6 + 6 + ⋯ 3 3 3 3 {\displaystyle x={\sqrt[{3}]{6+{\sqrt[{3}]{6+{\sqrt[{3}]{6+{\sqrt[{3}]{6+\cdots }}}}}}}}} can represent rational numbers as well. Again, by realizing that the whole expression appears inside itself, we are left with the equation x = 6 + x 3 . {\displaystyle x={\sqrt[{3}]{6+x}}.} If we solve this equation, we find that x = 2. More generally, we find that n + n + n + n + ⋯ 3 3 3 3 {\displaystyle {\sqrt[{3}]{n+{\sqrt[{3}]{n+{\sqrt[{3}]{n+{\sqrt[{3}]{n+\cdots }}}}}}}}} is the positive real root of the equation x3 − x − n = 0 for all n > 0. For n = 1, this root is the plastic ratio ρ, approximately equal to 1.3247. The same procedure also works to get n − n − n − n − ⋯ 3 3 3 3 {\displaystyle {\sqrt[{3}]{n-{\sqrt[{3}]{n-{\sqrt[{3}]{n-{\sqrt[{3}]{n-\cdots }}}}}}}}} as the real root of the equation x3 + x − n = 0 for all n > 1. === Herschfeld's convergence theorem === An infinitely nested radical a 1 + a 2 + ⋯ {\displaystyle {\sqrt {a_{1}+{\sqrt {a_{2}+\dotsb }}}}} (where all a i {\displaystyle a_{i}} are nonnegative) converges if and only if there is some M ∈ R {\displaystyle M\in \mathbb {R} } such that M ≥ a n 2 − n {\displaystyle M\geq a_{n}^{2^{-n}}} for all n {\displaystyle n} , or in other words sup a n 2 − n < + ∞ . {\textstyle \sup a_{n}^{2^{-n}}<+\infty .} ==== Proof of "if" ==== We observe that a 1 + a 2 + ⋯ ≤ M 2 1 + M 2 2 + ⋯ = M 1 + 1 + ⋯ < 2 M . {\displaystyle {\sqrt {a_{1}+{\sqrt {a_{2}+\dotsb }}}}\leq {\sqrt {M^{2^{1}}+{\sqrt {M^{2^{2}}+\cdots }}}}=M{\sqrt {1+{\sqrt {1+\dotsb }}}}<2M.} Moreover, the sequence ( a 1 + a 2 + … a n ) {\displaystyle \left({\sqrt {a_{1}+{\sqrt {a_{2}+\dotsc {\sqrt {a_{n}}}}}}}\right)} is monotonically increasing. Therefore it converges, by the monotone convergence theorem. 
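Truncating an infinite radical and evaluating it from the inside out gives a quick numerical illustration of both Herschfeld's criterion and Ramanujan's evaluation above. A sketch (the truncation depths are arbitrary illustrative choices):

```python
import math

def nested_sqrt(terms):
    # evaluate sqrt(a_1 + sqrt(a_2 + ... + sqrt(a_N))) from the inside out
    v = 0.0
    for a in reversed(list(terms)):
        v = math.sqrt(a + v)
    return v

# a_n = n satisfies sup a_n**(2**-n) < infinity, so the radical converges
print(nested_sqrt(range(1, 21)))  # ≈ 1.757932... (the "nested radical constant")

# Ramanujan: sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) = 3,
# via the inside-out recursion t_k = sqrt(1 + (k+1)*t_{k+1})
t = 1.0
for k in range(30, 0, -1):
    t = math.sqrt(1 + (k + 1) * t)
print(t)  # ≈ 3.0
```

The error of the truncated Ramanujan radical roughly halves with each additional level, so 30 levels already give about eight correct digits.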
==== Proof of "only if" ==== If the sequence ( a 1 + a 2 + ⋯ a n ) {\displaystyle \left({\sqrt {a_{1}+{\sqrt {a_{2}+\cdots {\sqrt {a_{n}}}}}}}\right)} converges, then it is bounded. However, a n 2 − n ≤ a 1 + a 2 + ⋯ a n {\displaystyle a_{n}^{2^{-n}}\leq {\sqrt {a_{1}+{\sqrt {a_{2}+\cdots {\sqrt {a_{n}}}}}}}} , hence ( a n 2 − n ) {\displaystyle \left(a_{n}^{2^{-n}}\right)} is also bounded. == See also == Exponentiation Sum of radicals == References == === Further reading === Landau, Susan (1994). "How to Tangle with a Nested Radical". Mathematical Intelligencer. 16 (2): 49–55. doi:10.1007/bf03024284. S2CID 119991567. Decreasing the Nesting Depth of Expressions Involving Square Roots Simplifying Square Roots of Square Roots Weisstein, Eric W. "Square Root". MathWorld. Weisstein, Eric W. "Nested Radical". MathWorld.
Wikipedia:Landon Rabern#0
Landon Rabern (1981–2020) was an American mathematician and computer scientist, best known for his contributions to graph theory, logic, and artificial intelligence. His research primarily focused on problems related to graph coloring, including work on Brooks' theorem, the Borodin–Kostochka conjecture, list critical graphs, and Reed's conjecture. Rabern earned a Ph.D. in mathematics from Arizona State University in 2013, under the supervision of Hal Kierstead, with a dissertation that explored coloring graphs using nearly maximum-degree-sized palettes. == Early life and education == Rabern was born and raised in Roseburg, Oregon. He developed an early interest in computers and machine intelligence. In the 1980s, he began programming on a Commodore 64, learning languages such as BASIC, Pascal, and C. By high school, he had created a chess AI, known as "Betsy," which is credited as the first published chess engine capable of playing Fischer Random Chess. Rabern pursued studies in mathematics and computer science at Washington University in St. Louis, with a year abroad in the Netherlands. He later earned a master's degree in mathematics from the University of California, Santa Barbara. While working as a software engineer, Rabern continued to independently explore graph theory. His proof of a conjecture in the field led him to pursue a Ph.D. at Arizona State University. == Career and research == Rabern made significant contributions to graph theory, particularly in areas such as the Borodin–Kostochka conjecture, list critical graphs, and Reed's conjecture. His work in discrete mathematics and combinatorics was recognized for its rigor and creativity. Rabern's work extended beyond mathematics into computer science and philosophy. Notably, he explored the use of automated theorem proving and computer-assisted proofs in graph theory.
He also made contributions to the study of semantic paradoxes (e.g., Yablo's paradox) by applying graph-theoretic methods. In another article, he provided a novel (two-question) solution to "The Hardest Logic Puzzle Ever". He also contributed a satirical mathematical proof titled "A Teleological Argument" for the existence of the Flying Spaghetti Monster, published in The Gospel of the Flying Spaghetti Monster. In addition to his academic research, Rabern had a successful career in software engineering and data science, co-founding a software company and working with several technology firms, particularly those focused on artificial intelligence and social media. In the final years of his career, Rabern returned to his early interest in artificial intelligence and chess programming. He began a second Ph.D. in Cognitive Science at the University of Colorado Boulder, where he focused on the intersection of psychology and machine learning. Rabern died in 2020 at the age of 39. == References == == External links == Landon Rabern's Github page Landon Rabern publications indexed by Google Scholar rabern's page News from the AMS Search | arXiv e-print repository
Wikipedia:Language of mathematics#0
The language of mathematics or mathematical language is an extension of the natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity. == Features == The main features of the mathematical language are the following. Use of common words with a derived meaning, generally more specific and more precise. For example, "or" means "one, the other or both", while, in common language, "both" is sometimes included and sometimes not. Also, a "line" is straight and has zero width. Use of common words with a meaning that is completely different from their common meaning. For example, a mathematical ring is not related to any other meaning of "ring". Real numbers and imaginary numbers are two sorts of numbers, none being more real or more imaginary than the others. Use of neologisms. For example, polynomial and homomorphism. Use of symbols as words or phrases. For example, A = B {\displaystyle A=B} and ∀ x {\displaystyle \forall x} are respectively read as " A {\displaystyle A} equals B {\displaystyle B} " and "for all x {\displaystyle x} ". Use of formulas as part of sentences. For example: " E = m c 2 {\displaystyle E=mc^{2}} represents quantitatively the mass–energy equivalence." A formula that is not included in a sentence is generally meaningless, since the meaning of the symbols may depend on the context: in " E = m c 2 {\displaystyle E=mc^{2}\,} ", it is the context that specifies that E is the energy of a physical body, m is its mass, and c is the speed of light. Use of phrases that cannot be decomposed into their components. In particular, adjectives do not always restrict the meaning of the corresponding noun, and may change the meaning completely. For example, most algebraic integers are not integers and integers are specific algebraic integers. So, an algebraic integer is not an integer that is algebraic.
Use of mathematical jargon that consists of phrases that are used for informal explanations or shorthands. For example, "killing" is often used in place of "replacing with zero", and this led to the use of assassinator and annihilator as technical words. == Understanding mathematical text == The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it appears to be grammatically correct nonsense to a reader who does not know the definitions of basis, module, and free module. H. B. Williams, an electrophysiologist, wrote in 1927: Now mathematics is both a body of truth and a special language, a language more carefully defined and more highly abstracted than our ordinary medium of thought and expression. Also it differs from ordinary languages in this important particular: it is subject to rules of manipulation. Once a statement is cast into mathematical form it may be manipulated in accordance with these rules and every configuration of the symbols will represent facts in harmony with and dependent on those contained in the original statement. Now this comes very close to what we conceive the action of the brain structures to be in performing intellectual acts with the symbols of ordinary language. In a sense, therefore, the mathematician has been able to perfect a device through which a part of the labor of logical thought is carried on outside the central nervous system with only that supervision which is requisite to manipulate the symbols in accordance with the rules.: 291 == See also == Formulario mathematico Formal language History of mathematical notation Mathematical notation List of mathematical jargon == References == == Further reading == === Linguistic point of view === Keith Devlin (2000) The Language of Mathematics: Making the Invisible Visible, Holt Publishing.
Kay O'Halloran (2004) Mathematical Discourse: Language, Symbolism and Visual Images, Continuum. R. L. E. Schwarzenberger (2000), "The Language of Geometry", in A Mathematical Spectrum Miscellany, Applied Probability Trust. === In education === Lawrence A. Chang (1983) Handbook for Spoken Mathematics, The Regents of the University of California. F. Bruun, J. M. Diaz, & V. J. Dykes (2015) The Language of Mathematics. Teaching Children Mathematics, 21(9), 530–536. J. O. Bullock (1994) Literacy in the Language of Mathematics. The American Mathematical Monthly, 101(8), 735–743. L. Buschman (1995) Communicating in the Language of Mathematics. Teaching Children Mathematics, 1(6), 324–329. B. R. Jones, P. F. Hopper, D. P. Franz, L. Knott, & T. A. Evitts (2008) Mathematics: A Second Language. The Mathematics Teacher, 102(4), 307–312. JSTOR. C. Morgan (1996) "The Language of Mathematics": Towards a Critical Analysis of Mathematics Texts. For the Learning of Mathematics, 16(3), 2–10. J. K. Moulton (1946) The Language of Mathematics. The Mathematics Teacher, 39(3), 131–133.
Wikipedia:Laplace invariant#0
In differential equations, the Laplace invariant of any of certain differential operators is a certain function of the coefficients and their derivatives. Consider a bivariate hyperbolic differential operator of the second order ∂ x ∂ y + a ∂ x + b ∂ y + c , {\displaystyle \partial _{x}\,\partial _{y}+a\,\partial _{x}+b\,\partial _{y}+c,\,} whose coefficients a = a ( x , y ) , b = b ( x , y ) , c = c ( x , y ) , {\displaystyle a=a(x,y),\ \ b=b(x,y),\ \ c=c(x,y),} are smooth functions of two variables. Its Laplace invariants have the form a ^ = c − a b − a x and b ^ = c − a b − b y . {\displaystyle {\hat {a}}=c-ab-a_{x}\quad {\text{and}}\quad {\hat {b}}=c-ab-b_{y}.} Their importance is due to the classical theorem: Theorem: Two operators of the form are equivalent under gauge transformations if and only if their Laplace invariants coincide pairwise. Here the operators A and A ~ {\displaystyle A\quad {\text{and}}\quad {\tilde {A}}} are called equivalent if there is a gauge transformation that takes one to the other: A ~ g = e − φ A ( e φ g ) ≡ A φ g . {\displaystyle {\tilde {A}}g=e^{-\varphi }A(e^{\varphi }g)\equiv A_{\varphi }g.} Laplace invariants can be regarded as factorization "remainders" for the initial operator A: ∂ x ∂ y + a ∂ x + b ∂ y + c = { ( ∂ x + b ) ( ∂ y + a ) − a b − a x + c , ( ∂ y + a ) ( ∂ x + b ) − a b − b y + c . {\displaystyle \partial _{x}\,\partial _{y}+a\,\partial _{x}+b\,\partial _{y}+c=\left\{{\begin{array}{c}(\partial _{x}+b)(\partial _{y}+a)-ab-a_{x}+c,\\(\partial _{y}+a)(\partial _{x}+b)-ab-b_{y}+c.\end{array}}\right.} If at least one of the Laplace invariants is not equal to zero, i.e. c − a b − a x ≠ 0 and/or c − a b − b y ≠ 0 , {\displaystyle c-ab-a_{x}\neq 0\quad {\text{and/or}}\quad c-ab-b_{y}\neq 0,} then this representation is a first step of the Laplace–Darboux transformations used for solving non-factorizable bivariate linear partial differential equations (LPDEs). If both Laplace invariants are equal to zero, i.e.
c − a b − a x = 0 and c − a b − b y = 0 , {\displaystyle c-ab-a_{x}=0\quad {\text{and}}\quad c-ab-b_{y}=0,} then the differential operator A is factorizable and the corresponding linear partial differential equation of second order is solvable. Laplace invariants have been introduced for a bivariate linear partial differential operator (LPDO) of order 2 and of hyperbolic type. They are a particular case of generalized invariants which can be constructed for a bivariate LPDO of arbitrary order and arbitrary type; see Invariant factorization of LPDOs. == See also == Partial derivative Invariant (mathematics) Invariant theory == References == G. Darboux, "Leçons sur la théorie générale des surfaces", 2nd ed., Gauthier-Villars (1912) G. Tzitzeica, "Sur un théorème de M. Darboux", Comptes Rendus de l'Académie des Sciences 150 (1910), pp. 955–956; 971–974 L. Bianchi, "Lezioni di geometria differenziale", Zanichelli, Bologna (1924) A. B. Shabat, "On the theory of Laplace–Darboux transformations", J. Theor. Math. Phys., Vol. 103, No. 1, pp. 170–175 (1995) A. N. Leznov, M. P. Saveliev, "Group-theoretical methods for integration of non-linear dynamical systems" (Russian), Moscow, Nauka (1985). English translation: Progress in Physics, 15, Birkhauser Verlag, Basel (1992)
Wikipedia:Laplace operator#0
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇ ⋅ ∇ {\displaystyle \nabla \cdot \nabla } , ∇ 2 {\displaystyle \nabla ^{2}} (where ∇ {\displaystyle \nabla } is the nabla operator), or Δ {\displaystyle \Delta } . In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf (p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f (p). The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation Δf = 0 are called harmonic functions and represent the possible gravitational potentials in regions of vacuum. The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology. 
== Definition == The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence ( ∇ ⋅ {\displaystyle \nabla \cdot } ) of the gradient ( ∇ f {\displaystyle \nabla f} ). Thus if f {\displaystyle f} is a twice-differentiable real-valued function, then the Laplacian of f {\displaystyle f} is the real-valued function defined by: where the latter notations derive from formally writing: ∇ = ( ∂ ∂ x 1 , … , ∂ ∂ x n ) . {\displaystyle \nabla =\left({\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right).} Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates xi: As a second-order differential operator, the Laplace operator maps Ck functions to Ck−2 functions for k ≥ 2. It is a linear operator Δ : Ck(Rn) → Ck−2(Rn), or more generally, an operator Δ : Ck(Ω) → Ck−2(Ω) for any open set Ω ⊆ Rn. Alternatively, the Laplace operator can be defined as: ∇ 2 f ( x → ) = lim R → 0 2 n R 2 ( f s h e l l R − f ( x → ) ) = lim R → 0 2 n A n − 1 R 1 + n ∫ s h e l l R f ( r → ) − f ( x → ) d r n − 1 {\displaystyle \nabla ^{2}f({\vec {x}})=\lim _{R\rightarrow 0}{\frac {2n}{R^{2}}}(f_{shell_{R}}-f({\vec {x}}))=\lim _{R\rightarrow 0}{\frac {2n}{A_{n-1}R^{1+n}}}\int _{shell_{R}}f({\vec {r}})-f({\vec {x}})dr^{n-1}} where n {\displaystyle n} is the dimension of the space, f s h e l l R {\displaystyle f_{shell_{R}}} is the average value of f {\displaystyle f} on the surface of an n-sphere of radius R {\displaystyle R} , ∫ s h e l l R f ( r → ) d r n − 1 {\displaystyle \int _{shell_{R}}f({\vec {r}})dr^{n-1}} is the surface integral over an n-sphere of radius R {\displaystyle R} , and A n − 1 {\displaystyle A_{n-1}} is the hypervolume of the boundary of a unit n-sphere. 
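In two dimensions, the shell-average characterization above can be sanity-checked numerically: average f over a small circle around p and rescale the deviation from f(p) by 2n/R² with n = 2. A sketch (the test function, evaluation point, radius, and sample count are arbitrary illustrative choices):

```python
import math

def laplacian_via_circle(f, x0, y0, r=1e-3, samples=20000):
    # n = 2 case of the shell formula: lap f ≈ (2n / r^2) * (circle average - f(p))
    avg = sum(
        f(x0 + r * math.cos(2 * math.pi * i / samples),
          y0 + r * math.sin(2 * math.pi * i / samples))
        for i in range(samples)
    ) / samples
    return (4.0 / r ** 2) * (avg - f(x0, y0))

f = lambda x, y: x ** 3 + 2 * y ** 2   # exact Laplacian: 6x + 4
print(laplacian_via_circle(f, 1.0, 0.5))  # ≈ 6*1 + 4 = 10
```

For this polynomial the circle average equals f(p) + (r²/4)·Δf exactly, so the estimate matches the exact value up to floating-point error.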
== Analytic and geometric Laplacians == There are two conflicting conventions as to how the Laplace operator is defined: The "analytic" Laplacian, which could be characterized in R n {\displaystyle \mathbb {R} ^{n}} as Δ = ∇ 2 = ∑ j = 1 n ( ∂ ∂ x j ) 2 , {\displaystyle \Delta =\nabla ^{2}=\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2},} which is negative-definite in the sense that ∫ R n φ ( x ) ¯ Δ φ ( x ) d x = − ∫ R n | ∇ φ ( x ) | 2 d x < 0 {\displaystyle \int _{\mathbb {R} ^{n}}{\overline {\varphi (x)}}\Delta \varphi (x)\,dx=-\int _{\mathbb {R} ^{n}}|\nabla \varphi (x)|^{2}\,dx<0} for any smooth compactly supported function φ ∈ C c ∞ ( R n ) {\displaystyle \varphi \in C_{c}^{\infty }(\mathbb {R} ^{n})} that is not identically zero; The "geometric", positive-definite Laplacian defined by Δ = − ∇ 2 = − ∑ j = 1 n ( ∂ ∂ x j ) 2 . {\displaystyle \Delta =-\nabla ^{2}=-\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2}.} == Motivation == === Diffusion === In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium. Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V (also called S) of any smooth region V is zero, provided there is no source or sink within V: ∫ S ∇ u ⋅ n d S = 0 , {\displaystyle \int _{S}\nabla u\cdot \mathbf {n} \,dS=0,} where n is the outward unit normal to the boundary of V. By the divergence theorem, ∫ V div ⁡ ∇ u d V = ∫ S ∇ u ⋅ n d S = 0. {\displaystyle \int _{V}\operatorname {div} \nabla u\,dV=\int _{S}\nabla u\cdot \mathbf {n} \,dS=0.} Since this holds for all smooth regions V, one can show that it implies: div ⁡ ∇ u = Δ u = 0. {\displaystyle \operatorname {div} \nabla u=\Delta u=0.} The left-hand side of this equation is the Laplace operator, and the entire equation Δu = 0 is known as Laplace's equation. Solutions of the Laplace equation, i.e.
functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion. The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation. This interpretation of the Laplacian is also explained by the following fact about averages. === Averages === Given a twice continuously differentiable function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and a point p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} , the average value of f {\displaystyle f} over the ball with radius h {\displaystyle h} centered at p {\displaystyle p} is: f ¯ B ( p , h ) = f ( p ) + Δ f ( p ) 2 ( n + 2 ) h 2 + o ( h 2 ) for h → 0 {\displaystyle {\overline {f}}_{B}(p,h)=f(p)+{\frac {\Delta f(p)}{2(n+2)}}h^{2}+o(h^{2})\quad {\text{for}}\;\;h\to 0} Similarly, the average value of f {\displaystyle f} over the sphere (the boundary of a ball) with radius h {\displaystyle h} centered at p {\displaystyle p} is: f ¯ S ( p , h ) = f ( p ) + Δ f ( p ) 2 n h 2 + o ( h 2 ) for h → 0. {\displaystyle {\overline {f}}_{S}(p,h)=f(p)+{\frac {\Delta f(p)}{2n}}h^{2}+o(h^{2})\quad {\text{for}}\;\;h\to 0.} === Density associated with a potential === If φ denotes the electrostatic potential associated to a charge distribution q, then the charge distribution itself is given by the negative of the Laplacian of φ: q = − ε 0 Δ φ , {\displaystyle q=-\varepsilon _{0}\Delta \varphi ,} where ε0 is the electric constant. This is a consequence of Gauss's law. Indeed, if V is any smooth region with boundary ∂V, then by Gauss's law the flux of the electrostatic field E across the boundary is proportional to the charge enclosed: ∫ ∂ V E ⋅ n d S = ∫ V div ⁡ E d V = 1 ε 0 ∫ V q d V . 
{\displaystyle \int _{\partial V}\mathbf {E} \cdot \mathbf {n} \,dS=\int _{V}\operatorname {div} \mathbf {E} \,dV={\frac {1}{\varepsilon _{0}}}\int _{V}q\,dV.} where the first equality is due to the divergence theorem. Since the electrostatic field is the (negative) gradient of the potential, this gives: − ∫ V div ⁡ ( grad ⁡ φ ) d V = 1 ε 0 ∫ V q d V . {\displaystyle -\int _{V}\operatorname {div} (\operatorname {grad} \varphi )\,dV={\frac {1}{\varepsilon _{0}}}\int _{V}q\,dV.} Since this holds for all regions V, we must have div ⁡ ( grad ⁡ φ ) = − 1 ε 0 q {\displaystyle \operatorname {div} (\operatorname {grad} \varphi )=-{\frac {1}{\varepsilon _{0}}}q} The same approach implies that the negative of the Laplacian of the gravitational potential is the mass distribution. Often the charge (or mass) distribution is given, and the associated potential is unknown. Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation. === Energy minimization === Another motivation for the Laplacian appearing in physics is that solutions to Δf = 0 in a region U are functions that make the Dirichlet energy functional stationary: E ( f ) = 1 2 ∫ U ‖ ∇ f ‖ 2 d x . {\displaystyle E(f)={\frac {1}{2}}\int _{U}\lVert \nabla f\rVert ^{2}\,dx.} To see this, suppose f : U → R is a function, and u : U → R is a function that vanishes on the boundary of U. Then: d d ε | ε = 0 E ( f + ε u ) = ∫ U ∇ f ⋅ ∇ u d x = − ∫ U u Δ f d x {\displaystyle \left.{\frac {d}{d\varepsilon }}\right|_{\varepsilon =0}E(f+\varepsilon u)=\int _{U}\nabla f\cdot \nabla u\,dx=-\int _{U}u\,\Delta f\,dx} where the last equality follows using Green's first identity. This calculation shows that if Δf = 0, then E is stationary around f. Conversely, if E is stationary around f, then Δf = 0 by the fundamental lemma of calculus of variations.
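The ball- and sphere-average expansions from the Averages subsection can also be probed numerically. The sketch below is an illustration (the test function, evaluation point, and numpy are my own choices): in n = 2 dimensions the average of f over a small disk of radius h should exceed f(p) by about Δf(p) h²/(2(n+2)) = h², and the average over the bounding circle by about Δf(p) h²/(2n) = 2h², for f(x, y) = x² + 3y² with Δf = 8.

```python
import numpy as np

def f(x, y):
    return x**2 + 3 * y**2    # Delta f = 8 everywhere

n, h = 2, 1e-2
p = (0.7, -0.3)               # arbitrary evaluation point

theta = np.linspace(0.0, 2.0 * np.pi, 2001)[:-1]
radii = np.linspace(0.0, h, 2001)
r_mid = (radii[:-1] + radii[1:]) / 2          # midpoint rule in the radial direction

# average of f over the circle of radius s centered at p, for each midpoint radius
circle_avg = np.array([np.mean(f(p[0] + s * np.cos(theta), p[1] + s * np.sin(theta)))
                       for s in r_mid])
# area element is proportional to r, hence the r-weighted radial average
ball_avg = np.sum(circle_avg * r_mid) / np.sum(r_mid)
sphere_avg = np.mean(f(p[0] + h * np.cos(theta), p[1] + h * np.sin(theta)))

ball_coeff = (ball_avg - f(*p)) / h**2      # should approach Delta f / (2(n+2)) = 1
sphere_coeff = (sphere_avg - f(*p)) / h**2  # should approach Delta f / (2n) = 2
```

The two coefficients 1 and 2 recover exactly the h² prefactors in the expansions above for this choice of f.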
== Coordinate expressions == === Two dimensions === The Laplace operator in two dimensions is given by: In Cartesian coordinates, Δ f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}} where x and y are the standard Cartesian coordinates of the xy-plane. In polar coordinates, Δ f = 1 r ∂ ∂ r ( r ∂ f ∂ r ) + 1 r 2 ∂ 2 f ∂ θ 2 = ∂ 2 f ∂ r 2 + 1 r ∂ f ∂ r + 1 r 2 ∂ 2 f ∂ θ 2 , {\displaystyle {\begin{aligned}\Delta f&={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}\\&={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}},\end{aligned}}} where r represents the radial distance and θ the angle. === Three dimensions === In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems. In Cartesian coordinates, Δ f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.} In cylindrical coordinates, Δ f = 1 ρ ∂ ∂ ρ ( ρ ∂ f ∂ ρ ) + 1 ρ 2 ∂ 2 f ∂ φ 2 + ∂ 2 f ∂ z 2 , {\displaystyle \Delta f={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}\left(\rho {\frac {\partial f}{\partial \rho }}\right)+{\frac {1}{\rho ^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}},} where ρ {\displaystyle \rho } represents the radial distance, φ the azimuth angle and z the height. 
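As a consistency check on the polar form, one can compare it with the Cartesian Laplacian on a sample function. This is a sympy sketch (sympy and the polynomial test function are my own choices, not from the article):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r, th = sp.symbols('r theta', positive=True)

g = x**3 * y + x * y**2            # arbitrary smooth test function
cart = sp.diff(g, x, 2) + sp.diff(g, y, 2)

# the same function written in polar coordinates x = r cos(theta), y = r sin(theta)
gp = g.subs({x: r * sp.cos(th), y: r * sp.sin(th)})
polar = sp.diff(r * sp.diff(gp, r), r) / r + sp.diff(gp, th, 2) / r**2

# the two Laplacians agree once the Cartesian result is rewritten in (r, theta)
residual = sp.simplify(polar - cart.subs({x: r * sp.cos(th), y: r * sp.sin(th)}))
```

Note that the Cartesian Laplacian must itself be rewritten in (r, θ) before comparing; otherwise the residual mixes the two coordinate systems.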
In spherical coordinates: Δ f = 1 r 2 ∂ ∂ r ( r 2 ∂ f ∂ r ) + 1 r 2 sin ⁡ θ ∂ ∂ θ ( sin ⁡ θ ∂ f ∂ θ ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 , {\displaystyle \Delta f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},} or Δ f = 1 r ∂ 2 ∂ r 2 ( r f ) + 1 r 2 sin ⁡ θ ∂ ∂ θ ( sin ⁡ θ ∂ f ∂ θ ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 . {\displaystyle \Delta f={\frac {1}{r}}{\frac {\partial ^{2}}{\partial r^{2}}}(rf)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}.} By expanding the first and second terms, these expressions read Δ f = ∂ 2 f ∂ r 2 + 2 r ∂ f ∂ r + 1 r 2 sin ⁡ θ ( cos ⁡ θ ∂ f ∂ θ + sin ⁡ θ ∂ 2 f ∂ θ 2 ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 , {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}\sin \theta }}\left(\cos \theta {\frac {\partial f}{\partial \theta }}+\sin \theta {\frac {\partial ^{2}f}{\partial \theta ^{2}}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},} where φ represents the azimuthal angle and θ the zenith angle or co-latitude. In particular, the above is equivalent to Δ f = ∂ 2 f ∂ r 2 + 2 r ∂ f ∂ r + 1 r 2 Δ S 2 f , {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}\Delta _{S^{2}}f,} where Δ S 2 f {\displaystyle \Delta _{S^{2}}f} is the Laplace–Beltrami operator on the unit sphere.
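The equivalence of the two radial forms, (1/r²)∂r(r² ∂f/∂r) versus (1/r)∂²(r f)/∂r², holds for any twice-differentiable f and can be verified symbolically. A sympy sketch (sympy is my own choice of tool, not from the article):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)

form1 = sp.diff(r**2 * sp.diff(f, r), r) / r**2   # (1/r^2) d/dr (r^2 df/dr)
form2 = sp.diff(r * f, r, 2) / r                  # (1/r) d^2/dr^2 (r f)
radial_identity = sp.simplify(form1 - form2)      # both equal f'' + (2/r) f'
```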
In general curvilinear coordinates (ξ1, ξ2, ξ3): Δ = ∇ ξ m ⋅ ∇ ξ n ∂ 2 ∂ ξ m ∂ ξ n + ∇ 2 ξ m ∂ ∂ ξ m = g m n ( ∂ 2 ∂ ξ m ∂ ξ n − Γ m n l ∂ ∂ ξ l ) , {\displaystyle \Delta =\nabla \xi ^{m}\cdot \nabla \xi ^{n}{\frac {\partial ^{2}}{\partial \xi ^{m}\,\partial \xi ^{n}}}+\nabla ^{2}\xi ^{m}{\frac {\partial }{\partial \xi ^{m}}}=g^{mn}\left({\frac {\partial ^{2}}{\partial \xi ^{m}\,\partial \xi ^{n}}}-\Gamma _{mn}^{l}{\frac {\partial }{\partial \xi ^{l}}}\right),} where summation over the repeated indices is implied, gmn is the inverse metric tensor and Γl mn are the Christoffel symbols for the selected coordinates. === N dimensions === In arbitrary curvilinear coordinates in N dimensions (ξ1, ..., ξN), we can write the Laplacian in terms of the inverse metric tensor, g i j {\displaystyle g^{ij}} : Δ = 1 det g ∂ ∂ ξ i ( det g g i j ∂ ∂ ξ j ) , {\displaystyle \Delta ={\frac {1}{\sqrt {\det g}}}{\frac {\partial }{\partial \xi ^{i}}}\left({\sqrt {\det g}}\,g^{ij}{\frac {\partial }{\partial \xi ^{j}}}\right),} from the Voss-Weyl formula for the divergence. In spherical coordinates in N dimensions, with the parametrization x = rθ ∈ RN with r representing a positive real radius and θ an element of the unit sphere SN−1, Δ f = ∂ 2 f ∂ r 2 + N − 1 r ∂ f ∂ r + 1 r 2 Δ S N − 1 f {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {N-1}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}\Delta _{S^{N-1}}f} where ΔSN−1 is the Laplace–Beltrami operator on the (N − 1)-sphere, known as the spherical Laplacian. The two radial derivative terms can be equivalently rewritten as: 1 r N − 1 ∂ ∂ r ( r N − 1 ∂ f ∂ r ) . {\displaystyle {\frac {1}{r^{N-1}}}{\frac {\partial }{\partial r}}\left(r^{N-1}{\frac {\partial f}{\partial r}}\right).} As a consequence, the spherical Laplacian of a function defined on SN−1 ⊂ RN can be computed as the ordinary Laplacian of the function extended to RN∖{0} so that it is constant along rays, i.e., homogeneous of degree zero. 
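The metric expression above can be exercised on the familiar spherical metric in three dimensions, g = diag(1, r², r² sin²θ): plugging this into the Voss–Weyl form should reproduce the spherical-coordinate Laplacian quoted earlier. A sympy sketch (an illustration; √(det g) = r² sin θ is entered directly, assuming 0 < θ < π):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta varphi', positive=True)
coords = (r, th, ph)
f = sp.Function('f')(*coords)

# diagonal spherical metric: inverse metric entries and sqrt(det g) = r^2 sin(theta)
g_inv_diag = (1, 1 / r**2, 1 / (r**2 * sp.sin(th)**2))
sqrt_det_g = r**2 * sp.sin(th)

# Delta f = (1/sqrt(det g)) d_i ( sqrt(det g) g^{ii} d_i f ) for a diagonal metric
lap_metric = sum(sp.diff(sqrt_det_g * g_inv_diag[i] * sp.diff(f, coords[i]), coords[i])
                 for i in range(3)) / sqrt_det_g

# spherical-coordinate Laplacian from the "Three dimensions" subsection
lap_spherical = (sp.diff(r**2 * sp.diff(f, r), r) / r**2
                 + sp.diff(sp.sin(th) * sp.diff(f, th), th) / (r**2 * sp.sin(th))
                 + sp.diff(f, ph, 2) / (r**2 * sp.sin(th)**2))

metric_check = sp.simplify(lap_metric - lap_spherical)
```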
== Euclidean invariance == The Laplacian is invariant under all Euclidean transformations: rotations and translations. In two dimensions, for example, this means that: Δ ( f ( x cos ⁡ θ − y sin ⁡ θ + a , x sin ⁡ θ + y cos ⁡ θ + b ) ) = ( Δ f ) ( x cos ⁡ θ − y sin ⁡ θ + a , x sin ⁡ θ + y cos ⁡ θ + b ) {\displaystyle \Delta (f(x\cos \theta -y\sin \theta +a,x\sin \theta +y\cos \theta +b))=(\Delta f)(x\cos \theta -y\sin \theta +a,x\sin \theta +y\cos \theta +b)} for all θ, a, and b. In arbitrary dimensions, Δ ( f ∘ ρ ) = ( Δ f ) ∘ ρ {\displaystyle \Delta (f\circ \rho )=(\Delta f)\circ \rho } whenever ρ is a rotation, and likewise: Δ ( f ∘ τ ) = ( Δ f ) ∘ τ {\displaystyle \Delta (f\circ \tau )=(\Delta f)\circ \tau } whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection.) In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator. == Spectral theory == The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction f with: − Δ f = λ f . {\displaystyle -\Delta f=\lambda f.} This is known as the Helmholtz equation. If Ω is a bounded domain in Rn, then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L2(Ω). This result essentially follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem). It can also be shown that the eigenfunctions are infinitely differentiable functions. More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. 
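The eigenvalue problem can be made concrete with the simplest bounded domain, the interval Ω = (0, 1): the functions sin(nπx) are Dirichlet eigenfunctions of −Δ with eigenvalues (nπ)². This is a standard textbook example, not taken from the article; the sympy sketch below confirms it:

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', integer=True, positive=True)

f = sp.sin(n * sp.pi * x)       # candidate eigenfunction
lam = (n * sp.pi)**2            # claimed eigenvalue

residual = sp.simplify(-sp.diff(f, x, 2) - lam * f)   # -f'' - lambda*f should vanish
boundary = (f.subs(x, 0), f.subs(x, 1))               # Dirichlet boundary values
```

The boundary values vanish because sin(nπ) = 0 for integer n, so these eigenfunctions satisfy the Dirichlet condition on ∂Ω.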
When Ω is the n-sphere, the eigenfunctions of the Laplacian are the spherical harmonics. == Vector Laplacian == The vector Laplace operator, also denoted by ∇ 2 {\displaystyle \nabla ^{2}} , is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field, returning a vector quantity. When computed in orthonormal Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component. The vector Laplacian of a vector field A {\displaystyle \mathbf {A} } is defined as ∇ 2 A = ∇ ( ∇ ⋅ A ) − ∇ × ( ∇ × A ) . {\displaystyle \nabla ^{2}\mathbf {A} =\nabla (\nabla \cdot \mathbf {A} )-\nabla \times (\nabla \times \mathbf {A} ).} This definition can be seen as the Helmholtz decomposition of the vector Laplacian. In Cartesian coordinates, this reduces to the much simpler expression ∇ 2 A = ( ∇ 2 A x , ∇ 2 A y , ∇ 2 A z ) , {\displaystyle \nabla ^{2}\mathbf {A} =(\nabla ^{2}A_{x},\nabla ^{2}A_{y},\nabla ^{2}A_{z}),} where A x {\displaystyle A_{x}} , A y {\displaystyle A_{y}} , and A z {\displaystyle A_{z}} are the components of the vector field A {\displaystyle \mathbf {A} } , and ∇ 2 {\displaystyle \nabla ^{2}} applied to each vector field component denotes the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product. For expressions of the vector Laplacian in other coordinate systems see Del in cylindrical and spherical coordinates. === Generalization === The Laplacian of any tensor field T {\displaystyle \mathbf {T} } ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor: ∇ 2 T = ( ∇ ⋅ ∇ ) T .
{\displaystyle \nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} .} For the special case where T {\displaystyle \mathbf {T} } is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form. If T {\displaystyle \mathbf {T} } is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector. The formula for the vector Laplacian above may be used to avoid tensor math and may be shown to be equivalent to the divergence of the Jacobian matrix shown below for the gradient of a vector: ∇ T = ( ∇ T x , ∇ T y , ∇ T z ) = [ T x x T x y T x z T y x T y y T y z T z x T z y T z z ] , where T u v ≡ ∂ T u ∂ v . {\displaystyle \nabla \mathbf {T} =(\nabla T_{x},\nabla T_{y},\nabla T_{z})={\begin{bmatrix}T_{xx}&T_{xy}&T_{xz}\\T_{yx}&T_{yy}&T_{yz}\\T_{zx}&T_{zy}&T_{zz}\end{bmatrix}},{\text{ where }}T_{uv}\equiv {\frac {\partial T_{u}}{\partial v}}.} And, in the same manner, a dot product, which evaluates to a vector, of a vector by the gradient of another vector (a tensor of 2nd degree) can be seen as a product of matrices: A ⋅ ∇ B = [ A x A y A z ] ∇ B = [ A ⋅ ∇ B x A ⋅ ∇ B y A ⋅ ∇ B z ] . {\displaystyle \mathbf {A} \cdot \nabla \mathbf {B} ={\begin{bmatrix}A_{x}&A_{y}&A_{z}\end{bmatrix}}\nabla \mathbf {B} ={\begin{bmatrix}\mathbf {A} \cdot \nabla B_{x}&\mathbf {A} \cdot \nabla B_{y}&\mathbf {A} \cdot \nabla B_{z}\end{bmatrix}}.} This identity is a coordinate dependent result, and is not general. 
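The Cartesian identity ∇²A = ∇(∇⋅A) − ∇×(∇×A) = (∇²Ax, ∇²Ay, ∇²Az) can be checked componentwise on a sample field. A sympy sketch (the vector field below is a made-up example, not from the article):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
# an arbitrary smooth vector field for testing
A = [x**2 * y * z, sp.sin(x) * z**2, x * y**3]

def grad(s):            # gradient of a scalar field
    return [sp.diff(s, v) for v in (x, y, z)]

def div(F):             # divergence of a vector field
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):            # curl in Cartesian coordinates
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

def scalar_lap(s):      # componentwise scalar Laplacian
    return sum(sp.diff(s, v, 2) for v in (x, y, z))

lhs = [g - c for g, c in zip(grad(div(A)), curl(curl(A)))]
rhs = [scalar_lap(c) for c in A]
vector_identity = [sp.simplify(l - r) for l, r in zip(lhs, rhs)]   # all zero
```

As the surrounding text notes, this componentwise reduction is special to orthonormal Cartesian coordinates; in curvilinear coordinates the unit vectors themselves vary and the simple form fails.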
=== Use in physics === An example of the usage of the vector Laplacian is the Navier-Stokes equations for a Newtonian incompressible flow: ρ ( ∂ v ∂ t + ( v ⋅ ∇ ) v ) = ρ f − ∇ p + μ ( ∇ 2 v ) , {\displaystyle \rho \left({\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} \right)=\rho \mathbf {f} -\nabla p+\mu \left(\nabla ^{2}\mathbf {v} \right),} where the term with the vector Laplacian of the velocity field μ ( ∇ 2 v ) {\displaystyle \mu \left(\nabla ^{2}\mathbf {v} \right)} represents the viscous stresses in the fluid. Another example is the wave equation for the electric field that can be derived from Maxwell's equations in the absence of charges and currents: ∇ 2 E − μ 0 ϵ 0 ∂ 2 E ∂ t 2 = 0. {\displaystyle \nabla ^{2}\mathbf {E} -\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}=0.} This equation can also be written as: ◻ E = 0 , {\displaystyle \Box \,\mathbf {E} =0,} where ◻ ≡ 1 c 2 ∂ 2 ∂ t 2 − ∇ 2 , {\displaystyle \Box \equiv {\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2},} is the D'Alembertian, used in the Klein–Gordon equation. == Some properties == First of all, we say that a smooth function u : Ω ⊂ R N → R {\displaystyle u\colon \Omega \subset \mathbb {R} ^{N}\to \mathbb {R} } is superharmonic whenever − Δ u ≥ 0 {\displaystyle -\Delta u\geq 0} . Let u : Ω → R {\displaystyle u\colon \Omega \to \mathbb {R} } be a smooth function, and let K ⊂ Ω {\displaystyle K\subset \Omega } be a connected compact set. If u {\displaystyle u} is superharmonic, then, for every x ∈ K {\displaystyle x\in K} , we have u ( x ) ≥ inf Ω u + c ‖ u ‖ L 1 ( K ) , {\displaystyle u(x)\geq \inf _{\Omega }u+c\lVert u\rVert _{L^{1}(K)}\;,} for some constant c > 0 {\displaystyle c>0} depending on Ω {\displaystyle \Omega } and K {\displaystyle K} . == Generalizations == A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms. 
For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows. === Laplace–Beltrami operator === The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The Laplace–Beltrami operator, when applied to a function, is the trace (tr) of the function's Hessian: Δ f = tr ⁡ ( H ( f ) ) {\displaystyle \Delta f=\operatorname {tr} {\big (}H(f){\big )}} where the trace is taken with respect to the inverse of the metric tensor. The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which operates on tensor fields, by a similar formula. Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as Δ f = δ d f . {\displaystyle \Delta f=\delta df.} Here δ is the codifferential, which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms α by Δ α = δ d α + d δ α . {\displaystyle \Delta \alpha =\delta d\alpha +d\delta \alpha .} This is known as the Laplace–de Rham operator, which is related to the Laplace–Beltrami operator by the Weitzenböck identity. === D'Alembertian === The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic, hyperbolic, or ultrahyperbolic. In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator ◻ {\displaystyle \Box } or D'Alembertian: ◻ = 1 c 2 ∂ 2 ∂ t 2 − ∂ 2 ∂ x 2 − ∂ 2 ∂ y 2 − ∂ 2 ∂ z 2 . 
{\displaystyle \square ={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-{\frac {\partial ^{2}}{\partial x^{2}}}-{\frac {\partial ^{2}}{\partial y^{2}}}-{\frac {\partial ^{2}}{\partial z^{2}}}.} It is the generalization of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics. The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations, and it is also part of the Klein–Gordon equation, which reduces to the wave equation in the massless case. The additional factor of c in the metric is needed in physics if space and time are measured in different units; a similar factor would be required if, for example, the x direction were measured in meters while the y direction were measured in centimeters. Indeed, theoretical physicists usually work in units such that c = 1 in order to simplify the equation. The d'Alembert operator generalizes to a hyperbolic operator on pseudo-Riemannian manifolds. == See also == Laplace–Beltrami operator, generalization to submanifolds in Euclidean space and Riemannian and pseudo-Riemannian manifold. The Laplacian in differential geometry. The discrete Laplace operator is a finite-difference analog of the continuous Laplacian, defined on graphs and grids. The Laplacian is a common operator in image processing and computer vision (see the Laplacian of Gaussian, blob detector, and scale space). The list of formulas in Riemannian geometry contains expressions for the Laplacian in terms of Christoffel symbols. Weyl's lemma (Laplace equation). 
Earnshaw's theorem which shows that stable static gravitational, electrostatic or magnetic suspension is impossible. Del in cylindrical and spherical coordinates. Other situations in which a Laplacian is defined are: analysis on fractals, time scale calculus and discrete exterior calculus. == Notes == == References == Evans, L. (1998), Partial Differential Equations, American Mathematical Society, ISBN 978-0-8218-0772-9 The Feynman Lectures on Physics Vol. II Ch. 12: Electrostatic Analogs Gilbarg, D.; Trudinger, N. (2001), Elliptic Partial Differential Equations of Second Order, Springer, ISBN 978-3-540-41160-4. Schey, H. M. (1996), Div, Grad, Curl, and All That, W. W. Norton, ISBN 978-0-393-96997-9. == Further reading == The Laplacian - Richard Fitzpatrick 2006 == External links == "Laplace operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Laplacian". MathWorld. Laplacian in polar coordinates derivation Laplace equations on the fractal cubes and Casimir effect
Laplace operators in differential geometry
In differential geometry there are a number of second-order, linear, elliptic differential operators bearing the name Laplacian. This article provides an overview of some of them. == Connection Laplacian == The connection Laplacian, also known as the rough Laplacian, is a differential operator acting on the various tensor bundles of a manifold, defined in terms of a Riemannian- or pseudo-Riemannian metric. When applied to functions (i.e. tensors of rank 0), the connection Laplacian is often called the Laplace–Beltrami operator. It is defined as the trace of the second covariant derivative: Δ T = tr ∇ 2 T , {\displaystyle \Delta T={\text{tr}}\;\nabla ^{2}T,} where T is any tensor, ∇ {\displaystyle \nabla } is the Levi-Civita connection associated to the metric, and the trace is taken with respect to the metric. Recall that the second covariant derivative of T is defined as ∇ X , Y 2 T = ∇ X ∇ Y T − ∇ ∇ X Y T . {\displaystyle \nabla _{X,Y}^{2}T=\nabla _{X}\nabla _{Y}T-\nabla _{\nabla _{X}Y}T.} Note that with this definition, the connection Laplacian has negative spectrum. On functions, it agrees with the operator given as the divergence of the gradient. If the connection of interest is the Levi-Civita connection one can find a convenient formula for the Laplacian of a scalar function in terms of partial derivatives with respect to a coordinate system: Δ ϕ = | g | − 1 / 2 ∂ μ ( | g | 1 / 2 g μ ν ∂ ν ϕ ) {\displaystyle \Delta \phi =|g|^{-1/2}\partial _{\mu }\left(|g|^{1/2}g^{\mu \nu }\partial _{\nu }\phi \right)} where ϕ {\displaystyle \phi } is a scalar function, | g | {\displaystyle |g|} is absolute value of the determinant of the metric (absolute value is necessary in the pseudo-Riemannian case, e.g. in General Relativity) and g μ ν {\displaystyle g^{\mu \nu }} denotes the inverse of the metric tensor. == Hodge Laplacian == The Hodge Laplacian, also known as the Laplace–de Rham operator, is a differential operator acting on differential forms. 
(Abstractly, it is a second order operator on each exterior power of the cotangent bundle.) This operator is defined on any manifold equipped with a Riemannian- or pseudo-Riemannian metric. Δ = d δ + δ d = ( d + δ ) 2 , {\displaystyle \Delta =\mathrm {d} \delta +\delta \mathrm {d} =(\mathrm {d} +\delta )^{2},\;} where d is the exterior derivative or differential and δ is the codifferential. The Hodge Laplacian on a compact manifold has nonnegative spectrum. The connection Laplacian may also be taken to act on differential forms by restricting it to act on skew-symmetric tensors. The connection Laplacian differs from the Hodge Laplacian by means of a Weitzenböck identity. == Bochner Laplacian == The Bochner Laplacian is defined differently from the connection Laplacian, but the two will turn out to differ only by a sign, whenever the former is defined. Let M be a compact, oriented manifold equipped with a metric. Let E be a vector bundle over M equipped with a fiber metric and a compatible connection, ∇ {\displaystyle \nabla } . This connection gives rise to a differential operator ∇ : Γ ( E ) → Γ ( T ∗ M ⊗ E ) {\displaystyle \nabla :\Gamma (E)\rightarrow \Gamma (T^{*}M\otimes E)} where Γ ( E ) {\displaystyle \Gamma (E)} denotes smooth sections of E, and T*M is the cotangent bundle of M. It is possible to take the L 2 {\displaystyle L^{2}} -adjoint of ∇ {\displaystyle \nabla } , giving a differential operator ∇ ∗ : Γ ( T ∗ M ⊗ E ) → Γ ( E ) . {\displaystyle \nabla ^{*}:\Gamma (T^{*}M\otimes E)\rightarrow \Gamma (E).} The Bochner Laplacian is given by Δ = ∇ ∗ ∇ {\displaystyle \Delta =\nabla ^{*}\nabla } which is a second order operator acting on sections of the vector bundle E. 
Note that the connection Laplacian and Bochner Laplacian differ only by a sign: ∇ ∗ ∇ = − tr ∇ 2 {\displaystyle \nabla ^{*}\nabla =-{\text{tr}}\,\nabla ^{2}} == Lichnerowicz Laplacian == The Lichnerowicz Laplacian is defined on symmetric tensors by taking ∇ : Γ ( Sym k ⁡ ( T M ) ) → Γ ( Sym k + 1 ⁡ ( T M ) ) {\displaystyle \nabla :\Gamma (\operatorname {Sym} ^{k}(TM))\to \Gamma (\operatorname {Sym} ^{k+1}(TM))} to be the symmetrized covariant derivative. The Lichnerowicz Laplacian is then defined by Δ L = ∇ ∗ ∇ {\displaystyle \Delta _{L}=\nabla ^{*}\nabla } , where ∇ ∗ {\displaystyle \nabla ^{*}} is the formal adjoint. The Lichnerowicz Laplacian differs from the usual tensor Laplacian by a Weitzenböck formula involving the Riemann curvature tensor, and has natural applications in the study of Ricci flow and the prescribed Ricci curvature problem. == Conformal Laplacian == On a Riemannian manifold, one can define the conformal Laplacian as an operator on smooth functions; it differs from the Laplace–Beltrami operator by a term involving the scalar curvature of the underlying metric. In dimension n ≥ 3, the conformal Laplacian, denoted L, acts on a smooth function u by L u = − 4 n − 1 n − 2 Δ u + R u , {\displaystyle Lu=-4{\frac {n-1}{n-2}}\Delta u+Ru,} where Δ is the Laplace–Beltrami operator (of negative spectrum), and R is the scalar curvature. This operator often makes an appearance when studying how the scalar curvature behaves under a conformal change of a Riemannian metric. If n ≥ 3 and g is a metric and u is a smooth, positive function, then the conformal metric g ~ = u 4 / ( n − 2 ) g {\displaystyle {\tilde {g}}=u^{4/(n-2)}g\,} has scalar curvature given by R ~ = u − ( n + 2 ) / ( n − 2 ) L u . {\displaystyle {\tilde {R}}=u^{-(n+2)/(n-2)}Lu.\,} More generally, the action of the conformal Laplacian of g̃ on smooth functions φ can be related to that of the conformal Laplacian of g via the transformation rule L ~ ( φ ) = u − ( n + 2 ) / ( n − 2 ) L ( u φ ) .
{\displaystyle {\tilde {L}}(\varphi )=u^{-(n+2)/(n-2)}L(u\varphi ).} == Complex differential geometry == In complex differential geometry, the Laplace operator (also known as the Laplacian) is defined in terms of complex differential forms. ∂ f = ∑ ( ∂ f ∂ x k − i ∂ f ∂ y k ) d z k {\displaystyle \partial f=\sum \left({\frac {\partial f}{\partial x_{k}}}-i{\frac {\partial f}{\partial y_{k}}}\right)dz_{k}} This operator acts on complex-valued functions of a complex variable. It is essentially the complex conjugate of the ordinary partial derivative with respect to. It is important in complex analysis and complex differential geometry for studying functions of complex variables. == Comparisons == Below is a table summarizing the various Laplacian operators, including the most general vector bundle on which they act, and what structure is required for the manifold and vector bundle. All of these operators are second order, linear, and elliptic. == See also == Weitzenböck identity == References ==
Laplace principle (large deviations theory)
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures. Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events. == Introductory examples == Any large deviation is done in the least unlikely of all the unlikely ways! === An elementary example === Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by X i {\displaystyle X_{i}} , where we encode head as 1 and tail as 0. Now let M N {\displaystyle M_{N}} denote the mean value after N {\displaystyle N} trials, namely M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} . Then M N {\displaystyle M_{N}} lies between 0 and 1. From the law of large numbers it follows that as N grows, the distribution of M N {\displaystyle M_{N}} converges to 0.5 = E ⁡ [ X ] {\displaystyle 0.5=\operatorname {E} [X]} (the expected value of a single coin toss). Moreover, by the central limit theorem, it follows that M N {\displaystyle M_{N}} is approximately normally distributed for large N {\displaystyle N} . The central limit theorem can provide more detailed information about the behavior of M N {\displaystyle M_{N}} than the law of large numbers. 
For example, we can approximately find a tail probability of M N {\displaystyle M_{N}} – the probability that M N {\displaystyle M_{N}} is greater than some value x {\displaystyle x} – for a fixed value of N {\displaystyle N} . However, the approximation by the central limit theorem may not be accurate if x {\displaystyle x} is far from E ⁡ [ X i ] {\displaystyle \operatorname {E} [X_{i}]} and N {\displaystyle N} is not sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as N → ∞ {\displaystyle N\to \infty } . However, the large deviation theory can provide answers for such problems. Let us make this statement more precise. For a given value 0.5 < x < 1 {\displaystyle 0.5<x<1} , let us compute the tail probability P ( M N > x ) {\displaystyle P(M_{N}>x)} . Define I ( x ) = x ln ⁡ x + ( 1 − x ) ln ⁡ ( 1 − x ) + ln ⁡ 2 {\displaystyle I(x)=x\ln {x}+(1-x)\ln(1-x)+\ln {2}} . Note that the function I ( x ) {\displaystyle I(x)} is a convex, nonnegative function that is zero at x = 1 2 {\displaystyle x={\tfrac {1}{2}}} and increases as x {\displaystyle x} approaches 1 {\displaystyle 1} . It is the negative of the Bernoulli entropy with p = 1 2 {\displaystyle p={\tfrac {1}{2}}} ; that it's appropriate for coin tosses follows from the asymptotic equipartition property applied to a Bernoulli trial. Then by Chernoff's inequality, it can be shown that P ( M N > x ) < exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)<\exp(-NI(x))} . This bound is rather sharp, in the sense that I ( x ) {\displaystyle I(x)} cannot be replaced with a larger number which would yield a strict inequality for all positive N {\displaystyle N} . (However, the exponential bound can still be reduced by a subexponential factor on the order of 1 / N {\displaystyle 1/{\sqrt {N}}} ; this follows from the Stirling approximation applied to the binomial coefficient appearing in the Bernoulli distribution.) 
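The Chernoff bound above is easy to probe numerically. The following sketch is an illustration only (N = 1000 and x = 0.6 are arbitrary choices): it compares the exact binomial tail probability P(M_N > x) with the bound exp(−N I(x)).

```python
import math
from math import comb

# Rate function I(x) = x ln x + (1-x) ln(1-x) + ln 2 for a fair coin
def rate(x):
    return x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2)

# Exact tail P(M_N > x) = P(S_N > N x) for S_N ~ Binomial(N, 1/2)
def tail(N, x):
    k_min = math.floor(N * x) + 1
    return sum(comb(N, k) for k in range(k_min, N + 1)) / 2**N

N, x = 1000, 0.6
chernoff = math.exp(-N * rate(x))
exact = tail(N, x)
# the bound holds: 0 < exact < chernoff, and both are exponentially small in N
```

Consistent with the remark about the subexponential factor, the exact tail sits below the Chernoff bound by roughly a factor on the order of 1/√N times a constant.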
Hence, we obtain the following result: P ( M N > x ) ≈ exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)\approx \exp(-NI(x))} . The probability P ( M N > x ) {\displaystyle P(M_{N}>x)} decays exponentially as N → ∞ {\displaystyle N\to \infty } at a rate depending on x. This formula approximates any tail probability of the sample mean of i.i.d. variables and gives its convergence as the number of samples increases. === Large deviations for sums of independent random variables === In the above example of coin-tossing we explicitly assumed that each toss is an independent trial, and the probability of getting head or tail is always the same. Let X , X 1 , X 2 , … {\displaystyle X,X_{1},X_{2},\ldots } be independent and identically distributed (i.i.d.) random variables whose common distribution satisfies a certain growth condition. Then the following limit exists: lim N → ∞ 1 N ln ⁡ P ( M N > x ) = − I ( x ) {\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\ln P(M_{N}>x)=-I(x)} . Here M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} , as before. Function I ( ⋅ ) {\displaystyle I(\cdot )} is called the "rate function" or "Cramér function" or sometimes the "entropy function". The above-mentioned limit means that for large N {\displaystyle N} , P ( M N > x ) ≈ exp ⁡ [ − N I ( x ) ] {\displaystyle P(M_{N}>x)\approx \exp[-NI(x)]} , which is the basic result of large deviations theory. If we know the probability distribution of X {\displaystyle X} , an explicit expression for the rate function can be obtained. This is given by a Legendre–Fenchel transformation, I ( x ) = sup θ > 0 [ θ x − λ ( θ ) ] {\displaystyle I(x)=\sup _{\theta >0}[\theta x-\lambda (\theta )]} , where λ ( θ ) = ln ⁡ E ⁡ [ exp ⁡ ( θ X ) ] {\displaystyle \lambda (\theta )=\ln \operatorname {E} [\exp(\theta X)]} is called the cumulant generating function (CGF) and E {\displaystyle \operatorname {E} } denotes the mathematical expectation. 
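To illustrate the Legendre–Fenchel transform concretely (an illustrative sketch, not from the source text), the coin-toss rate function can be recovered from the cumulant generating function λ(θ) = ln((1 + e^θ)/2) by a crude grid search over θ:

```python
import math

def cgf(theta):
    # Cumulant generating function of a fair coin: ln E[exp(theta * X)], X in {0, 1}
    return math.log((1.0 + math.exp(theta)) / 2.0)

def rate_via_legendre(x):
    # I(x) = sup_{theta > 0} [theta * x - cgf(theta)], approximated on a grid
    thetas = (0.001 * k for k in range(1, 20000))
    return max(theta * x - cgf(theta) for theta in thetas)

x = 0.7
closed_form = x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2)
numeric = rate_via_legendre(x)
print(closed_form, numeric)  # the two values agree to several decimal places
```

The supremum is attained at θ* = ln(x/(1−x)), which for 0.5 < x < 1 is indeed positive, so restricting the grid to θ > 0 loses nothing here.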
If X {\displaystyle X} follows a normal distribution, the rate function becomes a parabola with its apex at the mean of the normal distribution. If { X i } {\displaystyle \{X_{i}\}} is an irreducible and aperiodic Markov chain, a variant of the basic large deviations result stated above may hold. === Moderate deviations for sums of independent random variables === The previous example controlled the probability of the event [ M N > x ] {\displaystyle [M_{N}>x]} , that is, the concentration of the law of M N {\displaystyle M_{N}} on the compact set [ − x , x ] {\displaystyle [-x,x]} . It is also possible to control the probability of the event [ M N > x a N ] {\displaystyle [M_{N}>xa_{N}]} for some sequence a N → 0 {\displaystyle a_{N}\to 0} . The following is an example of a moderate deviations principle: if the X i {\displaystyle X_{i}} are centred with variance σ 2 {\displaystyle \sigma ^{2}} and the scaling satisfies a N → 0 {\displaystyle a_{N}\to 0} and N a N 2 → ∞ {\displaystyle Na_{N}^{2}\to \infty } , then, under suitable moment conditions, lim N → ∞ 1 N a N 2 ln ⁡ P ( M N > x a N ) = − x 2 2 σ 2 {\displaystyle \lim _{N\to \infty }{\frac {1}{Na_{N}^{2}}}\ln P(M_{N}>xa_{N})=-{\frac {x^{2}}{2\sigma ^{2}}}} . In particular, the limit case a N = 1 / N {\displaystyle a_{N}=1/{\sqrt {N}}} is the central limit theorem. == Formal definition == Given a Polish space X {\displaystyle {\mathcal {X}}} let { P N } {\displaystyle \{\mathbb {P} _{N}\}} be a sequence of Borel probability measures on X {\displaystyle {\mathcal {X}}} , let { a N } {\displaystyle \{a_{N}\}} be a sequence of positive real numbers such that lim N a N = ∞ {\displaystyle \lim _{N}a_{N}=\infty } , and finally let I : X → [ 0 , ∞ ] {\displaystyle I:{\mathcal {X}}\to [0,\infty ]} be a lower semicontinuous functional on X . 
{\displaystyle {\mathcal {X}}.} The sequence { P N } {\displaystyle \{\mathbb {P} _{N}\}} is said to satisfy a large deviation principle with speed { a N } {\displaystyle \{a_{N}\}} and rate I {\displaystyle I} if, and only if, for each Borel measurable set E ⊂ X {\displaystyle E\subset {\mathcal {X}}} , − inf x ∈ E ∘ I ( x ) ≤ lim _ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ lim ¯ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ − inf x ∈ E ¯ I ( x ) {\displaystyle -\inf _{x\in E^{\circ }}I(x)\leq \varliminf _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq \varlimsup _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq -\inf _{x\in {\overline {E}}}I(x)} , where E ¯ {\displaystyle {\overline {E}}} and E ∘ {\displaystyle E^{\circ }} denote respectively the closure and interior of E {\displaystyle E} . == Brief history == The first rigorous results concerning large deviations are due to the Swedish mathematician Harald Cramér, who applied them to model the insurance business. From the point of view of an insurance company, income arrives at a constant rate per month (the monthly premium), but claims arrive randomly. For the company to be successful over a certain period of time (preferably many months), the total income should exceed the total claims. Thus, to estimate the premium, one has to ask the following question: "What should we choose as the premium q {\displaystyle q} such that over N {\displaystyle N} months the total claim C = Σ X i {\displaystyle C=\Sigma X_{i}} should be less than N q {\displaystyle Nq} ?" This is precisely the kind of question addressed by large deviations theory. Cramér gave a solution to this question for i.i.d. random variables, where the rate function is expressed as a power series. A very incomplete list of mathematicians who have made important advances would include Petrov, Sanov, S.R.S. Varadhan (who has won the Abel Prize for his contribution to the theory), D. Ruelle, O.E. Lanford, Mark Freidlin, Alexander D. Wentzell, Amir Dembo, and Ofer Zeitouni. 
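As a concrete (hypothetical) version of the insurance question, suppose monthly claims are i.i.d. Exp(1) and the premium is q = 2 per month. Cramér's theorem gives the decay rate I(q) = q − 1 − ln q for the probability that total claims exceed Nq, which can be checked against the exact Gamma tail of the total claim (a sketch under these assumptions, not a computation from the source):

```python
import math

def log_gamma_tail(N, t):
    # log P(S_N > t) for S_N ~ Gamma(N, 1) (sum of N i.i.d. Exp(1) claims):
    # P(S_N > t) = exp(-t) * sum_{k=0}^{N-1} t^k / k!, computed in log space
    logs = [k * math.log(t) - math.lgamma(k + 1) for k in range(N)]
    m = max(logs)
    return -t + m + math.log(sum(math.exp(v - m) for v in logs))

N, q = 200, 2.0
empirical_rate = -log_gamma_tail(N, N * q) / N
cramer_rate = q - 1 - math.log(q)   # rate function of Exp(1) evaluated at q
print(empirical_rate, cramer_rate)  # close for large N
```

The finite-N rate slightly exceeds I(q), as the Chernoff bound requires; the gap is the subexponential correction of order (ln N)/N.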
== Applications == Principles of large deviations may be effectively applied to gather information from a probabilistic model. Thus, the theory of large deviations finds applications in information theory and risk management. In physics, the best-known applications of large deviations theory arise in thermodynamics and statistical mechanics (in connection with relating entropy to the rate function). === Large deviations and entropy === The rate function is related to the entropy in statistical mechanics. This can be heuristically seen in the following way. In statistical mechanics the entropy of a particular macro-state is related to the number of micro-states which correspond to this macro-state. In our coin tossing example the mean value M N {\displaystyle M_{N}} could designate a particular macro-state. And the particular sequence of heads and tails which gives rise to a particular value of M N {\displaystyle M_{N}} constitutes a particular micro-state. Loosely speaking, a macro-state having a higher number of micro-states giving rise to it has higher entropy. And a state with higher entropy has a higher chance of being realised in actual experiments. The macro-state with mean value of 1/2 (as many heads as tails) has the highest number of micro-states giving rise to it, and it is indeed the state with the highest entropy. And in most practical situations we shall indeed obtain this macro-state for large numbers of trials. The "rate function", on the other hand, measures the probability of appearance of a particular macro-state. The smaller the rate function, the higher the chance of a macro-state appearing. In our coin-tossing the value of the "rate function" for mean value equal to 1/2 is zero. In this way one can see the "rate function" as the negative of the "entropy". There is a relation between the "rate function" in large deviations theory and the Kullback–Leibler divergence; the connection is established by Sanov's theorem (see Sanov and Novak, ch. 14.5). 
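The link to the Kullback–Leibler divergence can be made explicit in the coin-toss example (a small illustrative sketch, not from the source): the rate function I(x) coincides with D(Bern(x) ‖ Bern(1/2)).

```python
import math

def kl_bernoulli(p, q):
    # Kullback-Leibler divergence D(Bern(p) || Bern(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def rate(x):
    # Coin-toss rate function I(x) = x ln x + (1 - x) ln(1 - x) + ln 2
    return x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2)

for x in (0.55, 0.7, 0.9):
    # the two quantities agree identically, as Sanov's theorem predicts
    assert abs(kl_bernoulli(x, 0.5) - rate(x)) < 1e-12
print("I(x) = D(Bern(x) || Bern(1/2)) verified")
```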
In a special case, large deviations are closely related to the concept of Gromov–Hausdorff limits. == See also == Large deviation principle Cramér's large deviation theorem Chernoff's inequality Sanov's theorem Contraction principle (large deviations theory), a result on how large deviations principles "push forward" Freidlin–Wentzell theorem, a large deviations principle for Itō diffusions Legendre transformation, Ensemble equivalence is based on this transformation. Laplace principle, a large deviations principle in Rd Laplace's method Schilder's theorem, a large deviations principle for Brownian motion Varadhan's lemma Extreme value theory Large deviations of Gaussian random functions == References == == Bibliography == Special invited paper: Large deviations by S. R. S. Varadhan The Annals of Probability 2008, Vol. 36, No. 2, 397–419 doi:10.1214/07-AOP348 A basic introduction to large deviations: Theory, applications, simulations, Hugo Touchette, arXiv:1106.4146. Entropy, Large Deviations and Statistical Mechanics by R.S. Ellis, Springer Publication. ISBN 3-540-29059-1 Large Deviations for Performance Analysis by Alan Weiss and Adam Shwartz. Chapman and Hall ISBN 0-412-06311-5 Large Deviations Techniques and Applications by Amir Dembo and Ofer Zeitouni. Springer ISBN 0-387-98406-2 A course on large deviations with an introduction to Gibbs measures by Firas Rassoul-Agha and Timo Seppäläinen. Grad. Stud. Math., 162. American Mathematical Society ISBN 978-0-8218-7578-0 Random Perturbations of Dynamical Systems by M.I. Freidlin and A.D. Wentzell. Springer ISBN 0-387-98362-7 "Large Deviations for Two Dimensional Navier-Stokes Equation with Multiplicative Noise", S. S. Sritharan and P. Sundar, Stochastic Processes and Their Applications, Vol. 116 (2006) 1636–1659.[2] "Large Deviations for the Stochastic Shell Model of Turbulence", U. Manna, S. S. Sritharan and P. Sundar, NoDEA Nonlinear Differential Equations Appl. 16 (2009), no. 4, 493–521.[3]
Laplace's method
In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form ∫ a b e M f ( x ) d x , {\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx,} where f {\displaystyle f} is a twice-differentiable function, M {\displaystyle M} is a large number, and the endpoints a {\displaystyle a} and b {\displaystyle b} could be infinite. This technique was originally presented in the book by Laplace (1774). In Bayesian statistics, Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate. Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference. == Concept == Let the function f ( x ) {\displaystyle f(x)} have a unique global maximum at x 0 {\displaystyle x_{0}} . M > 0 {\displaystyle M>0} is a constant here. The following two functions are considered: g ( x ) = M f ( x ) , h ( x ) = e M f ( x ) . {\displaystyle {\begin{aligned}g(x)&=Mf(x),\\h(x)&=e^{Mf(x)}.\end{aligned}}} Then, x 0 {\displaystyle x_{0}} is the global maximum of g {\displaystyle g} and h {\displaystyle h} as well. Hence: g ( x 0 ) g ( x ) = M f ( x 0 ) M f ( x ) = f ( x 0 ) f ( x ) , h ( x 0 ) h ( x ) = e M f ( x 0 ) e M f ( x ) = e M ( f ( x 0 ) − f ( x ) ) . {\displaystyle {\begin{aligned}{\frac {g(x_{0})}{g(x)}}&={\frac {Mf(x_{0})}{Mf(x)}}={\frac {f(x_{0})}{f(x)}},\\[4pt]{\frac {h(x_{0})}{h(x)}}&={\frac {e^{Mf(x_{0})}}{e^{Mf(x)}}}=e^{M(f(x_{0})-f(x))}.\end{aligned}}} As M increases, the ratio for h {\displaystyle h} will grow exponentially, while the ratio for g {\displaystyle g} does not change. Thus, significant contributions to the integral of this function will come only from points x {\displaystyle x} in a neighborhood of x 0 {\displaystyle x_{0}} , which can then be estimated. 
== General theory == To state and motivate the method, one must make several assumptions. It is assumed that x 0 {\displaystyle x_{0}} is not an endpoint of the interval of integration and that the values f ( x ) {\displaystyle f(x)} cannot be very close to f ( x 0 ) {\displaystyle f(x_{0})} unless x {\displaystyle x} is close to x 0 {\displaystyle x_{0}} . f ( x ) {\displaystyle f(x)} can be expanded around x 0 {\displaystyle x_{0}} by Taylor's theorem, f ( x ) = f ( x 0 ) + f ′ ( x 0 ) ( x − x 0 ) + 1 2 f ″ ( x 0 ) ( x − x 0 ) 2 + R {\displaystyle f(x)=f(x_{0})+f'(x_{0})(x-x_{0})+{\frac {1}{2}}f''(x_{0})(x-x_{0})^{2}+R} where R = O ( ( x − x 0 ) 3 ) {\displaystyle R=O\left((x-x_{0})^{3}\right)} (see: big O notation). Since f {\displaystyle f} has a global maximum at x 0 {\displaystyle x_{0}} , and x 0 {\displaystyle x_{0}} is not an endpoint, it is a stationary point, i.e. f ′ ( x 0 ) = 0 {\displaystyle f'(x_{0})=0} . Therefore, the second-order Taylor polynomial approximating f ( x ) {\displaystyle f(x)} is f ( x ) ≈ f ( x 0 ) + 1 2 f ″ ( x 0 ) ( x − x 0 ) 2 . {\displaystyle f(x)\approx f(x_{0})+{\frac {1}{2}}f''(x_{0})(x-x_{0})^{2}.} Then, just one more step is needed to get a Gaussian distribution. Since x 0 {\displaystyle x_{0}} is a global maximum of the function f {\displaystyle f} it can be stated, by definition of the second derivative, that f ″ ( x 0 ) ≤ 0 {\displaystyle f''(x_{0})\leq 0} , thus giving the relation f ( x ) ≈ f ( x 0 ) − 1 2 | f ″ ( x 0 ) | ( x − x 0 ) 2 {\displaystyle f(x)\approx f(x_{0})-{\frac {1}{2}}|f''(x_{0})|(x-x_{0})^{2}} for x {\displaystyle x} close to x 0 {\displaystyle x_{0}} . 
The integral can then be approximated with: ∫ a b e M f ( x ) d x ≈ e M f ( x 0 ) ∫ a b e − 1 2 M | f ″ ( x 0 ) | ( x − x 0 ) 2 d x {\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx\approx e^{Mf(x_{0})}\int _{a}^{b}e^{-{\frac {1}{2}}M|f''(x_{0})|(x-x_{0})^{2}}\,dx} If f ″ ( x 0 ) < 0 {\displaystyle f''(x_{0})<0} this latter integral becomes a Gaussian integral if we replace the limits of integration by − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } ; when M {\displaystyle M} is large this creates only a small error because the exponential decays very fast away from x 0 {\displaystyle x_{0}} . Computing this Gaussian integral we obtain: ∫ a b e M f ( x ) d x ≈ 2 π M | f ″ ( x 0 ) | e M f ( x 0 ) as M → ∞ . {\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|f''(x_{0})|}}}e^{Mf(x_{0})}{\text{ as }}M\to \infty .} A generalization of this method and extension to arbitrary precision is provided by the book Fog (2008). === Formal statement and proof === Suppose f ( x ) {\displaystyle f(x)} is a twice continuously differentiable function on [ a , b ] , {\displaystyle [a,b],} and there exists a unique point x 0 ∈ ( a , b ) {\displaystyle x_{0}\in (a,b)} such that: f ( x 0 ) = max x ∈ [ a , b ] f ( x ) and f ″ ( x 0 ) < 0. {\displaystyle f(x_{0})=\max _{x\in [a,b]}f(x)\quad {\text{and}}\quad f''(x_{0})<0.} Then: lim n → ∞ ∫ a b e n f ( x ) d x e n f ( x 0 ) 2 π n ( − f ″ ( x 0 ) ) = 1. {\displaystyle \lim _{n\to \infty }{\frac {\int _{a}^{b}e^{nf(x)}\,dx}{e^{nf(x_{0})}{\sqrt {\frac {2\pi }{n\left(-f''(x_{0})\right)}}}}}=1.} This method relies on four basic concepts; based on these, the relative error of the method can be derived. 
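A minimal numerical check of this asymptotic formula (illustrative, with the assumed choice f(x) = −cosh x, so that x₀ = 0 and f''(x₀) = −1):

```python
import math

def laplace_estimate(M):
    # Laplace approximation: sqrt(2*pi / (M * |f''(x0)|)) * exp(M * f(x0))
    # with f(x) = -cosh(x), x0 = 0, f(x0) = -1, f''(x0) = -1
    return math.sqrt(2 * math.pi / M) * math.exp(-M)

def numeric_integral(M, lo=-6.0, hi=6.0, n=120001):
    # Trapezoidal rule for the integral of exp(-M cosh x);
    # the tails beyond |x| = 6 are negligible for M >= 1
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        x = lo + i * h
        w = 1.0 if 0 < i < n - 1 else 0.5
        s += w * math.exp(-M * math.cosh(x))
    return s * h

M = 50
ratio = numeric_integral(M) / laplace_estimate(M)
print(ratio)  # tends to 1 as M grows; the leading correction is of order 1/M
```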
== Other formulations == Laplace's approximation is sometimes written as ∫ a b h ( x ) e M g ( x ) d x ≈ 2 π M | g ″ ( x 0 ) | h ( x 0 ) e M g ( x 0 ) as M → ∞ {\displaystyle \int _{a}^{b}h(x)e^{Mg(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|g''(x_{0})|}}}h(x_{0})e^{Mg(x_{0})}\ {\text{ as }}M\to \infty } where h {\displaystyle h} is positive. Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in g ( x ) {\displaystyle g(x)} and what goes into h ( x ) . {\displaystyle h(x).} In the multivariate case, where x {\displaystyle \mathbf {x} } is a d {\displaystyle d} -dimensional vector and f ( x ) {\displaystyle f(\mathbf {x} )} is a scalar function of x {\displaystyle \mathbf {x} } , Laplace's approximation is usually written as: ∫ h ( x ) e M f ( x ) d x ≈ ( 2 π M ) d / 2 h ( x 0 ) e M f ( x 0 ) | − H ( f ) ( x 0 ) | 1 / 2 as M → ∞ {\displaystyle \int h(\mathbf {x} )e^{Mf(\mathbf {x} )}\,d\mathbf {x} \approx \left({\frac {2\pi }{M}}\right)^{d/2}{\frac {h(\mathbf {x} _{0})e^{Mf(\mathbf {x} _{0})}}{\left|-H(f)(\mathbf {x} _{0})\right|^{1/2}}}{\text{ as }}M\to \infty } where H ( f ) ( x 0 ) {\displaystyle H(f)(\mathbf {x} _{0})} is the Hessian matrix of f {\displaystyle f} evaluated at x 0 {\displaystyle \mathbf {x} _{0}} and where | ⋅ | {\displaystyle |\cdot |} denotes matrix determinant. Analogously to the univariate case, the Hessian is required to be negative-definite. By the way, although x {\displaystyle \mathbf {x} } denotes a d {\displaystyle d} -dimensional vector, the term d x {\displaystyle d\mathbf {x} } denotes an infinitesimal volume here, i.e. d x := d x 1 d x 2 ⋯ d x d {\displaystyle d\mathbf {x} :=dx_{1}dx_{2}\cdots dx_{d}} . == Steepest descent extension == In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. 
In particular, if no point x0 where the derivative of f {\displaystyle f} vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents). The appropriate formulation for the complex z-plane is ∫ a b e M f ( z ) d z ≈ 2 π − M f ″ ( z 0 ) e M f ( z 0 ) as M → ∞ . {\displaystyle \int _{a}^{b}e^{Mf(z)}\,dz\approx {\sqrt {\frac {2\pi }{-Mf''(z_{0})}}}e^{Mf(z_{0})}{\text{ as }}M\to \infty .} for a path passing through the saddle point at z0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions). == Further generalizations == An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems. Given a contour C in the complex sphere, a function f {\displaystyle f} defined on that contour and a special point, such as infinity, a holomorphic function M is sought away from C, with prescribed jump across C, and with a given normalization at infinity. If f {\displaystyle f} and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution. An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. 
The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour. The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov). The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics. == Median-point approximation generalization == In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density e M f ( x ) . {\displaystyle e^{Mf(x)}.} Denoting the cumulative distribution F ( x ) {\displaystyle F(x)} , if there is a diffeomorphic Gaussian distribution with density e − g − γ 2 y 2 {\displaystyle e^{-g-{\frac {\gamma }{2}}y^{2}}} the norm is given by 2 π γ − 1 e − g {\displaystyle {\sqrt {2\pi \gamma ^{-1}}}e^{-g}} and the corresponding diffeomorphism is y ( x ) = 1 γ Φ − 1 ( F ( x ) F ( ∞ ) ) , {\displaystyle y(x)={\frac {1}{\sqrt {\gamma }}}\Phi ^{-1}{\left({\frac {F(x)}{F(\infty )}}\right)},} where Φ {\displaystyle \Phi } denotes cumulative standard normal distribution function. In general, any distribution diffeomorphic to the Gaussian distribution has density e − g − γ 2 y 2 ( x ) y ′ ( x ) {\displaystyle e^{-g-{\frac {\gamma }{2}}y^{2}(x)}y'(x)} and the median-point is mapped to the median of the Gaussian distribution. 
Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values of γ {\displaystyle \gamma } and g {\displaystyle g} . The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context of partition function evaluation for a system of interacting fermions. == Complex integrals == For complex integrals of the form: 1 2 π i ∫ c − i ∞ c + i ∞ g ( s ) e s t d s {\displaystyle {\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }g(s)e^{st}\,ds} with t ≫ 1 , {\displaystyle t\gg 1,} we make the substitution t = iu and the change of variable s = c + i x {\displaystyle s=c+ix} to get the bilateral Laplace transform: 1 2 π ∫ − ∞ ∞ g ( c + i x ) e − u x e i c u d x . {\displaystyle {\frac {1}{2\pi }}\int _{-\infty }^{\infty }g(c+ix)e^{-ux}e^{icu}\,dx.} We then split g(c + ix) into its real and imaginary parts, after which we recover u = t/i. This is useful for inverse Laplace transforms, the Perron formula and complex integration. == Example: Stirling's approximation == Laplace's method can be used to derive Stirling's approximation N ! ≈ 2 π N ( N e ) N {\displaystyle N!\approx {\sqrt {2\pi N}}\left({\frac {N}{e}}\right)^{N}\,} for a large integer N. From the definition of the Gamma function, we have N ! = Γ ( N + 1 ) = ∫ 0 ∞ e − x x N d x . {\displaystyle N!=\Gamma (N+1)=\int _{0}^{\infty }e^{-x}x^{N}\,dx.} Now we change variables, letting x = N z {\displaystyle x=Nz} so that d x = N d z . {\displaystyle dx=Ndz.} Plug these values back in to obtain N ! = ∫ 0 ∞ e − N z ( N z ) N N d z = N N + 1 ∫ 0 ∞ e − N z z N d z = N N + 1 ∫ 0 ∞ e − N z e N ln ⁡ z d z = N N + 1 ∫ 0 ∞ e N ( ln ⁡ z − z ) d z . 
{\displaystyle {\begin{aligned}N!&=\int _{0}^{\infty }e^{-Nz}(Nz)^{N}N\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}z^{N}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}e^{N\ln z}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{N(\ln z-z)}\,dz.\end{aligned}}} This integral has the form necessary for Laplace's method with f ( z ) = ln ⁡ z − z {\displaystyle f(z)=\ln {z}-z} which is twice-differentiable: f ′ ( z ) = 1 z − 1 , {\displaystyle f'(z)={\frac {1}{z}}-1,} f ″ ( z ) = − 1 z 2 . {\displaystyle f''(z)=-{\frac {1}{z^{2}}}.} The maximum of f ( z ) {\displaystyle f(z)} lies at z0 = 1, and the second derivative of f ( z ) {\displaystyle f(z)} has the value −1 at this point. Therefore, we obtain N ! ≈ N N + 1 2 π N e − N = 2 π N N N e − N . {\displaystyle N!\approx N^{N+1}{\sqrt {\frac {2\pi }{N}}}e^{-N}={\sqrt {2\pi N}}N^{N}e^{-N}.} == See also == Method of stationary phase Method of steepest descent Large deviations theory Laplace principle (large deviations theory) Laplace's approximation == Notes == == References == This article incorporates material from saddle point approximation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Laplace–Beltrami operator
In differential geometry, the Laplace–Beltrami operator is a generalization of the Laplace operator to functions defined on submanifolds in Euclidean space and, even more generally, on Riemannian and pseudo-Riemannian manifolds. It is named after Pierre-Simon Laplace and Eugenio Beltrami. For any twice-differentiable real-valued function f defined on Euclidean space Rn, the Laplace operator (also known as the Laplacian) takes f to the divergence of its gradient vector field, which is the sum of the n pure second derivatives of f with respect to each vector of an orthonormal basis for Rn. Like the Laplacian, the Laplace–Beltrami operator is defined as the divergence of the gradient, and is a linear operator taking functions into functions. The operator can be extended to operate on tensors as the divergence of the covariant derivative. Alternatively, the operator can be generalized to operate on differential forms using the divergence and exterior derivative. The resulting operator is called the Laplace–de Rham operator (named after Georges de Rham). == Details == The Laplace–Beltrami operator, like the Laplacian, is the (Riemannian) divergence of the (Riemannian) gradient: Δ f = d i v ( ∇ f ) . {\displaystyle \Delta f={\rm {div}}(\nabla f).} An explicit formula in local coordinates is possible. Suppose first that M is an oriented Riemannian manifold. The orientation allows one to specify a definite volume form on M, given in an oriented coordinate system xi by vol n := | g | d x 1 ∧ ⋯ ∧ d x n {\displaystyle \operatorname {vol} _{n}:={\sqrt {|g|}}\;dx^{1}\wedge \cdots \wedge dx^{n}} where |g| := |det(gij)| is the absolute value of the determinant of the metric tensor, and the dxi are the 1-forms forming the dual frame to the frame ∂ i := ∂ ∂ x i {\displaystyle \partial _{i}:={\frac {\partial }{\partial x^{i}}}} of the tangent bundle T M {\displaystyle TM} and ∧ {\displaystyle \wedge } is the wedge product. 
The divergence of a vector field X {\displaystyle X} on the manifold is then defined as the scalar function ∇ ⋅ X {\displaystyle \nabla \cdot X} with the property ( ∇ ⋅ X ) vol n := L X vol n {\displaystyle (\nabla \cdot X)\operatorname {vol} _{n}:=L_{X}\operatorname {vol} _{n}} where LX is the Lie derivative along the vector field X. In local coordinates, one obtains ∇ ⋅ X = 1 | g | ∂ i ( | g | X i ) {\displaystyle \nabla \cdot X={\frac {1}{\sqrt {|g|}}}\partial _{i}\left({\sqrt {|g|}}X^{i}\right)} where here and below the Einstein notation is implied, so that the repeated index i is summed over. The gradient of a scalar function ƒ is the vector field grad f that may be defined through the inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } on the manifold, as ⟨ grad ⁡ f ( x ) , v x ⟩ = d f ( x ) ( v x ) {\displaystyle \langle \operatorname {grad} f(x),v_{x}\rangle =df(x)(v_{x})} for all vectors vx anchored at point x in the tangent space TxM of the manifold at point x. Here, dƒ is the exterior derivative of the function ƒ; it is a 1-form taking argument vx. In local coordinates, one has ( grad ⁡ f ) i = ∂ i f = g i j ∂ j f {\displaystyle \left(\operatorname {grad} f\right)^{i}=\partial ^{i}f=g^{ij}\partial _{j}f} where gij are the components of the inverse of the metric tensor, so that gijgjk = δik with δik the Kronecker delta. Combining the definitions of the gradient and divergence, the formula for the Laplace–Beltrami operator applied to a scalar function ƒ is, in local coordinates Δ f = 1 | g | ∂ i ( | g | g i j ∂ j f ) . {\displaystyle \Delta f={\frac {1}{\sqrt {|g|}}}\partial _{i}\left({\sqrt {|g|}}g^{ij}\partial _{j}f\right).} If M is not oriented, then the above calculation carries through exactly as presented, except that the volume form must instead be replaced by a volume element (a density rather than a form). 
Neither the gradient nor the divergence actually depends on the choice of orientation, and so the Laplace–Beltrami operator itself does not depend on this additional structure. == Formal self-adjointness == The exterior derivative d {\displaystyle d} and the negative divergence − ∇ ⋅ {\displaystyle -\nabla \cdot } are formal adjoints, in the sense that for a compactly supported function f {\displaystyle f} ∫ M d f ( X ) vol n = − ∫ M f ∇ ⋅ X vol n {\displaystyle \int _{M}df(X)\operatorname {vol} _{n}=-\int _{M}f\nabla \cdot X\operatorname {vol} _{n}} where the last equality is an application of Stokes' theorem. Dualizing gives ∫ M f Δ h vol n = − ∫ M ⟨ d f , d h ⟩ vol n {\displaystyle \int _{M}f\,\Delta h\operatorname {vol} _{n}=-\int _{M}\langle df,dh\rangle \operatorname {vol} _{n}} (2) for all compactly supported functions f {\displaystyle f} and h {\displaystyle h} . Conversely, (2) characterizes the Laplace–Beltrami operator completely, in the sense that it is the only operator with this property. As a consequence, the Laplace–Beltrami operator is negative and formally self-adjoint, meaning that for compactly supported functions f {\displaystyle f} and h {\displaystyle h} , ∫ M f Δ h vol n = − ∫ M ⟨ d f , d h ⟩ vol n = ∫ M h Δ f vol n . {\displaystyle \int _{M}f\,\Delta h\operatorname {vol} _{n}=-\int _{M}\langle df,dh\rangle \operatorname {vol} _{n}=\int _{M}h\,\Delta f\operatorname {vol} _{n}.} Because the Laplace–Beltrami operator, as defined in this manner, is negative rather than positive, often it is defined with the opposite sign. == Eigenvalues of the Laplace–Beltrami operator (Lichnerowicz–Obata theorem) == Let M denote a compact Riemannian manifold without boundary. We want to consider the eigenvalue equation, − Δ u = λ u , {\displaystyle -\Delta u=\lambda u,} where u {\displaystyle u} is the eigenfunction associated with the eigenvalue λ {\displaystyle \lambda } . It can be shown using the self-adjointness proved above that the eigenvalues λ {\displaystyle \lambda } are real. 
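On the flat circle S¹ = ℝ/2πℤ the Laplace–Beltrami operator is simply d²/dx², and the symmetry ∫ f Δh = ∫ h Δf can be verified numerically (an illustrative sketch, not from the source):

```python
import math

def periodic_laplacian_pairing(f, h, n=4096):
    # Riemann sum of the pairing  integral over S^1 of f * (d^2 h / dx^2) dx,
    # using a periodic central-difference Laplacian on a uniform grid
    dx = 2 * math.pi / n
    xs = [i * dx for i in range(n)]
    hv = [h(x) for x in xs]
    lap_h = [(hv[(i + 1) % n] - 2 * hv[i] + hv[(i - 1) % n]) / dx ** 2
             for i in range(n)]
    return sum(f(x) * l for x, l in zip(xs, lap_h)) * dx

f = lambda x: math.sin(2 * x)
h = lambda x: math.sin(2 * x) + math.cos(x)
a = periodic_laplacian_pairing(f, h)
b = periodic_laplacian_pairing(h, f)
print(a, b)  # the two pairings agree, reflecting formal self-adjointness
```

Both pairings equal −4π up to discretization error; the discrete central-difference Laplacian is itself a symmetric operator, so the two sums agree to machine precision.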
The compactness of the manifold M {\displaystyle M} allows one to show that the eigenvalues are discrete and, furthermore, that the vector space of eigenfunctions associated with a given eigenvalue λ {\displaystyle \lambda } (the eigenspace) is finite-dimensional. Notice that by taking the constant function as an eigenfunction, we see that λ = 0 {\displaystyle \lambda =0} is an eigenvalue. Also, since we have considered − Δ {\displaystyle -\Delta } , an integration by parts shows that λ ≥ 0 {\displaystyle \lambda \geq 0} . More precisely, if we multiply the eigenvalue equation through by the eigenfunction u {\displaystyle u} and integrate the resulting equation on M {\displaystyle M} we get (using the notation d V = vol n {\displaystyle dV=\operatorname {vol} _{n}} ): − ∫ M Δ u u d V = λ ∫ M u 2 d V {\displaystyle -\int _{M}\Delta u\ u\ dV=\lambda \int _{M}u^{2}\ dV} Performing an integration by parts (equivalently, using the divergence theorem) on the term on the left, and using that M {\displaystyle M} has no boundary, we get − ∫ M Δ u u d V = ∫ M | ∇ u | 2 d V {\displaystyle -\int _{M}\Delta u\ u\ dV=\int _{M}|\nabla u|^{2}\ dV} Putting the last two equations together we arrive at ∫ M | ∇ u | 2 d V = λ ∫ M u 2 d V {\displaystyle \int _{M}|\nabla u|^{2}\ dV=\lambda \int _{M}u^{2}\ dV} We conclude from the last equation that λ ≥ 0 {\displaystyle \lambda \geq 0} . A fundamental result of André Lichnerowicz states that: Given a compact n-dimensional Riemannian manifold with no boundary and n ≥ 2 {\displaystyle n\geq 2} . Assume the Ricci curvature satisfies the lower bound: Ric ⁡ ( X , X ) ≥ κ g ( X , X ) , κ > 0 , {\displaystyle \operatorname {Ric} (X,X)\geq \kappa g(X,X),\kappa >0,} where g ( ⋅ , ⋅ ) {\displaystyle g(\cdot ,\cdot )} is the metric tensor and X {\displaystyle X} is any tangent vector on the manifold M {\displaystyle M} . 
Then the first positive eigenvalue λ 1 {\displaystyle \lambda _{1}} of the eigenvalue equation satisfies the lower bound: λ 1 ≥ n n − 1 κ . {\displaystyle \lambda _{1}\geq {\frac {n}{n-1}}\kappa .} This lower bound is sharp and achieved on the sphere S n {\displaystyle \mathbb {S} ^{n}} . In fact on S 2 {\displaystyle \mathbb {S} ^{2}} the eigenspace for λ 1 {\displaystyle \lambda _{1}} is three dimensional and spanned by the restriction of the coordinate functions x 1 , x 2 , x 3 {\displaystyle x_{1},x_{2},x_{3}} from R 3 {\displaystyle \mathbb {R} ^{3}} to S 2 {\displaystyle \mathbb {S} ^{2}} . Using spherical coordinates ( θ , ϕ ) {\displaystyle (\theta ,\phi )} on S 2 {\displaystyle \mathbb {S} ^{2}} , the two dimensional sphere, set x 3 = cos ⁡ ϕ = u 1 , {\displaystyle x_{3}=\cos \phi =u_{1},} then a direct computation with the spherical Laplacian shows that − Δ S 2 u 1 = 2 u 1 {\displaystyle -\Delta _{\mathbb {S} ^{2}}u_{1}=2u_{1}} Thus the lower bound in Lichnerowicz's theorem is achieved at least in two dimensions. Conversely, Morio Obata proved that if the n-dimensional compact Riemannian manifold without boundary were such that for the first positive eigenvalue λ 1 {\displaystyle \lambda _{1}} one has, λ 1 = n n − 1 κ , {\displaystyle \lambda _{1}={\frac {n}{n-1}}\kappa ,} then the manifold is isometric to the n-dimensional sphere S n ( n − 1 κ ) {\displaystyle \mathbb {S} ^{n}{\bigg (}{\sqrt {\frac {n-1}{\kappa }}}{\bigg )}} , the sphere of radius n − 1 κ {\displaystyle {\sqrt {\frac {n-1}{\kappa }}}} . Proofs of all these statements may be found in the book by Isaac Chavel. Analogous sharp bounds also hold for other geometries and for certain degenerate Laplacians associated with these geometries, such as the Kohn Laplacian (after Joseph J. Kohn) on a compact CR manifold. Applications there include the global embedding of such CR manifolds in C n . 
{\displaystyle \mathbb {C} ^{n}.} == Tensor Laplacian == The Laplace–Beltrami operator can be written using the trace (or contraction) of the iterated covariant derivative associated with the Levi-Civita connection. The Hessian (tensor) of a function f {\displaystyle f} is the symmetric 2-tensor Hess f ∈ Γ ( T ∗ M ⊗ T ∗ M ) {\displaystyle \displaystyle {\mbox{Hess}}f\in \mathbf {\Gamma } ({\mathsf {T}}^{*}M\otimes {\mathsf {T}}^{*}M)} , Hess f := ∇ 2 f ≡ ∇ ∇ f ≡ ∇ d f {\displaystyle {\mbox{Hess}}f:=\nabla ^{2}f\equiv \nabla \nabla f\equiv \nabla \mathrm {d} f} , where df denotes the (exterior) derivative of a function f. Let Xi be a basis of tangent vector fields (not necessarily induced by a coordinate system). Then the components of Hess f are given by ( Hess f ) i j = Hess f ( X i , X j ) = ∇ X i ∇ X j f − ∇ ∇ X i X j f {\displaystyle ({\mbox{Hess}}f)_{ij}={\mbox{Hess}}f(X_{i},X_{j})=\nabla _{X_{i}}\nabla _{X_{j}}f-\nabla _{\nabla _{X_{i}}X_{j}}f} This is easily seen to transform tensorially, since it is linear in each of the arguments Xi, Xj. The Laplace–Beltrami operator is then the trace (or contraction) of the Hessian with respect to the metric: Δ f := t r ∇ d f ∈ C ∞ ( M ) {\displaystyle \displaystyle \Delta f:=\mathrm {tr} \nabla \mathrm {d} f\in {\mathsf {C}}^{\infty }(M)} . More precisely, this means Δ f ( x ) = ∑ i = 1 n ∇ d f ( X i , X i ) {\displaystyle \displaystyle \Delta f(x)=\sum _{i=1}^{n}\nabla \mathrm {d} f(X_{i},X_{i})} , or in terms of the metric Δ f = ∑ i j g i j ( Hess f ) i j . {\displaystyle \Delta f=\sum _{ij}g^{ij}({\mbox{Hess}}f)_{ij}.} In abstract indices, the operator is often written Δ f = ∇ a ∇ a f {\displaystyle \Delta f=\nabla ^{a}\nabla _{a}f} provided it is understood implicitly that this trace is in fact the trace of the Hessian tensor. 
Because the covariant derivative extends canonically to arbitrary tensors, the Laplace–Beltrami operator defined on a tensor T by Δ T = g i j ( ∇ X i ∇ X j T − ∇ ∇ X i X j T ) {\displaystyle \Delta T=g^{ij}\left(\nabla _{X_{i}}\nabla _{X_{j}}T-\nabla _{\nabla _{X_{i}}X_{j}}T\right)} is well-defined. == Laplace–de Rham operator == More generally, one can define a Laplacian differential operator on sections of the bundle of differential forms on a pseudo-Riemannian manifold. On a Riemannian manifold it is an elliptic operator, while on a Lorentzian manifold it is hyperbolic. The Laplace–de Rham operator is defined by Δ = d δ + δ d = ( d + δ ) 2 , {\displaystyle \Delta =\mathrm {d} \delta +\delta \mathrm {d} =(\mathrm {d} +\delta )^{2},\;} where d is the exterior derivative or differential and δ is the codifferential, acting as (−1)kn+n+1∗d∗ on k-forms, where ∗ is the Hodge star. The first order operator d + δ {\displaystyle \mathrm {d} +\delta } is the Hodge–Dirac operator. When computing the Laplace–de Rham operator on a scalar function f, we have δf = 0, so that Δ f = δ d f . {\displaystyle \Delta f=\delta \,\mathrm {d} f.} Up to an overall sign, the Laplace–de Rham operator is equivalent to the previous definition of the Laplace–Beltrami operator when acting on a scalar function; see the proof for details. On functions, the Laplace–de Rham operator is actually the negative of the Laplace–Beltrami operator, as the conventional normalization of the codifferential assures that the Laplace–de Rham operator is (formally) positive definite, whereas the Laplace–Beltrami operator is typically negative. The sign is merely a convention, and both are common in the literature. The Laplace–de Rham operator differs more significantly from the tensor Laplacian restricted to act on skew-symmetric tensors. Apart from the incidental sign, the two operators differ by a Weitzenböck identity that explicitly involves the Ricci curvature tensor. 
== Examples == Many examples of the Laplace–Beltrami operator can be worked out explicitly. === Euclidean space === In the usual (orthonormal) Cartesian coordinates xi on Euclidean space, the metric is reduced to the Kronecker delta, and one therefore has | g | = 1 {\displaystyle |g|=1} . Consequently, in this case Δ f = 1 | g | ∂ i | g | ∂ i f = ∂ i ∂ i f {\displaystyle \Delta f={\frac {1}{\sqrt {|g|}}}\partial _{i}{\sqrt {|g|}}\partial ^{i}f=\partial _{i}\partial ^{i}f} which is the ordinary Laplacian. In curvilinear coordinates, such as spherical or cylindrical coordinates, one obtains alternative expressions. Similarly, the Laplace–Beltrami operator corresponding to the Minkowski metric with signature (− + + +) is the d'Alembertian. === Spherical Laplacian === The spherical Laplacian is the Laplace–Beltrami operator on the (n − 1)-sphere with its canonical metric of constant sectional curvature 1. It is convenient to regard the sphere as isometrically embedded into Rn as the unit sphere centred at the origin. Then for a function f on Sn−1, the spherical Laplacian is defined by Δ S n − 1 f ( x ) = Δ f ( x / | x | ) {\displaystyle \Delta _{S^{n-1}}f(x)=\Delta f(x/|x|)} where f(x/|x|) is the degree zero homogeneous extension of the function f to Rn − {0}, and Δ {\displaystyle \Delta } is the Laplacian of the ambient Euclidean space. Concretely, this is implied by the well-known formula for the Euclidean Laplacian in spherical polar coordinates: Δ f = r 1 − n ∂ ∂ r ( r n − 1 ∂ f ∂ r ) + r − 2 Δ S n − 1 f . {\displaystyle \Delta f=r^{1-n}{\frac {\partial }{\partial r}}\left(r^{n-1}{\frac {\partial f}{\partial r}}\right)+r^{-2}\Delta _{S^{n-1}}f.} More generally, one can formulate a similar trick using the normal bundle to define the Laplace–Beltrami operator of any Riemannian manifold isometrically embedded as a hypersurface of Euclidean space. One can also give an intrinsic description of the Laplace–Beltrami operator on the sphere in a normal coordinate system. 
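As a quick symbolic check (a SymPy sketch, not part of the article's derivation), one can verify directly in polar coordinates (θ, ϕ) that the restricted coordinate function u1 = cos ϕ is an eigenfunction of −Δ on S2 with eigenvalue 2, as used in the Lichnerowicz discussion above; the polar-coordinate formula for the spherical Laplacian is encoded in the helper function:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def sphere_laplacian(f):
    # Delta_{S^2} f = (sin phi)^-1 d/dphi(sin phi df/dphi) + (sin phi)^-2 d^2 f/dtheta^2
    return (sp.diff(sp.sin(phi) * sp.diff(f, phi), phi) / sp.sin(phi)
            + sp.diff(f, theta, 2) / sp.sin(phi) ** 2)

u1 = sp.cos(phi)  # restriction of x3 to the sphere
print(sp.simplify(-sphere_laplacian(u1)))  # 2*cos(phi), i.e. eigenvalue 2
```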
Let (ϕ, ξ) be spherical coordinates on the sphere with respect to a particular point p of the sphere (the "north pole"), that is geodesic polar coordinates with respect to p. Here ϕ represents the latitude measurement along a unit speed geodesic from p, and ξ a parameter representing the choice of direction of the geodesic in Sn−1. Then the spherical Laplacian has the form: Δ S n − 1 f ( ξ , ϕ ) = ( sin ⁡ ϕ ) 2 − n ∂ ∂ ϕ ( ( sin ⁡ ϕ ) n − 2 ∂ f ∂ ϕ ) + ( sin ⁡ ϕ ) − 2 Δ ξ f {\displaystyle \Delta _{S^{n-1}}f(\xi ,\phi )=(\sin \phi )^{2-n}{\frac {\partial }{\partial \phi }}\left((\sin \phi )^{n-2}{\frac {\partial f}{\partial \phi }}\right)+(\sin \phi )^{-2}\Delta _{\xi }f} where Δ ξ {\displaystyle \Delta _{\xi }} is the Laplace–Beltrami operator on the ordinary unit (n − 2)-sphere. In particular, for the ordinary 2-sphere, using standard notation for polar coordinates, we get: Δ S 2 f ( θ , ϕ ) = ( sin ⁡ ϕ ) − 1 ∂ ∂ ϕ ( sin ⁡ ϕ ∂ f ∂ ϕ ) + ( sin ⁡ ϕ ) − 2 ∂ 2 ∂ θ 2 f {\displaystyle \Delta _{S^{2}}f(\theta ,\phi )=(\sin \phi )^{-1}{\frac {\partial }{\partial \phi }}\left(\sin \phi {\frac {\partial f}{\partial \phi }}\right)+(\sin \phi )^{-2}{\frac {\partial ^{2}}{\partial \theta ^{2}}}f} === Hyperbolic space === A similar technique works in hyperbolic space. Here the hyperbolic space Hn−1 can be embedded into the n-dimensional Minkowski space, a real vector space equipped with the quadratic form q ( x ) = x 1 2 − x 2 2 − ⋯ − x n 2 . {\displaystyle q(x)=x_{1}^{2}-x_{2}^{2}-\cdots -x_{n}^{2}.} Then Hn−1 is the subset of the interior of the future null cone in Minkowski space given by H n − 1 = { x ∣ q ( x ) = 1 , x 1 > 0 } . 
{\displaystyle H^{n-1}=\{x\mid q(x)=1,x_{1}>0\}.\,} Then Δ H n − 1 f = ◻ f ( x / q ( x ) 1 / 2 ) | H n − 1 {\displaystyle \Delta _{H^{n-1}}f=\left.\Box f\left(x/q(x)^{1/2}\right)\right|_{H^{n-1}}} Here f ( x / q ( x ) 1 / 2 ) {\displaystyle f(x/q(x)^{1/2})} is the degree zero homogeneous extension of f to the interior of the future null cone and □ is the wave operator ◻ = ∂ 2 ∂ x 1 2 − ⋯ − ∂ 2 ∂ x n 2 . {\displaystyle \Box ={\frac {\partial ^{2}}{\partial x_{1}^{2}}}-\cdots -{\frac {\partial ^{2}}{\partial x_{n}^{2}}}.} The operator can also be written in polar coordinates. Let (t, ξ) be geodesic polar coordinates on Hn−1 with respect to a particular point p (say, the center of the Poincaré disc). Here t represents the hyperbolic distance from p and ξ a parameter representing the choice of direction of the geodesic in Sn−2. Then the hyperbolic Laplacian has the form: Δ H n − 1 f ( t , ξ ) = sinh ⁡ ( t ) 2 − n ∂ ∂ t ( sinh ⁡ ( t ) n − 2 ∂ f ∂ t ) + sinh ⁡ ( t ) − 2 Δ ξ f {\displaystyle \Delta _{H^{n-1}}f(t,\xi )=\sinh(t)^{2-n}{\frac {\partial }{\partial t}}\left(\sinh(t)^{n-2}{\frac {\partial f}{\partial t}}\right)+\sinh(t)^{-2}\Delta _{\xi }f} where Δ ξ {\displaystyle \Delta _{\xi }} is the Laplace–Beltrami operator on the ordinary unit (n − 2)-sphere. 
In particular, for the hyperbolic plane using standard notation for polar coordinates we get: Δ H 2 f ( r , θ ) = sinh ⁡ ( r ) − 1 ∂ ∂ r ( sinh ⁡ ( r ) ∂ f ∂ r ) + sinh ⁡ ( r ) − 2 ∂ 2 ∂ θ 2 f {\displaystyle \Delta _{H^{2}}f(r,\theta )=\sinh(r)^{-1}{\frac {\partial }{\partial r}}\left(\sinh(r){\frac {\partial f}{\partial r}}\right)+\sinh(r)^{-2}{\frac {\partial ^{2}}{\partial \theta ^{2}}}f} == See also == Covariant derivative Laplacian operators in differential geometry Laplace operator == Notes == == References == Flanders, Harley (1989), Differential forms with applications to the physical sciences, Dover, ISBN 978-0-486-66169-8 Jost, Jürgen (2002), Riemannian Geometry and Geometric Analysis, Berlin: Springer-Verlag, ISBN 3-540-42627-2. Solomentsev, E.D.; Shikin, E.V. (2001) [1994], "Laplace–Beltrami equation", Encyclopedia of Mathematics, EMS Press
Wikipedia:Laplacian matrix#0
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix, or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method. The Laplacian matrix relates to many functional graph properties. Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector (the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian), as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal. The Laplacian matrix is easiest to define for a simple graph, but it is more commonly used in applications for an edge-weighted graph, i.e., a graph with weights on its edges, the weights being the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization (a column/row scaling of the matrix entries), resulting in normalized adjacency and Laplacian matrices. 
== Definitions for simple graphs == === Laplacian matrix === Given a simple graph G {\displaystyle G} with n {\displaystyle n} vertices v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} , its Laplacian matrix L n × n {\textstyle L_{n\times n}} is defined element-wise as L i , j := { deg ⁡ ( v i ) if i = j − 1 if i ≠ j and v i is adjacent to v j 0 otherwise , {\displaystyle L_{i,j}:={\begin{cases}\deg(v_{i})&{\mbox{if}}\ i=j\\-1&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}},\end{cases}}} or equivalently by the matrix L = D − A , {\displaystyle L=D-A,} where D is the degree matrix, and A is the graph's adjacency matrix. Since G {\textstyle G} is a simple graph, A {\textstyle A} only contains 1s or 0s and its diagonal elements are all 0s. Here is a simple example of a labelled, undirected graph and its Laplacian matrix. We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric and that the row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular). For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example: In the directed graph, the adjacency matrix and Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used. === Laplacian matrix for an undirected graph via the oriented incidence matrix === The | v | × | e | {\textstyle |v|\times |e|} oriented incidence matrix B with element Bve for the vertex v and the edge e (connecting vertices v i {\textstyle v_{i}} and v j {\textstyle v_{j}} , with i ≠ j) is defined by B v e = { 1 , if v = v i − 1 , if v = v j 0 , otherwise . 
{\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.} Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian | v | × | v | {\textstyle |v|\times |v|} matrix L defined as L = B B T {\displaystyle L=BB^{\textsf {T}}} where B T {\textstyle B^{\textsf {T}}} is the matrix transpose of B. An alternative product B T B {\displaystyle B^{\textsf {T}}B} defines the so-called | e | × | e | {\textstyle |e|\times |e|} edge-based Laplacian, as opposed to the original commonly used vertex-based Laplacian matrix L. === Symmetric Laplacian for a directed graph === The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter. In the matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix A {\displaystyle A} of the original directed graph and its matrix transpose A T {\displaystyle A^{T}} , where the zero and one entries of A {\displaystyle A} are treated as logical, rather than numerical, values, as in the following example: === Laplacian matrix normalization === A vertex with a large degree, also called a heavy node, results in a large diagonal entry in the Laplacian matrix dominating the matrix properties. Normalization is aimed at making the influence of such vertices more nearly equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the normalization process. 
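The two constructions above can be compared numerically. The sketch below uses a hypothetical 4-vertex graph (not the article's example) and checks that L = D − A and L = BBᵀ coincide, with zero row sums, regardless of the edge orientations chosen for B:

```python
import numpy as np

# Hypothetical simple undirected graph: edges {0,1}, {0,2}, {1,2}, {2,3}
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A            # L = D - A

edges = [(0, 1), (2, 0), (1, 2), (2, 3)]  # arbitrary orientations
B = np.zeros((4, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1, -1              # oriented incidence matrix

assert np.array_equal(B @ B.T, L)  # same Laplacian whatever the orientations
assert (L.sum(axis=1) == 0).all()  # row sums vanish, so L is singular
```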
==== Symmetrically normalized Laplacian ==== The symmetrically normalized Laplacian matrix is defined as: L sym := ( D + ) 1 / 2 L ( D + ) 1 / 2 = I − ( D + ) 1 / 2 A ( D + ) 1 / 2 , {\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},} where D + {\displaystyle D^{+}} is the Moore–Penrose inverse of the degree matrix. The elements of L sym {\textstyle L^{\text{sym}}} are thus given by L i , j sym := { 1 if i = j and deg ⁡ ( v i ) ≠ 0 − 1 deg ⁡ ( v i ) deg ⁡ ( v j ) if i ≠ j and v i is adjacent to v j 0 otherwise . {\displaystyle L_{i,j}^{\text{sym}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\sqrt {\deg(v_{i})\deg(v_{j})}}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}} The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric. For a non-symmetric adjacency matrix of a directed graph, either of indegree and outdegree can be used for normalization: ==== Left (random-walk) and right normalized Laplacians ==== The left (random-walk) normalized Laplacian matrix is defined as: L rw := D + L = I − D + A , {\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A,} where D + {\displaystyle D^{+}} is the Moore–Penrose inverse. The elements of L rw {\textstyle L^{\text{rw}}} are given by L i , j rw := { 1 if i = j and deg ⁡ ( v i ) ≠ 0 − 1 deg ⁡ ( v i ) if i ≠ j and v i is adjacent to v j 0 otherwise . {\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}} Similarly, the right normalized Laplacian matrix is defined as L D + = I − A D + {\displaystyle LD^{+}=I-AD^{+}} . The left or right normalized Laplacian matrix is not symmetric if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. 
For example, if G {\displaystyle G} has no isolated vertices, then D + A {\displaystyle D^{+}A} is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian L rw := D + L = I − D + A {\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A} has each row summing to zero. Thus we sometimes alternatively call L rw {\displaystyle L^{\text{rw}}} the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian L D + = I − A D + {\displaystyle LD^{+}=I-AD^{+}} each column sums to zero since A D + {\displaystyle AD^{+}} is left stochastic. For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic D out + A {\displaystyle D_{\text{out}}^{+}A} , while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic A D in + {\displaystyle AD_{\text{in}}^{+}} . == Definitions for graphs with weighted edges == Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization. 
=== Laplacian matrix === The Laplacian matrix is defined by L = D − A , {\displaystyle L=D-A,} where D is the degree matrix and A is the adjacency matrix of the graph. For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example: Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values. === Symmetric Laplacian via the incidence matrix === For graphs with weighted edges one can define a weighted incidence matrix B and use it to construct the corresponding symmetric Laplacian as L = B B T {\displaystyle L=BB^{\textsf {T}}} . An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges. We thus reuse the definition of the weightless | v | × | e | {\textstyle |v|\times |e|} incidence matrix B with element Bve for the vertex v and the edge e (connecting vertices v i {\textstyle v_{i}} and v j {\textstyle v_{j}} , with i > j) defined by B v e = { 1 , if v = v i − 1 , if v = v j 0 , otherwise . {\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.} We now also define a diagonal | e | × | e | {\textstyle |e|\times |e|} matrix W containing the edge weights. 
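A numeric sketch of this separation of connectivity (B) and weights (W), on a hypothetical 4-vertex graph with edge ei assigned weight i: the product BWBᵀ reproduces the weighted L = D − A computed from the weighted adjacency matrix.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # hypothetical connectivity
weights = [1.0, 2.0, 3.0, 4.0]            # weight i for edge e_i
n = 4
B = np.zeros((n, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1, -1              # weightless incidence matrix
W = np.diag(weights)                      # diagonal weight matrix

L = B @ W @ B.T                           # weighted Laplacian

A = np.zeros((n, n))                      # weighted adjacency matrix
for (i, j), w in zip(edges, weights):
    A[i, j] = A[j, i] = w
assert np.allclose(L, np.diag(A.sum(axis=1)) - A)  # L = D - A with weighted degrees
```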
Even though the edges in the definition of B are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian | v | × | v | {\textstyle |v|\times |v|} matrix L defined as L = B W B T {\displaystyle L=BWB^{\textsf {T}}} where B T {\textstyle B^{\textsf {T}}} is the matrix transpose of B. The construction is illustrated in the following example, where every edge e i {\textstyle e_{i}} is assigned the weight value i, with i = 1 , 2 , 3 , 4. {\textstyle i=1,2,3,4.} === Symmetric Laplacian for a directed graph === Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix A {\displaystyle A} of the original directed graph and its matrix transpose A T {\displaystyle A^{T}} as in the following example: where the zero and one entries of A {\displaystyle A} are treated as numerical rather than logical values (unlike for simple graphs), explaining the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical values, e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2. Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree, as in the following example: The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix. === Laplacian matrix normalization === The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all equal to one, scaling off-diagonal entries correspondingly. 
In a weighted graph, a vertex may have a large degree because of a small number of connected edges but with large weights just as well as due to a large number of connected edges with unit weights. Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors. ==== Symmetrically normalized Laplacian ==== The symmetrically normalized Laplacian is defined as L sym := ( D + ) 1 / 2 L ( D + ) 1 / 2 = I − ( D + ) 1 / 2 A ( D + ) 1 / 2 , {\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},} where L is the unnormalized Laplacian, A is the adjacency matrix, D is the degree matrix, and D + {\displaystyle D^{+}} is the Moore–Penrose inverse. Since the degree matrix D is diagonal, its reciprocal square root ( D + ) 1 / 2 {\textstyle (D^{+})^{1/2}} is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of D. If all the edge weights are nonnegative then all the degree values are automatically also nonnegative and so every degree value has a unique positive square root. To avoid the division by zero, vertices with zero degrees are excluded from the process of the normalization, as in the following example: The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix A is symmetric and the diagonal entries of D are nonnegative, in which case we can use the term the symmetric normalized Laplacian. 
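A short numeric sketch of the symmetric normalization (hypothetical graph with no isolated vertices), using the reciprocal square roots of the degrees on the diagonal of (D⁺)^(1/2):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
# (D+)^(1/2): reciprocal square roots of the nonzero degrees
D_half = np.diag(np.divide(1.0, np.sqrt(deg),
                           out=np.zeros_like(deg), where=deg != 0))

L = np.diag(deg) - A
L_sym = D_half @ L @ D_half

assert np.allclose(L_sym, L_sym.T)     # symmetric, since A is symmetric
assert np.allclose(np.diag(L_sym), 1)  # unit diagonal (no isolated vertices here)
```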
The symmetric normalized Laplacian matrix can be also written as L sym := ( D + ) 1 / 2 L ( D + ) 1 / 2 = ( D + ) 1 / 2 B W B T ( D + ) 1 / 2 = S S T {\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=(D^{+})^{1/2}BWB^{\textsf {T}}(D^{+})^{1/2}=SS^{T}} using the weightless | v | × | e | {\textstyle |v|\times |e|} incidence matrix B and the diagonal | e | × | e | {\textstyle |e|\times |e|} matrix W containing the edge weights and defining the new | v | × | e | {\textstyle |v|\times |e|} weighted incidence matrix S = ( D + ) 1 / 2 B W 1 / 2 {\textstyle S=(D^{+})^{1/2}BW^{{1}/{2}}} whose rows are indexed by the vertices and whose columns are indexed by the edges of G such that each column corresponding to an edge e = {u, v} has an entry 1 d u {\textstyle {\frac {1}{\sqrt {d_{u}}}}} in the row corresponding to u, an entry − 1 d v {\textstyle -{\frac {1}{\sqrt {d_{v}}}}} in the row corresponding to v, and has 0 entries elsewhere. ==== Random walk normalized Laplacian ==== The random walk normalized Laplacian is defined as L rw := D + L = I − D + A {\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A} where D is the degree matrix. Since the degree matrix D is diagonal, its inverse D + {\textstyle D^{+}} is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding diagonal entries of D. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element L i , i rw {\textstyle L_{i,i}^{\text{rw}}} to 0. The matrix elements of L rw {\textstyle L^{\text{rw}}} are given by L i , j rw := { 1 if i = j and deg ⁡ ( v i ) ≠ 0 − 1 deg ⁡ ( v i ) if i ≠ j and v i is adjacent to v j 0 otherwise . 
{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if}}\ i=j\ {\mbox{and}}\ \deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}} The name of the random-walk normalized Laplacian comes from the fact that this matrix is L rw = I − P {\textstyle L^{\text{rw}}=I-P} , where P = D + A {\textstyle P=D^{+}A} is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let e i {\textstyle e_{i}} denote the i-th standard basis vector. Then x = e i P {\textstyle x=e_{i}P} is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex i {\textstyle i} ; i.e., x j = P ( v i → v j ) {\textstyle x_{j}=\mathbb {P} \left(v_{i}\to v_{j}\right)} . More generally, if the vector x {\textstyle x} is a probability distribution of the location of a random walker on the vertices of the graph, then x ′ = x P t {\textstyle x'=xP^{t}} is the probability distribution of the walker after t {\textstyle t} steps. The random walk normalized Laplacian can also be called the left normalized Laplacian L rw := D + L {\displaystyle L^{\text{rw}}:=D^{+}L} since the normalization is performed by multiplying the Laplacian by the normalization matrix D + {\displaystyle D^{+}} on the left. It has each row summing to zero since P = D + A {\displaystyle P=D^{+}A} is right stochastic, assuming all the weights are non-negative. In the less commonly used right normalized Laplacian L D + = I − A D + {\displaystyle LD^{+}=I-AD^{+}} each column sums to zero since A D + {\displaystyle AD^{+}} is left stochastic. 
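The random-walk interpretation can be sketched numerically (hypothetical graph with no isolated vertices; the walker starts at vertex 0):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A  # transition matrix P = D+ A
L_rw = np.eye(len(A)) - P             # random-walk normalized Laplacian

x = np.array([1.0, 0.0, 0.0, 0.0])    # walker starts at vertex 0
for _ in range(5):                    # distribution after 5 steps: x P^5
    x = x @ P

assert np.isclose(x.sum(), 1.0)       # still a probability distribution
assert np.allclose(L_rw.sum(axis=1), 0)  # rows of L_rw sum to zero
```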
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic D out + A {\displaystyle D_{\text{out}}^{+}A} , while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic A D in + {\displaystyle AD_{\text{in}}^{+}} . ==== Negative weights ==== Negative weights present several challenges for normalization: The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and equally negatively large row-sum of negative weights, together summing up to zero, could be considered a heavy node and both large values scaled, while the diagonal entry remains zero, like for an isolated vertex. Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and a positive square root needed for the symmetric normalization would not exist. Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value -1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix. == Properties == For an (undirected) graph G and its Laplacian matrix L with eigenvalues λ 0 ≤ λ 1 ≤ ⋯ ≤ λ n − 1 {\textstyle \lambda _{0}\leq \lambda _{1}\leq \cdots \leq \lambda _{n-1}} : L is symmetric. L is positive-semidefinite (that is λ i ≥ 0 {\textstyle \lambda _{i}\geq 0} for all i {\textstyle i} ). This can be seen from the fact that the Laplacian is symmetric and diagonally dominant. L is an M-matrix (its off-diagonal entries are nonpositive, yet the real parts of its eigenvalues are nonnegative). Every row sum and column sum of L is zero. Indeed, in the sum, the degree of the vertex is summed with a "−1" for each neighbor. 
In consequence, λ 0 = 0 {\textstyle \lambda _{0}=0} , because the vector v 0 = ( 1 , 1 , … , 1 ) {\textstyle \mathbf {v} _{0}=(1,1,\dots ,1)} satisfies L v 0 = 0 . {\textstyle L\mathbf {v} _{0}=\mathbf {0} .} This also implies that the Laplacian matrix is singular. The number of connected components in the graph is the dimension of the nullspace of the Laplacian and the algebraic multiplicity of the 0 eigenvalue. The smallest non-zero eigenvalue of L is called the spectral gap. The second smallest eigenvalue of L (which could be zero) is the algebraic connectivity (or Fiedler value) of G and approximates the sparsest cut of a graph. The Laplacian is an operator on the n-dimensional vector space of functions f : V → R {\textstyle f:V\to \mathbb {R} } , where V {\textstyle V} is the vertex set of G, and n = | V | {\textstyle n=|V|} . When G is k-regular, the normalized Laplacian is: L = 1 k L = I − 1 k A {\textstyle {\mathcal {L}}={\tfrac {1}{k}}L=I-{\tfrac {1}{k}}A} , where A is the adjacency matrix and I is an identity matrix. For a graph with multiple connected components, L is a block diagonal matrix, where each block is the respective Laplacian matrix for each component, possibly after reordering the vertices (i.e. L is permutation-similar to a block diagonal matrix). The trace of the Laplacian matrix L is equal to 2 m {\textstyle 2m} where m {\textstyle m} is the number of edges of the considered graph. Now consider an eigendecomposition of L {\textstyle L} , with unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} and corresponding eigenvalues λ i {\textstyle \lambda _{i}} ; writing L = M T M {\textstyle L=M^{\textsf {T}}M} , where M = B T {\textstyle M=B^{\textsf {T}}} is the transpose of the oriented incidence matrix, we have: λ i = v i T L v i = v i T M T M v i = ( M v i ) T ( M v i ) . 
{\displaystyle {\begin{aligned}\lambda _{i}&=\mathbf {v} _{i}^{\textsf {T}}L\mathbf {v} _{i}\\&=\mathbf {v} _{i}^{\textsf {T}}M^{\textsf {T}}M\mathbf {v} _{i}\\&=\left(M\mathbf {v} _{i}\right)^{\textsf {T}}\left(M\mathbf {v} _{i}\right).\\\end{aligned}}} Because λ i {\textstyle \lambda _{i}} can be written as the inner product of the vector M v i {\textstyle M\mathbf {v} _{i}} with itself, this shows that λ i ≥ 0 {\textstyle \lambda _{i}\geq 0} and so the eigenvalues of L {\textstyle L} are all non-negative. All eigenvalues of the normalized symmetric Laplacian satisfy 0 = μ0 ≤ … ≤ μn−1 ≤ 2. These eigenvalues (known as the spectrum of the normalized Laplacian) relate well to other graph invariants for general graphs. One can check that: L rw = I − D − 1 2 ( I − L sym ) D 1 2 {\displaystyle L^{\text{rw}}=I-D^{-{\frac {1}{2}}}\left(I-L^{\text{sym}}\right)D^{\frac {1}{2}}} , i.e., L rw {\textstyle L^{\text{rw}}} is similar to the normalized Laplacian L sym {\textstyle L^{\text{sym}}} . For this reason, even if L rw {\textstyle L^{\text{rw}}} is in general not symmetric, it has real eigenvalues — exactly the same as the eigenvalues of the normalized symmetric Laplacian L sym {\textstyle L^{\text{sym}}} . == Interpretation as the discrete Laplace operator approximating the continuous Laplacian == The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method. (See Discrete Poisson equation) In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. 
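This finite-difference reading can be sketched on a path graph, whose Laplacian is exactly the 1-D second-difference matrix with free (homogeneous Neumann) ends; applied to samples of the hypothetical test function f(x) = x², it returns −f″ = −2 at the interior grid points:

```python
import numpy as np

# Path graph on 5 vertices: the grid points of a 1-D finite-difference stencil
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A  # tridiagonal second-difference matrix

# L applied to samples of f(x) = x^2 gives -f'' = -2 at interior points (h = 1)
x = np.arange(n, dtype=float)
f = x ** 2
print((L @ f)[1:-1])  # [-2. -2. -2.]
```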
Such an interpretation allows one, for example, to generalize the Laplacian matrix to graphs with infinitely many vertices and edges, leading to a Laplacian matrix of infinite size. == Generalizations and extensions of the Laplacian matrix == === Generalized Laplacian === The generalized Laplacian Q {\displaystyle Q} is defined as: { Q i , j < 0 if i ≠ j and v i is adjacent to v j Q i , j = 0 if i ≠ j and v i is not adjacent to v j any number otherwise . {\displaystyle {\begin{cases}Q_{i,j}<0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\Q_{i,j}=0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is not adjacent to }}v_{j}\\{\mbox{any number}}&{\mbox{otherwise}}.\end{cases}}} Notice that the ordinary Laplacian is a generalized Laplacian. === Admittance matrix of an AC circuit === The Laplacian of a graph was first introduced to model electrical networks. In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances. The weight of edge (i, j) is, by convention, minus the reciprocal of the impedance directly between i and j. In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian. Such a matrix is usually called an "admittance matrix", denoted Y {\displaystyle Y} , rather than a "Laplacian". This is one of the rare applications that give rise to complex symmetric matrices. === Magnetic Laplacian === There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix.
The magnetic Laplacian for a directed graph with real weights w i j {\displaystyle w_{ij}} is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with the complex entries γ q ( i , j ) = e i 2 π q ( w i j − w j i ) {\displaystyle \gamma _{q}(i,j)=e^{i2\pi q(w_{ij}-w_{ji})}} , which encode the edge direction into a phase in the complex plane. In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator describing the phenomenology of a free charged particle on a graph subject to the action of a magnetic field; the parameter q {\displaystyle q} is called the electric charge. === Deformed Laplacian === The deformed Laplacian is commonly defined as Δ ( s ) = I − s A + s 2 ( D − I ) {\displaystyle \Delta (s)=I-sA+s^{2}(D-I)} where I is the identity matrix, A is the adjacency matrix, D is the degree matrix, and s is a (complex-valued) number. The standard Laplacian is just Δ ( 1 ) {\textstyle \Delta (1)} and Δ ( − 1 ) = D + A {\textstyle \Delta (-1)=D+A} is the signless Laplacian. === Signless Laplacian === The signless Laplacian is defined as Q = D + A {\displaystyle Q=D+A} where D {\displaystyle D} is the degree matrix, and A {\displaystyle A} is the adjacency matrix. Like the signed Laplacian L {\displaystyle L} , the signless Laplacian Q {\displaystyle Q} is also positive semi-definite, as it can be factored as Q = R R T {\displaystyle Q=RR^{\textsf {T}}} where R {\textstyle R} is the incidence matrix. Q {\displaystyle Q} has a 0-eigenvector if and only if the graph has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as x T Q x = x T R R T x ⟹ R T x = 0 .
{\displaystyle \mathbf {x} ^{\textsf {T}}Q\mathbf {x} =\mathbf {x} ^{\textsf {T}}RR^{\textsf {T}}\mathbf {x} \implies R^{\textsf {T}}\mathbf {x} =\mathbf {0} .} This has a solution where x ≠ 0 {\displaystyle \mathbf {x} \neq \mathbf {0} } if and only if the graph has a bipartite connected component. === Directed multigraphs === An analogue of the Laplacian matrix can be defined for directed multigraphs. In this case the Laplacian matrix L is defined as L = D − A {\displaystyle L=D-A} where D is a diagonal matrix with Di,i equal to the outdegree of vertex i and A is a matrix with Ai,j equal to the number of edges from i to j (including loops). == Open source software implementations == SciPy NetworkX Julia == Application software == scikit-learn Spectral Clustering PyGSP: Graph Signal Processing in Python megaman: Manifold Learning for Millions of Points smoothG Laplacian Change Point Detection for Dynamic Graphs (KDD 2020) LaplacianOpt (A Julia Package for Maximizing Laplacian's Second Eigenvalue of Weighted Graphs) LigMG (Large Irregular Graph MultiGrid) Laplacians.jl == See also == Stiffness matrix Resistance distance Transition rate matrix Calculus on finite weighted graphs Graph Fourier transform == References ==
Wikipedia:Lapped transform#0
In signal processing, a lapped transform is a type of linear discrete block transformation where the basis functions of the transformation overlap the block boundaries, yet the number of coefficients overall resulting from a series of overlapping block transforms remains the same as if a non-overlapping block transform had been used. Lapped transforms substantially reduce the blocking artifacts that otherwise occur with block transform coding techniques, in particular those using the discrete cosine transform. The best known example is the modified discrete cosine transform used in the MP3, Vorbis, AAC, and Opus audio codecs. Although the best-known application of lapped transforms has been for audio coding, they have also been used for video and image coding and various other applications. They are used in video coding for coding I-frames in VC-1 and for image coding in the JPEG XR format. More recently, a form of lapped transform has also been used in the development of the Daala video coding format. == References ==
Wikipedia:Large deviations theory#0
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures. Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events. == Introductory examples == Any large deviation is done in the least unlikely of all the unlikely ways! === An elementary example === Consider a sequence of independent tosses of a fair coin. The possible outcomes are heads or tails. Let us denote the possible outcome of the i-th trial by X i {\displaystyle X_{i}} , where we encode head as 1 and tail as 0. Now let M N {\displaystyle M_{N}} denote the mean value after N {\displaystyle N} trials, namely M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} . Then M N {\displaystyle M_{N}} lies between 0 and 1. From the law of large numbers it follows that as N grows, M N {\displaystyle M_{N}} converges in probability to 0.5 = E ⁡ [ X ] {\displaystyle 0.5=\operatorname {E} [X]} (the expected value of a single coin toss). Moreover, by the central limit theorem, it follows that M N {\displaystyle M_{N}} is approximately normally distributed for large N {\displaystyle N} . The central limit theorem can provide more detailed information about the behavior of M N {\displaystyle M_{N}} than the law of large numbers.
For example, we can approximately find a tail probability of M N {\displaystyle M_{N}} – the probability that M N {\displaystyle M_{N}} is greater than some value x {\displaystyle x} – for a fixed value of N {\displaystyle N} . However, the approximation by the central limit theorem may not be accurate if x {\displaystyle x} is far from E ⁡ [ X i ] {\displaystyle \operatorname {E} [X_{i}]} and N {\displaystyle N} is not sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as N → ∞ {\displaystyle N\to \infty } . However, the large deviation theory can provide answers for such problems. Let us make this statement more precise. For a given value 0.5 < x < 1 {\displaystyle 0.5<x<1} , let us compute the tail probability P ( M N > x ) {\displaystyle P(M_{N}>x)} . Define I ( x ) = x ln ⁡ x + ( 1 − x ) ln ⁡ ( 1 − x ) + ln ⁡ 2 {\displaystyle I(x)=x\ln {x}+(1-x)\ln(1-x)+\ln {2}} . Note that the function I ( x ) {\displaystyle I(x)} is a convex, nonnegative function that is zero at x = 1 2 {\displaystyle x={\tfrac {1}{2}}} and increases as x {\displaystyle x} approaches 1 {\displaystyle 1} . It is the negative of the Bernoulli entropy with p = 1 2 {\displaystyle p={\tfrac {1}{2}}} ; that it's appropriate for coin tosses follows from the asymptotic equipartition property applied to a Bernoulli trial. Then by Chernoff's inequality, it can be shown that P ( M N > x ) < exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)<\exp(-NI(x))} . This bound is rather sharp, in the sense that I ( x ) {\displaystyle I(x)} cannot be replaced with a larger number which would yield a strict inequality for all positive N {\displaystyle N} . (However, the exponential bound can still be reduced by a subexponential factor on the order of 1 / N {\displaystyle 1/{\sqrt {N}}} ; this follows from the Stirling approximation applied to the binomial coefficient appearing in the Bernoulli distribution.) 
Hence, we obtain the following result: P ( M N > x ) ≈ exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)\approx \exp(-NI(x))} . The probability P ( M N > x ) {\displaystyle P(M_{N}>x)} decays exponentially as N → ∞ {\displaystyle N\to \infty } at a rate depending on x. This formula approximates any tail probability of the sample mean of i.i.d. variables and gives its convergence as the number of samples increases. === Large deviations for sums of independent random variables === In the above example of coin-tossing we explicitly assumed that each toss is an independent trial, and the probability of getting head or tail is always the same. Let X , X 1 , X 2 , … {\displaystyle X,X_{1},X_{2},\ldots } be independent and identically distributed (i.i.d.) random variables whose common distribution satisfies a certain growth condition. Then the following limit exists: lim N → ∞ 1 N ln ⁡ P ( M N > x ) = − I ( x ) {\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\ln P(M_{N}>x)=-I(x)} . Here M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} , as before. Function I ( ⋅ ) {\displaystyle I(\cdot )} is called the "rate function" or "Cramér function" or sometimes the "entropy function". The above-mentioned limit means that for large N {\displaystyle N} , P ( M N > x ) ≈ exp ⁡ [ − N I ( x ) ] {\displaystyle P(M_{N}>x)\approx \exp[-NI(x)]} , which is the basic result of large deviations theory. If we know the probability distribution of X {\displaystyle X} , an explicit expression for the rate function can be obtained. This is given by a Legendre–Fenchel transformation, I ( x ) = sup θ > 0 [ θ x − λ ( θ ) ] {\displaystyle I(x)=\sup _{\theta >0}[\theta x-\lambda (\theta )]} , where λ ( θ ) = ln ⁡ E ⁡ [ exp ⁡ ( θ X ) ] {\displaystyle \lambda (\theta )=\ln \operatorname {E} [\exp(\theta X)]} is called the cumulant generating function (CGF) and E {\displaystyle \operatorname {E} } denotes the mathematical expectation. 
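Continuing the fair-coin example, the sketch below (illustrative SciPy code; the choices x = 0.6 and N = 10000 are ours, not from the article) evaluates the Legendre–Fenchel transform numerically and checks it against the closed-form rate function and the exact binomial tail:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def cgf(theta):
    # Cumulant generating function of one fair-coin toss X in {0, 1}:
    # lambda(theta) = ln E[exp(theta X)] = ln((1 + e^theta) / 2).
    return np.log((1.0 + np.exp(theta)) / 2.0)

def rate(x):
    # Legendre-Fenchel transform I(x) = sup_theta [theta * x - lambda(theta)].
    return -minimize_scalar(lambda t: cgf(t) - t * x).fun

x = 0.6
I_closed = x * np.log(x) + (1 - x) * np.log(1 - x) + np.log(2)
assert abs(rate(x) - I_closed) < 1e-6    # numeric sup matches the closed form

# -(1/N) ln P(M_N > x) approaches I(x) as N grows (exact binomial tail):
N = 10_000
tail = binom.sf(round(x * N), N, 0.5)    # P(M_N > x) for a fair coin
assert tail < np.exp(-N * I_closed)      # Chernoff upper bound holds
assert abs(-np.log(tail) / N - I_closed) / I_closed < 0.05
```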
If X {\displaystyle X} follows a normal distribution, the rate function becomes a parabola with its apex at the mean of the normal distribution. If { X i } {\displaystyle \{X_{i}\}} is an irreducible and aperiodic Markov chain, the variant of the basic large deviations result stated above may hold. === Moderate deviations for sums of independent random variables === The previous example controlled the probability of the event [ M N > x ] {\displaystyle [M_{N}>x]} , that is, the concentration of the law of M N {\displaystyle M_{N}} on the compact set [ − x , x ] {\displaystyle [-x,x]} . It is also possible to control the probability of the event [ M N > x a N ] {\displaystyle [M_{N}>xa_{N}]} for some sequence a N → 0 {\displaystyle a_{N}\to 0} . The following is an example of a moderate deviations principle: In particular, the limit case a N = N {\displaystyle a_{N}={\sqrt {N}}} is the central limit theorem. == Formal definition == Given a Polish space X {\displaystyle {\mathcal {X}}} let { P N } {\displaystyle \{\mathbb {P} _{N}\}} be a sequence of Borel probability measures on X {\displaystyle {\mathcal {X}}} , let { a N } {\displaystyle \{a_{N}\}} be a sequence of positive real numbers such that lim N a N = ∞ {\displaystyle \lim _{N}a_{N}=\infty } , and finally let I : X → [ 0 , ∞ ] {\displaystyle I:{\mathcal {X}}\to [0,\infty ]} be a lower semicontinuous functional on X . 
{\displaystyle {\mathcal {X}}.} The sequence { P N } {\displaystyle \{\mathbb {P} _{N}\}} is said to satisfy a large deviation principle with speed { a n } {\displaystyle \{a_{n}\}} and rate I {\displaystyle I} if, and only if, for each Borel measurable set E ⊂ X {\displaystyle E\subset {\mathcal {X}}} , − inf x ∈ E ∘ I ( x ) ≤ lim _ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ lim ¯ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ − inf x ∈ E ¯ I ( x ) {\displaystyle -\inf _{x\in E^{\circ }}I(x)\leq \varliminf _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq \varlimsup _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq -\inf _{x\in {\overline {E}}}I(x)} , where E ¯ {\displaystyle {\overline {E}}} and E ∘ {\displaystyle E^{\circ }} denote respectively the closure and interior of E {\displaystyle E} . == Brief history == The first rigorous results concerning large deviations are due to the Swedish mathematician Harald Cramér, who applied them to model the insurance business. From the point of view of an insurance company, the earning is at a constant rate per month (the monthly premium) but the claims come randomly. For the company to be successful over a certain period of time (preferably many months), the total earning should exceed the total claim. Thus to estimate the premium you have to ask the following question: "What should we choose as the premium q {\displaystyle q} such that over N {\displaystyle N} months the total claim C = Σ X i {\displaystyle C=\Sigma X_{i}} should be less than N q {\displaystyle Nq} ?" This is clearly the same question asked by the large deviations theory. Cramér gave a solution to this question for i.i.d. random variables, where the rate function is expressed as a power series. A very incomplete list of mathematicians who have made important advances would include Petrov, Sanov, S.R.S. Varadhan (who has won the Abel prize for his contribution to the theory), D. Ruelle, O.E. Lanford, Mark Freidlin, Alexander D. Wentzell, Amir Dembo, and Ofer Zeitouni. 
== Applications == Principles of large deviations may be effectively applied to gather information out of a probabilistic model. Thus, theory of large deviations finds its applications in information theory and risk management. In physics, the best known application of large deviations theory arise in thermodynamics and statistical mechanics (in connection with relating entropy with rate function). === Large deviations and entropy === The rate function is related to the entropy in statistical mechanics. This can be heuristically seen in the following way. In statistical mechanics the entropy of a particular macro-state is related to the number of micro-states which corresponds to this macro-state. In our coin tossing example the mean value M N {\displaystyle M_{N}} could designate a particular macro-state. And the particular sequence of heads and tails which gives rise to a particular value of M N {\displaystyle M_{N}} constitutes a particular micro-state. Loosely speaking a macro-state having a higher number of micro-states giving rise to it, has higher entropy. And a state with higher entropy has a higher chance of being realised in actual experiments. The macro-state with mean value of 1/2 (as many heads as tails) has the highest number of micro-states giving rise to it and it is indeed the state with the highest entropy. And in most practical situations we shall indeed obtain this macro-state for large numbers of trials. The "rate function" on the other hand measures the probability of appearance of a particular macro-state. The smaller the rate function the higher is the chance of a macro-state appearing. In our coin-tossing the value of the "rate function" for mean value equal to 1/2 is zero. In this way one can see the "rate function" as the negative of the "entropy". There is a relation between the "rate function" in large deviations theory and the Kullback–Leibler divergence, the connection is established by Sanov's theorem (see Sanov and Novak, ch. 14.5). 
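This connection can be sketched numerically (illustrative code; the test points are arbitrary): by Sanov's theorem, the fair-coin rate function I(x) coincides with the Kullback–Leibler divergence of a Bernoulli(x) law from the Bernoulli(1/2) law:

```python
import numpy as np

def kl_bernoulli(x, p):
    """Kullback-Leibler divergence D(Bern(x) || Bern(p))."""
    return x * np.log(x / p) + (1 - x) * np.log((1 - x) / (1 - p))

def rate(x):
    """Rate function of the fair-coin sample mean from the elementary example."""
    return x * np.log(x) + (1 - x) * np.log(1 - x) + np.log(2)

for x in (0.55, 0.6, 0.75, 0.9):
    assert np.isclose(kl_bernoulli(x, 0.5), rate(x))
```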
In a special case, large deviations are closely related to the concept of Gromov–Hausdorff limits. == See also == Large deviation principle Cramér's large deviation theorem Chernoff's inequality Sanov's theorem Contraction principle (large deviations theory), a result on how large deviations principles "push forward" Freidlin–Wentzell theorem, a large deviations principle for Itō diffusions Legendre transformation, Ensemble equivalence is based on this transformation. Laplace principle, a large deviations principle in Rd Laplace's method Schilder's theorem, a large deviations principle for Brownian motion Varadhan's lemma Extreme value theory Large deviations of Gaussian random functions == References == == Bibliography == Special invited paper: Large deviations by S. R. S. Varadhan The Annals of Probability 2008, Vol. 36, No. 2, 397–419 doi:10.1214/07-AOP348 A basic introduction to large deviations: Theory, applications, simulations, Hugo Touchette, arXiv:1106.4146. Entropy, Large Deviations and Statistical Mechanics by R.S. Ellis, Springer Publication. ISBN 3-540-29059-1 Large Deviations for Performance Analysis by Alan Weiss and Adam Shwartz. Chapman and Hall ISBN 0-412-06311-5 Large Deviations Techniques and Applications by Amir Dembo and Ofer Zeitouni. Springer ISBN 0-387-98406-2 A course on large deviations with an introduction to Gibbs measures by Firas Rassoul-Agha and Timo Seppäläinen. Grad. Stud. Math., 162. American Mathematical Society ISBN 978-0-8218-7578-0 Random Perturbations of Dynamical Systems by M.I. Freidlin and A.D. Wentzell. Springer ISBN 0-387-98362-7 "Large Deviations for Two Dimensional Navier-Stokes Equation with Multiplicative Noise", S. S. Sritharan and P. Sundar, Stochastic Processes and Their Applications, Vol. 116 (2006) 1636–1659.[2] "Large Deviations for the Stochastic Shell Model of Turbulence", U. Manna, S. S. Sritharan and P. Sundar, NoDEA Nonlinear Differential Equations Appl. 16 (2009), no. 4, 493–521.[3]
Wikipedia:Larisa Maksimova#0
Larisa Lvovna Maksimova (Russian: Лариса Львовна Максимова; 5 November 1943 – 4 April 2025) was a Russian mathematical logician known for her research in non-classical logic. == Education and career == Maksimova was born on 5 November 1943, in Kochenyovo, the daughter of two biologists who had temporarily moved there from Tomsk State University to escape the war. She grew up in Novosibirsk, where her parents became geographers at the Novosibirsk Pedagogical Institute. She studied mechanics and mathematics at Novosibirsk State University, publishing her first paper on Wilhelm Ackermann's axioms for strict implication in relevance logic in 1964 and graduating in 1965. Meanwhile, in 1964, she joined the Sobolev Institute of Mathematics, and remained there for the rest of her career. She defended her doctorate at Novosibirsk State University in 1968, a year after the death of her primary mentor at the university, Anatoly Maltsev. She completed a habilitation at the Sobolev Institute in 1986, and was promoted to full professor in 1993. She died on 4 April 2025, at the age of 81. == Recognition == Maksimova won the Maltsev Prize of the Russian Academy of Sciences in 2009, for her papers on definability and interpolation in non-classical logic. With several others from the Sobolev Institute, she won the Russian Federation Government Prize in Education in 2010. She is the subject of a festschrift, Larisa Maksimova on Implication, Interpolation, and Definability (Sergei Odintsov, ed., Springer, 2018). == Books == Maksimova's books include Problems in Set Theory, Mathematical Logic and the Theory of Algorithms (with Igor Lavrov, Izdat Nauka, 1975, 1984, and 1995; translated into English by Valentin Shehtman, Kluwer, 2003) Interpolation and Definability: Modal and Intuitionistic Logics (with Dov Gabbay, Clarendon Press, 2005) == References ==
Wikipedia:Lars Ahlfors#0
Lars Valerian Ahlfors (18 April 1907 – 11 October 1996) was a Finnish mathematician, remembered for his work in the field of Riemann surfaces and his textbook on complex analysis. == Background == Ahlfors was born in Helsinki, Finland. His mother, Sievä Helander, died at his birth. His father, Axel Ahlfors, was a professor of engineering at the Helsinki University of Technology. The Ahlfors family was Swedish-speaking, so he first attended the private school Nya svenska samskolan, where all classes were taught in Swedish. Ahlfors studied at the University of Helsinki from 1924, graduating in 1928, having studied under Ernst Lindelöf and Rolf Nevanlinna. He assisted Nevanlinna in 1929 with his work on Denjoy's conjecture on the number of asymptotic values of an entire function. In 1929 Ahlfors published the first proof of this conjecture, now known as the Denjoy–Carleman–Ahlfors theorem. It states that the number of asymptotic values approached by an entire function of order ρ along curves in the complex plane going toward infinity is less than or equal to 2ρ. He completed his doctorate at the University of Helsinki in 1930. == Career == Ahlfors worked as an associate professor at the University of Helsinki from 1933 to 1936. In 1936 he was one of the first two people to be awarded the Fields Medal (the other was Jesse Douglas). In 1935 Ahlfors visited Harvard University. He returned to Finland in 1938 to take up a professorship at the University of Helsinki. The outbreak of war in 1939 led to problems, although Ahlfors was unfit for military service. He was offered a position at the Swiss Federal Institute of Technology in Zurich in 1944 and finally managed to travel there in March 1945. He did not enjoy his time in Switzerland, so in 1946 he jumped at a chance to leave, returning to work at Harvard, where he remained until his retirement in 1977; he was William Caspar Graustein Professor of Mathematics from 1964.
Ahlfors was a visiting scholar at the Institute for Advanced Study in 1962 and again in 1966. He was awarded the Wihuri Prize in 1968 and the Wolf Prize in Mathematics in 1981. He served as the Honorary President of the International Congress of Mathematicians in 1986 at Berkeley, California, in celebration of his 50th year of the award of his Fields Medal. His book Complex Analysis (1953) is the classic text on the subject and is almost certainly referenced in any more recent text which makes heavy use of complex analysis. Ahlfors wrote several other significant books, including Riemann surfaces (1960) and Conformal invariants (1973). He made decisive contributions to meromorphic curves, value distribution theory, Riemann surfaces, conformal geometry, quasiconformal mappings and other areas during his career. == Personal life == In 1933, he married Erna Lehnert, an Austrian who with her parents had first settled in Sweden and then in Finland. The couple had three daughters. Ahlfors died of pneumonia at the Willowwood nursing home in Pittsfield, Massachusetts in 1996. == See also == Ahlfors finiteness theorem Ahlfors function Ahlfors measure conjecture Beurling–Ahlfors transform Schwarz–Ahlfors–Pick theorem Measurable Riemann mapping theorem == Bibliography == Articles Ahlfors, Lars V. An extension of Schwarz's lemma. Trans. Amer. Math. Soc. 43 (1938), no. 3, 359–364. doi:10.2307/1990065 Ahlfors, Lars; Beurling, Arne. Conformal invariants and function-theoretic null-sets. Acta Math. 83 (1950), 101–129. doi:10.1007/BF02392634 Beurling, A.; Ahlfors, L. The boundary correspondence under quasiconformal mappings. Acta Math. 96 (1956), 125–142. doi:10.1007/BF02392360 Ahlfors, Lars; Bers, Lipman. Riemann's mapping theorem for variable metrics. Ann. of Math. (2) 72 (1960), 385–404. doi:10.2307/1970141 Ahlfors, Lars Valerian. Collected papers. Vol. 1. 1929–1955. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. 
xix+520 pp. ISBN 3-7643-3075-9 Ahlfors, Lars Valerian. Collected papers. Vol. 2. 1954–1979. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. xix+515 pp. ISBN 3-7643-3076-7 Books Ahlfors, Lars V. Complex analysis. An introduction to the theory of analytic functions of one complex variable. Third edition. International Series in Pure and Applied Mathematics. McGraw-Hill Book Co., New York, 1978. xi+331 pp. ISBN 0-07-000657-1 Ahlfors, Lars V. Conformal invariants. Topics in geometric function theory. Reprint of the 1973 original. With a foreword by Peter Duren, F. W. Gehring and Brad Osgood. AMS Chelsea Publishing, Providence, RI, 2010. xii+162 pp. ISBN 978-0-8218-5270-5 Ahlfors, Lars V. Lectures on quasiconformal mappings. Second edition. With supplemental chapters by C. J. Earle, I. Kra, M. Shishikura and J. H. Hubbard. University Lecture Series, 38. American Mathematical Society, Providence, RI, 2006. viii+162 pp. ISBN 0-8218-3644-7 Ahlfors, Lars V. Möbius transformations in several dimensions. Ordway Professorship Lectures in Mathematics. University of Minnesota, School of Mathematics, Minneapolis, Minn., 1981. ii+150 pp. Ahlfors, Lars V.; Sario, Leo. Riemann surfaces. Princeton Mathematical Series, No. 26 Princeton University Press, Princeton, N.J. 1960 xi+382 pp. == References == == External links == Media related to Lars Ahlfors at Wikimedia Commons Lars Ahlfors at the Mathematics Genealogy Project Ahlfors entry on Harvard University Mathematics department web site. Papers of Lars Valerian Ahlfors : an inventory (Harvard University Archives) Lars Valerian Ahlfors The MacTutor History of Mathematics page about Ahlfors The Mathematics of Lars Valerian Ahlfors, Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). Lars Valerian Ahlfors (1907–1996), Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). Frederick Gehring (2005). 
"Lars Valerian Ahlfors: a biographical memoir" (PDF). Biographical Memoirs. 87. National Academy of Sciences Biographical Memoir Author profile in the database zbMATH
Wikipedia:Lars Hesselholt#0
Lars Hesselholt (born September 25, 1966) is a Danish mathematician who works as a professor of mathematics at Nagoya University in Japan, as well as holding a temporary position as Niels Bohr Professor at the University of Copenhagen. His research interests include homotopy theory, algebraic K-theory, and arithmetic algebraic geometry. Hesselholt was born in Vejrumbro, a village in the Viborg Municipality of Denmark. He studied at Aarhus University, earning a bachelor's degree in 1988, a master's degree in 1992, and a Ph.D. in 1994; his dissertation, supervised by Ib Madsen, concerned K-theory. After postdoctoral studies at the Mittag-Leffler Institute, he joined the faculty of the Massachusetts Institute of Technology in 1994 as a C.L.E. Moore instructor, and stayed at MIT as an assistant and then associate professor, before moving to Nagoya in 2008. Hesselholt's wife is Japanese, and when he joined the Nagoya faculty he became the first westerner with a full professorship in mathematics in Japan. He is the managing editor of the Nagoya Mathematical Journal. Hesselholt became a Sloan fellow in 1998, and was an invited speaker at the International Congress of Mathematicians in 2002. In 2012, he became one of the inaugural fellows of the American Mathematical Society, and a foreign member of the Royal Danish Academy of Sciences and Letters. == References == == External links == Home page at Copenhagen Home page at Nagoya Google scholar profile
Wikipedia:Lars-Erik Persson#0
Lars-Erik Persson (born 24 September 1944) is a Swedish/Norwegian professor in mathematics, known for his works in Fourier analysis, function spaces, inequalities, interpolation theory and related problems connected to convexity and quasi-monotone functions. Persson comes from the small village of Svanabyn in Dorotea Municipality, Sweden. He received his PhD degree in mathematics at Umeå University in 1974. In 1975 he was employed as an associate professor in mathematics at Luleå University of Technology (LTU), and was appointed full professor in 1994. Since 2019 he has been professor emeritus there. Before his professor appointment at LTU, he was appointed as full professor at UiT The Arctic University of Norway, Campus Narvik (previously Narvik University College), in 1992. He still works as a professor of mathematics at the same university. He was also appointed honorary professor at L.N. Gumilyov Eurasian National University, Kazakhstan, in 2005. Persson has also worked as a part-time professor at Uppsala University, where he is now professor emeritus. For a shorter period he taught at Lund University, Sweden, as professor of mathematics, holding the chair of professor Jaak Peetre. He was appointed senior professor at Karlstad University, Sweden, in 2019. Persson has authored or co-authored approximately 320 journal papers and 16 books. He serves as an editor for seven international journals. He has been President of the Swedish Mathematical Society and has been a member of the National Committee of Mathematics, affiliated with the Royal Swedish Academy of Sciences, since 1995, serving as its secretary from 1995 to 2002. He was also a member of the NT-R board (Mathematics and Technical Mathematics) at the Swedish Research Council for six years, where he was involved in distributing funding for top-tier basic research in Sweden. He chaired the board in 2010 and from 2012 to 2014.
He initiated and was the first director of the Center of Interdisciplinary Mathematics (CIM) at Uppsala University. Persson has been invited as a guest researcher to universities in numerous countries. In November 2015 he was invited to the Collège de France by Fields medalist Pierre-Louis Lions. International conferences were arranged and a journal issue was published in his honor. == References ==
Wikipedia:Latimer–MacDuffee theorem#0
The Latimer–MacDuffee theorem is a theorem in abstract algebra, a branch of mathematics. It is named after Claiborne Latimer and Cyrus Colton MacDuffee, who published it in 1933. Significant contributions to its theory were made later by Olga Taussky-Todd. Let f {\displaystyle f} be a monic, irreducible polynomial of degree n {\displaystyle n} . The Latimer–MacDuffee theorem gives a one-to-one correspondence between Z {\displaystyle \mathbb {Z} } -similarity classes of n × n {\displaystyle n\times n} matrices with characteristic polynomial f {\displaystyle f} and the ideal classes in the order Z [ x ] / ( f ( x ) ) . {\displaystyle \mathbb {Z} [x]/(f(x)).} where ideals are considered equivalent if they are equal up to an overall (nonzero) rational scalar multiple. (Note that this order need not be the full ring of integers, so nonzero ideals need not be invertible.) Since an order in a number field has only finitely many ideal classes (even if it is not the maximal order, and we mean here ideals classes for all nonzero ideals, not just the invertible ones), it follows that there are only finitely many conjugacy classes of matrices over the integers with characteristic polynomial f ( x ) {\displaystyle f(x)} . == References ==
Wikipedia:Lattice reduction#0
In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice. == Nearly orthogonal == One measure of near-orthogonality is the orthogonality defect. This compares the product of the lengths of the basis vectors with the volume of the parallelepiped they define. For perfectly orthogonal basis vectors, these quantities would be the same. Any particular basis of n {\displaystyle n} vectors may be represented by a matrix B {\displaystyle B} , whose columns are the basis vectors b i , i = 1 , … , n {\displaystyle b_{i},i=1,\ldots ,n} . In the fully dimensional case, where the number of basis vectors is equal to the dimension of the space they occupy, this matrix is square, and the volume of the fundamental parallelepiped is simply the absolute value of the determinant of this matrix, det ( B ) {\displaystyle \det(B)} . If the number of vectors is less than the dimension of the underlying space, then the volume is det ( B T B ) {\displaystyle {\sqrt {\det(B^{T}B)}}} . For a given lattice Λ {\displaystyle \Lambda } , this volume is the same for any basis, and hence is referred to as the determinant of the lattice det ( Λ ) {\displaystyle \det(\Lambda )} or lattice constant d ( Λ ) {\displaystyle d(\Lambda )} . The orthogonality defect is the product of the basis vector lengths divided by the parallelepiped volume: δ ( B ) = Π i = 1 n ‖ b i ‖ det ( B T B ) = Π i = 1 n ‖ b i ‖ d ( Λ ) {\displaystyle \delta (B)={\frac {\Pi _{i=1}^{n}\|b_{i}\|}{\sqrt {\det(B^{T}B)}}}={\frac {\Pi _{i=1}^{n}\|b_{i}\|}{d(\Lambda )}}} From the geometric definition it may be appreciated that δ ( B ) ≥ 1 {\displaystyle \delta (B)\geq 1} with equality if and only if the basis is orthogonal. 
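For a two-dimensional basis the defect can be computed directly from this definition (a minimal sketch; the function name and sample bases are illustrative):

```python
import math

def orthogonality_defect(b1, b2):
    """delta(B) = (||b1|| * ||b2||) / |det(B)| for a basis of a 2D lattice."""
    det = b1[0] * b2[1] - b1[1] * b2[0]  # parallelepiped (here: parallelogram) area
    return math.hypot(*b1) * math.hypot(*b2) / abs(det)

print(orthogonality_defect((1, 0), (0, 1)))   # 1.0 for an orthogonal basis
print(orthogonality_defect((1, 0), (10, 1)))  # ~10.05: same lattice, skewed basis
```

Both bases generate the same lattice Z², so the denominator d(Λ) = 1 is unchanged; only the vector lengths, and hence the defect, differ.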
If the lattice reduction problem is defined as finding the basis with the smallest possible defect, then the problem is NP-hard. However, there exist polynomial-time algorithms to find a basis with defect δ ( B ) ≤ c {\displaystyle \delta (B)\leq c} where c is some constant depending only on the number of basis vectors and the dimension of the underlying space (if different). This is a good enough solution in many practical applications. == In two dimensions == For a basis consisting of just two vectors, there is a simple and efficient method of reduction closely analogous to the Euclidean algorithm for the greatest common divisor of two integers. As with the Euclidean algorithm, the method is iterative; at each step the larger of the two vectors is reduced by adding or subtracting an integer multiple of the smaller vector. The pseudocode of the algorithm, often known as Lagrange's algorithm or the Lagrange–Gauss algorithm, is as follows: Input: ( u , v ) {\textstyle (u,v)} a basis for the lattice L {\textstyle L} . Assume that | | v | | ≤ | | u | | {\textstyle ||v||\leq ||u||} , otherwise swap them. Output: A basis ( u , v ) {\textstyle (u,v)} with | | u | | = λ 1 ( L ) , | | v | | = λ 2 ( L ) {\textstyle ||u||=\lambda _{1}(L),||v||=\lambda _{2}(L)} . While | | v | | < | | u | | {\textstyle ||v||<||u||} : q := ⌊ ⟨ u , v | | v | | 2 ⟩ ⌉ {\textstyle q:=\left\lfloor {\left\langle u,{\dfrac {v}{||v||^{2}}}\right\rangle }\right\rceil } # Round to nearest integer r := u − q v {\textstyle r:=u-qv} u := v {\textstyle u:=v} v := r {\textstyle v:=r} See the references for further details on Lagrange's algorithm. == Applications == Lattice reduction algorithms are used in a number of modern number-theoretic applications, including the discovery of a spigot algorithm for π {\displaystyle \pi } . 
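The pseudocode above translates almost line for line into Python (a minimal sketch; ties in the rounding step may go either way without affecting correctness):

```python
def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v).
    Returns vectors achieving the successive minima lambda_1, lambda_2."""
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]  # squared Euclidean norm
    if norm2(v) > norm2(u):   # ensure ||v|| <= ||u||, otherwise swap
        u, v = v, u
    while norm2(v) < norm2(u):
        # q = nearest integer to <u, v> / ||v||^2
        q = round((u[0] * v[0] + u[1] * v[1]) / norm2(v))
        r = (u[0] - q * v[0], u[1] - q * v[1])
        u, v = v, r
    return u, v
```

For example, `lagrange_gauss((2, 1), (1, 1))` returns two vectors of length 1, the successive minima of the lattice Z² that this basis generates.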
Although determining the shortest basis is possibly an NP-complete problem, algorithms such as the LLL algorithm can find a short (not necessarily shortest) basis in polynomial time with guaranteed worst-case performance. LLL is widely used in the cryptanalysis of public key cryptosystems. When used to find integer relations, a typical input to the algorithm consists of an augmented n × n {\displaystyle n\times n} identity matrix with the entries in the last column consisting of the n {\displaystyle n} elements (multiplied by a large positive constant w {\displaystyle w} to penalize vectors that do not sum to zero) between which the relation is sought. The LLL algorithm for computing a nearly-orthogonal basis was used to show that integer programming in any fixed dimension can be done in polynomial time. == Algorithms == The following algorithms reduce lattice bases; several public implementations of these algorithms are also listed. == References ==
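The augmented-matrix construction for integer relations can be sketched as follows (the sample numbers, the penalty constant W, and the known relation are illustrative assumptions; in practice an LLL implementation would be run on B to discover the short vector):

```python
W = 10**6          # large penalty constant (illustrative choice)
xs = [3, 7, 10]    # seek integers c with c1*3 + c2*7 + c3*10 = 0
n = len(xs)

# Lattice basis: rows of the identity matrix, each augmented
# with W * x_i in the last column.
B = [[int(i == j) for j in range(n)] + [W * xs[i]] for i in range(n)]

# The relation c = (1, 1, -1) combines the rows into a short lattice vector:
c = (1, 1, -1)
v = [sum(c[i] * B[i][j] for i in range(n)) for j in range(n + 1)]
print(v)  # [1, 1, -1, 0]: the last coordinate vanishes because the relation holds
```

Any integer combination that does not satisfy a relation picks up a multiple of W in the last coordinate and is therefore long, which is why LLL's short output vectors encode relations.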
Wikipedia:Laura Gardini#0
Laura Gardini (born 1952) is an Italian mathematician who studies chaos in dynamical systems, with applications in mathematical finance. She is a professor of mathematics for economic applications at the University of Urbino. == Education and career == Gardini is originally from Ravenna, where she was born on August 21, 1952. She graduated cum laude from the University of Bologna in 1975, and became a researcher for Ente Nazionale Idrocarburi (ENI), the Italian national energy company. During this period she also taught mechanics in the Faculty of Engineering of the University of Ancona. In 1988 she moved to the University of Urbino as a researcher in mathematics for economic applications; she became an associate professor there in 1992 and a full professor in 1994. She is co-editor-in-chief of the Elsevier journal Mathematics and Computers in Simulation. She is one of the founders of an annual workshop on dynamical systems in economics and finance, held at the University of Urbino since 2000. == Recognition == A festschrift in honor of her 60th birthday, Global Analysis of Dynamic Models in Economics and Finance: Essays in Honour of Laura Gardini, was published in 2013. == Books == Gardini is the coauthor of books including: Chaotic Dynamics in Two-Dimensional Noninvertible Maps (with Christian Mira, Alexandra Barugola, and Jean-Claude Cathala, World Scientific, 1996) ISBN 978-981-02-1647-4. Chaos in Discrete Dynamical Systems: A Visual Introduction in 2 Dimensions (with Ralph H. Abraham and Christian Mira, Springer, 1997). Continuous and Discontinuous Piecewise-Smooth One-Dimensional Maps: Invariant Sets and Bifurcation Structures (with Viktor Avrutin, Iryna Sushko, and Fabio Tramontana, World Scientific, 2019). Appunti di matematica finanziaria (with Rita Laura D'Ecclesia, 1998; 8th ed., Giappichelli, 2019). == References == == External links == Laura Gardini publications indexed by Google Scholar
Wikipedia:Laura Grigori#0
Laura Grigori is a French-Romanian applied mathematician and computer scientist known for her research on numerical linear algebra and communication-avoiding algorithms. She is a director of research for the French Institute for Research in Computer Science and Automation (INRIA) in Paris, and heads the "Alpines" scientific computing project jointly affiliated with INRIA and the Laboratoire Jacques-Louis Lions of Sorbonne University. == Education and career == Grigori earned her Ph.D. from Henri Poincaré University in 2001. Her dissertation, Prédiction de structure et algorithmique parallèle pour la factorisation LU des matrices creuses, concerned parallel algorithms for LU decomposition of sparse matrices, and was supervised by Michel Cosnard. After postdoctoral research at the University of California, Berkeley and the Lawrence Berkeley National Laboratory, she became a researcher for INRIA in 2004, and became the head of the Alpines project in 2013. In 2021, she joined the SIAM Council as a member-at-large. == Recognition == A 2012 paper on communication-avoiding algorithms for parallel matrix decomposition by Grigori with James Demmel, Mark Hoemmen, and Julien Langou won the 2016 Society for Industrial and Applied Mathematics (SIAM) Activity Group on Supercomputing Best Paper Prize for the best paper on parallel scientific and engineering computing from the previous four years. Grigori has been an invited plenary speaker at many international conferences on scientific computing. In 2020 Grigori was named a SIAM Fellow "for contributions to numerical linear algebra, including communication-avoiding algorithms". == References == == External links == Laura Grigori publications indexed by Google Scholar
Wikipedia:Laura Martignon#0
Laura Martignon (born 1952) is a Colombian and Italian professor and scientist. From 2003 until 2020 she served as a Professor of Mathematics and Mathematical Education at the Ludwigsburg University of Education. Until 2017 she was an Adjunct Scientist of the Max Planck Institute for Human Development in Berlin, where she previously worked as a Senior Researcher. She also worked for ten years as a Mathematics Professor at the University of Brasília and spent a period of one and a half years as a visiting scholar at the Hebrew University of Jerusalem. == Education == Martignon obtained a bachelor's degree in Mathematics at the Universidad Nacional de Colombia in Bogotá in 1971, a master's degree in Mathematics in 1975, and a Dr. rer. nat. in Mathematics at the University of Tübingen in 1978. She obtained her "enquadramento" (tenure) at the University of Brasília in 1984 and her German Habilitation in Neuroinformatics at the University of Ulm, Germany, in 1998. == Academic contributions == Martignon specialized in Mathematics Education and, as an applied mathematician, in mathematical modeling, collaborating in interdisciplinary scientific contexts. Together with the physicist Thomas Seligman, she applied functional analysis to determine criteria for the applicability of integral transforms in n-body reaction calculations and to construct Hilbert spaces for the embedding of observables and of density matrices. In Neuroinformatics she modeled synchronization in the spiking events of groups of neurons: with her neuroscience colleagues Günther Palm, Sonja Grün, Ad Aertsen, Hermann von Hasseln, and Gustavo Deco, and the statistician Kathryn Laskey, she laid the basis for valid measurements of higher-order synchronizations. Her recent contributions have been in probabilistic reasoning, decision making, and their connections with Mathematics Education. 
In 1995 she was one of the founding members of the ABC Center for Adaptive Behavior and Cognition, directed by Gerd Gigerenzer first in Munich (1995–1997) at the Max Planck Institute for Psychological Research and then in Berlin at the Max Planck Institute for Human Development (since 1997). With colleagues from ABC, mainly Ulrich Hoffrage, she modeled the take-the-best heuristic as a non-compensatory linear model for comparison, providing a first partial characterization of its ecological rationality [MH]. She is best known for having conceptualized and defined fast-and-frugal trees for classification and decision, mainly with Konstantinos Katsikopoulos and Jan Woike [MKW] [WHM], proving their fundamental properties and creating a theoretical bridge from natural frequencies to fast and frugal heuristics for classification and decision. Today her work on reasoning motivates most of her research in Mathematics Education. With Stefan Krauss, Rolf Biehler, Joachim Engel, Christoph Wassner, and Sebastian Kuntze she has propagated the tenets of the ABC Group on the advantages of natural information formats and decision heuristics in school and as a topic of Mathematics Education [MK]. She has collaborated with Keith Stenning, studying probability-free judgement based on defeasible logics and its impact on Mathematics Education [SMV]. She has also done research on Gender in Mathematics Education, leading a project on the topic at her university and founding the review journal Mathematik und Gender. For six years she was the representative of the Working Group Frauen und Mathematik of the German Society of Mathematics Education (GDM). == See also == Max Planck Institute for Human Development Mathematics and Gender at the Ludwigsburg University == References == == Selected publications == === Books === Wer Wagt, Gewinnt? 
(2019) Laura Martignon & Ulrich Hoffrage, Hogrefe: Bern. Nachhaltigkeit und Gerechtigkeit: Grundlagen und schulische Konsequenzen (2008) de Haan, G., Kamp, G., Lerch, A., Martignon, L., Müller-Christ, G., Nutzinger, H.G., Springer: New York. Simple Heuristics that Make us Smart (1999) Gigerenzer, Todd and the ABC Group, Oxford University Press. Matrizes Positivas (IMPA Press). === Articles === == Patents == Patents by Inventor Laura Martignon == External links == https://scholar.google.de/citations?hl=de&user=N-OrirMAAAAJ Interview with the SWR (South-West Radio Channel) on "Wie wir uns von Statistiken täuschen lassen" ("How we let statistics deceive us") Mathe bringt Glück ("Math brings luck") – on Deutschlandfunk Gender Curricula
Wikipedia:Laura Matusevich#0
Laura Felicia Matusevich is an Argentine mathematician. == Birth and Education == Matusevich was born in Córdoba, Argentina, and earned her undergraduate degree from the Universidad Nacional de Córdoba. She earned her PhD from the University of California, Berkeley in 2002. == Career == From 2003 until 2004 Matusevich was a Benjamin Peirce Assistant Professor at Harvard University. From 2004 until 2006 she was a tenure-track assistant professor at the University of Pennsylvania. In 2005, she began as a tenure-track assistant professor at Texas A&M University, where she was promoted to associate professor in 2009 and to full professor in 2017. == Publications == Matusevich has published 34 research articles as of 16 March 2022. == Recognition == Matusevich is the recipient of multiple awards and honors, having been an Alfred P. Sloan Research Fellow, an NSF Postdoctoral Fellow, and an Antorchas Fellow (one of 25 people awarded this nationwide in Argentina), among others. == References ==
Wikipedia:Laura Ortíz-Bobadilla#0
Laura Ortíz-Bobadilla is a Mexican mathematician specializing in differential geometry, and especially in holomorphic foliations and the limit cycles of dynamical systems. She is a researcher in the Institute of Mathematics of the National Autonomous University of Mexico (UNAM). == Education and career == Ortíz-Bobadilla is originally from Mexico City. She studied mathematics at UNAM, earning bachelor's and master's degrees under the mentorship of José Antonio Seade Kuri and Xavier Gómez-Mont, respectively. She completed a PhD in 1991 at the Steklov Institute of Mathematics in Moscow, Russia; her dissertation, Analytic Classification of Complex Linear Vector Fields: Case of Nontrivial Jordan Cell, was supervised by Yulij Ilyashenko. She has been a researcher in the Institute of Mathematics since 1992. == Book == With Xavier Gómez-Mont, Ortíz-Bobadilla is the author of a Spanish-language textbook on holomorphic dynamical systems on surfaces, Sistemas dinámicos holomorfos en superficies. == Recognition == Ortíz-Bobadilla is a member of the Mexican Academy of Sciences. In 2020, UNAM gave her their National University Award for Teaching in the Exact Sciences. == References == == External links == Laura Ortíz-Bobadilla publications indexed by Google Scholar
Wikipedia:Lauren Lynn Rose#0
Lauren Lynn Rose is an Associate Professor of Mathematics at Bard College and the founder of several mathematical outreach programs. == Professional career == Rose received her B.A. in Mathematics from Tufts University. She received her Master of Science and Ph.D. in Mathematics from Cornell University in 1988. Her dissertation, The Structure of Modules of Splines over Polynomial Rings, was supervised by Louis Billera. Rose did a post-doc and taught briefly at Ohio State University. She was an Associate Professor of Mathematics at Wellesley College from 1990 to 1997 before she began teaching at Bard College in 1997. == Awards and creations == Rose co-founded the Bard Math Circle in 2007 alongside colleague Japheth Wood, and later started the Mid-Hudson Math Teacher's Circle in 2013 and the Girls' Math Club in 2017. The Bard Math Circle was created by students and faculty at Bard College to address the dearth of math enrichment opportunities in the Mid-Hudson Valley for elementary, middle, and high school students. The Girls' Math Club was aimed at improving girls' confidence in math and encouraging them in the field starting in middle school. It was made possible by a $6,000 grant from the Mathematical Association of America known as the Tensor Women and Mathematics Grant. Rose is co-organizer of the Julia Robinson Mathematics Festival Community Math Circle. Rose is a co-creator of the card game EvenQuads, a SET-like game produced in 2021 by the Association for Women in Mathematics (AWM). The EvenQuads deck allows for five different games to be played and has biographies of women mathematicians on the back. Rose is the founder of Math & Girls + Inspiration = Success (MAGPIES), a virtual mathematics outreach program created during the academic year 2020–2021 to address the lack of outreach opportunities during the pandemic. 
It is primarily for upper elementary to middle school girls with the goal of creating a "safe space for girls to experience the joy of mathematics in a collaborative and inclusive setting". Rose is a national leader in the math circle movement, and in 2022 she chaired the Special Interest Group of the MAA on math circles for students and teachers. In 2022, Rose was selected as an AWM Fellow for her "broad efforts in the professional development of women in mathematics ... her commitment to involving people from diverse communities in mathematics, through Math Circles and outreach in prisons; and for her creative contributions to the AWM including the We Speak Series and the Card Project". == References ==
Wikipedia:Lauren M. Childs#0
Lauren Maressa Childs is an American mathematician specialising in mathematical and computational modelling applied to topics in biology, particularly the spread of infectious disease. She is an associate professor of mathematics and Cliff and Agnes Lilly Faculty Fellow at Virginia Tech. She was awarded the 2023–2024 Ruth I. Michler Memorial Prize. == Education and career == Childs obtained a dual bachelor's degree in mathematics and chemistry from Duke University in 2004, and a master's degree in applied mathematics from Cornell University in 2007. She completed a Ph.D. at Cornell University in 2010, under the supervision of Steven Strogatz, with the dissertation Macrophages, Oscillators and Fish: Using Dynamical Systems to Examine Biological Problems. Childs was a postdoctoral researcher at Georgia Institute of Technology from 2010 to 2012, and at the Harvard T.H. Chan School of Public Health from 2012 to 2016. In 2016 she worked for six months as a visiting assistant professor at Williams College before taking up an assistant professorship at Virginia Tech in August 2016, where in 2022 she became an associate professor. === Research === Childs' research concerns mathematical models of the spread of infectious disease. This includes numerical analysis and theoretical simulations of systems of differential equations. == Recognition == Childs was awarded the 2023–2024 Ruth I. Michler Memorial Prize of the Association for Women in Mathematics. == References == == External links == Home page Faculty page Lauren M. Childs publications indexed by Google Scholar
Wikipedia:Laurence Broze#0
Laurence Broze (born 1960) is a Belgian applied mathematician specializing in statistics and econometrics and particularly in the theory of rational expectations. She is a professor of applied mathematics at the University of Lille in France. From 2012 to 2018 she was president of l'association femmes et mathématiques, a French association for women in mathematics. == Education and career == Broze was born in Brussels. She went to high school in Charleroi and earned an agrégation in mathematics in 1982 at the Université libre de Bruxelles. She earned her doctorate at the same university in 1986, and completed a habilitation at the University of Lille in 1994. Her doctoral thesis, Réduction, identification et estimation des modèles à anticipations rationnelles, was supervised by Simone Huyberechts. She became an assistant at the Université libre de Bruxelles in 1985, and moved to Charles de Gaulle University – Lille III in 1989. At Charles de Gaulle University, she also served as vice president of research from 2000 to 2006, and directed the unit for mathematics, computer science, management, and economics (UFR MIME) from 2009 to 2014; since 2015 she has been assistant director of UFR MIME. In 2018, Charles de Gaulle University merged with two others to become the University of Lille. Since 1996 she has also been a part-time visiting professor at Saint-Louis University, Brussels. == Contributions == With A. Szafarz, Broze is the author of The Econometric Analysis of Non-Uniqueness in Rational Expectations Models (Contributions to Economic Analysis, Elsevier, 1991). With Szafarz and C. Gourieroux, she is the author of Reduced Forms of Rational Expectations Models (Fundamentals of Pure and Applied Economics 42, Harwood Academic Publishers, 1990). == Recognition == In 2014, Broze became a knight of the Legion of Honour, and an officer of the Ordre des Palmes Académiques. == References ==
Wikipedia:Laurence Chisholm Young#0
Laurence Chisholm Young (14 July 1905 – 24 December 2000) was a British mathematician known for his contributions to measure theory, the calculus of variations, optimal control theory, and potential theory. He was the son of William Henry Young and Grace Chisholm Young, both prominent mathematicians. He moved to the US in 1949 but never sought American citizenship. The concept of the Young measure is named after him; he also introduced the concept of the generalized curve and a concept of generalized surface which later evolved into the concept of the varifold. The Young integral is also named after him and has since been generalised in the theory of rough paths. == Life and academic career == Laurence Chisholm Young was born in Göttingen, the fifth of the six children of William Henry Young and Grace Chisholm Young. He held positions as Professor at the University of Cape Town, South Africa, and at the University of Wisconsin–Madison. He was also a chess grandmaster. == Selected publications == === Books === Young, L. C. (1927), The Theory of Integration, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 21, Cambridge: Cambridge University Press, pp. viii + 53, JFM 53.0207.19, available from the Internet Archive. Young, L. C. (1969), Lectures on the Calculus of Variations and Optimal Control, Philadelphia–London–Toronto: W. B. Saunders, pp. xi+331, ISBN 9780721696409, MR 0259704, Zbl 0177.37801. Young, Laurence (1981), Mathematicians and their times. History of mathematics and mathematics of history, North-Holland Mathematics Studies, 48 / Notas de Matemática [Mathematical Notes], 76, Amsterdam–New York: North-Holland Publishing Co., pp. x+344, ISBN 978-0-444-86135-1, MR 0629980, Zbl 0446.01028. === Papers === Young, L. C. (1936), "An inequality of the Hölder type, connected with Stieltjes integration", Acta Mathematica, 67 (1): 251–282, doi:10.1007/bf02401743, JFM 62.0250.02, Zbl 0016.10404. Young, L. C. 
(1937), "Generalized curves and the existence of an attained absolute minimum in the Calculus of Variations", Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie, Classe III, XXX (7–9): 211–234, JFM 63.1064.01, Zbl 0019.21901, memoir presented by Stanisław Saks at the session of 16 December 1937 of the Warsaw Society of Sciences and Letters. The free PDF copy is made available by RCIN – the Digital Repository of the Scientific Institutes. Young, L. C. (January 1942), "Generalized Surfaces in the Calculus of Variations", Annals of Mathematics, Second Series, 43 (1): 84–103, doi:10.2307/1968882, JFM 68.0227.03, JSTOR 1968882, MR 0006023, Zbl 0063.09081. Young, L. C. (July 1942a), "Generalized Surfaces in the Calculus of Variations. II", Annals of Mathematics, Second Series, 43 (3): 530–544, doi:10.2307/1968809, JSTOR 1968809, MR 0006832, Zbl 0063.08362. Young, L. C. (1951), "Surfaces paramétriques généralisées", Bulletin de la Société Mathématique de France, 79: 59–84, doi:10.24033/bsmf.1419, MR 0046421, Zbl 0044.10203. Young, L. C. (1954), "A variational algorithm" (PDF), Rivista di Matematica della Università di Parma, (1), 5: 255–268, MR 0081437, Zbl 0059.09605. Young, L. C. (1959), "Partial area – I" (PDF), Rivista di Matematica della Università di Parma, (1), 10: 103–113, MR 0141760, Zbl 0107.27402. Young, L. C. (1959a), "Partial area. Part. II: Contours on hypersurfaces" (PDF), Rivista di Matematica della Università di Parma, (1), 10: 171–182, MR 0141761, Zbl 0107.27402. Young, L. C. (1959b), "Partial area. Part III: Symmetrization and the isoperimetric and least area problems" (PDF), Rivista di Matematica della Università di Parma, (1), 10: 257–263, MR 0141762, Zbl 0107.27402. Young, Laurence C. (1989), "Remarks and personal reminiscences", in Roxin, Emilio O. (ed.), Modern optimal control: a conference in honor of Solomon Lefschetz and Joseph P. LaSalle, Lecture Notes in Pure and Applied Mathematics, vol. 
119, New York: Marcel Dekker, pp. 421–433, ISBN 9780824781682, MR 1013226. == See also == Bounded variation Caccioppoli set Measure theory Varifold == Notes == == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Laurence Chisholm Young", MacTutor History of Mathematics Archive, University of St Andrews Obituary on University of Wisconsin web site Archived 27 September 2011 at the Wayback Machine Laurence Chisholm Young at the Mathematics Genealogy Project
Wikipedia:Laurent C. Siebenmann#0
Laurent Carl Siebenmann (the first name is sometimes spelled Laurence or Larry) (born 1939) is a Canadian mathematician based at the Université de Paris-Sud at Orsay, France. After working for several years as a Professor at Orsay he became a Directeur de Recherches at the Centre national de la recherche scientifique in 1976. He is a topologist who works on manifolds and who co-discovered the Kirby–Siebenmann class. == Education == Siebenmann's undergraduate studies were at the University of Toronto. He received a Ph.D. from Princeton University under the supervision of John Milnor in 1965 with the dissertation The obstruction to finding a boundary for an open manifold of dimension greater than five. His doctoral students at Orsay included Francis Bonahon and Albert Fathi. == Recognition == In 1985 he was awarded the Jeffery–Williams Prize by the Canadian Mathematical Society. In 2012 he became a fellow of the American Mathematical Society. == Selected publications == Kirby, Robion C.; Siebenmann, Laurence C. (1977). Foundational Essays on Topological Manifolds, Smoothings, and Triangulations (PDF). Annals of Mathematics Studies. Vol. 88. Princeton, NJ: Princeton University Press. ISBN 0-691-08191-3. MR 0645390. == References == == External links == Kirby and the promised land of topological manifolds: memories and memorable arguments; a talk by Siebenmann Photos at Oberwolfach Home page Laurent C. Siebenmann at the Mathematics Genealogy Project
Wikipedia:Laurent Saloff-Coste#0
Laurent Saloff-Coste (born 1958) is a French mathematician whose research is in analysis, probability theory, and geometric group theory. He is a professor of mathematics at Cornell University. == Education and career == Saloff-Coste received his doctorat de 3ème cycle in 1983 at the Pierre and Marie Curie University (Paris VI). He completed his doctorat d'État in 1989 under Nicholas Varopoulos. In the 1990s, he worked as directeur de recherche (CNRS) at Paul Sabatier University in Toulouse. Since 1998, he has been a professor of mathematics at Cornell University in Ithaca, New York, where he was department chair from 2009 to 2015. === Research === Saloff-Coste works in the areas of analysis and probability theory, including problems involving geometry and partial differential equations. In particular, he has studied the behavior of diffusion processes on manifolds and their fundamental solutions, in connection with the geometry of the underlying spaces. He also studies random walks on groups and how their behavior reflects the algebraic structure of the underlying group. He has developed quantitative estimates for the convergence of finite Markov chains and corresponding stochastic algorithms. == Recognition == He received the Rollo Davidson Prize in 1994, and is a fellow of the American Mathematical Society and of the Institute of Mathematical Statistics. In 2011 he was elected to the American Academy of Arts and Sciences. == Selected publications == Aspects of Sobolev Type Inequalities, London Mathematical Society Lecture Note Series, vol. 289, Cambridge University Press, 2002. Random walks on finite groups, in Harry Kesten (ed.), Probability on Discrete Structures, Encyclopaedia of Mathematical Sciences, vol. 110, Springer-Verlag, 2004, pp. 263–346. With Persi Diaconis, Comparison theorems for random walks on finite groups, Annals of Probability, vol. 21, 1993, pp. 
2131–2156. Lectures on finite Markov chains, in Lectures on Probability Theory and Statistics, Lecture Notes in Mathematics, vol. 1665, 1997, pp. 301–413. With Nicholas Varopoulos and T. Coulhon, Analysis and Geometry on Groups, Cambridge Tracts in Mathematics, vol. 100, Cambridge University Press, 1992. With Dominique Bakry and Michel Ledoux, Markov Semigroups at Saint-Flour, Probability at Saint-Flour series, Springer-Verlag, 2012. == External links == Website == References ==
Wikipedia:Laver function#0
Liver function tests (LFTs or LFs), also referred to as a hepatic panel or liver panel, are groups of blood tests that provide information about the state of a patient's liver. These tests include prothrombin time (PT/INR), activated partial thromboplastin time (aPTT), albumin, bilirubin (direct and indirect), and others. The liver transaminases aspartate transaminase (AST or SGOT) and alanine transaminase (ALT or SGPT) are useful biomarkers of liver injury in a patient with some degree of intact liver function. Most liver diseases cause only mild symptoms initially, but these diseases must be detected early. Hepatic (liver) involvement in some diseases can be of crucial importance. This testing is performed on a patient's blood sample. Some tests are associated with functionality (e.g., albumin), some with cellular integrity (e.g., transaminase), and some with conditions linked to the biliary tract (gamma-glutamyl transferase and alkaline phosphatase). Because some of these tests do not measure function, it is more accurate to call these liver chemistries or liver tests rather than liver function tests. Several biochemical tests are useful in the evaluation and management of patients with hepatic dysfunction. These tests can be used to detect the presence of liver disease. They can help distinguish among different types of liver disorders, gauge the extent of known liver damage, and monitor the response to treatment. Some or all of these measurements are also carried out (usually about twice a year for routine cases) on individuals taking certain medications, such as anticonvulsants, to ensure that these medications are not adversely impacting the person's liver. == Standard liver panel == Standard liver tests for assessing liver damage include alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP). 
Bilirubin may be used to estimate the excretory function of the liver, and coagulation tests and albumin can be used to evaluate its metabolic activity. Although example reference ranges are given, these will vary depending on the method of analysis used at the administering laboratory, as well as age, gender, ethnicity, and potentially unrelated health factors. Individual results should always be interpreted using the reference range provided by the laboratory that performed the test. === Total bilirubin === Measurement of total bilirubin includes both unconjugated (indirect) and conjugated (direct) bilirubin. Unconjugated bilirubin is a breakdown product of heme (a part of hemoglobin in red blood cells). The liver is responsible for clearing the blood of unconjugated bilirubin by 'conjugating' it (modifying it to make it water-soluble) through an enzyme named UDP-glucuronyl-transferase. When the total bilirubin level exceeds 17 μmol/L, it indicates liver disease. When total bilirubin levels exceed 40 μmol/L, bilirubin deposition in the sclera, skin, and mucous membranes gives these areas a yellow colour, a condition called jaundice. An increase in predominantly unconjugated bilirubin is due to overproduction, reduced hepatic uptake of unconjugated bilirubin, or reduced conjugation of bilirubin. Overproduction can be due to the reabsorption of a haematoma or to ineffective erythropoiesis leading to increased red blood cell destruction. Gilbert's syndrome and Crigler–Najjar syndrome involve defects in the UDP-glucuronyl-transferase enzyme, affecting bilirubin conjugation. The degree of rise in conjugated bilirubin is directly proportional to the degree of hepatocyte injury. Viral hepatitis can also cause a rise in conjugated bilirubin. In parenchymal liver disease and incomplete extrahepatic obstruction, the rise in conjugated bilirubin is less than in complete common bile duct obstruction due to malignant causes. 
In Dubin–Johnson syndrome, a mutation in multiple drug-resistance protein 2 (MRP2) causes a rise in conjugated bilirubin. In acute appendicitis, total bilirubin can rise from 20.52 μmol/L to 143 μmol/L. In pregnant women, the total bilirubin level is low in all three trimesters. In newborns, bilirubin levels are measured with a bilimeter or a transcutaneous bilirubinometer instead of performing LFTs. When the total serum bilirubin rises above the 95th percentile for age during the first week of life in high-risk babies, it is known as hyperbilirubinemia of the newborn (neonatal jaundice) and requires light therapy to reduce the amount of bilirubin in the blood. Pathological jaundice in newborns should be suspected when the serum bilirubin level rises by more than 5 mg/dL per day, the serum bilirubin exceeds the physiological range, clinical jaundice persists for more than 2 weeks, or conjugated bilirubin is present (dark urine staining clothes). Haemolytic jaundice is the commonest cause of pathological jaundice. Babies with Rh hemolytic disease, ABO incompatibility with the mother, glucose-6-phosphate dehydrogenase (G-6-PD) deficiency, or minor blood group incompatibility are at increased risk of haemolytic jaundice. === Alanine transaminase (ALT) === Apart from being found in high concentrations in the liver, ALT is found in the kidneys, heart, and muscles. It catalyses the transamination reaction and exists only in a cytoplasmic form. Any kind of liver injury can cause a rise in ALT. A rise of up to 300 IU/L is not specific to the liver, as it can also be due to damage to other organs such as the kidneys or muscles. When ALT rises to more than 500 IU/L, the cause is usually hepatic: hepatitis, ischemic liver injury, or toxins that cause liver damage. ALT levels rise more in hepatitis C than in hepatitis A or B. Persistent ALT elevation for more than 6 months is known as chronic hepatitis.
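The ALT figures above (elevations up to 300 IU/L being non-specific, elevations above 500 IU/L usually hepatic) can be sketched as a simple banding function. The function name and the bands are this example's assumptions, drawn only from the figures quoted in the text; they are not clinical guidance.

```python
# Illustrative only: the 300 and 500 IU/L figures are those quoted above;
# real interpretation requires clinical context.

def alt_specificity(alt_iu_per_l):
    """Rough specificity bands for an ALT result, per the text above."""
    if alt_iu_per_l > 500:
        return "usually hepatic (e.g. hepatitis, ischemic injury, toxins)"
    if alt_iu_per_l <= 300:
        return "not specific to the liver (kidney or muscle damage possible)"
    return "intermediate: hepatic and non-hepatic causes both possible"

print(alt_specificity(250))
print(alt_specificity(800))
```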
Alcoholic liver disease, non-alcoholic fatty liver disease (NAFLD), fat accumulation in the liver during childhood obesity, and steatohepatitis (inflammation of a fatty liver) are associated with a rise in ALT. A rise in ALT is also associated with reduced insulin response, reduced glucose tolerance, and increased free fatty acids and triglycerides. Bright liver syndrome (a bright liver on ultrasound, suggestive of fatty liver) with raised ALT is suggestive of metabolic syndrome. In pregnancy, ALT levels rise during the second trimester. In one study, measured ALT levels in pregnancy-related conditions were 103.5 IU/L in hyperemesis gravidarum, 115 IU/L in pre-eclampsia, and 149 IU/L in HELLP syndrome; ALT levels fell by more than 50% within three days after delivery. Another study found that caffeine consumption can reduce the risk of ALT elevation in people who consume alcohol, are overweight, have impaired glucose metabolism, or have viral hepatitis. === Aspartate transaminase (AST) === AST exists as two isoenzymes, a mitochondrial form and a cytoplasmic form. It is found in highest concentration in the liver, followed by the heart, muscle, kidney, brain, pancreas, and lungs. This wide range of AST-containing organs makes it a less specific indicator of liver damage than ALT. An increase of mitochondrial AST in blood is highly suggestive of tissue necrosis in myocardial infarction and chronic liver disease. More than 80% of liver AST activity is contributed by the mitochondrial isoenzyme, while the circulating AST in blood is contributed by the cytoplasmic form. AST is especially markedly raised in those with liver cirrhosis. AST can be released from a variety of other tissues, and if the elevation is less than twice the normal AST, no further workup needs to be performed before a patient proceeds to surgery.
In certain pregnancy-related conditions, AST can reach as high as 73 IU/L in hyperemesis gravidarum, 66 IU/L in pre-eclampsia, and 81 IU/L in HELLP syndrome. === AST/ALT ratio === The AST/ALT ratio increases in liver functional impairment. The mean ratio is 1.45 in alcoholic liver disease and 1.33 in postnecrotic liver cirrhosis. The ratio is greater than 1.17 in viral cirrhosis, greater than 2.0 in alcoholic hepatitis, and 0.9 in non-alcoholic hepatitis. It is greater than 4.5 in Wilson disease or hyperthyroidism. === Alkaline phosphatase (ALP) === Alkaline phosphatase (ALP) is an enzyme in the cells lining the biliary ducts of the liver. It can also be found on the mucosal epithelium of the small intestine, in the proximal convoluted tubules of the kidneys, and in bone and the placenta. It plays an important role in lipid transport in the small intestine and in the calcification of bones. 50% of serum ALP activity is contributed by bone. ALP is usually normal or increased in acute viral hepatitis. In hepatitis A, for example, ALP is increased owing to cholestasis (impaired bile formation or bile flow obstruction), with the feature of prolonged itching. Other causes include infiltrative liver diseases, granulomatous liver disease, abscess, amyloidosis of the liver, and peripheral arterial disease. Mild elevation of ALP can be seen in liver cirrhosis, hepatitis, and congestive cardiac failure. Transient hyperphosphatasaemia is a benign condition of infancy in which ALP returns to normal within 4 months. In contrast, low levels of ALP are found in hypothyroidism, pernicious anemia, zinc deficiency, and hypophosphatasia. ALP activity is significantly increased in the third trimester of pregnancy, owing to increased synthesis by the placenta as well as increased synthesis in the liver induced by large amounts of estrogens. Levels in the third trimester can be as much as 2-fold greater than in non-pregnant women.
As a result, ALP is not a reliable marker of hepatic function in pregnant women. In contrast to ALP, levels of ALT, AST, GGT, and lactate dehydrogenase are only slightly changed or largely unchanged during pregnancy. Bilirubin levels are significantly decreased in pregnancy. In hyperemesis gravidarum, ALP levels can reach 215 IU/L; in pre-eclampsia, 14 IU/L; and in HELLP syndrome, 15 IU/L. === Gamma-glutamyltransferase (GGT) === GGT is a microsomal enzyme found in hepatocytes, biliary epithelial cells, renal tubules, the pancreas, and the intestines. It helps in glutathione metabolism by transporting peptides across the cell membrane. Much like ALP, GGT measurements are usually elevated if cholestasis is present. In acute viral hepatitis, GGT levels can peak in the 2nd and 3rd weeks of illness and remain elevated at 6 weeks. GGT is also elevated in 30% of patients with hepatitis C. GGT can increase by 10 times in alcoholism, and by 2 to 3 times in 50% of patients with non-alcoholic liver disease. When GGT is elevated, the triglyceride level is elevated as well; with insulin treatment, the GGT level can fall. Other causes of elevated GGT are: diabetes mellitus, acute pancreatitis, myocardial infarction, anorexia nervosa, Guillain–Barré syndrome, hyperthyroidism, obesity, and myotonic dystrophy. In pregnancy, GGT activity is reduced in the 2nd and 3rd trimesters. In hyperemesis gravidarum, GGT can reach 45 IU/L; in pre-eclampsia, 17 IU/L; and in HELLP syndrome, 35 IU/L. === Albumin === Albumin is a protein made specifically by the liver, and can be measured cheaply and easily. It is the main constituent of total protein (the remaining constituents are primarily globulins). Albumin levels are decreased in chronic liver disease, such as cirrhosis. They are also decreased in nephrotic syndrome, where albumin is lost through the urine.
The consequence of low albumin can be edema, since the intravascular oncotic pressure becomes lower than that of the extravascular space. An alternative to albumin measurement is prealbumin, which is better at detecting acute changes (the half-lives of albumin and prealbumin are about 2 weeks and about 2 days, respectively). == Other tests == Other tests are requested alongside LFTs to rule out specific causes. === 5' Nucleotidase === 5' Nucleotidase (5NT) is a glycoprotein found in cytoplasmic membranes throughout the body, catalyzing the release of inorganic phosphate from nucleoside-5-phosphates. Its level is raised in conditions such as obstructive jaundice, parenchymal liver disease, liver metastases, and bone disease. Serum 5NT levels are higher during the 2nd and 3rd trimesters of pregnancy. === Ceruloplasmin === Ceruloplasmin is an acute phase protein synthesized in the liver. It is the carrier of copper ions. Its level is increased in infections, rheumatoid arthritis, pregnancy, non-Wilson liver disease, and obstructive jaundice. In Wilson disease, the ceruloplasmin level is depressed, which leads to copper accumulation in body tissues. === Alpha-fetoprotein === Alpha-fetoprotein (AFP) is significantly expressed in the foetal liver; the mechanism that suppresses AFP synthesis in adults is not fully known. Exposure of the liver to cancer-causing agents and arrest of liver maturation in childhood can lead to a rise in AFP. AFP can reach 400–500 μg/L in hepatocellular carcinoma. An AFP concentration of more than 400 μg/L is associated with greater tumour size, involvement of both lobes of the liver, portal vein invasion, and a lower median survival rate. === Coagulation test === The liver is responsible for the production of the vast majority of coagulation factors.
In patients with liver disease, the international normalized ratio (INR) can be used as a marker of liver synthetic function as it includes factor VII, which has the shortest half-life (2–6 hours) of all coagulation factors measured in the INR. An elevated INR in patients with liver disease, however, does not necessarily mean the patient has a tendency to bleed, as it only measures procoagulants and not anticoagulants. In liver disease the synthesis of both is decreased, and some patients are even found to be hypercoagulable (increased tendency to clot) despite an elevated INR. In liver patients, coagulation is better determined by more modern tests such as thromboelastography (TEG) or thromboelastometry (ROTEM). Prothrombin time (PT) and its derived measures of prothrombin ratio (PR) and INR are measures of the extrinsic pathway of coagulation. This test is also called "ProTime INR" and "INR PT". They are used to determine the clotting tendency of blood, in the monitoring of warfarin dosage, and in the assessment of liver damage and vitamin K status. === Serum glucose === The serum glucose test, abbreviated as "BG" or "Glu", measures the liver's ability to produce glucose (gluconeogenesis); it is usually the last function to be lost in the setting of fulminant liver failure. === Lactate dehydrogenase === Lactate dehydrogenase (LDH) is found in many body tissues, including the liver. Elevated levels of LDH may indicate liver damage. LDH isotype-1 (or cardiac) is used for estimating damage to cardiac tissue, although troponin and creatine kinase tests are preferred. == See also == Reference ranges for blood tests Elevated transaminases Liver disorders Child–Pugh score == References == == External links == Liver Function Tests at the U.S.
National Library of Medicine Medical Subject Headings (MeSH) Liver Function Tests at Lab Tests Online Overview at Mayo Clinic Abnormal Liver Function Tests Archived 11 April 2012 at the Wayback Machine Overview of liver enzymes Abnormal Liver Tests Curriculum at AASLD Further workup of abnormal liver tests: "etiology panel"
Wikipedia:Lawrence Zalcman#0
Lawrence Allen Zalcman (Hebrew: לורנס זלצמן; June 9, 1943 – May 31, 2022) was a professor (and later professor emeritus) of mathematics at Bar-Ilan University in Israel. His research primarily concerned complex analysis, potential theory, and the relations of these ideas to approximation theory, harmonic analysis, integral geometry, and partial differential equations. In addition to his scientific achievements, Zalcman received numerous awards for mathematical exposition, including the Chauvenet Prize in 1976, the Lester R. Ford Award in 1975 and 1981, and the Paul R. Halmos – Lester R. Ford Award in 2017. Besides Bar-Ilan University, Zalcman taught at the University of Maryland and Stanford University in the United States. == Life and career == Zalcman was born in Kansas City, Missouri on June 9, 1943. In 1961, he graduated from Southwest High School in Kansas City before continuing his education at Dartmouth College, from which he graduated in 1964. Zalcman went on to receive his Ph.D. from the Massachusetts Institute of Technology in 1968 under the supervision of Kenneth Myron Hoffman. In 2012, Zalcman became a fellow of the American Mathematical Society. In the theory of normal families, Zalcman's lemma, which he used as part of his treatment of Bloch's principle, is named after him. Other eponymous honors are Zalcman domains, which play a role in the classification of Riemann surfaces, and Zalcman functions in complex dynamics. In the theory of partial differential equations, the Pizzetti–Zalcman formula is partially named after him. Lawrence Zalcman died in Jerusalem on May 31, 2022. == Selected publications == Analytic capacity and rational approximation. Springer Verlag. 1968. ISBN 9783540358251. with Peter Lax: Complex proofs of real theorems, American Mathematical Society 2012 == References ==
Wikipedia:Laws of Form#0
Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary between mathematics and philosophy. LoF describes three distinct logical systems: The primary arithmetic (described in Chapter 4 of LoF), whose models include Boolean arithmetic; The primary algebra (Chapter 6 of LoF), whose models include the two-element Boolean algebra (hereinafter abbreviated 2), Boolean logic, and the classical propositional calculus; Equations of the second degree (Chapter 11), whose interpretations include finite automata and Alonzo Church's Restricted Recursive Arithmetic (RRA). "Boundary algebra" is a Meguire (2011) term for the union of the primary algebra and the primary arithmetic. Laws of Form sometimes loosely refers to the "primary algebra" as well as to LoF. == The book == The preface states that the work was first explored in 1959, and Spencer Brown cites Bertrand Russell as being supportive of his endeavour. He also thanks J. C. P. Miller of University College London for helping with the proofreading and offering other guidance. In 1963 Spencer Brown was invited by Harry Frost, staff lecturer in the physical sciences at the department of Extra-Mural Studies of the University of London, to deliver a course on the mathematics of logic. LoF emerged from work in electronic engineering that its author did around 1960. Key ideas of LoF were first outlined in his 1961 manuscript Design with the Nor, which remained unpublished until 2021, and were further refined during subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared in several editions. A second series of editions appeared starting in 1972 with the "Preface to the First American Edition", which emphasised the use of self-referential paradoxes; the most recent edition is a 1997 German translation. LoF has never gone out of print.
LoF's mystical and declamatory prose and its love of paradox make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead. The work has had curious effects on some classes of its readership; for example, it has been claimed, on obscure grounds, that the entire book is written in an operational way, giving instructions to the reader instead of telling them what "is", and that, in accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that makes a statement that something is, is the statement which says no such statements are used in this book. The claim further asserts that, except for this one sentence, the book can be seen as an example of E-Prime. What prompted such a claim is obscure, whether in terms of motivation, logical merit, or fact, because the book routinely and naturally uses the verb "to be" throughout, and in all its grammatical forms, as may be seen both in the original and in the quotes shown below. == Reception == Ostensibly a work of formal mathematics and philosophy, LoF became something of a cult classic: it was praised by Heinz von Foerster when he reviewed it for the Whole Earth Catalog. Admirers point to LoF as embodying an enigmatic "mathematics of consciousness", its algebraic symbolism capturing an (perhaps even "the") implicit root of cognition: the ability to "distinguish". LoF argues that primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the philosophy of language and mind. Stafford Beer wrote in a review for Nature, "When one thinks of all that Russell went through sixty years ago, to write the Principia, and all we his readers underwent in wrestling with those three vast volumes, it is almost sad".
Banaschewski (1977) argues that the primary algebra is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the primary algebra. Yet the notation of the primary algebra: Fully exploits the duality characterizing not just Boolean algebras but all lattices; Highlights how syntactically distinct statements in logic and 2 can have identical semantics; Dramatically simplifies Boolean algebra calculations, and proofs in sentential and syllogistic logic. Moreover, the syntax of the primary algebra can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see § Related work below). LoF has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways. LoF claimed that certain well-known mathematical conjectures of very long standing, such as the four color theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the primary algebra. Spencer-Brown eventually circulated a purported proof of the four color theorem, but it was met with skepticism. == The form (Chapter 1) == The symbol: Also called the "mark" or "cross", is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition, i.e., the dualistic Mark indicates the capability of differentiating a "this" from "everything else but this". In LoF, a Cross denotes the drawing of a "distinction", and can be thought of as signifying the following, all at once: The act of drawing a boundary around something, thus separating it from everything else; That which becomes distinct from everything by drawing the boundary; Crossing from one side of the boundary to the other. 
All three ways imply an action on the part of the cognitive entity (e.g., person) making the distinction. As LoF puts it: "The first command: Draw a distinction can well be expressed in such ways as: Let there be a distinction, Find a distinction, See a distinction, Describe a distinction, Define a distinction, Or: Let a distinction be drawn". (LoF, Notes to chapter 2) The counterpoint to the Marked state is the Unmarked state, which is simply nothing, the void, or the un-expressable infinite represented by a blank space. It is simply the absence of a Cross. No distinction has been made and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form. The Cross can be seen as denoting the distinction between two states, one "considered as a symbol" and another not so considered. From this fact arises a curious resonance with some theories of consciousness and language. Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. LoF (excluding back matter) closes with the words: ...the first distinction, the Mark and the observer are not only interchangeable, but, in the form, identical. C. S. Peirce came to a related insight in the 1890s; see § Related work. == The primary arithmetic (Chapter 4) == The syntax of the primary arithmetic goes as follows. There are just two atomic expressions: The empty Cross ; All or part of the blank page (the "void"). There are two inductive rules: A Cross may be written over any expression; Any two expressions may be concatenated. The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in LoF: "Distinction is perfect continence". Let the "unmarked state" be a synonym for the void. Let an empty Cross denote the "marked state". To cross is to move from one value, the unmarked or marked state, to the other. 
We can now state the "arithmetical" axioms A1 and A2, which ground the primary arithmetic (and hence all of the Laws of Form): "A1. The law of Calling". Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally: two Crosses written side by side equal a single Cross. "A2. The law of Crossing". After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally: a Cross written over a Cross equals the blank, unmarked state. In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left of '='. This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be simplified to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's "simplification". The two fundamental metatheorems of the primary arithmetic state that: Every finite expression has a unique simplification. (T3 in LoF); Starting from an initial marked or unmarked state, "complicating" an expression by a finite number of repeated applications of A1 and A2 cannot yield an expression whose simplification differs from the initial state. (T4 in LoF). Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes: those that simplify to the Cross, and those that simplify to the void. A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of diagramming processes, including flowcharting. A1 corresponds to a parallel connection and A2 to a series connection, with the understanding that making a distinction corresponds to changing how two points in a circuit are connected, and not simply to adding wiring.
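The simplification guaranteed by T3 and T4 can be sketched as a short recursive program. Because the Cross is a graphical symbol, the sketch below encodes each Cross as a Python tuple holding the sub-expression written under it, with the empty tuple as the blank page; this encoding, and the names used, are choices of the example, not LoF's notation.

```python
# A sketch, not LoF's notation: an expression is a tuple of Crosses, and
# each Cross is represented by the tuple of its contents.  The empty
# expression () is the blank page; a Cross over the blank page is ().
MARKED, UNMARKED = True, False

def simplify(expression):
    """Reduce an expression to MARKED or UNMARKED.

    A juxtaposition is marked iff at least one of its Crosses is marked
    (so calling twice is the same as calling once, A1), and a Cross is
    marked iff the expression under it simplifies to unmarked (so
    recrossing annuls crossing, A2).
    """
    return any(simplify(contents) == UNMARKED for contents in expression)

EMPTY_CROSS = ()  # a Cross over the blank page: the marked state
assert simplify((EMPTY_CROSS,)) == MARKED              # a lone Cross
assert simplify(((EMPTY_CROSS,),)) == UNMARKED         # A2: cross of a cross
assert simplify((EMPTY_CROSS, EMPTY_CROSS)) == MARKED  # A1: calling twice
print("every finite expression reduces to one of the two states")
```

Every finite expression reduces to exactly one of the two states, mirroring T3; applying A1 or A2 anywhere leaves the result unchanged, mirroring T4.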
The primary arithmetic is analogous to the following formal languages from mathematics and computer science: A Dyck language with a null alphabet; The simplest context-free language in the Chomsky hierarchy; A rewrite system that is strongly normalizing and confluent. The phrase "calculus of indications" in LoF is a synonym for "primary arithmetic". === The notion of canon === While LoF does not formally define canon, the following two excerpts from the Notes to chpt. 2 are apt: The more important structures of command are sometimes called canons. They are the ways in which the guiding injunctions appear to group themselves in constellations, and are thus by no means independent of each other. A canon bears the distinction of being outside (i.e., describing) the system under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to construct or create. ...the primary form of mathematical communication is not description but injunction... Music is a similar art form, the composer does not even attempt to describe the set of sounds he has in mind, much less the set of feelings occasioned through them, but writes down a set of commands which, if they are obeyed by the performer, can result in a reproduction, to the listener, of the composer's original experience. These excerpts relate to the distinction in metalogic between the object language, the formal language of the logical system under discussion, and the metalanguage, a language (often a natural language) distinct from the object language, employed to exposit and discuss the object language. The first quote seems to assert that the canons are part of the metalanguage. The second quote seems to assert that statements in the object language are essentially commands addressed to the reader by the author. Neither assertion holds in standard metalogic. 
== The primary algebra (Chapter 6) == === Syntax === Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a primary algebra formula. Letters so employed in mathematics and logic are called variables. A primary algebra variable indicates a location where one can write the primitive value or its complement . Multiple instances of the same variable denote multiple locations of the same primitive value. === Rules governing logical equivalence === The sign '=' may link two logically equivalent expressions; the result is an equation. By "logically equivalent" is meant that the two expressions have the same simplification. Logical equivalence is an equivalence relation over the set of primary algebra formulas, governed by the rules R1 and R2. Let "C" and "D" be formulae each containing at least one instance of the subformula A: R1, Substitution of equals. Replace one or more instances of A in C by B, resulting in E. If A=B, then C=E. R2, Uniform replacement. Replace all instances of A in C and D with B. C becomes E and D becomes F. If C=D, then E=F. Note that A=B is not required. R2 is employed very frequently in primary algebra demonstrations (see below), almost always silently. These rules are routinely invoked in logic and most of mathematics, nearly always unconsciously. The primary algebra consists of equations, i.e., pairs of formulae linked by an infix operator '='. R1 and R2 enable transforming one equation into another. Hence the primary algebra is an equational formal system, like the many algebraic structures, including Boolean algebra, that are varieties. Equational logic was common before Principia Mathematica (e.g. Johnson (1892)), and has present-day advocates (Gries & Schneider (1993)). Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. 
To denote that the primary algebra formula A is a tautology, simply write "A = " followed by the Cross. If one replaces '=' in R1 and R2 with the biconditional, the resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus conventional logic is ponential. The equational-ponential dichotomy distills much of what distinguishes mathematical logic from the rest of mathematics. === Initials === An initial is a primary algebra equation verifiable by a decision procedure and as such is not an axiom. LoF lays down two initials, J1 and J2, given graphically in the book. The absence of anything to the right of the "=" in J1 is deliberate. J2 is the familiar distributive law of sentential logic and Boolean algebra. Another set of initials, friendlier to calculations (J0, J1a, and C2, referred to below), is likewise given graphically. It is thanks to C2 that the primary algebra is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is the Cross. By J0, the twice-crossed Mark is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear the sense in which this element aliases with the blank page. T13 in LoF generalizes C2 as follows. Any primary algebra (or sentential logic) formula B can be viewed as an ordered tree with branches. Then: T13: A subformula A can be copied at will into any depth of B greater than that of A, as long as A and its copy are in the same branch of B. Also, given multiple instances of A in the same branch of B, all instances but the shallowest are redundant. While a proof of T13 would require induction, the intuition underlying it should be clear. C2 or its equivalent is named: "Generation" in LoF; "Exclusion" in Johnson (1892); "Pervasion" in the work of William Bricken. Perhaps the first instance of an axiom or rule with the power of C2 was the "Rule of (De)Iteration", combining T13 and AA=A, of C. S. Peirce's existential graphs. LoF asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly assumed or demonstrated.
(Peirce made a similar assertion about his existential graphs.) Let a period be a temporary notation to establish grouping. That concatenation commutes and associates may then be demonstrated from the: Initial AC.D=CD.A and the consequence AA=A. This result holds for all lattices, because AA=A is an easy consequence of the absorption law, which holds for all lattices; Initials AC.D=AD.C and J0. Since J0 holds only for lattices with a lower bound, this method holds only for bounded lattices (which include the primary algebra and 2). Commutativity is trivial; just set A=. Associativity: AC.D = CA.D = CD.A = A.CD. Having demonstrated associativity, the period can be discarded. The initials in Meguire (2011) are AC.D=CD.A, called B1; B2, J0 above; B3, J1a above; and B4, C2. By design, these initials are very similar to the axioms for an abelian group, G1-G3 below. === Proof theory === The primary algebra contains three kinds of proved assertions: Consequence is a primary algebra equation verified by a demonstration. A demonstration consists of a sequence of steps, each step justified by an initial or a previously demonstrated consequence. Theorem is a statement in the metalanguage verified by a proof, i.e., an argument, formulated in the metalanguage, that is accepted by trained mathematicians and logicians. Initial, defined above. Demonstrations and proofs invoke an initial as if it were an axiom. The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be. Let A and B be primary algebra formulas. A demonstration of A=B may proceed in either of two ways: Modify A in steps until B is obtained, or vice versa; Simplify both and to . This is known as a "calculation". Once A=B has been demonstrated, A=B can be invoked to justify steps in subsequent demonstrations. 
primary algebra demonstrations and calculations often require no more than J1a, J2, C2, and the consequences (C3 in LoF), (C1), and AA=A (C5). The consequence C7' in LoF enables an algorithm, sketched in LoF's proof of T14, that transforms an arbitrary primary algebra formula to an equivalent formula whose depth does not exceed two. The result is a normal form, the primary algebra analog of the conjunctive normal form. LoF (T14–15) proves the primary algebra analog of the well-known Boolean algebra theorem that every formula has a normal form. Let A be a subformula of some formula B. When paired with C3, J1a can be viewed as the closure condition for calculations: B is a tautology if and only if A and (A) both appear in depth 0 of B. A related condition appears in some versions of natural deduction. A demonstration by calculation is often little more than: Invoking T13 repeatedly to eliminate redundant subformulae; Erasing any subformulae having the form . The last step of a calculation always invokes J1a. LoF includes elegant new proofs of the following standard metatheory: Completeness: all primary algebra consequences are demonstrable from the initials (T17). Independence: J1 cannot be demonstrated from J2 and vice versa (T18). That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2. === Interpretations === If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the primary algebra interprets 2 (or sentential logic). LoF shows how the primary algebra can interpret the syllogism. Each of these interpretations is discussed in a subsection below. Extending the primary algebra so that it could interpret standard first-order logic has yet to be done, but Peirce's beta existential graphs suggest that this extension is feasible.
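A minimal sketch of such an interpretation follows, under the reading in which the Marked state is True, juxtaposition is Boolean OR, and the Cross negates its contents (one of the two dual readings). The tuple encoding and the helper name `value` are assumptions of this example, not LoF's notation. The brute-force check confirms that a Cross over the crossed a and the crossed c behaves as a AND c, i.e. De Morgan's dual of juxtaposition.

```python
from itertools import product

# Sketch only: Marked = True, juxtaposition = OR, Cross = NOT.
# Formulas are a variable name (a string) or a tuple ('cross', *parts)
# for a Cross drawn over the juxtaposition of its parts.

def value(form, env):
    """Evaluate a formula under a Boolean assignment env."""
    if isinstance(form, str):
        return env[form]
    # a Cross: NOT of the OR (juxtaposition) of its contents
    return not any(value(part, env) for part in form[1:])

MARK = ('cross',)  # the empty Cross evaluates to True

# ((a)(c)): a Cross over the crossed a and the crossed c.
nested = ('cross', ('cross', 'a'), ('cross', 'c'))
for a, c in product([False, True], repeat=2):
    assert value(nested, {'a': a, 'c': c}) == (a and c)
print("((a)(c)) evaluates as a AND c under the OR reading")
```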
==== Two-element Boolean algebra 2 ==== The primary algebra is an elegant minimalist notation for the two-element Boolean algebra 2. Let: One of Boolean join (+) or meet (×) interpret concatenation; The complement of A interpret {\displaystyle {\overline {A|}}}; 0 (1) interpret the empty Mark if join (meet) interprets concatenation (because a binary operation applied to zero operands may be regarded as being equal to the identity element of that operation; or, put another way, a missing operand may be regarded as acting by default like the identity element). If join (meet) interprets AC, then meet (join) interprets {\displaystyle {\overline {{\overline {A|}}\ \ {\overline {C|}}{\Big |}}}} . Hence the primary algebra and 2 are isomorphic but for one detail: primary algebra complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1=1+0=0+1=1=~0, and 0+0=0=~1. The set B, consisting of the marked and unmarked states, is the Boolean domain or carrier. In the language of universal algebra, the primary algebra is the algebraic structure {\displaystyle \langle B,-\ -,{\overline {-\ |}},{\overline {\ \ |}}\rangle } of type ⟨2,1,0⟩. The expressive adequacy of the Sheffer stroke points to the primary algebra also being a {\displaystyle \langle B,{\overline {-\ -\ |}},{\overline {\ \ |}}\rangle } algebra of type ⟨2,0⟩. In both cases, the identities are J1a, J0, C2, and ACD=CDA. Since the primary algebra and 2 are isomorphic, 2 can be seen as a ⟨B, +, ¬, 1⟩ algebra of type ⟨2,1,0⟩.
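Because 2 is a model of the primary algebra, the identities named above can be verified exhaustively over the two-element carrier. A minimal Python sketch, assuming the join interpretation (concatenation as Boolean join, the Cross as complementation, the empty Cross as 1); the function names are illustrative:

```python
from itertools import product

VALUES = (0, 1)

def cross(a):          # complementation: the Cross read as NOT
    return 1 - a

def join(*args):       # concatenation read as Boolean join (OR)
    return max((0, *args))

# J1a: (A)A = ( )  --  "position"
assert all(join(cross(a), a) == 1 for a in VALUES)
# J0: (( ))A = A  --  the empty Cross is 1, so (( )) = cross(1) = 0,
# the identity element for join
assert all(join(cross(1), a) == a for a in VALUES)
# C2: A (AB) = A (B)  --  "generation"
assert all(join(a, cross(join(a, b))) == join(a, cross(b))
           for a, b in product(VALUES, repeat=2))
# Concatenation commutes: AC = CA
assert all(join(a, b) == join(b, a) for a, b in product(VALUES, repeat=2))
```

Under the dual (meet) interpretation the same checks go through with min in place of max and 0 in place of 1.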
This description of 2 is simpler than the conventional one, namely an ⟨ B , + , × , ¬ , 1 , 0 ⟩ {\displaystyle \langle B,+,\times ,\lnot ,1,0\rangle } algebra of type ⟨ 2 , 2 , 1 , 0 , 0 ⟩ {\displaystyle \langle 2,2,1,0,0\rangle } . The two possible interpretations are dual to each other in the Boolean sense. (In Boolean algebra, exchanging AND ↔ OR and 1 ↔ 0 throughout an equation yields an equally valid equation.) The identities remain invariant regardless of which interpretation is chosen, so the transformations or modes of calculation remain the same; only the interpretation of each form would be different. Example: J1a is . Interpreting juxtaposition as OR and as 1, this translates to ¬ A ∨ A = 1 {\displaystyle \neg A\lor A=1} which is true. Interpreting juxtaposition as AND and as 0, this translates to ¬ A ∧ A = 0 {\displaystyle \neg A\land A=0} which is true as well (and the dual of ¬ A ∨ A = 1 {\displaystyle \neg A\lor A=1} ). ===== operator-operand duality ===== The marked state, , is both an operator (e.g., the complement) and operand (e.g., the value 1). This can be summarized neatly by defining two functions m ( x ) {\displaystyle m(x)} and u ( x ) {\displaystyle u(x)} for the marked and unmarked state, respectively: let m ( x ) = 1 − max ( { 0 } ∪ x ) {\displaystyle m(x)=1-\max(\{0\}\cup x)} and u ( x ) = max ( { 0 } ∪ x ) {\displaystyle u(x)=\max(\{0\}\cup x)} , where x {\displaystyle x} is a (possibly empty) set of boolean values. This reveals that u {\displaystyle u} is either the value 0 or the OR operator, while m {\displaystyle m} is either the value 1 or the NOR operator, depending on whether x {\displaystyle x} is the empty set or not. As noted above, there is a dual form of these functions exchanging AND ↔ OR and 1 ↔ 0. ==== Sentential logic ==== Let the blank page denote False, and let a Cross be read as Not. 
Then the primary arithmetic has the following sentential reading: = False = True = not False = Not True = False The primary algebra interprets sentential logic as follows. A letter represents any given sentential expression. Thus: interprets Not A interprets A Or B interprets Not A Or B or If A Then B. interprets Not (Not A Or Not B) or Not (If A Then Not B) or A And B. Thus any expression in sentential logic has a primary algebra translation. Equivalently, the primary algebra interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this primary algebra translation reduces to a primary arithmetic expression, which can be simplified. Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some primary algebra formula containing N variables, this decision procedure requires simplifying 2^N primary arithmetic formulae. For a less tedious decision procedure more in the spirit of Quine's "truth value analysis", see Meguire (2003). Schwartz (1981) proved that the primary algebra is equivalent — syntactically, semantically, and proof theoretically — with the classical propositional calculus. Likewise, it can be shown that the primary algebra is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses. Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is required is that the interpretation of concatenation change from OR to AND. IF A THEN B now translates as instead of . More generally, the primary algebra is "self-dual", meaning that any primary algebra formula has two sentential or Boolean readings, each the dual of the other.
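The 2^N-assignment decision procedure sketched above is straightforward to implement. In the Python sketch below, a primary-algebra translation is assumed to be supplied as a predicate over the values Marked (True) and Unmarked (False); the function names are illustrative:

```python
from itertools import product

def tautology(formula, nvars):
    """Brute-force decision procedure: evaluate the formula under all
    2**nvars assignments of Marked (True) / Unmarked (False) and check
    that it reduces to Marked in every one."""
    return all(formula(*vals) for vals in product((False, True), repeat=nvars))

def satisfiable(formula, nvars):
    return any(formula(*vals) for vals in product((False, True), repeat=nvars))

# (A)A reads as "Not A Or A": a tautology (this is J1a).
assert tautology(lambda a: (not a) or a, 1)
# (A)B reads as "If A Then B": satisfiable but not tautological.
assert not tautology(lambda a, b: (not a) or b, 2)
assert satisfiable(lambda a, b: (not a) or b, 2)
```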
Another consequence of self-duality is the irrelevance of De Morgan's laws; those laws are built into the syntax of the primary algebra from the outset. The true nature of the distinction between the primary algebra on the one hand, and 2 and sentential logic on the other, now emerges. In the latter formalisms, complementation/negation operating on "nothing" is not well-formed. But an empty Cross is a well-formed primary algebra expression, denoting the Marked state, a primitive value. Hence a nonempty Cross is an operator, while an empty Cross is an operand because it denotes a primitive value. Thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action, the making of a distinction. ==== Syllogisms ==== Appendix 2 of LoF shows how to translate traditional syllogisms and sorites into the primary algebra. A valid syllogism is simply one whose primary algebra translation simplifies to an empty Cross. Let A* denote a literal, i.e., either A or A | ¯ {\displaystyle {\overline {A|}}} , indifferently. Then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of Barbara whose primary algebra equivalent is A ∗ B | ¯ B | ¯ C ∗ | ¯ A ∗ C ∗ {\displaystyle {\overline {A^{*}\ B|}}\ \ {\overline {{\overline {B|}}\ C^{*}{\Big |}}}\ A^{*}\ C^{*}} . These 24 possible permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic, and that the primary algebra has affinities to the Boolean term schemata of Quine (1982), Part II. === An example of calculation === The following calculation of Leibniz's nontrivial Praeclarum Theorema exemplifies the demonstrative power of the primary algebra. 
Let C1 be A | ¯ | ¯ {\displaystyle {\overline {{\overline {A|}}{\Big |}}}} =A, C2 be A A B | ¯ = A B | ¯ {\displaystyle A\ {\overline {A\ B|}}=A\ {\overline {B|}}} , C3 be | ¯ A = | ¯ {\displaystyle {\overline {\ \ |}}\ A={\overline {\ \ |}}} , J1a be A | ¯ A = | ¯ {\displaystyle {\overline {A|}}\ A={\overline {\ \ |}}} , and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit. === Relation to magmas === The primary algebra embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation, one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas were called groupoids until the latter term was appropriated by category theory.) To see this, note that the primary algebra is a commutative: Semigroup because primary algebra juxtaposition commutes and associates; Monoid with identity element , by virtue of J0. Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let denote the inverse of a. Let denote the group identity element. Then groups and the primary algebra have the same signatures, namely they are both ⟨ − − , − | ¯ , | ¯ ⟩ {\displaystyle \langle -\ -,{\overline {-\ |}},{\overline {\ \ |}}\rangle } algebras of type 〈2,1,0〉. Hence the primary algebra is a boundary algebra. The axioms for an abelian group, in boundary notation, are: G1. abc = acb (assuming association from the left); G2. G3. . From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and J1a are identical. G2 and J0 would be identical if = replaced A2. This is the defining arithmetical identity of group theory, in boundary notation. The primary algebra differs from an abelian group in two ways: From A2, it follows that ≠ . If the primary algebra were a group, = would hold, and one of a = or a = a would have to be a primary algebra consequence. 
Note that and are mutual primary algebra complements, as group theory requires, so that | ¯ | ¯ | ¯ = | ¯ {\displaystyle {\overline {\ {\overline {\ {\overline {\ \ |}}\ {\Big |}}}\ {\Bigg |}}}={\overline {\ \ |}}} is true of both group theory and the primary algebra; C2 most clearly demarcates the primary algebra from other magmas, because C2 enables demonstrating the absorption law that defines lattices, and the distributive law central to Boolean algebra. Both A2 and C2 follow from B's being an ordered set. == Equations of the second degree (Chapter 11) == Chapter 11 of LoF introduces equations of the second degree, composed of recursive formulae that can be seen as having "infinite" depth. Some recursive formulae simplify to the marked or unmarked state. Others "oscillate" indefinitely between the two states depending on whether a given depth is even or odd. Specifically, certain recursive formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a formula is deemed to have an "imaginary" truth value. Thus the flow of time may be introduced into the primary algebra. Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney presents a general method for translating equations of the second degree into Church's RRA, illustrating his method using the formulae E1, E2, and E4 in chapter 11 of LoF. This translation into RRA sheds light on the names Spencer-Brown gave to E1 and E4, namely "memory" and "counter". RRA thus formalizes and clarifies LoF's notion of an imaginary truth value. == Related work == Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic. His notation was isomorphic to that of LoF: concatenation read as conjunction, and "non-(X)" read as the complement of X. 
Recognition of Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and reviewed in Lenzen (2004). Charles Sanders Peirce (1839–1914) anticipated the primary algebra in three veins of work: Two papers he wrote in 1886 proposed a logical algebra employing but one symbol, the streamer, nearly identical to the Cross of LoF. The semantics of the streamer are identical to those of the Cross, except that Peirce never wrote a streamer with nothing under it. An excerpt from one of these papers was published in 1976, but they were not published in full until 1993. In a 1902 encyclopedia article, Peirce notated Boolean algebra and sentential logic in the manner of this entry, except that he employed two styles of brackets, toggling between '(', ')' and '[', ']' with each increment in formula depth. The syntax of his alpha existential graphs is merely concatenation, read as conjunction, and enclosure by ovals, read as negation. If primary algebra concatenation is read as conjunction, then these graphs are isomorphic to the primary algebra. LoF cites vol. 4 of Peirce's Collected Papers, the source for the formalisms in (2) and (3) above. (1)-(3) were virtually unknown at the time when (1960s) and in the place where (UK) LoF was written. Peirce's semiotics, about which LoF is silent, may yet shed light on the philosophical aspects of LoF. Kauffman (2001) discusses another notation similar to that of LoF, that of a 1917 article by Jean Nicod, who was a disciple of Bertrand Russell's. The above formalisms are, like the primary algebra, all instances of boundary mathematics, i.e., mathematics whose syntax is limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a "boundary notation". Boundary notation is free of infix operators, prefix, or postfix operator symbols. 
The very well known curly braces ('{', '}') of set theory can be seen as a boundary notation. The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920 paper (which LoF cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how to prove axiom independence using models. Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean structure. Craig was an orthodox logician and an authority on algebraic logic. Second-generation cognitive science emerged in the 1970s, after LoF was written. On cognitive science and its relevance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under "Image schema examples: container") and Lakoff & Núñez (2000). Neither book cites LoF. The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss LoF in their writings, which identify "distinction" as the fundamental cognitive act. The Berkeley psychologist and cognitive scientist Eleanor Rosch has written extensively on the closely related notion of categorization. Other formal systems with possible affinities to the primary algebra include: Mereology which typically has a lattice structure very similar to that of Boolean algebra. For a few authors, mereology is simply a model of Boolean algebra and hence of the primary algebra as well. Mereotopology, which is inherently richer than Boolean algebra; The system of Whitehead (1934), whose fundamental primitive is "indication". The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. 
Other minimalist formalisms having the power of set theory include: The lambda calculus; Combinatory logic with two (S and K) or even one (X) primitive combinators; Mathematical logic done with merely three primitive notions: one connective, NAND (whose primary algebra translation is A B | ¯ {\displaystyle {\overline {A\ \ B\ |}}} or, dually, A | ¯ B | ¯ {\displaystyle {\overline {A|}}\ \ {\overline {B|}}} ), universal quantification, and one binary atomic formula, denoting set membership. This is the system of Quine (1951). The beta existential graphs, with a single binary predicate denoting set membership. This has yet to be explored. The alpha graphs mentioned above are a special case of the beta graphs. == Editions == 1969. London: Allen & Unwin, hardcover. ISBN 0-04-510028-4 1972. Crown Publishers, hardcover: ISBN 0-517-52776-6 1973. Bantam Books, paperback. ISBN 0-553-07782-1 1979. E. P. Dutton, paperback. ISBN 0-525-47544-3 1994. Portland, Oregon: Cognizer Company, paperback. ISBN 0-9639899-0-1 1997 German translation, titled Gesetze der Form. Lübeck: Bohmeier Verlag. ISBN 3-89094-321-7 2008 Bohmeier Verlag, Leipzig, 5th international edition. ISBN 978-3-89094-580-4 == See also == Boolean algebra – Algebraic manipulation of "true" and "false" Boolean algebras canonically defined – Technical treatment of Boolean algebras Entitative graph – Type of diagrammatic notation for propositional logicPages displaying short descriptions of redirect targets Existential graph – Type of diagrammatic notation for propositional logic Mark and space – States of a communications signal Programming and Metaprogramming – 1968 non-fiction book by John C. Lilly Propositional calculus – Branch of logic Two-element Boolean algebra – Boolean algebra List of Boolean algebra topics == Notes == == References == === Works cited === == Further reading == == External links == Laws of Form, archive of website by Richard Shoup. Spencer-Brown's talks at Esalen, 1973. 
Self-referential forms are introduced in the section entitled "Degree of Equations and the Theory of Types". Audio recording of the opening session, 1973 AUM Conference at Esalen. Louis H. Kauffman, "Box Algebra, Boundary Mathematics, Logic, and Laws of Form." Kissel, Matthias, "A nonsystematic but easy to understand introduction to Laws of Form." A meeting with G.S.B by Moshe Klein The Markable Mark, an introduction by easy stages to the ideas of Laws of Form The BF Calculus and the Square Root of Negation by Louis Kauffman and Arthur Collings; it extends the Laws of Form by adding an imaginary logical value. (Imaginary logical values are introduced in chapter 11 of the book Laws of Form.) Laws of Form Course - a free on-line course taking people through the main body of the text of Laws of Form by Leon Conrad, Spencer-Brown's last student, who studied the work with the author.
Wikipedia:Lax equivalence theorem#0
In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of linear finite difference methods for the numerical solution of linear partial differential equations. It states that for a linear consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable. The importance of the theorem is that while the convergence of the solution of the linear finite difference method to the solution of the linear partial differential equation is what is desired, it is ordinarily difficult to establish because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency—the requirement that the linear finite difference method approximates the correct linear partial differential equation—is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem. Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases. This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer. == References ==
Wikipedia:Lax–Milgram theorem#0
In mathematics, the generalized Lax–Milgram theorem is a generalization of the Lax–Milgram theorem of Peter Lax and Arthur Milgram, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The generalization was proved by Jindřich Nečas in 1962. == Background == In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but instead makes use of the structure of the vector space of possible solutions, e.g. a Sobolev space W^{k,p}. Abstractly, consider two real normed spaces U and V with their continuous dual spaces U∗ and V∗ respectively. In many applications, U is the space of possible solutions; given some partial differential operator Λ : U → V∗ and a specified element f ∈ V∗, the objective is to find a u ∈ U such that Λ u = f . {\displaystyle \Lambda u=f.} However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of V. This "testing" is accomplished by means of a bilinear function B : U × V → R which encodes the differential operator Λ; a weak solution to the problem is to find a u ∈ U such that B ( u , v ) = ⟨ f , v ⟩ for all v ∈ V . {\displaystyle B(u,v)=\langle f,v\rangle {\mbox{ for all }}v\in V.} The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum f ∈ V∗: it suffices that U = V is a Hilbert space, that B is continuous, and that B is strongly coercive, i.e. | B ( u , u ) | ≥ c ‖ u ‖ 2 {\displaystyle |B(u,u)|\geq c\|u\|^{2}} for some constant c > 0 and all u ∈ U. 
For example, in the solution of the Poisson equation on a bounded, open domain Ω ⊂ R^n, { − Δ u ( x ) = f ( x ) , x ∈ Ω ; u ( x ) = 0 , x ∈ ∂ Ω ; {\displaystyle {\begin{cases}-\Delta u(x)=f(x),&x\in \Omega ;\\u(x)=0,&x\in \partial \Omega ;\end{cases}}} the space U could be taken to be the Sobolev space H^1_0(Ω) with dual H^{−1}(Ω); the former is a subspace of the L^p space V = L^2(Ω); the bilinear form B associated to −Δ is the L^2(Ω) inner product of the derivatives: B ( u , v ) = ∫ Ω ∇ u ( x ) ⋅ ∇ v ( x ) d x . {\displaystyle B(u,v)=\int _{\Omega }\nabla u(x)\cdot \nabla v(x)\,\mathrm {d} x.} Hence, the weak formulation of the Poisson equation, given f ∈ L^2(Ω), is to find uf such that ∫ Ω ∇ u f ( x ) ⋅ ∇ v ( x ) d x = ∫ Ω f ( x ) v ( x ) d x for all v ∈ H 0 1 ( Ω ) . {\displaystyle \int _{\Omega }\nabla u_{f}(x)\cdot \nabla v(x)\,\mathrm {d} x=\int _{\Omega }f(x)v(x)\,\mathrm {d} x{\mbox{ for all }}v\in H_{0}^{1}(\Omega ).} == Statement of the theorem == In 1962 Jindřich Nečas provided the following generalization of Lax and Milgram's earlier result, which begins by dispensing with the requirement that U and V be the same space. Let U and V be two real Hilbert spaces and let B : U × V → R be a continuous bilinear functional. Suppose also that B is weakly coercive: for some constant c > 0 and all u ∈ U, sup ‖ v ‖ = 1 | B ( u , v ) | ≥ c ‖ u ‖ {\displaystyle \sup _{\|v\|=1}|B(u,v)|\geq c\|u\|} and, for all 0 ≠ v ∈ V, sup ‖ u ‖ = 1 | B ( u , v ) | > 0 {\displaystyle \sup _{\|u\|=1}|B(u,v)|>0} Then, for all f ∈ V∗, there exists a unique solution u = uf ∈ U to the weak problem B ( u f , v ) = ⟨ f , v ⟩ for all v ∈ V . {\displaystyle B(u_{f},v)=\langle f,v\rangle {\mbox{ for all }}v\in V.} Moreover, the solution depends continuously on the given data: ‖ u f ‖ ≤ 1 c ‖ f ‖ . {\displaystyle \|u_{f}\|\leq {\frac {1}{c}}\|f\|.} Nečas's proof extends directly to the situation where U {\displaystyle U} is a Banach space and V {\displaystyle V} a reflexive Banach space. 
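As a concrete illustration (the discretization below is an assumption of this sketch, not part of the theorem), the weak Poisson problem above can be discretized with piecewise-linear Galerkin finite elements on a uniform mesh of (0, 1). Restricted to the finite-dimensional subspace, the same coercivity argument guarantees that the resulting tridiagonal stiffness system has a unique solution; for f = 1 the exact solution is u(x) = x(1 − x)/2, which this discretization reproduces at the nodes.

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm; lower[i] multiplies x[i-1] (lower[0] is unused)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def poisson_fem(n_interior, f=1.0):
    """P1 Galerkin discretization of B(u, v) = (f, v) for -u'' = f on (0, 1),
    u(0) = u(1) = 0, on a uniform mesh: stiffness matrix (1/h) tridiag(-1, 2, -1),
    load vector h*f.  By Lax-Milgram, the system is uniquely solvable."""
    h = 1.0 / (n_interior + 1)
    return solve_tridiagonal([-1.0 / h] * n_interior,
                             [2.0 / h] * n_interior,
                             [-1.0 / h] * n_interior,
                             [h * f] * n_interior)

u = poisson_fem(9)                 # interior nodes x = 0.1, ..., 0.9
assert abs(u[4] - 0.125) < 1e-10   # matches u(0.5) = 0.5 * (1 - 0.5) / 2
```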
== See also == Lions–Lax–Milgram theorem == References == Babuška, Ivo (1970–1971). "Error-bounds for finite element method". Numerische Mathematik. 16 (4): 322–333. doi:10.1007/BF02165003. hdl:10338.dmlcz/103498. ISSN 0029-599X. MR 0288971. S2CID 122191183. Zbl 0214.42001. Lax, Peter D.; Milgram, Arthur N. (1954), "Parabolic equations", Contributions to the theory of partial differential equations, Annals of Mathematics Studies, vol. 33, Princeton, N. J.: Princeton University Press, pp. 167–190, MR 0067317, Zbl 0058.08703 – via De Gruyter Nečas, Jindřich, Sur une méthode pour résoudre les équations aux dérivées partielles du type elliptique, voisine de la variationnelle, Annali della Scuola Normale Superiore di Pisa - Scienze Fisiche e Matematiche, Serie 3, Volume 16 (1962) no. 4, pp. 305-326. == External links == Roşca, Ioan (2001) [1994], "Babuška–Lax–Milgram theorem", Encyclopedia of Mathematics, EMS Press
Wikipedia:Lax–Wendroff theorem#0
In computational mathematics, the Lax–Wendroff theorem, named after Peter Lax and Burton Wendroff, states that if a conservative numerical scheme for a hyperbolic system of conservation laws converges, then it converges towards a weak solution. == See also == Lax–Wendroff method Godunov's scheme == References == Randall J. LeVeque, Numerical methods for conservation laws, Birkhäuser, 1992 ISBN 978-3-7643-2723-1
Wikipedia:Leading-order term#0
The leading-order terms (or leading-order corrections) within a mathematical equation, expression or model are the terms with the largest order of magnitude. The sizes of the different terms in the equation(s) will change as the variables change, and hence, which terms are leading-order may also change. A common and powerful way of simplifying and understanding a wide variety of complicated mathematical models is to investigate which terms are the largest (and therefore most important) for particular sizes of the variables and parameters, and analyse the behaviour produced by just these terms (regarding the other smaller terms as negligible). This gives the main behaviour – the true behaviour is only small deviations away from this. The main behaviour may be captured sufficiently well by just the strictly leading-order terms, or it may be decided that slightly smaller terms should also be included, in which case the phrase "leading-order terms" might be used informally to mean this whole group of terms. The behaviour produced by just the group of leading-order terms is called the leading-order behaviour of the model. == Basic example == Consider the equation y = x^3 + 5x + 0.1. For different values of x, the sizes of the four terms in this equation (x^3, 5x, 0.1 and y) can be compared to determine which terms are leading-order. For large positive x, the leading-order terms are x^3 and y, but as x decreases and then becomes more and more negative, which terms are leading-order changes. There is no strict cut-off for when two terms should or should not be regarded as approximately the same order of magnitude. One possible rule of thumb is that two terms that are within a factor of 10 (one order of magnitude) of each other should be regarded as of about the same order, and two terms that are not within a factor of 100 (two orders of magnitude) of each other should not. 
However, in between is a grey area, so there are no fixed boundaries where terms are to be regarded as approximately leading-order and where not. Instead the terms fade in and out as the variables change. Deciding whether terms in a model are leading-order (or approximately leading-order), and if not, whether they are small enough to be regarded as negligible (two different questions), is often a matter of investigation and judgement, and will depend on the context. == Leading-order behaviour == Equations with only one leading-order term are possible, but rare. For example, the equation 100 = 1 + 1 + 1 + ... + 1 (where the right-hand side comprises one hundred 1s). For any particular combination of values for the variables and parameters, an equation will typically contain at least two leading-order terms, and other lower-order terms. In this case, by making the assumption that the lower-order terms, and the parts of the leading-order terms that are the same size as the lower-order terms (perhaps the second or third significant figure onwards), are negligible, a new equation may be formed by dropping all these lower-order terms and parts of the leading-order terms. The remaining terms provide the leading-order equation, or leading-order balance, or dominant balance, and creating a new equation just involving these terms is known as taking an equation to leading-order. The solutions to this new equation are called the leading-order solutions to the original equation. Analysing the behaviour given by this new equation gives the leading-order behaviour of the model for these values of the variables and parameters. The size of the error in making this approximation is normally roughly the size of the largest neglected term. Suppose we want to understand the leading-order behaviour of the example above. When x = 0.001, the x^3 and 5x terms may be regarded as negligible, and dropped, along with any values in the third decimal place onwards in the two remaining terms. 
This gives the leading-order balance y = 0.1. Thus the leading-order behaviour of this equation at x = 0.001 is that y is constant. Similarly, when x = 10, the 5x and 0.1 terms may be regarded as negligible, and dropped, along with any values in the third significant figure onwards in the two remaining terms. This gives the leading-order balance y = x^3. Thus the leading-order behaviour of this equation at x = 10 is that y increases cubically with x. The main behaviour of y may thus be investigated at any value of x. The leading-order behaviour is more complicated when more terms are leading-order. At x = 2 there is a leading-order balance between the cubic and linear dependencies of y on x. Note that this description of finding leading-order balances and behaviours gives only an outline description of the process – it is not mathematically rigorous. == Next-to-leading order == Of course, y is not actually completely constant at x = 0.001 – this is just its main behaviour in the vicinity of this point. It may be that retaining only the leading-order (or approximately leading-order) terms, and regarding all the other smaller terms as negligible, is insufficient (when using the model for future prediction, for example), and so it may be necessary to also retain the set of next largest terms. These can be called the next-to-leading order (NLO) terms or corrections. The next set of terms down after that can be called the next-to-next-to-leading order (NNLO) terms or corrections. == Usage == === Matched asymptotic expansions === Leading-order simplification techniques are used in conjunction with the method of matched asymptotic expansions, when the accurate approximate solution in each subdomain is the leading-order solution. === Simplifying the Navier–Stokes equations === For particular fluid flow scenarios, the (very general) Navier–Stokes equations may be considerably simplified by considering only the leading-order components. For example, the Stokes flow equations. 
Also, the thin film equations of lubrication theory. === Simplification of differential equations by machine learning === Various differential equations may be locally simplified by considering only the leading-order components. Machine learning algorithms can partition simulation or observational data into localized regions, each governed by its own set of leading-order terms; this approach has been applied to aerodynamics, ocean dynamics, tumor-induced angiogenesis, and synthetic data. == See also == Valuation, an algebraic generalization of "leading order" == References ==
Wikipedia:Leah Keshet#0
Leah Edelstein-Keshet (Hebrew: לאה אדלשטיין-קשת) is an Israeli-Canadian mathematical biologist. Edelstein-Keshet is known for her contributions to the fields of mathematical biology and biophysics. Her research spans many topics including sub-cellular biology, ecology, and biomedical research, with particular focus on cell motility and the cytoskeleton, modeling of physiology and diseases such as autoimmune diabetes, and swarming and aggregation behavior in social organisms. She is a full-time professor at the University of British Columbia in Vancouver, Canada. == Early life and education == Edelstein-Keshet was born in Israel and moved to Canada with her parents when she was 12. She earned her Bachelor of Science and Master of Science in Mathematics from Dalhousie University and received her doctorate in Applied Mathematics in 1982 from the Weizmann Institute of Science in Israel, where she was supervised by Lee Segel. == Career == Edelstein-Keshet held teaching positions at Brown University and Duke University before joining the University of British Columbia as Associate Professor in 1989, where she is now Associate Head (Faculty Affairs). She has authored three books, including Mathematical Models in Biology in the SIAM series Classics in Applied Mathematics. In 1995 she became the first female president of the Society for Mathematical Biology. == Awards and recognition == In 2003 she was awarded the Krieger–Nelson Prize of the Canadian Mathematical Society. She became a Fellow of the Society for Industrial and Applied Mathematics in 2014 "for contributions to the mathematics and modeling of the cell, the immune system, and biological swarms, as well as to applied mathematics education". She was also awarded a Faculty of Science Award for Leadership from the University of British Columbia. She is the 2022 SIAM John von Neumann Prize Lecturer. 
== See also == Timeline of women in science == References == == External links == Leah Edelstein-Keshet at UBC Leah Edelstein-Keshet at the Mathematics Genealogy Project
Wikipedia:Least fixed point#0
In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set ("poset" for short) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique. == Examples == With the usual order on the real numbers, the least fixed point of the real function f(x) = x² is x = 0 (since the only other fixed point is 1 and 0 < 1). In contrast, f(x) = x + 1 has no fixed points at all, so has no least one, and f(x) = x has infinitely many fixed points, but has no least one. Let G = ( V , A ) {\displaystyle G=(V,A)} be a directed graph and v {\displaystyle v} be a vertex. The set of vertices accessible from v {\displaystyle v} can be defined as the least fixed point of the function f : ℘ ( V ) → ℘ ( V ) {\displaystyle f:\wp (V)\to \wp (V)} , defined as f ( X ) = { v } ∪ { x ∈ V : for some w ∈ X there is an arc from w to x } . {\displaystyle f(X)=\{v\}\cup \{x\in V:{\text{ for some }}w\in X{\text{ there is an arc from }}w{\text{ to }}x\}.} The set of vertices which are co-accessible from v {\displaystyle v} is defined by a similar least fixed point. The strongly connected component of v {\displaystyle v} is the intersection of those two least fixed points. Let G = ( V , Σ , R , S 0 ) {\displaystyle G=(V,\Sigma ,R,S_{0})} be a context-free grammar.
The set E {\displaystyle E} of symbols which produce the empty string ε {\displaystyle \varepsilon } can be obtained as the least fixed point of the function f : ℘ ( V ) → ℘ ( V ) {\displaystyle f:\wp (V)\to \wp (V)} , defined as f ( X ) = { S ∈ V : S ∈ X or ( S → ε ) ∈ R or ( S → S 1 … S n ) ∈ R and S i ∈ X , for all i } {\displaystyle f(X)=\{S\in V:\;S\in X{\text{ or }}(S\to \varepsilon )\in R{\text{ or }}(S\to S^{1}\dots S^{n})\in R{\text{ and }}S^{i}\in X{\text{, for all }}i\}} , where ℘ ( V ) {\displaystyle \wp (V)} denotes the power set of V {\displaystyle V} . == Applications == Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not. === Denotational semantics === In computer science, the denotational semantics approach uses least fixed points to obtain from a given program text a corresponding mathematical function, called its semantics. To this end, an artificial mathematical object, ⊥ {\displaystyle \bot } , is introduced, denoting the exceptional value "undefined". Given e.g. the program datatype int, its mathematical counterpart is defined as Z ⊥ = Z ∪ { ⊥ } ; {\displaystyle \mathbb {Z} _{\bot }=\mathbb {Z} \cup \{\bot \};} it is made a partially ordered set by defining ⊥ ⊏ n {\displaystyle \bot \sqsubset n} for each n ∈ Z {\displaystyle n\in \mathbb {Z} } and letting any two different members n , m ∈ Z {\displaystyle n,m\in \mathbb {Z} } be incomparable w.r.t. ⊏ {\displaystyle \sqsubset } . The semantics of a program definition int f(int n){...} is some mathematical function f : Z ⊥ → Z ⊥ . {\displaystyle f:\mathbb {Z} _{\bot }\to \mathbb {Z} _{\bot }.} If the program definition f does not terminate for some input n, this can be expressed mathematically as f ( n ) = ⊥ .
{\displaystyle f(n)=\bot .} The set of all mathematical functions is made partially ordered by defining f ⊑ g {\displaystyle f\sqsubseteq g} if, for each n , {\displaystyle n,} the relation f ( n ) ⊑ g ( n ) {\displaystyle f(n)\sqsubseteq g(n)} holds, that is, if f ( n ) {\displaystyle f(n)} is less defined or equal to g ( n ) . {\displaystyle g(n).} For example, the semantics of the expression x+x/x is less defined than that of x+1, since the former, but not the latter, maps 0 {\displaystyle 0} to ⊥ , {\displaystyle \bot ,} and they agree otherwise. Given some program text f, its mathematical counterpart is obtained as the least fixed point of some mapping from functions to functions that can be obtained by "translating" f. For example, a recursive C definition of the factorial function fact is translated to a mapping F : ( Z ⊥ → Z ⊥ ) → ( Z ⊥ → Z ⊥ ) , {\displaystyle F:(\mathbb {Z} _{\bot }\to \mathbb {Z} _{\bot })\to (\mathbb {Z} _{\bot }\to \mathbb {Z} _{\bot }),} defined as ( F ( f ) ) ( n ) = { 1 if n = 0 , n ⋅ f ( n − 1 ) if n ≠ ⊥ and n ≠ 0 , ⊥ if n = ⊥ . {\displaystyle (F(f))(n)={\begin{cases}1&{\text{if }}n=0,\\n\cdot f(n-1)&{\text{if }}n\neq \bot {\text{ and }}n\neq 0,\\\bot &{\text{if }}n=\bot .\\\end{cases}}} The mapping F {\displaystyle F} is defined in a non-recursive way, although fact was defined recursively. Under certain restrictions (see Kleene fixed-point theorem), which are met in the example, F {\displaystyle F} necessarily has a least fixed point, fact {\displaystyle \operatorname {fact} } , that is ( F ( fact ) ) ( n ) = fact ⁡ ( n ) {\displaystyle (F(\operatorname {fact} ))(n)=\operatorname {fact} (n)} for all n ∈ Z ⊥ {\displaystyle n\in \mathbb {Z} _{\bot }} . It is possible to show that fact ⁡ ( n ) = { n ! if n ≥ 0 , ⊥ if n < 0 or n = ⊥ . {\displaystyle \operatorname {fact} (n)={\begin{cases}n!&{\text{if }}n\geq 0,\\\bot &{\text{if }}n<0{\text{ or }}n=\bot .\end{cases}}} A larger fixed point of F {\displaystyle F} is e.g.
the function fact 0 , {\displaystyle \operatorname {fact} _{0},} defined by fact 0 ⁡ ( n ) = { n ! if n ≥ 0 , 0 if n < 0 , ⊥ if n = ⊥ , {\displaystyle \operatorname {fact} _{0}(n)={\begin{cases}n!&{\text{if }}n\geq 0,\\0&{\text{if }}n<0,\\\bot &{\text{if }}n=\bot ,\end{cases}}} however, this function does not correctly reflect the behavior of the above program text for negative n ; {\displaystyle n;} e.g. the call fact(-1) will not terminate at all, let alone return 0. Only the least fixed point, fact , {\displaystyle \operatorname {fact} ,} can reasonably be used as a mathematical program semantic. === Descriptive complexity === Immerman and Vardi independently showed the descriptive complexity result that the polynomial-time computable properties of linearly ordered structures are definable in FO(LFP), i.e. in first-order logic with a least fixed point operator. However, FO(LFP) is too weak to express all polynomial-time properties of unordered structures (for instance that a structure has even size). == Greatest fixed points == The greatest fixed point of a function can be defined analogously to the least fixed point, as the fixed point which is greater than any other fixed point, according to the order of the poset. In computer science, greatest fixed points are much less commonly used than least fixed points. Specifically, the posets found in domain theory usually do not have a greatest element, hence for a given function, there may be multiple, mutually incomparable maximal fixed points, and the greatest fixed point of that function may not exist. To address this issue, the optimal fixed point has been defined as the most-defined fixed point compatible with all other fixed points. The optimal fixed point always exists, and is the greatest fixed point if the greatest fixed point exists. The optimal fixed point allows formal study of recursive and corecursive functions that do not converge with the least fixed point. 
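On a finite poset such as the powerset lattice in the graph-reachability example above, the least fixed point of a monotone function can be computed by Kleene iteration: start from the bottom element (the empty set) and apply f until the result stabilizes. A minimal Python sketch; the adjacency-list graph and vertex names are made-up examples:

```python
def reachable(graph, v):
    """Least fixed point of f(X) = {v} ∪ {x : some w in X has an arc to x},
    computed by Kleene iteration from the bottom element (the empty set)."""
    def f(X):
        return {v} | {x for w in X for x in graph.get(w, ())}

    X = set()  # bottom element of the powerset lattice
    while True:
        Y = f(X)
        if Y == X:  # the first fixed point reached from bottom is the least one
            return X
        X = Y

# Hypothetical example graph with arcs v->a, a->b, c->a.
graph = {"v": ["a"], "a": ["b"], "c": ["a"]}
print(reachable(graph, "v"))  # the set {'v', 'a', 'b'}; 'c' is not reachable
```

Since f is monotone and the powerset of V is finite, the chain ∅ ⊆ f(∅) ⊆ f(f(∅)) ⊆ … stabilizes after at most |V| steps.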
Unfortunately, whereas Kleene's recursion theorem shows that the least fixed point is effectively computable, the optimal fixed point of a computable function may be a non-computable function. == See also == Knaster–Tarski theorem Fixed-point logic == Notes == == References == Immerman, Neil. Descriptive Complexity, 1999, Springer-Verlag. Libkin, Leonid. Elements of Finite Model Theory, 2004, Springer.
Wikipedia:Least-squares spectral analysis#0
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long and gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA. Developed in 1969 and 1971, LSSA is also known as the Vaníček method and the Gauss–Vaníček method after Petr Vaníček, and as the Lomb method or the Lomb–Scargle periodogram, based on the simplifications first by Nicholas R. Lomb and then by Jeffrey D. Scargle. == Historical background == The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time. However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what is nowadays called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms, connected by a procedure known today as matching pursuit with post-backfitting or the orthogonal matching pursuit. Petr Vaníček, a Canadian geophysicist and geodesist of the University of New Brunswick, also proposed in 1969 the matching-pursuit approach for equally and unequally spaced data, which he called "successive spectral analysis" and the result a "least-squares periodogram". He generalized this method to account for any systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples in 1971. Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R.
Lomb of the University of Sydney, who pointed out its close connection to periodogram analysis. Subsequently, the definition of a periodogram of unequally spaced data was modified and analyzed by Jeffrey D. Scargle of NASA Ames Research Center, who showed that, with minor changes, it becomes identical to Lomb's least-squares formula for fitting individual sinusoid frequencies. Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out regarding least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent." Press summarizes the development this way: A completely different method of spectral analysis for unevenly sampled data, one that mitigates these difficulties and has some other very desirable properties, was developed by Lomb, based in part on earlier work by Barning and Vanicek, and additionally elaborated by Scargle. In 1989, Michael J. Korenberg of Queen's University in Kingston, Ontario, developed the "fast orthogonal search" method of more quickly finding a near-optimal decomposition of spectra or other problems, similar to the technique that later became known as the orthogonal matching pursuit. == Development of LSSA and variants == === The Vaníček method === In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies using a standard linear regression or least-squares fit. 
The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known as matching pursuit with pre-backfitting). The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids). A data vector φ is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector x: ϕ ≈ A x {\displaystyle \phi \approx {\textbf {A}}x} , where the weight vector x is chosen to minimize the sum of squared errors in approximating φ. The solution for x is closed-form, using standard linear regression: x = ( A T A ) − 1 A T ϕ . {\displaystyle x=({\textbf {A}}^{\mathrm {T} }{\textbf {A}})^{-1}{\textbf {A}}^{\mathrm {T} }\phi .} Here the matrix A can be based on any set of mutually independent functions (not necessarily orthogonal) when evaluated at the sample times; functions used for spectral analysis are typically sines and cosines evenly distributed over the frequency range of interest. If we choose too many frequencies in a too-narrow frequency range, the functions will be insufficiently independent, the matrix ill-conditioned, and the resulting spectrum meaningless. When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix ATA is diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is an identity matrix times a constant, so the inversion is trivial.
The latter is the case when the sample times are equally spaced and the sinusoids are chosen as sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycles per sample, omitting the sine phases at 0 and maximum frequency where they are identically zero). This case is known as the discrete Fourier transform, slightly rewritten in terms of measurements and coefficients. x = A T ϕ {\displaystyle x={\textbf {A}}^{\mathrm {T} }\phi } — DFT case for N equally spaced samples and frequencies, within a scalar factor. === The Lomb method === In 1976, seeking to lower the computational burden of the Vaníček method (no longer an issue today), Lomb proposed using the above simplification in general, except for pair-wise correlations between sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not tightly spaced. This formulation is essentially that of the traditional periodogram but adapted for use with unevenly spaced samples. The vector x is a reasonably good estimate of an underlying spectrum, but since correlations are ignored, Ax is no longer a good approximation to the signal, and the method is no longer a least-squares method; yet it continues to be referred to as such in the literature. Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula so as to first find a time delay τ {\displaystyle \tau } such that this pair of sinusoids would be mutually orthogonal at sample times t j {\displaystyle t_{j}} , and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency. This procedure made his modified periodogram method exactly equivalent to Lomb's method. The time delay τ {\displaystyle \tau } is defined by tan ⁡ 2 ω τ = ∑ j sin ⁡ 2 ω t j ∑ j cos ⁡ 2 ω t j .
{\displaystyle \tan {2\omega \tau }={\frac {\sum _{j}\sin 2\omega t_{j}}{\sum _{j}\cos 2\omega t_{j}}}.} Then the periodogram at frequency ω {\displaystyle \omega } is estimated as: P x ( ω ) = 1 2 ( [ ∑ j X j cos ⁡ ω ( t j − τ ) ] 2 ∑ j cos 2 ⁡ ω ( t j − τ ) + [ ∑ j X j sin ⁡ ω ( t j − τ ) ] 2 ∑ j sin 2 ⁡ ω ( t j − τ ) ) {\displaystyle P_{x}(\omega )={\frac {1}{2}}\left({\frac {\left[\sum _{j}X_{j}\cos \omega (t_{j}-\tau )\right]^{2}}{\sum _{j}\cos ^{2}\omega (t_{j}-\tau )}}+{\frac {\left[\sum _{j}X_{j}\sin \omega (t_{j}-\tau )\right]^{2}}{\sum _{j}\sin ^{2}\omega (t_{j}-\tau )}}\right)} , which, as Scargle reports, has the same statistical distribution as the periodogram in the evenly sampled case. At any individual frequency ω {\displaystyle \omega } , this method gives the same power as does a least-squares fit to sinusoids of that frequency and of the form: ϕ ( t ) = A sin ⁡ ω t + B cos ⁡ ω t . {\displaystyle \phi (t)=A\sin \omega t+B\cos \omega t.} In practice, it is always difficult to judge whether a given Lomb peak is significant or not, especially when the nature of the noise is unknown; for example, a false-alarm spectral peak in the Lomb periodogram analysis of a noisy periodic signal may result from noise in turbulence data. Fourier methods can also report false spectral peaks when analyzing patched-up or otherwise edited data. === The generalized Lomb–Scargle periodogram === The standard Lomb–Scargle periodogram is only valid for a model with a zero mean. Commonly, this is handled approximately by subtracting the mean of the data before calculating the periodogram. However, this is an inaccurate assumption when the mean of the model (the fitted sinusoids) is non-zero. The generalized Lomb–Scargle periodogram removes this assumption and explicitly solves for the mean. In this case, the function fitted is ϕ ( t ) = A sin ⁡ ω t + B cos ⁡ ω t + C .
{\displaystyle \phi (t)=A\sin \omega t+B\cos \omega t+C.} The generalized Lomb–Scargle periodogram has also been referred to in the literature as a floating mean periodogram. === Korenberg's "fast orthogonal search" method === Michael Korenberg of Queen's University in Kingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set — such as sinusoidal components for spectral analysis — called the fast orthogonal search (FOS). Mathematically, FOS uses a slightly modified Cholesky decomposition in a mean-square error reduction (MSER) process, implemented as a sparse matrix inversion. As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis, so it can accurately identify embedded periodicities and excel with unequally spaced data. The fast orthogonal search method was also applied to other problems, such as nonlinear system identification. === Palmer's Chi-squared method === Palmer has developed a method for finding the best-fit function to any chosen number of harmonics, allowing more freedom to find non-sinusoidal harmonic functions. His is a fast (FFT-based) technique for weighted least-squares analysis on arbitrarily spaced data with non-uniform standard errors. Source code that implements this technique is available. Because data are often not sampled at uniformly spaced discrete times, this method "grids" the data by sparsely filling a time series array at the sample times. All intervening grid points receive zero statistical weight, equivalent to having infinite error bars at times between samples. == Applications == The most useful feature of LSSA is enabling incomplete records to be spectrally analyzed — without the need to manipulate data or to invent otherwise non-existent data. Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. 
Spectral magnitudes thus defined enable a straightforward significance-level assessment of the output. Alternatively, spectral magnitudes in the Vaníček spectrum can also be expressed in dB. Note that spectral magnitudes in the Vaníček spectrum follow the β-distribution. Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points. No such inverse procedure is known for the periodogram method. == Implementation == The LSSA can be implemented in less than a page of MATLAB code. In essence: "to compute the least-squares spectrum we must compute m spectral values ... which involves performing the least-squares approximation m times, each time to get [the spectral power] for a different frequency" That is, for each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized; following the method known as the Lomb/Scargle periodogram, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product; finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record. This method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal on the data points; it is Vaníček's original method.
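The per-frequency recipe just described can be sketched compactly in Python with NumPy (an illustrative toy implementation of the Lomb/Scargle formulas above, not the MATLAB or FORTRAN packages cited elsewhere in this article; the test signal is made up):

```python
import numpy as np

def lomb_scargle(t, x, omegas):
    """Lomb/Scargle periodogram of samples x taken at (possibly uneven)
    times t, evaluated at the angular frequencies in omegas."""
    x = x - x.mean()  # zero-mean model, as in the standard periodogram
    power = np.empty(len(omegas))
    for k, w in enumerate(omegas):
        # Time shift tau orthogonalizes sine and cosine at the sample
        # times: tan(2*w*tau) = sum sin(2*w*t) / sum cos(2*w*t).
        tau = np.arctan2(np.sin(2 * w * t).sum(),
                         np.cos(2 * w * t).sum()) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        # Normalized dot products of the data with the shifted sinusoids.
        power[k] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return power

# Unevenly sampled noisy sinusoid at 1.5 rad/s; the spectrum peaks there.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 50.0, 200))
x = np.sin(1.5 * t) + 0.1 * rng.standard_normal(200)
omegas = np.linspace(0.5, 3.0, 500)
print(omegas[np.argmax(lomb_scargle(t, x, omegas))])  # close to 1.5
```

Note that the frequency grid here over-samples the spectrum, which the Lomb method permits.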
In addition, it is possible to perform a full simultaneous or in-context least-squares fit by solving a matrix equation and partitioning the total data variance between the specified sinusoid frequencies. Such a matrix least-squares solution is natively available in MATLAB as the backslash operator. Furthermore, the simultaneous or in-context method, as opposed to the independent or out-of-context version (as well as the periodogram version due to Lomb), cannot fit more components (sines and cosines) than there are data samples, so that: "...serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trig functions) becoming nearly linearly dependent with each other, thereby producing an ill-conditioned or near singular N. To avoid such ill conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g., equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies..." Lomb's periodogram method, on the other hand, can use an arbitrarily high number of, or density of, frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor. However, as mentioned above, one should keep in mind that Lomb's simplification and diverging from the least squares criterion opened up his technique to grave sources of errors, resulting even in false spectral peaks. In Fourier analysis, such as the Fourier transform and discrete Fourier transform, the sinusoids fitted to data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions versus an in-context simultaneous least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies. 
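The full simultaneous (in-context) fit described above amounts to a single matrix least-squares solve; MATLAB's backslash corresponds to `numpy.linalg.lstsq` in the Python sketch below (the signal and frequencies are made-up examples):

```python
import numpy as np

def lssa_design(t, omegas):
    """Design matrix whose columns are cos(w*t) and sin(w*t) for each
    angular frequency w, evaluated at the sample times t."""
    return np.column_stack([f(w * t) for w in omegas
                            for f in (np.cos, np.sin)])

# Uneven sample times and a signal built from two sinusoids.
t = np.sort(np.random.default_rng(1).uniform(0.0, 20.0, 100))
phi = 2.0 * np.cos(0.7 * t) - 1.0 * np.sin(1.9 * t)

# In-context fit: solve min ||A x - phi||^2 for all frequencies at once,
# partitioning the data variance between the specified sinusoids.
A = lssa_design(t, [0.7, 1.9])
x, *_ = np.linalg.lstsq(A, phi, rcond=None)
print(np.round(x, 3))  # weights for cos(0.7t), sin(0.7t), cos(1.9t), sin(1.9t)
```

With well-separated frequencies the matrix is well-conditioned and the recovered weights are close to [2, 0, 0, -1]; crowding the frequency grid would make the columns nearly dependent, as the quotation above warns.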
In the past, Fourier analysis was for many the method of choice, thanks to its processing-efficient fast Fourier transform implementation, when complete data records with equally spaced samples were available; the Fourier family of techniques was also used to analyze gapped records, which, however, required manipulating and even inventing non-existent data just to be able to run a Fourier-based algorithm. == See also == Non-uniform discrete Fourier transform Orthogonal functions SigSpec Sinusoidal model Spectral density Spectral density estimation, for competing alternatives == References == == External links == LSSA package freeware download, FORTRAN, Vaníček's least-squares spectral analysis method, from the University of New Brunswick. LSWAVE package freeware download, MATLAB, includes Vaníček's least-squares spectral analysis method, from the U.S. National Geodetic Survey.
Wikipedia:Lebesgue integrability condition#0
In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration, or simulated using Monte Carlo integration. == Overview == Imagine you have a curve on a graph, and the curve stays above the x-axis between two points, a and b. The area under that curve, from a to b, is what we want to figure out. This area can be described as the set of all points (x, y) on the graph that follow these rules: a ≤ x ≤ b (the x-coordinate is between a and b) and 0 < y < f(x) (the y-coordinate is between 0 and the height of the curve f(x)). Mathematically, this region can be expressed in set-builder notation as S = { ( x , y ) : a ≤ x ≤ b , 0 < y < f ( x ) } . {\displaystyle S=\left\{(x,y)\,:\,a\leq x\leq b\,,\,0<y<f(x)\right\}.} To measure this area, we use a Riemann integral, which is written as: ∫ a b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx.} This notation means “the integral of f(x) from a to b,” and it represents the exact area under the curve f(x) and above the x-axis, between x = a and x = b. The idea behind the Riemann integral is to break the area into small, simple shapes (like rectangles), add up their areas, and then make the rectangles smaller and smaller to get a better estimate. In the end, when the rectangles are infinitely small, the sum gives the exact area, which is what the integral represents. If the curve dips below the x-axis, the integral gives a signed area. This means the integral adds the part above the x-axis as positive and subtracts the part below the x-axis as negative. 
So, the result of ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} can be positive, negative, or zero, depending on how much of the curve is above or below the x-axis. == Definition == === Partitions of an interval === A partition of an interval [a, b] is a finite sequence of numbers of the form a = x 0 < x 1 < x 2 < ⋯ < x i < ⋯ < x n = b {\displaystyle a=x_{0}<x_{1}<x_{2}<\dots <x_{i}<\dots <x_{n}=b} Each [xi, xi + 1] is called a sub-interval of the partition. The mesh or norm of a partition is defined to be the length of the longest sub-interval, that is, max ( x i + 1 − x i ) , i ∈ [ 0 , n − 1 ] . {\displaystyle \max \left(x_{i+1}-x_{i}\right),\quad i\in [0,n-1].} A tagged partition P(x, t) of an interval [a, b] is a partition together with a choice of a sample point within each sub-interval: that is, numbers t0, ..., tn − 1 with ti ∈ [xi, xi + 1] for each i. The mesh of a tagged partition is the same as that of an ordinary partition. Suppose that two partitions P(x, t) and Q(y, s) are both partitions of the interval [a, b]. We say that Q(y, s) is a refinement of P(x, t) if for each integer i, with i ∈ [0, n], there exists an integer r(i) such that xi = yr(i) and such that ti = sj for some j with j ∈ [r(i), r(i + 1)]. That is, a refinement of a tagged partition breaks up some of the sub-intervals and adds sample points where necessary, "refining" the accuracy of the partition. We can turn the set of all tagged partitions into a directed set by saying that one tagged partition is greater than or equal to another if the former is a refinement of the latter. === Riemann sum === Let f be a real-valued function defined on the interval [a, b]. The Riemann sum of f with respect to a tagged partition P(x, t) of [a, b] is ∑ i = 0 n − 1 f ( t i ) ( x i + 1 − x i ) . {\displaystyle \sum _{i=0}^{n-1}f(t_{i})\left(x_{i+1}-x_{i}\right).} Each term in the sum is the product of the value of the function at a given point and the length of an interval.
Consequently, each term represents the (signed) area of a rectangle with height f(ti) and width xi + 1 − xi. The Riemann sum is the (signed) area of all the rectangles. Closely related concepts are the lower and upper Darboux sums. These are similar to Riemann sums, but the tags are replaced by the infimum and supremum (respectively) of f on each sub-interval: L ( f , P ) = ∑ i = 0 n − 1 inf t ∈ [ x i , x i + 1 ] f ( t ) ( x i + 1 − x i ) , U ( f , P ) = ∑ i = 0 n − 1 sup t ∈ [ x i , x i + 1 ] f ( t ) ( x i + 1 − x i ) . {\displaystyle {\begin{aligned}L(f,P)&=\sum _{i=0}^{n-1}\inf _{t\in [x_{i},x_{i+1}]}f(t)(x_{i+1}-x_{i}),\\U(f,P)&=\sum _{i=0}^{n-1}\sup _{t\in [x_{i},x_{i+1}]}f(t)(x_{i+1}-x_{i}).\end{aligned}}} If f is continuous, then the lower and upper Darboux sums for an untagged partition are equal to the Riemann sum for that partition, where the tags are chosen to be the minimum or maximum (respectively) of f on each subinterval. (When f is discontinuous on a subinterval, there may not be a tag that achieves the infimum or supremum on that subinterval.) The Darboux integral, which is similar to the Riemann integral but based on Darboux sums, is equivalent to the Riemann integral. === Riemann integral === Loosely speaking, the Riemann integral is the limit of the Riemann sums of a function as the partitions get finer. If the limit exists then the function is said to be integrable (or more specifically Riemann-integrable). The Riemann sum can be made as close as desired to the Riemann integral by making the partition fine enough. One important requirement is that the mesh of the partitions must become smaller and smaller, so that it has the limit zero. If this were not so, then we would not be getting a good approximation to the function on certain subintervals. In fact, this is enough to define an integral. 
To be specific, we say that the Riemann integral of f exists and equals s if the following condition holds: For all ε > 0, there exists δ > 0 such that for any tagged partition x0, ..., xn and t0, ..., tn − 1 whose mesh is less than δ, we have | ( ∑ i = 0 n − 1 f ( t i ) ( x i + 1 − x i ) ) − s | < ε . {\displaystyle \left|\left(\sum _{i=0}^{n-1}f(t_{i})(x_{i+1}-x_{i})\right)-s\right|<\varepsilon .} Unfortunately, this definition is very difficult to use. It would help to develop an equivalent definition of the Riemann integral which is easier to work with. We develop this definition now, with a proof of equivalence following. Our new definition says that the Riemann integral of f exists and equals s if the following condition holds: For all ε > 0, there exists a tagged partition y0, ..., ym and r0, ..., rm − 1 such that for any tagged partition x0, ..., xn and t0, ..., tn − 1 which is a refinement of y0, ..., ym and r0, ..., rm − 1, we have | ( ∑ i = 0 n − 1 f ( t i ) ( x i + 1 − x i ) ) − s | < ε . {\displaystyle \left|\left(\sum _{i=0}^{n-1}f(t_{i})(x_{i+1}-x_{i})\right)-s\right|<\varepsilon .} Both of these mean that eventually, the Riemann sum of f with respect to any partition gets trapped close to s. Since this is true no matter how close we demand the sums be trapped, we say that the Riemann sums converge to s. These definitions are actually a special case of a more general concept, a net. As we stated earlier, these two definitions are equivalent. In other words, s works in the first definition if and only if s works in the second definition. To show that the first definition implies the second, start with an ε, and choose a δ that satisfies the condition. Choose any tagged partition whose mesh is less than δ. Its Riemann sum is within ε of s, and any refinement of this partition will also have mesh less than δ, so the Riemann sum of the refinement will also be within ε of s. 
To show that the second definition implies the first, it is easiest to use the Darboux integral. First, one shows that the second definition is equivalent to the definition of the Darboux integral; for this see the Darboux integral article. Now we will show that a Darboux integrable function satisfies the first definition. Fix ε, and choose a partition y0, ..., ym such that the lower and upper Darboux sums with respect to this partition are within ε/2 of the value s of the Darboux integral. Let r = 2 sup x ∈ [ a , b ] | f ( x ) | . {\displaystyle r=2\sup _{x\in [a,b]}|f(x)|.} If r = 0, then f is the zero function, which is clearly both Darboux and Riemann integrable with integral zero. Therefore, we will assume that r > 0. If m > 1, then we choose δ such that δ < min { ε 2 r ( m − 1 ) , ( y 1 − y 0 ) , ( y 2 − y 1 ) , ⋯ , ( y m − y m − 1 ) } {\displaystyle \delta <\min \left\{{\frac {\varepsilon }{2r(m-1)}},\left(y_{1}-y_{0}\right),\left(y_{2}-y_{1}\right),\cdots ,\left(y_{m}-y_{m-1}\right)\right\}} If m = 1, then we choose δ to be less than one. Choose a tagged partition x0, ..., xn and t0, ..., tn − 1 with mesh smaller than δ. We must show that the Riemann sum is within ε of s. To see this, choose an interval [xi, xi + 1]. If this interval is contained within some [yj, yj + 1], then m j ≤ f ( t i ) ≤ M j {\displaystyle m_{j}\leq f(t_{i})\leq M_{j}} where mj and Mj are respectively, the infimum and the supremum of f on [yj, yj + 1]. If all intervals had this property, then this would conclude the proof, because each term in the Riemann sum would be bounded by a corresponding term in the Darboux sums, and we chose the Darboux sums to be near s. This is the case when m = 1, so the proof is finished in that case. Therefore, we may assume that m > 1. In this case, it is possible that one of the [xi, xi + 1] is not contained in any [yj, yj + 1]. Instead, it may stretch across two of the intervals determined by y0, ..., ym. 
(It cannot meet three intervals because δ is assumed to be smaller than the length of any one interval.) In symbols, it may happen that y j < x i < y j + 1 < x i + 1 < y j + 2 . {\displaystyle y_{j}<x_{i}<y_{j+1}<x_{i+1}<y_{j+2}.} (We may assume that all the inequalities are strict because otherwise we are in the previous case by our assumption on the length of δ.) This can happen at most m − 1 times. To handle this case, we will estimate the difference between the Riemann sum and the Darboux sum by subdividing the partition x0, ..., xn at yj + 1. The term f(ti)(xi + 1 − xi) in the Riemann sum splits into two terms: f ( t i ) ( x i + 1 − x i ) = f ( t i ) ( x i + 1 − y j + 1 ) + f ( t i ) ( y j + 1 − x i ) . {\displaystyle f\left(t_{i}\right)\left(x_{i+1}-x_{i}\right)=f\left(t_{i}\right)\left(x_{i+1}-y_{j+1}\right)+f\left(t_{i}\right)\left(y_{j+1}-x_{i}\right).} Suppose, without loss of generality, that ti ∈ [yj, yj + 1]. Then m j ≤ f ( t i ) ≤ M j , {\displaystyle m_{j}\leq f(t_{i})\leq M_{j},} so this term is bounded by the corresponding term in the Darboux sum for yj. To bound the other term, notice that x i + 1 − y j + 1 < δ < ε 2 r ( m − 1 ) , {\displaystyle x_{i+1}-y_{j+1}<\delta <{\frac {\varepsilon }{2r(m-1)}},} It follows that, for some (indeed any) t*i ∈ [yj + 1, xi + 1], | f ( t i ) − f ( t i ∗ ) | ( x i + 1 − y j + 1 ) < ε 2 ( m − 1 ) . {\displaystyle \left|f\left(t_{i}\right)-f\left(t_{i}^{*}\right)\right|\left(x_{i+1}-y_{j+1}\right)<{\frac {\varepsilon }{2(m-1)}}.} Since this happens at most m − 1 times, the distance between the Riemann sum and a Darboux sum is at most ε/2. Therefore, the distance between the Riemann sum and s is at most ε. == Examples == Let f : [ 0 , 1 ] → R {\displaystyle f:[0,1]\to \mathbb {R} } be the function which takes the value 1 at every point. Any Riemann sum of f on [0, 1] will have the value 1, therefore the Riemann integral of f on [0, 1] is 1. 
Let I Q : [ 0 , 1 ] → R {\displaystyle I_{\mathbb {Q} }:[0,1]\to \mathbb {R} } be the indicator function of the rational numbers in [0, 1]; that is, I Q {\displaystyle I_{\mathbb {Q} }} takes the value 1 on rational numbers and 0 on irrational numbers. This function does not have a Riemann integral. To prove this, we will show how to construct tagged partitions whose Riemann sums get arbitrarily close to both zero and one. To start, let x0, ..., xn and t0, ..., tn − 1 be a tagged partition (each ti is between xi and xi + 1). Choose ε > 0. The ti have already been chosen, and we can't change the value of f at those points. But if we cut the partition into tiny pieces around each ti, we can minimize the effect of the ti. Then, by carefully choosing the new tags, we can make the value of the Riemann sum turn out to be within ε of either zero or one. Our first step is to cut up the partition. There are n of the ti, and we want their total effect to be less than ε. If we confine each of them to an interval of length less than ε/n, then the contribution of each ti to the Riemann sum will be at least 0 · ε/n and at most 1 · ε/n. This makes the total sum at least zero and at most ε. So let δ be a positive number less than ε/n. If it happens that two of the ti are within δ of each other, choose δ smaller. If it happens that some ti is within δ of some xj, and ti is not equal to xj, choose δ smaller. Since there are only finitely many ti and xj, we can always choose δ sufficiently small. Now we add two cuts to the partition for each ti. One of the cuts will be at ti − δ/2, and the other will be at ti + δ/2. If one of these leaves the interval [0, 1], then we leave it out. ti will be the tag corresponding to the subinterval [ t i − δ 2 , t i + δ 2 ] . {\displaystyle \left[t_{i}-{\frac {\delta }{2}},t_{i}+{\frac {\delta }{2}}\right].} If ti is directly on top of one of the xj, then we let ti be the tag for both intervals: [ t i − δ 2 , x j ] , and [ x j , t i + δ 2 ] . 
{\displaystyle \left[t_{i}-{\frac {\delta }{2}},x_{j}\right],\quad {\text{and}}\quad \left[x_{j},t_{i}+{\frac {\delta }{2}}\right].} We still have to choose tags for the other subintervals. We will choose them in two different ways. The first way is to always choose a rational point, so that the Riemann sum is as large as possible. This will make the value of the Riemann sum at least 1 − ε. The second way is to always choose an irrational point, so that the Riemann sum is as small as possible. This will make the value of the Riemann sum at most ε. Since we started from an arbitrary partition and ended up as close as we wanted to either zero or one, it is false to say that we are eventually trapped near some number s, so this function is not Riemann integrable. However, it is Lebesgue integrable. In the Lebesgue sense its integral is zero, since the function is zero almost everywhere. But this is a fact that is beyond the reach of the Riemann integral. There are even worse examples. I Q {\displaystyle I_{\mathbb {Q} }} is equivalent (that is, equal almost everywhere) to a Riemann integrable function, but there are non-Riemann integrable bounded functions which are not equivalent to any Riemann integrable function. For example, let C be the Smith–Volterra–Cantor set, and let IC be its indicator function. Because C is not Jordan measurable, IC is not Riemann integrable. Moreover, no function g equivalent to IC is Riemann integrable: g, like IC, must be zero on a dense set, so as in the previous example, any Riemann sum of g has a refinement which is within ε of 0 for any positive number ε. But if the Riemann integral of g exists, then it must equal the Lebesgue integral of IC, which is 1/2. Therefore, g is not Riemann integrable. == Similar concepts == It is popular to define the Riemann integral as the Darboux integral. This is because the Darboux integral is technically simpler and because a function is Riemann-integrable if and only if it is Darboux-integrable. 
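The Darboux characterization just mentioned can be visualized numerically. In the sketch below (an illustration; the true infimum and supremum on each subinterval are approximated by dense sampling, which happens to be exact for a monotone function such as x²), the lower and upper Darboux sums squeeze the value of the integral:

```python
def darboux_sums(f, points, samples_per_cell=1000):
    """Approximate lower and upper Darboux sums by sampling each subinterval.
    (True inf/sup require analysis; dense sampling is an illustrative stand-in,
    and is exact when f is monotone on each cell.)"""
    lower = upper = 0.0
    for a, b in zip(points, points[1:]):
        values = [f(a + (b - a) * j / samples_per_cell)
                  for j in range(samples_per_cell + 1)]
        lower += min(values) * (b - a)
        upper += max(values) * (b - a)
    return lower, upper

f = lambda x: x * x
for n in (4, 16, 64):
    points = [i / n for i in range(n + 1)]
    lo, hi = darboux_sums(f, points)
    print(n, lo, hi)   # both bounds approach 1/3 as n grows
```

Since every Riemann sum for a given partition lies between the corresponding lower and upper Darboux sums, squeezing the two bounds together forces the Riemann sums toward the same value.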
Some calculus books do not use general tagged partitions, but limit themselves to specific types of tagged partitions. If the type of partition is limited too much, some non-integrable functions may appear to be integrable. One popular restriction is the use of "left-hand" and "right-hand" Riemann sums. In a left-hand Riemann sum, ti = xi for all i, and in a right-hand Riemann sum, ti = xi + 1 for all i. On its own, this restriction does not pose a problem: we can refine any partition in a way that makes it a left-hand or right-hand sum by subdividing it at each ti. In more formal language, the set of all left-hand Riemann sums and the set of all right-hand Riemann sums are each cofinal in the set of all tagged partitions. Another popular restriction is the use of regular subdivisions of an interval. For example, the nth regular subdivision of [0, 1] consists of the intervals [ 0 , 1 n ] , [ 1 n , 2 n ] , … , [ n − 1 n , 1 ] . {\displaystyle \left[0,{\frac {1}{n}}\right],\left[{\frac {1}{n}},{\frac {2}{n}}\right],\ldots ,\left[{\frac {n-1}{n}},1\right].} Again, this restriction alone does not pose a problem, but the reasoning required to see this fact is more difficult than in the case of left-hand and right-hand Riemann sums. However, combining these restrictions, so that one uses only left-hand or right-hand Riemann sums on regularly divided intervals, is dangerous. If a function is known in advance to be Riemann integrable, then this technique will give the correct value of the integral. But under these conditions the indicator function I Q {\displaystyle I_{\mathbb {Q} }} will appear to be integrable on [0, 1] with integral equal to one: Every endpoint of every subinterval will be a rational number, so the function will always be evaluated at rational numbers, and hence it will appear to always equal one. The problem with this definition becomes apparent when we try to split the integral into two pieces.
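The spurious conclusion for the indicator of the rationals can be checked concretely. In the sketch below (an illustration; rationals are represented exactly with Python's fractions module, and the indicator simply reports whether its argument is such an exact rational), every left endpoint k/n of a regular subdivision is rational, so the left-hand sum equals 1 for every n:

```python
from fractions import Fraction

def indicator_Q(x):
    """1 on rationals, 0 elsewhere; here 'rational' means an exact Fraction."""
    return 1 if isinstance(x, Fraction) else 0

def left_sum(f, n):
    """Left-hand Riemann sum of f on [0, 1] over the nth regular subdivision."""
    h = Fraction(1, n)
    return sum(f(Fraction(k, n)) * h for k in range(n))

# Every left endpoint k/n is rational, so the sum is exactly 1 for all n,
# even though the indicator of the rationals has no Riemann integral.
print([left_sum(indicator_Q, n) for n in (4, 10, 1000)])  # every value equals 1
```

No refinement of these particular sums ever samples an irrational point, which is precisely why the combined restriction is dangerous.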
The following equation ought to hold: ∫ 0 2 − 1 I Q ( x ) d x + ∫ 2 − 1 1 I Q ( x ) d x = ∫ 0 1 I Q ( x ) d x . {\displaystyle \int _{0}^{{\sqrt {2}}-1}I_{\mathbb {Q} }(x)\,dx+\int _{{\sqrt {2}}-1}^{1}I_{\mathbb {Q} }(x)\,dx=\int _{0}^{1}I_{\mathbb {Q} }(x)\,dx.} If we use regular subdivisions and left-hand or right-hand Riemann sums, then the two terms on the left are equal to zero, since every endpoint except 0 and 1 will be irrational, but as we have seen the term on the right will equal 1. As defined above, the Riemann integral avoids this problem by refusing to integrate I Q . {\displaystyle I_{\mathbb {Q} }.} The Lebesgue integral is defined in such a way that all these integrals are 0. == Properties == === Linearity === The Riemann integral is a linear transformation; that is, if f and g are Riemann-integrable on [a, b] and α and β are constants, then ∫ a b ( α f ( x ) + β g ( x ) ) d x = α ∫ a b f ( x ) d x + β ∫ a b g ( x ) d x . {\displaystyle \int _{a}^{b}(\alpha f(x)+\beta g(x))\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.} Because the Riemann integral of a function is a number, this makes the Riemann integral a linear functional on the vector space of Riemann-integrable functions. == Integrability == A bounded function on a compact interval [a, b] is Riemann integrable if and only if it is continuous almost everywhere (the set of its points of discontinuity has measure zero, in the sense of Lebesgue measure). This is the Lebesgue–Vitali theorem (of characterization of the Riemann integrable functions). It was proven independently by Giuseppe Vitali and by Henri Lebesgue in 1907, and uses the notion of measure zero, but makes use of neither Lebesgue's general measure nor his integral. The integrability condition can be proven in various ways, one of which is sketched below.
In particular, any set that is at most countable has Lebesgue measure zero, and thus a bounded function (on a compact interval) with only finitely or countably many discontinuities is Riemann integrable. Another sufficient criterion for Riemann integrability over [a, b], one which does not involve the concept of measure, is the existence of a right-hand (or left-hand) limit at every point in [a, b) (or (a, b]). An indicator function of a bounded set is Riemann-integrable if and only if the set is Jordan measurable. The Riemann integral can be interpreted measure-theoretically as the integral with respect to the Jordan measure. If a real-valued function is monotone on the interval [a, b] it is Riemann integrable, since its set of discontinuities is at most countable, and therefore of Lebesgue measure zero. If a real-valued function on [a, b] is Riemann integrable, it is Lebesgue integrable. That is, Riemann-integrability is a stronger (meaning more difficult to satisfy) condition than Lebesgue-integrability. The converse does not hold; not all Lebesgue-integrable functions are Riemann integrable. The Lebesgue–Vitali theorem does not imply that all types of discontinuity carry the same weight in obstructing the Riemann integrability of a real-valued bounded function on [a, b]. In fact, certain discontinuities play no role at all in the Riemann integrability of the function, a consequence of the classification of the discontinuities of a function. If fn is a uniformly convergent sequence on [a, b] with limit f, then Riemann integrability of all fn implies Riemann integrability of f, and ∫ a b f d x = ∫ a b lim n → ∞ f n d x = lim n → ∞ ∫ a b f n d x . {\displaystyle \int _{a}^{b}f\,dx=\int _{a}^{b}{\lim _{n\to \infty }{f_{n}}\,dx}=\lim _{n\to \infty }\int _{a}^{b}f_{n}\,dx.} However, the Lebesgue monotone convergence theorem (on a monotone pointwise limit) does not hold for Riemann integrals.
Thus, in Riemann integration, taking limits under the integral sign is far more difficult to logically justify than in Lebesgue integration. == Generalizations == It is easy to extend the Riemann integral to functions with values in the Euclidean vector space R n {\displaystyle \mathbb {R} ^{n}} for any n. The integral is defined component-wise; in other words, if f = (f1, ..., fn) then ∫ f = ( ∫ f 1 , … , ∫ f n ) . {\displaystyle \int \mathbf {f} =\left(\int f_{1},\,\dots ,\int f_{n}\right).} In particular, since the complex numbers are a real vector space, this allows the integration of complex valued functions. The Riemann integral is only defined on bounded intervals, and it does not extend well to unbounded intervals. The simplest possible extension is to define such an integral as a limit, in other words, as an improper integral: ∫ − ∞ ∞ f ( x ) d x = lim a → − ∞ b → ∞ ∫ a b f ( x ) d x . {\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{a\to -\infty \atop b\to \infty }\int _{a}^{b}f(x)\,dx.} This definition carries with it some subtleties, such as the fact that it is not always equivalent to compute the Cauchy principal value lim a → ∞ ∫ − a a f ( x ) d x . {\displaystyle \lim _{a\to \infty }\int _{-a}^{a}f(x)\,dx.} For example, consider the sign function f(x) = sgn(x) which is 0 at x = 0, 1 for x > 0, and −1 for x < 0. By symmetry, ∫ − a a f ( x ) d x = 0 {\displaystyle \int _{-a}^{a}f(x)\,dx=0} always, regardless of a. But there are many ways for the interval of integration to expand to fill the real line, and other ways can produce different results; in other words, the multivariate limit does not always exist. We can compute ∫ − a 2 a f ( x ) d x = a , ∫ − 2 a a f ( x ) d x = − a . {\displaystyle {\begin{aligned}\int _{-a}^{2a}f(x)\,dx&=a,\\\int _{-2a}^{a}f(x)\,dx&=-a.\end{aligned}}} In general, this improper Riemann integral is undefined. 
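The three values quoted above (0, a, and −a) for the sign function can be reproduced numerically. The sketch below uses midpoint Riemann sums; the value a = 5 and the resolution n are arbitrary illustrative choices:

```python
def sgn(x):
    """Sign function: 0 at x = 0, 1 for x > 0, -1 for x < 0."""
    return (x > 0) - (x < 0)

def midpoint_integral(f, a, b, n=100000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a = 5.0
print(midpoint_integral(sgn, -a, a))      # ≈ 0, by symmetry
print(midpoint_integral(sgn, -a, 2 * a))  # ≈ a
print(midpoint_integral(sgn, -2 * a, a))  # ≈ -a
```

Different ways of letting the endpoints grow give genuinely different limits, which is exactly why the doubly infinite improper integral of sgn is undefined.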
Even standardizing a way for the interval to approach the real line does not work because it leads to disturbingly counterintuitive results. If we agree (for instance) that the improper integral should always be lim a → ∞ ∫ − a a f ( x ) d x , {\displaystyle \lim _{a\to \infty }\int _{-a}^{a}f(x)\,dx,} then the integral of the translation f(x − 1) is −2, so this definition is not invariant under shifts, a highly undesirable property. In fact, not only does this function not have an improper Riemann integral, its Lebesgue integral is also undefined (it equals ∞ − ∞). Unfortunately, the improper Riemann integral is not powerful enough. The most severe problem is that there are no widely applicable theorems for commuting improper Riemann integrals with limits of functions. In applications such as Fourier series it is important to be able to approximate the integral of a function using integrals of approximations to the function. For proper Riemann integrals, a standard theorem states that if fn is a sequence of functions that converge uniformly to f on a compact set [a, b], then lim n → ∞ ∫ a b f n ( x ) d x = ∫ a b f ( x ) d x . {\displaystyle \lim _{n\to \infty }\int _{a}^{b}f_{n}(x)\,dx=\int _{a}^{b}f(x)\,dx.} On non-compact intervals such as the real line, this is false. For example, take fn(x) to be n−1 on [0, n] and zero elsewhere. For all n we have: ∫ − ∞ ∞ f n d x = 1. {\displaystyle \int _{-\infty }^{\infty }f_{n}\,dx=1.} The sequence (fn) converges uniformly to the zero function, and clearly the integral of the zero function is zero. Consequently, ∫ − ∞ ∞ f d x ≠ lim n → ∞ ∫ − ∞ ∞ f n d x . {\displaystyle \int _{-\infty }^{\infty }f\,dx\neq \lim _{n\to \infty }\int _{-\infty }^{\infty }f_{n}\,dx.} This demonstrates that for integrals on unbounded intervals, uniform convergence of a function is not strong enough to allow passing a limit through an integral sign. 
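The counterexample above is easy to make concrete. In this sketch, fn is represented directly and its integral is computed in closed form as height times width (an illustration, not a general integrator):

```python
def f_n(n, x):
    """The function that is 1/n on [0, n] and zero elsewhere."""
    return 1.0 / n if 0 <= x <= n else 0.0

def integral_f_n(n):
    # f_n is a step of height 1/n over an interval of length n
    return (1.0 / n) * n

for n in (1, 10, 1000):
    sup_norm = 1.0 / n                 # uniform distance from the zero function
    print(n, sup_norm, integral_f_n(n))  # sup_norm -> 0, integral stays 1
```

The sup norm tends to zero while every integral equals 1, so the limit of the integrals (1) differs from the integral of the limit (0).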
This makes the Riemann integral unworkable in applications (even though the Riemann integral assigns both sides the correct value), because there is no other general criterion for exchanging a limit and a Riemann integral, and without such a criterion it is difficult to approximate integrals by approximating their integrands. A better route is to abandon the Riemann integral for the Lebesgue integral. The definition of the Lebesgue integral is not obviously a generalization of the Riemann integral, but it is not hard to prove that every Riemann-integrable function is Lebesgue-integrable and that the values of the two integrals agree whenever they are both defined. Moreover, a function f defined on a bounded interval is Riemann-integrable if and only if it is bounded and the set of points where f is discontinuous has Lebesgue measure zero. An integral which is in fact a direct generalization of the Riemann integral is the Henstock–Kurzweil integral. Another way of generalizing the Riemann integral is to replace the factors xk + 1 − xk in the definition of a Riemann sum by something else; roughly speaking, this gives the interval of integration a different notion of length. This is the approach taken by the Riemann–Stieltjes integral. In multivariable calculus, the Riemann integrals for functions from R n → R {\displaystyle \mathbb {R} ^{n}\to \mathbb {R} } are multiple integrals. == Comparison with other theories of integration == The Riemann integral is unsuitable for many theoretical purposes. Some of the technical deficiencies in Riemann integration can be remedied with the Riemann–Stieltjes integral, and most disappear with the Lebesgue integral, though the latter does not have a satisfactory treatment of improper integrals. The gauge integral is a generalisation of the Lebesgue integral that is at the same time closer to the Riemann integral. 
These more general theories allow for the integration of more "jagged" or "highly oscillating" functions whose Riemann integral does not exist; but the theories give the same value as the Riemann integral when it does exist. In educational settings, the Darboux integral offers a simpler definition that is easier to work with; it can be used to introduce the Riemann integral. The Darboux integral is defined whenever the Riemann integral is, and always gives the same result. Conversely, the gauge integral is a simple but more powerful generalization of the Riemann integral and has led some educators to advocate that it should replace the Riemann integral in introductory calculus courses. == See also == Area Antiderivative Lebesgue integration == Notes == == References == Shilov, G. E., and Gurevich, B. L., 1978. Integral, Measure, and Derivative: A Unified Approach, Richard A. Silverman, trans. Dover Publications. ISBN 0-486-63519-8. Apostol, Tom (1974), Mathematical Analysis, Addison-Wesley == External links == Media related to Riemann integral at Wikimedia Commons "Riemann integral", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia:Lebesgue point#0
In mathematics, given a locally Lebesgue integrable function f {\displaystyle f} on R k {\displaystyle \mathbb {R} ^{k}} , a point x {\displaystyle x} in the domain of f {\displaystyle f} is a Lebesgue point if lim r → 0 + 1 λ ( B ( x , r ) ) ∫ B ( x , r ) | f ( y ) − f ( x ) | d y = 0. {\displaystyle \lim _{r\rightarrow 0^{+}}{\frac {1}{\lambda (B(x,r))}}\int _{B(x,r)}\!|f(y)-f(x)|\,\mathrm {d} y=0.} Here, B ( x , r ) {\displaystyle B(x,r)} is a ball centered at x {\displaystyle x} with radius r > 0 {\displaystyle r>0} , and λ ( B ( x , r ) ) {\displaystyle \lambda (B(x,r))} is its Lebesgue measure. The Lebesgue points of f {\displaystyle f} are thus points where f {\displaystyle f} does not oscillate too much, in an average sense. The Lebesgue differentiation theorem states that, given any f ∈ L 1 ( R k ) {\displaystyle f\in L^{1}(\mathbb {R} ^{k})} , almost every x {\displaystyle x} is a Lebesgue point of f {\displaystyle f} . == References ==
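In one dimension the ball B(x, r) is the interval (x − r, x + r), so the defining averages can be approximated numerically. The sketch below (an illustration with arbitrarily chosen functions; the integral is approximated by a midpoint sum) contrasts a point of continuity, which is a Lebesgue point, with the jump of the sign function at 0 (taking f(0) = 0), which is not:

```python
def avg_deviation(f, x, r, n=10000):
    """Approximate (1/|B(x,r)|) * integral over (x-r, x+r) of |f(y) - f(x)| dy."""
    h = 2 * r / n
    total = sum(abs(f(x - r + (k + 0.5) * h) - f(x)) for k in range(n))
    return total * h / (2 * r)

sgn = lambda y: (y > 0) - (y < 0)   # jump discontinuity at 0
smooth = lambda y: y * y            # continuous everywhere

for r in (1.0, 0.1, 0.01):
    print(r, avg_deviation(smooth, 0.5, r), avg_deviation(sgn, 0.0, r))
# the first average shrinks toward 0 as r -> 0; the second stays at 1
```

At the jump, |sgn(y) − sgn(0)| equals 1 almost everywhere near 0, so the averages do not decay; at a point of continuity they vanish with r.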
Wikipedia:Lee Lorch#0
Lee Alexander Lorch (September 20, 1915 – February 28, 2014) was an American mathematician and an early civil rights activist. His leadership in the campaign to desegregate Stuyvesant Town, a large housing development on the East Side of Manhattan, helped eventually to make housing discrimination illegal in the United States but also resulted in Lorch losing his own job twice. He and his family then moved to the Southern United States, where he and his wife, Grace Lorch, became involved in the civil rights movement while also teaching at several Black colleges. He encouraged black students to pursue studies in mathematics and mentored several of the first black men and women to earn PhDs in mathematics in the United States. After moving to Canada as a result of McCarthyism, he ended his career as professor emeritus of mathematics at York University in Toronto, Ontario. == Background == He was born in New York City to Adolph Lorch and Florence Mayer Lorch. He graduated from Cornell University in 1935 and obtained his PhD in mathematics from the University of Cincinnati in 1941. He did mathematics-related work for the war effort in a "draft exempt" job but quit in 1943 to enlist in the United States Army. He saw service in India and the Pacific Theater of World War II before being demobilized in 1946. Lorch obtained a teaching position at the City College of New York following the war but was soon fired because of his civil rights work on behalf of African-Americans. == Stuyvesant Town == "I had become very aware of racism through the war; not just anti-Semitism, but the way the American army treated black soldiers. On the troop transport overseas, it was always the black company on board that had to clean the ship and do the dirty work, and I felt very uncomfortable with that," Lorch told an interviewer in 2007.
Some time after taking up his job at City College, he moved into Stuyvesant Town, a development owned by the Metropolitan Life Insurance Company built with financial and legal support from New York City for war veterans. Outraged at the development's "No Negroes" policy, Lorch became a vice-chair of a tenants' committee formed to eliminate this discrimination. This had two-thirds support from the other tenants. City College, though conceding the excellence of his work, dismissed Lorch, refusing to give any reason. Lorch obtained a new position at Pennsylvania State University, but rather than give up his apartment he asked a black friend and his family to move into his dwelling as "guests", a move which circumvented the policy against accepting housing applications from blacks, but which also resulted in his being fired from Penn State, as reported in The New York Times on April 10, 1950. An editorial in the Times the following day (April 11) called on Penn State to reconsider, recalling the suspicious nature of his dismissal from City College the previous year, to no avail. "It's hard to imagine now, but there was no civil rights legislation back then. You could be fired without explanation. But how could you do anything else, in all good conscience?" said Lorch in 2007. == Moving South == After being fired by Penn State, Lorch obtained a teaching position at Fisk University, a black college located in Tennessee, in 1950. In 1951 there was a south-eastern sectional meeting of the Mathematical Association of America in Nashville. The citation delivered at the 2007 MAA awards presentation, where Lorch received a standing ovation, recorded that: Lee Lorch, the chair of the mathematics department at Fisk University, and three Black colleagues, Evelyn Boyd (now Granville), Walter Brown, and H. M. Holloway came to the meeting and were able to attend the scientific sessions. 
However, the organizer for the closing banquet refused to honor the reservations of these four mathematicians. (Letters in Science, August 10, 1951, pp. 161–162 spell out the details). Lorch and his colleagues wrote to the governing bodies of the AMS and MAA seeking bylaws against discrimination. Bylaws were not changed, but non-discriminatory policies were established and have been strictly observed since then. == House Un-American Activities Committee == In 1955, Lorch was called before the House Un-American Activities Committee after he and his wife, Grace, attempted to enroll their daughter, Alice, in an all-black elementary school after the United States Supreme Court ruled in Brown v. Board of Education that school segregation was unconstitutional. The Committee's questioning immediately went in a political direction: though Lorch "pointedly denied" engaging in any Communist activity during his tenure at Fisk, he refused to answer questions about his party membership prior to 1941, citing the right to do so under the First Amendment to the United States Constitution, and never made use of the Fifth Amendment. His refusal to testify before HUAC resulted in his being indicted, tried, and acquitted of contempt of Congress; nevertheless, during the House Un-American Activities Committee hearing, Fisk University's president, Charles S. Johnson, issued a statement that Lorch's position before the HUAC was "for all practical purposes tantamount to admission of membership in the Communist Party." Despite the appeals on Lorch's behalf from 48 out of 70 staff members, 22 student body leaders, and 150 alumni, Fisk ended his contract. == Little Rock Nine == In 1957, Lorch was working as chair of the Mathematics Department at Philander Smith College, a small black college in Little Rock, Arkansas.
That year, he and his wife, Grace, helped escort the Little Rock Nine, nine high school students attempting to be the first black students to enroll at Little Rock Central High School. The white segregationist opposition was so ferocious that his wife had to protect one of the nine, 15-year-old Elizabeth Eckford, from a mob. Faced with threats and sticks of dynamite left in their garage and with the school's funding at risk, Lorch resigned and was again forced to look for new employment. == Move to Canada == In 1959, facing a blacklist by most US universities, Lorch accepted a position with the University of Alberta and moved his family to Canada. He moved to York University in Toronto in 1968 and taught there until his retirement in 1985. He maintained an office at York and, in 2007, was collaborating with Martin Muldoon on a paper about Bessel functions. Lorch remained a political activist in Canada and was a member of the Communist Party of Canada, the United Jewish Peoples Order and honorary president of the Canadian Cuban Friendship Association. == Academic work and recognition == Lorch's dissertation, under Otto Szász, focused on the behavior of certain classes of Fourier series, and his subsequent research also focused on analysis. He has been recognized for his academic work with a fellowship in the Royal Society of Canada and election to the councils of the Canadian Mathematical Society, the American Mathematical Society and the Royal Society of Canada. Two of the colleges that fired him, Fisk and City University, have awarded Lorch honorary degrees. He was also honored by the U.S. National Academy of Sciences in 1990 and by Spelman College in 1999. In 2003, the International Society for Analysis, its Applications and Computation presented him with an honorary life membership for distinguished mathematical contributions and for his struggles for the disadvantaged and world peace.
In 2007, Lorch was awarded with the Mathematical Association of America's most prestigious award, the Yueh-Gin Gung and Dr. Charles Y. Hu Award for Distinguished Service to Mathematics, and in 2007 he was the first Canadian, and one of only 17 non-Cubans, to be elected to the Cuban Academy of Sciences. In 2012 he became a fellow of the American Mathematical Society. He served on the AMS Council as a member-at-large 1974-1976 and 1980-1982. == Legacy == Lorch's legacy as a teacher at black universities such as Fisk and Philander Smith was to encourage black students including black women to pursue graduate study in mathematics. At Fisk, Lorch taught three of the first black students ever to earn doctorates in mathematics. Of the 21 American black women who obtained a PhD in mathematics before 1980, Lorch taught three during his tenure at Fisk University. In 2010, Lorch was asked if he would have done anything any differently. "More and better of the same," he replied. He died in 2014 in Toronto, aged 98. 
== See also == Grace Lorch List of peace activists == References == == External links == Encyclopedia of Arkansas History & Culture entry "A New Light on a Fight to Integrate Stuyvesant Town", New York Times, November 21, 2010 (interview with Lee Lorch) A Conversation with Lee Lorch, from a documentary directed by William Kelly in conjunction with the Stuyvesant Town-Peter Cooper Village Oral History Project, 2010 Lee Lorch, Desegregation Activist Who Led Stuyvesant Town Effort, Dies at 98, New York Times, March 1, 2014 Conversations with Lee Lorch on YouTube, a film by Rachel Deutsch Lee Lorch on YouTube, interview by Anton Wagner (2 hours) Lee Lorch on YouTube, a video of the presentation of the CAUT (Canadian Association of University Teachers) Distinguished Academic Award to Lee Lorch, May 9, 2012 "A Life in Sum", profile of Lee Lorch published in Cornell Alumni Magazine, July 9, 2009 CBC Metro Morning interview with Lee Lorch, January 9, 2006 Black History Month featured fonds: Lee and Grace Lorch News from the Clara Thomas Archives & Special Collections, York University Lee Lorch at the Mathematics Genealogy Project "Honorary Unsubscribe" biographical summary by Randy Cassingham, March 2, 2014 "Mathematician and activist Lee Lorch, 1915-2014", by John Dupuis (includes a large number of links to other sites), in blog "Confessions of a Science Librarian", March 17, 2014 Lee Lorch archives held at the Clara Thomas Archives & Special Collections at York University Libraries
Wikipedia:Lee Segel#0
Lee Aaron Segel (5 February 1932 – 31 January 2005) was an Israeli-American applied mathematician. He developed both the Keller-Segel model of chemotaxis, in cell biology, and the Newell-Whitehead-Segel equation, in fluid dynamics. He also co-authored the first simulation model for herbicide resistance evolution. He is also considered one of the forefathers of the field of theoretical immunology. Segel was active in the Santa Fe Institute, the first of the over 50 research centers which focus, today, on complex physical, computational, biological, and social systems. Segel was also editor-in-chief of the Bulletin of Mathematical Biology from 1986 to 2001 and co-authored the first volume in the SIAM Classics in Applied Mathematics series, created by the Society for Industrial and Applied Mathematics. He migrated between numerous prestigious academic institutions worldwide, culminating at Israel’s Weizmann Institute of Science, where he served as dean of the Faculty of Mathematics and Computer Science and chair of the Scientific Council. == Biography == Lee Segel was born in 1932 in Newton, Massachusetts to Minna Segel, an art teacher, and Louis Segel, a partner in the Oppenheim-Segel tailors. Louis Segel was something of an intellectual as could be seen in his house from, e.g., the Kollwitz and Beckman prints and the Shakespeare and Co. edition of 'Ulysses', all purchased in Europe in the 1930s. Both parents were of Jewish-Lithuanian origin, of families that immigrated to Boston near the end of the 19th century. The seeds of Segel's later huge vocabulary could partly be seen to stem from his father's reading (and acting on) a claim that the main effect of a prep school was on the vocabulary of its graduates. Segel graduated from Harvard in 1953, majoring in mathematics. Thinking he might want to go into the brand-new field of computers, he started graduate studies in MIT, where he concentrated on applied mathematics instead. 
In 1959 he married Ruth Galinski, a lawyer and a distant cousin, in her native London, where they spent the first two years of their wedded life. Four children were born later. In 1973 the family moved to Rehovot, Israel. He died in 2005. == Career == Lee Segel received a PhD from MIT in 1959, under the supervision of C. C. Lin. In 1960, he joined the Applied Mathematics faculty at Rensselaer Polytechnic Institute. In 1970 he spent a sabbatical at Cornell Medical School and the Sloan-Kettering Institute. Segel moved from RPI to the Weizmann Institute in 1973, where he became the chairman of the Applied Mathematics department, and later dean of the Faculty of Mathematical Sciences and chair of the Scientific Council. At Los Alamos National Laboratory he was a summer consultant to the theoretical biology group from 1984 to 1999, and he was named Ulam Visiting Scholar for 1992–93. == Hydrodynamics == In 1967 Segel and Scanlon were the first to analyze a non-linear convection problem. Segel's most quoted paper in this field was also his last in it; it was published in parallel with the work of Newell and Whitehead. These papers gave an explanation of the seemingly spontaneous appearance of patterns - rolls or honeycomb cells - in liquid sufficiently heated from below (Bénard convection patterns). (Preceding this was the Turing pattern formation, proposed in 1952 by Alan Turing to describe chemical patterns.) Amplitude equations are used to study highly complicated physical, chemical, or biological systems. Such systems' full dynamics may be governed by complex nonlinear partial differential equations. However, amplitude equations, which analyze the behavior of such systems near the onset of an instability, make such systems far simpler to study. One important amplitude equation is the Newell–Whitehead–Segel (NWS) equation, a transient, nonlinear partial differential equation developed by Segel and, simultaneously, by Newell and Whitehead.
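In its simplest one-dimensional form (a representative sketch; scalings, coefficients and notation vary across the literature, so this should not be read as the authors' original formulation), the slowly varying amplitude A(x, t) of a pattern near the onset of instability satisfies an equation of the type

```latex
\frac{\partial A}{\partial t} = \mu A + \frac{\partial^{2} A}{\partial x^{2}} - |A|^{2} A ,
```

where μ measures the distance above the instability threshold: the linear term drives growth, the diffusive term selects spatial scales, and the cubic term saturates the amplitude.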
== Patterns == Chemotaxis plays important roles in axon guidance, wound healing, tissue morphogenesis and other physiological events. With Evelyn Keller, Segel developed a model for slime mold (Dictyostelium discoideum) chemotaxis that was perhaps the first example of what was later called an "emergent system"; in Steven Johnson's 2001 book Emergence: The Connected Lives of Ants, Brains, Cities, and Software, Dictyostelium is 'the main character'. Its amoebas join into a single multicellular aggregate (akin to a multicellular organism) if food runs out; the multicellular aggregate has a better chance to find optimal conditions for spore dispersal. Keller and Segel showed that simple assumptions about an attractive chemical (cyclic AMP), which is both secreted by cells and steers them, could explain such behavior without the need for any master cell managing the process. They also developed a general mathematical model for chemotaxis. Hillen and Painter say of it: "its success ... a consequence of its intuitive simplicity, analytical tractability and capacity to replicate key behaviour of chemotactic populations. One such property, the ability to display 'auto-aggregation,' has led to its prominence as a mechanism for self-organisation of biological systems. This phenomenon has been shown to lead to finite-time blow-up under certain formulations of the model, and a large body of work has been devoted to determining when blow-up occurs or whether globally existing solutions exist". A paper with Jackson was the first to apply Turing's reaction–diffusion scheme to population dynamics. Lee Segel also found a way to explain the mechanism from a more intuitive perspective than had previously been used. == Administration == In 1975 Segel was appointed Dean of the Faculty of Mathematics at the Weizmann Institute.
A central project was renewing the computer science side of the department by simultaneously recruiting four young leading researchers whom he dubbed the 'Gang of Four' - David Harel (Israel Prize '04), Amir Pnueli (Turing Award '96, Israel Prize '00), Adi Shamir (Turing Award '02, Wolf Prize '24) and Shimon Ullman (Israel Prize '15). Segel was the editor of the Bulletin of Mathematical Biology between 1986 and 2002. == Books == Lee Segel was the author of: Mathematics Applied to Continuum Mechanics (Classics in Applied Mathematics) (with additional material on elasticity by G. H. Handelman) Mathematics Applied to Deterministic Problems in the Natural Sciences (Classics in Applied Mathematics) by C. C. Lin and Lee A. Segel; this book became the first volume in the SIAM Classics in Applied Mathematics series. Modeling Dynamic Phenomena in Molecular and Cellular Biology, which stemmed from the course in mathematical modelling that he taught for 20 years at the Weizmann Institute. He was also editor of: Biological Delay Systems: Linear Stability Theory (Cambridge Studies in Mathematical Biology) [Paperback] N. MacDonald, C. Cannings, Frank C. Hoppensteadt and Lee A. Segel (Eds.) Mathematical Models in Molecular and Cellular Biology. Design Principles for the Immune System and Other Distributed Autonomous Systems (Santa Fe Institute Studies in the Sciences of Complexity Proceedings) == Honors == Segel was the Ulam Visiting Scholar of the Santa Fe Institute for 1992–93. The Sixth Israeli Mini-Workshop in Applied Mathematics was dedicated to his memory. Springer Press, in partnership with the Society for Mathematical Biology, funds the Lee Segel Prizes: one for the best original research paper (awarded every 2 years), a prize of 3,000 dollars for the best student research paper (awarded every 2 years), and a prize of 4,000 dollars for the best review paper (awarded every 3 years). The Faculty of Mathematics and Computer Science at the Weizmann Institute awards a yearly Lee A.
Segel Prize in Theoretical Biology. == References ==
Wikipedia:Lefschetz zeta function#0
In mathematics, the Lefschetz zeta-function is a tool used in topological periodic and fixed point theory, and dynamical systems. Given a continuous map f : X → X {\displaystyle f\colon X\to X} , the zeta-function is defined as the formal series ζ f ( t ) = exp ⁡ ( ∑ n = 1 ∞ L ( f n ) t n n ) , {\displaystyle \zeta _{f}(t)=\exp \left(\sum _{n=1}^{\infty }L(f^{n}){\frac {t^{n}}{n}}\right),} where L ( f n ) {\displaystyle L(f^{n})} is the Lefschetz number of the n {\displaystyle n} -th iterate of f {\displaystyle f} . This zeta-function is of note in topological periodic point theory because it is a single invariant containing information about all iterates of f {\displaystyle f} . == Examples == The identity map on X {\displaystyle X} has Lefschetz zeta function 1 ( 1 − t ) χ ( X ) , {\displaystyle {\frac {1}{(1-t)^{\chi (X)}}},} where χ ( X ) {\displaystyle \chi (X)} is the Euler characteristic of X {\displaystyle X} , i.e., the Lefschetz number of the identity map. For a less trivial example, let X = S 1 {\displaystyle X=S^{1}} be the unit circle, and let f : S 1 → S 1 {\displaystyle f\colon S^{1}\to S^{1}} be reflection in the x-axis, that is, f ( θ ) = − θ {\displaystyle f(\theta )=-\theta } . Then f {\displaystyle f} has Lefschetz number 2, while f 2 {\displaystyle f^{2}} is the identity map, which has Lefschetz number 0. Likewise, all odd iterates have Lefschetz number 2, while all even iterates have Lefschetz number 0. 
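This pattern of Lefschetz numbers can be checked against the closed form (1 + t)/(1 − t) = 1 + 2t + 2t² + ⋯ by a direct series computation. The following is an illustrative sketch (the truncation order and variable names are my own): it expands exp(Σ L(fⁿ) tⁿ/n) with L(fⁿ) = 2 for odd n and 0 for even n as a truncated power series with exact rational arithmetic.

```python
# Sketch: expand zeta_f(t) = exp(sum_{n>=1} L(f^n) t^n / n) for the
# circle-reflection example, where L(f^n) = 2 for odd n and 0 for even n,
# and compare coefficients with (1+t)/(1-t) = 1 + 2t + 2t^2 + ...
from fractions import Fraction

N = 8  # truncation order
# g(t) = sum_{n>=1} L(f^n) t^n / n, stored as coefficients g[0..N]
g = [Fraction(0)] + [Fraction(2, n) if n % 2 else Fraction(0) for n in range(1, N + 1)]

# exp of a power series with zero constant term: exp(g) = sum_k g^k / k!
zeta = [Fraction(0)] * (N + 1)
term = [Fraction(1)] + [Fraction(0)] * N  # g^0 / 0! = 1
for k in range(N + 1):
    for i in range(N + 1):
        zeta[i] += term[i]
    # term <- term * g / (k+1), truncated at degree N
    new = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            new[i + j] += term[i] * g[j]
    term = [c / (k + 1) for c in new]

expected = [Fraction(1)] + [Fraction(2)] * N  # coefficients of (1+t)/(1-t)
print(zeta == expected)  # True
```

Since g(t) = 2 artanh t = log((1+t)/(1−t)), the agreement is exact to every order.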
Therefore, the zeta function of f {\displaystyle f} is ζ f ( t ) = exp ⁡ ( ∑ n = 0 ∞ 2 t 2 n + 1 2 n + 1 ) = exp ⁡ ( { 2 ∑ n = 1 ∞ t n n } − { 2 ∑ n = 1 ∞ t 2 n 2 n } ) = exp ⁡ ( − 2 log ⁡ ( 1 − t ) + log ⁡ ( 1 − t 2 ) ) = 1 − t 2 ( 1 − t ) 2 = 1 + t 1 − t {\displaystyle {\begin{aligned}\zeta _{f}(t)&=\exp \left(\sum _{n=0}^{\infty }{\frac {2t^{2n+1}}{2n+1}}\right)\\&=\exp \left(\left\{2\sum _{n=1}^{\infty }{\frac {t^{n}}{n}}\right\}-\left\{2\sum _{n=1}^{\infty }{\frac {t^{2n}}{2n}}\right\}\right)\\&=\exp \left(-2\log(1-t)+\log(1-t^{2})\right)\\&={\frac {1-t^{2}}{(1-t)^{2}}}\\&={\frac {1+t}{1-t}}\end{aligned}}} == Formula == If f is a continuous map on a compact manifold X of dimension n (or more generally any compact polyhedron), the zeta function is given by the formula ζ f ( t ) = ∏ i = 0 n det ( 1 − t f ∗ | H i ( X , Q ) ) ( − 1 ) i + 1 . {\displaystyle \zeta _{f}(t)=\prod _{i=0}^{n}\det(1-tf_{\ast }|H_{i}(X,\mathbf {Q} ))^{(-1)^{i+1}}.} Thus it is a rational function. The polynomials occurring in the numerator and denominator are essentially the characteristic polynomials of the map induced by f on the various homology spaces. == Connections == This generating function is essentially an algebraic form of the Artin–Mazur zeta function, which gives geometric information about the fixed and periodic points of f. == See also == Lefschetz fixed-point theorem Artin–Mazur zeta function Ruelle zeta function == References == Fel'shtyn, Alexander (2000), "Dynamical zeta functions, Nielsen theory and Reidemeister torsion", Memoirs of the American Mathematical Society, 147 (699), arXiv:chao-dyn/9603017, MR 1697460
Wikipedia:Left and right (algebra)#0
In algebra, the terms left and right denote the order of a binary operation (usually, but not always, called "multiplication") in non-commutative algebraic structures. A binary operation ∗ is usually written in the infix form: s ∗ t The argument s is placed on the left side, and the argument t is on the right side. Even if the symbol of the operation is omitted, the order of s and t does matter (unless ∗ is commutative). A two-sided property is fulfilled on both sides. A one-sided property holds on one (unspecified) of the two sides. Although the terms are similar, the left–right distinction in algebraic parlance is not related to left and right limits in calculus, or to left and right in geometry. == Binary operation as an operator == A binary operation ∗ may be considered as a family of unary operators through currying: Rt(s) = s ∗ t, depending on t as a parameter – this is the family of right operations. Similarly, Ls(t) = s ∗ t defines the family of left operations parametrized with s. If for some e, the left operation Le is the identity operation, then e is called a left identity. Similarly, if Re = id, then e is a right identity. In ring theory, a subring which is invariant under any left multiplication in a ring is called a left ideal. Similarly, a right multiplication-invariant subring is a right ideal. == Left and right modules == Over non-commutative rings, the left–right distinction is applied to modules, namely to specify the side where a scalar (ring element) appears in the scalar multiplication. The distinction is not purely syntactical, because one gets two different associativity rules linking multiplication in a module with multiplication in a ring: (rs)x = r(sx) for left modules and x(sr) = (xs)r for right modules. A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them.
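The currying view and the asymmetry of one-sided identities can be made concrete with a small sketch (the operation s ∗ t = t and all names below are chosen here for illustration, not taken from the article): under this operation every element is a left identity, while no element is a right identity on a set with more than one element.

```python
# Sketch: currying a binary operation into families of left (L_s) and
# right (R_t) unary operations, using op(s, t) = t as an example.
S = {0, 1, 2}

def op(s, t):
    return t  # the "right projection" operation s * t = t

def left(s):   # L_s(t) = s * t
    return lambda t: op(s, t)

def right(t):  # R_t(s) = s * t
    return lambda s: op(s, t)

# every e is a left identity: L_e is the identity map on S
left_identities = sorted(e for e in S if all(left(e)(t) == t for t in S))
# no e is a right identity: R_e is the constant map with value e
right_identities = sorted(e for e in S if all(right(e)(s) == s for s in S))

print(left_identities)   # [0, 1, 2]
print(right_identities)  # []
```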
== Other examples == Left eigenvectors Left and right group actions == In category theory == In category theory the usage of "left" and "right" has some algebraic resemblance, but refers to left and right sides of morphisms. See adjoint functors. == See also == Operator associativity == External links == Barile, Margherita. "right ideal". MathWorld. Barile, Margherita. "left ideal". MathWorld. Weisstein, Eric W. "left eigenvector". MathWorld.
Wikipedia:Leibniz formula for determinants#0
In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A {\displaystyle A} is an n × n {\displaystyle n\times n} matrix, where a i j {\displaystyle a_{ij}} is the entry in the i {\displaystyle i} -th row and j {\displaystyle j} -th column of A {\displaystyle A} , the formula is det ( A ) = ∑ τ ∈ S n sgn ⁡ ( τ ) ∏ i = 1 n a i τ ( i ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ∏ i = 1 n a σ ( i ) i {\displaystyle \det(A)=\sum _{\tau \in S_{n}}\operatorname {sgn}(\tau )\prod _{i=1}^{n}a_{i\tau (i)}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{\sigma (i)i}} where sgn {\displaystyle \operatorname {sgn} } is the sign function of permutations in the permutation group S n {\displaystyle S_{n}} , which returns + 1 {\displaystyle +1} and − 1 {\displaystyle -1} for even and odd permutations, respectively. Another common notation used for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation, where it becomes det ( A ) = ϵ i 1 ⋯ i n a 1 i 1 ⋯ a n i n , {\displaystyle \det(A)=\epsilon _{i_{1}\cdots i_{n}}{a}_{1i_{1}}\cdots {a}_{ni_{n}},} which may be more familiar to physicists. Directly evaluating the Leibniz formula from the definition requires Ω ( n ! ⋅ n ) {\displaystyle \Omega (n!\cdot n)} operations in general—that is, a number of operations asymptotically proportional to n {\displaystyle n} factorial—because n ! {\displaystyle n!} is the number of order- n {\displaystyle n} permutations. This is impractically difficult for even relatively small n {\displaystyle n} . 
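The factorial cost is visible in a direct implementation. The following is an illustrative sketch (function names are my own): it sums over all permutations, computing each sign by counting inversions, and checks the result against a cofactor expansion done by hand.

```python
# Sketch: direct evaluation of the Leibniz formula. This costs
# O(n! * n) operations and is only usable for very small matrices.
from itertools import permutations
from math import prod

def sgn(perm):
    # sign of a permutation, computed by counting inversions
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    # sum over all n! permutations tau of sgn(tau) * prod_i a[i][tau(i)]
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
# cofactor expansion along the first row: 2*(12-2) - 0 + 1*(1-3) = 18
print(leibniz_det(A))  # 18
```

For n much beyond 10 the permutation sum is hopeless, which is why the LU-based O(n³) route described next is used in practice.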
Instead, the determinant can be evaluated in O ( n 3 ) {\displaystyle O(n^{3})} operations by forming the LU decomposition A = L U {\displaystyle A=LU} (typically via Gaussian elimination or similar methods), in which case det A = det L ⋅ det U {\displaystyle \det A=\det L\cdot \det U} and the determinants of the triangular matrices L {\displaystyle L} and U {\displaystyle U} are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) See, for example, Trefethen & Bau (1997). The determinant can also be evaluated in fewer than O ( n 3 ) {\displaystyle O(n^{3})} operations by reducing the problem to matrix multiplication, but most such algorithms are not practical. == Formal statement and proof == Theorem. There exists exactly one function F : M n ( K ) → K {\displaystyle F:M_{n}(\mathbb {K} )\rightarrow \mathbb {K} } which is alternating multilinear w.r.t. columns and such that F ( I ) = 1 {\displaystyle F(I)=1} . Proof. Uniqueness: Let F {\displaystyle F} be such a function, and let A = ( a i j ) i = 1 , … , n j = 1 , … , n {\displaystyle A=(a_{i}^{j})_{i=1,\dots ,n}^{j=1,\dots ,n}} be an n × n {\displaystyle n\times n} matrix. Call A j {\displaystyle A^{j}} the j {\displaystyle j} -th column of A {\displaystyle A} , i.e. A j = ( a i j ) i = 1 , … , n {\displaystyle A^{j}=(a_{i}^{j})_{i=1,\dots ,n}} , so that A = ( A 1 , … , A n ) . {\displaystyle A=\left(A^{1},\dots ,A^{n}\right).} Also, let E k {\displaystyle E^{k}} denote the k {\displaystyle k} -th column vector of the identity matrix. Now one writes each of the A j {\displaystyle A^{j}} 's in terms of the E k {\displaystyle E^{k}} , i.e. A j = ∑ k = 1 n a k j E k {\displaystyle A^{j}=\sum _{k=1}^{n}a_{k}^{j}E^{k}} . 
As F {\displaystyle F} is multilinear, one has F ( A ) = F ( ∑ k 1 = 1 n a k 1 1 E k 1 , … , ∑ k n = 1 n a k n n E k n ) = ∑ k 1 , … , k n = 1 n ( ∏ i = 1 n a k i i ) F ( E k 1 , … , E k n ) . {\displaystyle {\begin{aligned}F(A)&=F\left(\sum _{k_{1}=1}^{n}a_{k_{1}}^{1}E^{k_{1}},\dots ,\sum _{k_{n}=1}^{n}a_{k_{n}}^{n}E^{k_{n}}\right)=\sum _{k_{1},\dots ,k_{n}=1}^{n}\left(\prod _{i=1}^{n}a_{k_{i}}^{i}\right)F\left(E^{k_{1}},\dots ,E^{k_{n}}\right).\end{aligned}}} From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutations: F ( A ) = ∑ σ ∈ S n ( ∏ i = 1 n a σ ( i ) i ) F ( E σ ( 1 ) , … , E σ ( n ) ) . {\displaystyle F(A)=\sum _{\sigma \in S_{n}}\left(\prod _{i=1}^{n}a_{\sigma (i)}^{i}\right)F(E^{\sigma (1)},\dots ,E^{\sigma (n)}).} Because F is alternating, the columns E {\displaystyle E} can be swapped until it becomes the identity. The sign function sgn ⁡ ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} is defined to count the number of swaps necessary and account for the resulting sign change. One finally gets: F ( A ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ( ∏ i = 1 n a σ ( i ) i ) F ( I ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ∏ i = 1 n a σ ( i ) i {\displaystyle {\begin{aligned}F(A)&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\left(\prod _{i=1}^{n}a_{\sigma (i)}^{i}\right)F(I)\\&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{\sigma (i)}^{i}\end{aligned}}} as F ( I ) {\displaystyle F(I)} is required to be equal to 1 {\displaystyle 1} . Therefore no function besides the function defined by the Leibniz Formula can be a multilinear alternating function with F ( I ) = 1 {\displaystyle F\left(I\right)=1} . Existence: We now show that F, where F is the function defined by the Leibniz formula, has these three properties. 
Multilinear: F ( A 1 , … , c A j , … ) = ∑ σ ∈ S n sgn ⁡ ( σ ) c a σ ( j ) j ∏ i = 1 , i ≠ j n a σ ( i ) i = c ∑ σ ∈ S n sgn ⁡ ( σ ) a σ ( j ) j ∏ i = 1 , i ≠ j n a σ ( i ) i = c F ( A 1 , … , A j , … ) F ( A 1 , … , b + A j , … ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ( b σ ( j ) + a σ ( j ) j ) ∏ i = 1 , i ≠ j n a σ ( i ) i = ∑ σ ∈ S n sgn ⁡ ( σ ) ( ( b σ ( j ) ∏ i = 1 , i ≠ j n a σ ( i ) i ) + ( a σ ( j ) j ∏ i = 1 , i ≠ j n a σ ( i ) i ) ) = ( ∑ σ ∈ S n sgn ⁡ ( σ ) b σ ( j ) ∏ i = 1 , i ≠ j n a σ ( i ) i ) + ( ∑ σ ∈ S n sgn ⁡ ( σ ) ∏ i = 1 n a σ ( i ) i ) = F ( A 1 , … , b , … ) + F ( A 1 , … , A j , … ) {\displaystyle {\begin{aligned}F(A^{1},\dots ,cA^{j},\dots )&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )ca_{\sigma (j)}^{j}\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\\&=c\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )a_{\sigma (j)}^{j}\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\\&=cF(A^{1},\dots ,A^{j},\dots )\\\\F(A^{1},\dots ,b+A^{j},\dots )&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\left(b_{\sigma (j)}+a_{\sigma (j)}^{j}\right)\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\\&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\left(\left(b_{\sigma (j)}\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\right)+\left(a_{\sigma (j)}^{j}\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\right)\right)\\&=\left(\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )b_{\sigma (j)}\prod _{i=1,i\neq j}^{n}a_{\sigma (i)}^{i}\right)+\left(\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{\sigma (i)}^{i}\right)\\&=F(A^{1},\dots ,b,\dots )+F(A^{1},\dots ,A^{j},\dots )\\\\\end{aligned}}} Alternating: F ( … , A j 1 , … , A j 2 , … ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ( i ) i ) a σ ( j 1 ) j 1 a σ ( j 2 ) j 2 {\displaystyle {\begin{aligned}F(\dots ,A^{j_{1}},\dots ,A^{j_{2}},\dots )&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma (i)}^{i}\right)a_{\sigma (j_{1})}^{j_{1}}a_{\sigma 
(j_{2})}^{j_{2}}\\\end{aligned}}} For any σ ∈ S n {\displaystyle \sigma \in S_{n}} let σ ′ {\displaystyle \sigma '} be the tuple equal to σ {\displaystyle \sigma } with the j 1 {\displaystyle j_{1}} and j 2 {\displaystyle j_{2}} indices switched. F ( A ) = ∑ σ ∈ S n , σ ( j 1 ) < σ ( j 2 ) [ sgn ⁡ ( σ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ( i ) i ) a σ ( j 1 ) j 1 a σ ( j 2 ) j 2 + sgn ⁡ ( σ ′ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ′ ( i ) i ) a σ ′ ( j 1 ) j 1 a σ ′ ( j 2 ) j 2 ] = ∑ σ ∈ S n , σ ( j 1 ) < σ ( j 2 ) [ sgn ⁡ ( σ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ( i ) i ) a σ ( j 1 ) j 1 a σ ( j 2 ) j 2 − sgn ⁡ ( σ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ( i ) i ) a σ ( j 2 ) j 1 a σ ( j 1 ) j 2 ] = ∑ σ ∈ S n , σ ( j 1 ) < σ ( j 2 ) sgn ⁡ ( σ ) ( ∏ i = 1 , i ≠ j 1 , i ≠ j 2 n a σ ( i ) i ) ( a σ ( j 1 ) j 1 a σ ( j 2 ) j 2 − a σ ( j 1 ) j 2 a σ ( j 2 ) j 1 ) ⏟ = 0 , if A j 1 = A j 2 {\displaystyle {\begin{aligned}F(A)&=\sum _{\sigma \in S_{n},\sigma (j_{1})<\sigma (j_{2})}\left[\operatorname {sgn}(\sigma )\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma (i)}^{i}\right)a_{\sigma (j_{1})}^{j_{1}}a_{\sigma (j_{2})}^{j_{2}}+\operatorname {sgn}(\sigma ')\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma '(i)}^{i}\right)a_{\sigma '(j_{1})}^{j_{1}}a_{\sigma '(j_{2})}^{j_{2}}\right]\\&=\sum _{\sigma \in S_{n},\sigma (j_{1})<\sigma (j_{2})}\left[\operatorname {sgn}(\sigma )\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma (i)}^{i}\right)a_{\sigma (j_{1})}^{j_{1}}a_{\sigma (j_{2})}^{j_{2}}-\operatorname {sgn}(\sigma )\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma (i)}^{i}\right)a_{\sigma (j_{2})}^{j_{1}}a_{\sigma (j_{1})}^{j_{2}}\right]\\&=\sum _{\sigma \in S_{n},\sigma (j_{1})<\sigma (j_{2})}\operatorname {sgn}(\sigma )\left(\prod _{i=1,i\neq j_{1},i\neq j_{2}}^{n}a_{\sigma (i)}^{i}\right)\underbrace {\left(a_{\sigma (j_{1})}^{j_{1}}a_{\sigma (j_{2})}^{j_{2}}-a_{\sigma (j_{1})}^{j_{2}}a_{\sigma (j_{2})}^{j_{_{1}}}\right)} _{=0{\text{, if 
}}A^{j_{1}}=A^{j_{2}}}\\\\\end{aligned}}} Thus if A j 1 = A j 2 {\displaystyle A^{j_{1}}=A^{j_{2}}} then F ( … , A j 1 , … , A j 2 , … ) = 0 {\displaystyle F(\dots ,A^{j_{1}},\dots ,A^{j_{2}},\dots )=0} . Finally, F ( I ) = 1 {\displaystyle F(I)=1} : F ( I ) = ∑ σ ∈ S n sgn ⁡ ( σ ) ∏ i = 1 n I σ ( i ) i = ∑ σ ∈ S n sgn ⁡ ( σ ) ∏ i = 1 n δ i , σ ( i ) = ∑ σ ∈ S n sgn ⁡ ( σ ) δ σ , id { 1 … n } = sgn ⁡ ( id { 1 … n } ) = 1 {\displaystyle {\begin{aligned}F(I)&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}I_{\sigma (i)}^{i}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{i=1}^{n}\operatorname {\delta } _{i,\sigma (i)}\\&=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\operatorname {\delta } _{\sigma ,\operatorname {id} _{\{1\ldots n\}}}=\operatorname {sgn}(\operatorname {id} _{\{1\ldots n\}})=1\end{aligned}}} Thus the only alternating multilinear functions with F ( I ) = 1 {\displaystyle F(I)=1} are restricted to the function defined by the Leibniz formula, and it in fact also has these three properties. Hence the determinant can be defined as the only function det : M n ( K ) → K {\displaystyle \det :M_{n}(\mathbb {K} )\rightarrow \mathbb {K} } with these three properties. == See also == Matrix Laplace expansion Cramer's rule == References == "Determinant", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Trefethen, Lloyd N.; Bau, David (June 1, 1997). Numerical Linear Algebra. SIAM. ISBN 978-0898713619.
Wikipedia:Leif Arkeryd#0
Leif O. Arkeryd (born 24 August 1940) is professor emeritus of mathematics at Chalmers University of Technology. He is a specialist on the theory of the Boltzmann equation. Arkeryd earned his doctorate from Lund University in 1966, under the supervision of Jaak Peetre. == Selected publications == Arkeryd, Leif: On the Boltzmann equation. I. Existence. Arch. Rational Mech. Anal. 45 (1972), 1–16. Nonstandard analysis. Theory and applications. Proceedings of the NATO Advanced Study Institute on Nonstandard Analysis and its Applications held in Edinburgh, June 30–July 13, 1996. Edited by Leif O. Arkeryd, Nigel J. Cutland and C. Ward Henson. NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences, 493. Kluwer Academic Publishers Group, Dordrecht, 1997. == See also == Influence of non-standard analysis == References ==
Wikipedia:Leifur Ásgeirsson#0
Leifur Ásgeirsson (25 May 1903 – 19 August 1990) was the first Icelandic mathematician to gain major international recognition. == Education and career == Leifur Ásgeirsson graduated in 1927 from Reykjavik Junior College and received his doctorate in 1933 from the University of Göttingen. His doctoral advisor was Richard Courant. From 1931 to 1943 Leifur Ásgeirsson was the head of a district school in Laugar, located in southwestern Iceland's Reykjadalur Valley east of Reykjavík. In 1936 he was an invited speaker at the International Congress of Mathematicians in Oslo. At the University of Iceland he was appointed in 1943 a lecturer in mathematics and in 1945 a full professor of mathematics. The journal Mathematica Scandinavica was founded in 1953 with Ásgeirsson as one of the founding editors: "... Mathematica Scandinavica ... editors were appointed. From Denmark: Fenchel, from Finland: Gustav Elfving, from Iceland: Leifur Ásgeirsson, from Norway: Skolem, and from Sweden: Pleijel. Immediately after the decisions had been taken Fenchel, in agreement with Pleijel, wrote to the other editors and suggested Bundgaard as co-ordinating editor of Math. Scand." == Selected publications == Ásgeirsson, Leifur (1937). "Über eine Mittelwertseigenschaft von Lösungen homogener linearer partieller Differentialgleichungen 2. Ordnung mit konstanten Koeffizienten". Mathematische Annalen. 113 (1): 321–346. doi:10.1007/BF01571637. ISSN 0025-5831. S2CID 122332213. —— (1956). "Some hints on Huygens' principle and Hadamard's conjecture". Communications on Pure and Applied Mathematics. 9 (3): 307–326. doi:10.1002/cpa.3160090304. —— (1961). "On Cauchy's problem for linear partial differential equations of second order in four variables". Communications on Pure and Applied Mathematics. 14 (3): 171–186. doi:10.1002/cpa.3160140302. == References ==
Wikipedia:Leiki Loone#0
Leiki Loone (née Sikk; born 29 February 1944) is an Estonian mathematician specialising in applications of functional analysis in the theory of summability and in the structure theory of topological vector spaces. She is married to the Estonian philosopher Eero Loone. The couple has two daughters. == Sources == Universitas Tartuensis 27 February 2004: Dotsent Leiki Loone 60
Wikipedia:Leila Bram#0
Leila Ann Dragonette Bram (1927–September 7, 1979) was an American mathematician. She was one of the first to study mock theta functions, and for many years directed the mathematics program at the Office of Naval Research, a position where she set the program for much of mathematics research. == Early life and education == Bram was born in 1927, in Drexel Hill, Pennsylvania, and was educated at the Lansdowne High School in Maryland. As an undergraduate at Bryn Mawr College, she double-majored in mathematics and physics, won the Maria L. Eastman Brooke Hall Memorial Scholarship and European Fellowship, the college's highest honor, and did an honors thesis on beta ray spectroscopy. She graduated in 1947. She went to the University of Pennsylvania for graduate study in mathematics, earning a master's degree and a Ph.D. there. Her 1951 doctoral dissertation, Asymptotic Formula for the Mock Theta Series of Ramanujan, was supervised by Hans Rademacher; it (and the journal version she published from it) became one of only three works to study the mock theta functions between Ramanujan in the 1920s and the work of George Andrews beginning in 1966. == Career and later life == After completing her Ph.D., Bram became a postdoctoral fellow at the Office of Naval Research, and took a permanent position as a mathematician there beginning in 1953, from which she was on leave from 1955 to 1959. At the Office of Naval Research, she later headed the Mathematics Bureau, and then the Mathematics Program. Among her projects there was the establishment of a conference series connecting computer graphics to mathematics that became the International Conferences on Computer Aided Geometric Design. She died of cancer on September 7, 1979. == References ==
Wikipedia:Lek-Heng Lim#0
Lek-Heng Lim (Chinese: 林力行) is a Singaporean mathematician. Lim earned a bachelor's degree at the National University of Singapore, studied for his master's at Cornell University and the University of Cambridge, and completed a doctorate at Stanford University. Lim started his teaching career at the University of California, Berkeley, where he served as Charles Morrey Assistant Professor. He later joined the University of Chicago faculty. While affiliated with Chicago, Lim won the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing and Stephen Smale Prize in 2017, followed by the 2020 Hans Schneider Prize in Linear Algebra. In 2020, Lim was also elected a fellow of the American Mathematical Society. Lim was awarded a Guggenheim Fellowship in April 2022. == References ==
Wikipedia:Lennie Copeland#0
Lennie Phoebe Copeland (1881–1951) was an American mathematician and professor at Wellesley College, and was one of the few women to earn a doctorate in mathematics before World War II. == Biography == Lennie Phoebe Copeland was an only child, born to Emma Stinchfield and Lemuel Copeland in Brewer, Maine, on March 30, 1881. She graduated from Bangor High School and then enrolled at the University of Maine, earning her BS in mathematics in 1904. With her degree in hand, Copeland returned to Bangor High School to teach mathematics until 1910, when she left Maine for Wellesley College in Massachusetts to pursue a master's degree. She received her MA in mathematics in 1911 and immediately went to study at the University of Pennsylvania. There, she received her Sc.D. in 1913 with a dissertation, On the Theory of Invariants of n-Lines, directed by Oliver Edmunds Glenn. She was immediately asked to join the faculty at Wellesley, where she came to know many esteemed mathematicians and ended up spending the remainder of her teaching career. Her research centered on the algebra of invariants. === Educator === At Wellesley, Copeland began as an instructor (1913–1920), then became assistant professor (1920–1928), associate professor (1928–1937), and professor from 1937 until she retired in 1946 and was made professor emeritus. In addition to teaching, she also developed and taught a course in the history of mathematics. As part of her research, she collected rare books, especially those about mathematical recreations, and donated them to the Wellesley Treasure Room in the library. Copeland also authored a descriptive catalogue of the library's rare mathematical books. She and six other members of the Wellesley mathematics department joined the nascent Mathematical Association of America before April 1, 1916, making them charter members.
Copeland often attended the MAA annual and summer meetings, and she served on the program committees for the national meetings of 1922 and 1923, chairing the latter. Locally, she became the first woman to be named president of the New England Association of Teachers of Mathematics, 1925–1927. === Personal life === She participated in the Appalachian Mountain Club and served as its natural history counselor. She enjoyed travel and did so widely for many years, often with her friend and housemate, retired Wellesley mathematician Clara Eliza Smith, who died suddenly in May 1943 of a cerebral hemorrhage. After retiring from Wellesley in 1946, Copeland moved to St. Petersburg, Florida, and lived with Carol S. Scott, a friend from the College. Lennie Phoebe Copeland died in St. Petersburg, on January 11, 1951, at the age of 69. A collection of Copeland's papers is available for researchers at Wellesley College. == Honors == Copeland was awarded an honorary Doctor of Science degree by the University of Maine in 1948. == Memberships == According to Judy Green, Copeland belonged to several professional societies. American Mathematical Society Mathematical Association of America (charter member) Phi Beta Kappa American Association for the Advancement of Science Sigma Xi == References ==
Wikipedia:Leo Sario#0
Leo Reino Sario (18 May 1916 – 15 August 2009) was a Finnish-born mathematician who worked on complex analysis and Riemann surfaces. == Early life and education == After service as a Finnish artillery officer in the Winter War and World War II, he received his PhD in 1948 under Rolf Nevanlinna at the University of Helsinki. == Career == Nevanlinna and Sario were founding members of the Academy of Finland, and there is a statue on the academy grounds named after Sario. Sario moved to the United States in 1950 and obtained temporary positions at the Institute for Advanced Study, MIT, Stanford University, and Harvard University. In 1954 he became a professor at UCLA, remaining there until his retirement in 1986. He was the author or co-author of five major books on complex analysis and over 130 papers. He supervised 36 doctoral students, including Kōtarō Oikawa and Burton Rodin. == Awards and honors == In 1957 he was awarded the Cross of the Commander of Finland's Order of Knighthood. He was a Guggenheim Fellow for the academic year 1957–1958. == Selected publications == with Lars Ahlfors: Riemann surfaces, Princeton Mathematical Series 26, Princeton University Press 1960 2015 pbk reprint with Kiyoshi Noshiro: Value Distribution Theory, Van Nostrand 1966 2013 pbk reprint with Burton Rodin: Principal Functions, Springer 1968, Van Nostrand 1968; 2012 pbk reprint with Kōtarō Oikawa: Capacity Functions, Grundlehren der mathematischen Wissenschaften 149, Springer 1969 with Mitsuru Nakai: Classification Theory of Riemann Surfaces, Grundlehren der mathematischen Wissenschaften 164, Springer 1970; 2012 pbk reprint with Mitsuru Nakai, Cecilia Wang, Lung Ock Chung: Classification Theory of Riemannian Manifolds : Harmonic, quasiharmonic and biharmonic functions, Lecture Notes in Mathematics 605, Springer 1977; 2006 pbk reprint Capacity of a boundary and of a boundary element, Annals of Mathematics, vol. 59, 1954, pp. 135–144 doi:10.2307/1969835 == References ==
Wikipedia:Leon Birnbaum#0
Leon Birnbaum (1918–2010) was a Romanian mathematician and philosopher. He was born in Chernovtsy (now in Ukraine) on June 18, 1918 to a family with an intellectual tradition. He studied at the Orthodox High School, then at the Faculty of Mathematics. In 1941, when the war reached Chernovtsy, Birnbaum was deported to Mogilev-Podolsk in Transnistria. He remained there until 1944. In 1946 he became a mathematics teacher in Strehaia, Romania, then at Turnu Severin and Dej. Meanwhile, he obtained a degree in Russian Language and Literature and an engineering license in Machine Technology. Encouraged by friends, Birnbaum began to publish mathematical articles in the journals "Mathematical Studies and Research" of Bucharest, the "Notre Dame Journal of Formal Logic" in the USA, and many others. He was a member of the "Association Internationale de Cybernetique" in Namur, Belgium and a member of the editorial board of the magazine "Informatica", in Ljubljana. He was also a member of ASTRA. == Books == An Introduction to Logosophy, Litera, Bucharest, 1983. Multa et Multum, Litera, Bucharest, 1984. An Essay on a Finite Ontology, Aletheia, Bistrita, 1999. Tripolar Algebra and Elements of Quadripolar Algebra, Aletheia, Bistrita, 2000. A Finite Cosmology and Matters of Theosophy and of the Axiomatic, Aletheia, Bistrita, 2001. An Introduction to Aletheutics, Aletheia, Bistrita, 2001. Theatre, Aletheia, Bistrita, 2001.
Wikipedia:Leon Chwistek#0
Leon Chwistek (Kraków, Austria-Hungary, 13 June 1884 – Barvikha near Moscow, Russia, 20 August 1944) was a Polish logician, philosopher, mathematician, avant-garde painter, theoretician of modern art and literary critic. == Career and philosophy == In 1919 he was one of the founders of the Polish Mathematical Society. From 1922, he lectured in mathematics for natural scientists at the Jagiellonian University, where he obtained his habilitation in 1928 in mathematical logic. Starting in 1929, Chwistek was a Professor of Logic at the University of Lwów, a position for which Alfred Tarski had also applied. His interests in the 1930s were in a general system of the philosophy of science, which he published in a book translated into English in 1948 as The Limits of Science. In the 1920s–30s, many European philosophers attempted to reform traditional philosophy by means of mathematical logic. Leon Chwistek did not believe that such reform could succeed. He thought that reality could not be described in one homogeneous system, based on the principles of formal logic, because there was not one reality but many. After the outbreak of World War II and the occupation of Lwów (renamed Lviv) by the USSR, he remained at the university. He also began cooperating with Czerwony Sztandar. In September 1940, he joined the Union of Soviet Writers of Ukraine. In June 1941, just before the entry of the German troops, he was evacuated from Lviv with the Soviet troops deep into Russia. From 1941 to 1943, he lived in Tbilisi, where he taught mathematical analysis, and from 1943 in Moscow. He was active in the Union of Polish Patriots in the USSR. Chwistek argued against the axiomatic method by demonstrating that the extant axiomatic systems are inconsistent. == Artist == Chwistek developed his theory of the multiplicity of realities first with regard to the arts. He distinguished four basic types of realities, then matched them with four basic types of painting. 
The four types of realities were:
1. popular reality (common-sense realism)
2. physical reality (constructed by physics)
3. phenomenal reality (sensory impressions)
4. visionary/intuitive reality (dreams, hallucinations, subconscious states).
The types of painting corresponding to the above were:
1. Primitivism
2. Realism
3. Impressionism
4. Futurism
Chwistek never intended his views to constitute a new metaphysical theory. He was a defender of "common sense" against metaphysics and irrational feeling. His theory of plural reality was merely an attempt to specify the various ways in which the term "real" is used. Chwistek's fellow artist and closest friend, Stanislaw Ignacy Witkiewicz, harshly criticized his philosophical views. Witkiewicz's own philosophy attributed a monadic character to the individual's existence, with the world made up of a multiplicity of Particular Existences. In his 1919 painting Fencing, one can observe inspirations from avant-garde trends of the years before World War I, such as cubism, Italian futurism, and Robert Delaunay's simultanism. == Works == The limits of science. Outline of logic and of the methodology of the exact sciences. Translated from the Polish by Helen Charlotte Brodie and Arthur P. Coleman; introduction and appendix by Helen Charlotte Brodie. New York: Harcourt, Brace, 1948 == See also == History of philosophy in Poland List of Poles == References == == External links == Polish Philosophy Page: Leon Chwistek at the Wayback Machine (archived October 30, 2007) Profile of Leon Chwistek at Culture.pl Instituto Polaco de Cultura: Artola, Inés R. (2015), Formiści: la síntesis de la modernidad (1917 – 1922). Conexiones y protagonistas, Granada: Libargo, ISBN 978-84-938812-7-6
Wikipedia:Leon Lichtenstein#0
Leon Lichtenstein (16 May 1878 – 21 August 1933) was a Polish-German mathematician, who made contributions to the areas of differential equations, conformal mapping, and potential theory. He was also interested in theoretical physics, publishing research in hydrodynamics and astronomy. == Life and work == Leon Lichtenstein was born on 16 May 1878 to an Ashkenazi Jewish family in Warsaw, then part of the Russian Empire. His cousin, Leo Wiener, was the father of MIT mathematician Norbert Wiener. He studied in Berlin, earning both a doctorate in mechanical and electrical engineering at the Technische Hochschule Berlin and a doctorate in mathematics at the Friedrich Wilhelm University with a thesis on differential equations written under the supervision of Hermann Schwarz and Friedrich Schottky. From 1902 he worked as an electrical engineer for Siemens & Halske; then, from 1910, he turned to the academic world by becoming a Privatdozent at the Berlin Technische Hochschule. Lichtenstein was one of the founders, in 1918, and the first editor of the journal Mathematische Zeitschrift. In 1920 he moved to a mathematics chair at the University of Münster and in 1922 he joined the University of Leipzig, where he would spend the rest of his career. At the University of Leipzig, he founded a mathematical school, and his students, including Ernst Hölder, Erich Kähler, Aurel Wintner, Hermann Boerner and Karl Maruhn, continued his research in mathematics and theoretical physics. In 1933, as the Nazi party came to power in Germany, Lichtenstein abandoned his position at the university and left for Poland, as he would have been dismissed anyway for being Jewish. Shortly after, on 21 August 1933, he died of heart and kidney problems in Zakopane, in Poland. == Bibliography == Beiträge zur Theorie der Kabel: Untersuchungen zu den Kapazitätsverhältnissen von verseilten und konzentrischen Mehrfachkabeln. Oldenbourg, München 1908. Grundlagen der Hydromechanik. Springer, Berlin 1929. 
Reprint 1968. Gleichgewichtsfiguren rotierender Flüssigkeiten. Springer, Berlin 1933. Vorlesungen über einige Klassen nichtlinearer Integralgleichungen und Integro-Differentialgleichungen nebst Anwendungen. Springer, Berlin 1931. Astronomie und Mathematik in ihrer Wechselwirkung. Mathematische Probleme in der Theorie der Figur der Himmelskörper. 1922, Reprint: VDM, Saarbrücken 2007. == See also == Isothermal coordinates Symmetrizable compact operator == References == == Sources == Jagdish Mehra, Helmut Rechenberg, The historical development of quantum theory, Springer, 2000, p. 418 Sanford L. Segal, Mathematicians under the Nazis, Princeton University Press, 2003, p. 44 == External links == Leon Lichtenstein at the Mathematics Genealogy Project
Wikipedia:Leon Simon#0
Leon Melvyn Simon, born in 1945, is a Leroy P. Steele Prize and Bôcher Prize-winning mathematician, known for deep contributions to the fields of geometric analysis, geometric measure theory, and partial differential equations. He is currently Professor Emeritus in the Mathematics Department at Stanford University. == Biography == === Academic career === Leon Simon, born 6 July 1945, received his BSc from the University of Adelaide in 1967, and his PhD in 1971 from the same institution, under the direction of James H. Michael. His doctoral thesis was titled Interior Gradient Bounds for Non-Uniformly Elliptic Equations. He was employed from 1968 to 1971 as a Tutor in Mathematics by the university. Simon has since held a variety of academic positions. He worked first at Flinders University as a lecturer, then as a professor at the Australian National University, with further positions at the University of Melbourne, the University of Minnesota, ETH Zurich, and Stanford. He first came to Stanford in 1973 as Visiting Assistant Professor and was awarded a full professorship in 1986. Simon has more than 100 'mathematical descendants', according to the Mathematics Genealogy Project. Among his doctoral students are Richard Schoen, Neshan Wickramasekera, and Tatiana Toro. === Honours === In 1983 Simon was awarded the Australian Mathematical Society Medal. In the same year he was elected as a Fellow of the Australian Academy of Science. He was an invited speaker at the 1983 International Congress of Mathematicians in Warsaw. In 1994, he was awarded the Bôcher Memorial Prize. The Bôcher Prize is awarded every five years to a groundbreaking author in analysis. In the same year he was also elected a fellow of the American Academy of Arts and Sciences. In May 2003 he was elected a fellow of the Royal Society. In 2012 he became a fellow of the American Mathematical Society. In 2017 he was awarded the Leroy P. Steele Prize for Seminal Contribution to Research. 
== Research activity == Simon's best known work, for which he was honored with the Leroy P. Steele Prize for Seminal Contribution to Research, deals with the uniqueness of asymptotics of certain nonlinear evolution equations and Euler-Lagrange equations. The main tool is an infinite-dimensional extension and corollary of the Łojasiewicz inequality, using the standard Fredholm theory of elliptic operators and Lyapunov-Schmidt reduction. The resulting Łojasiewicz−Simon inequalities are of interest in and of themselves and have found many applications in geometric analysis. Simon's primary applications of his Łojasiewicz−Simon inequalities deal with the uniqueness of tangent cones of minimal surfaces and of tangent maps of harmonic maps, making use of the deep regularity theories of William Allard, Richard Schoen, and Karen Uhlenbeck. Other authors have made fundamental use of Simon's results, such as Rugang Ye's use for the uniqueness of subsequential limits of Yamabe flow. A simplification and extension of some aspects of Simon's work was later found by Mohamed Ali Jendoubi and others. Simon also made a general study of the Willmore functional for surfaces in general codimension, relating the value of the functional to several geometric quantities. Such geometric estimates have proven to be relevant in a number of other important works, such as in Ernst Kuwert and Reiner Schätzle's analysis of Willmore flow and in Hubert Bray's proof of the Riemannian Penrose inequality. Simon himself was able to apply his analysis to establish the existence of minimizers of the Willmore functional with prescribed topological type. With his thesis advisor James Michael, Simon provided a fundamental Sobolev inequality for submanifolds of Euclidean space, the form of which depends only on dimension and on the length of the mean curvature vector. An extension to submanifolds of Riemannian manifolds is due to David Hoffman and Joel Spruck. 
Due to the geometric dependence of the Michael−Simon and Hoffman−Spruck inequalities, they have been crucial in a number of contexts, including in Schoen and Shing-Tung Yau's resolution of the positive mass theorem and Gerhard Huisken's analysis of mean curvature flow. Robert Bartnik and Simon considered the problem of prescribing the boundary and mean curvature of a spacelike hypersurface of Minkowski space. They set up the problem as a second-order partial differential equation for a scalar graphing function, giving novel perspective and results for some of the underlying issues previously considered in Shiu-Yuen Cheng and Yau's analysis of similar problems. Using approximation by harmonic polynomials, Robert Hardt and Simon studied the zero set of solutions of general second-order elliptic partial differential equations, obtaining information on Hausdorff measure and rectifiability. By combining their results with earlier results of Harold Donnelly and Charles Fefferman, they obtained asymptotic information on the sizes of the zero sets of the eigenfunctions of the Laplace-Beltrami operator on a Riemannian manifold. Schoen, Simon, and Yau studied stable minimal hypersurfaces of Riemannian manifolds, identifying a simple combination of Simons' formula with the stability inequality which produced various curvature estimates. As a consequence, they were able to re-derive some results of Simons such as the Bernstein theorem in appropriate dimensions. The Schoen−Simon−Yau estimates were adapted from the setting of minimal surfaces to that of "self-shrinking" surfaces by Tobias Colding and William Minicozzi, as part of their analysis of singularities of mean curvature flow. The stable minimal hypersurface theory itself was taken further by Schoen and Simon six years later, using novel methods to provide geometric estimates without dimensional restriction. 
As opposed to the earlier purely analytic estimates, Schoen and Simon used the machinery of geometric measure theory. The Schoen−Simon estimates are fundamental for the general Almgren–Pitts min-max theory, and consequently for its various applications. William Meeks, Simon, and Yau obtained a number of remarkable results on minimal surfaces and the topology of three-dimensional manifolds, building in large part on earlier works of Meeks and Yau. Some similar results were obtained around the same time by Michael Freedman, Joel Hass, and Peter Scott. == Bibliography == == References == == Further reading == AMS (February 1994), "Leon Simon receives 1994 Bôcher Memorial Prize", Notices of the American Mathematical Society, 41 (2): 99–100, MR 1262536. O'Connor, John J.; Robertson, Edmund F. (November 2006), "Leon Melvyn Simon", MacTutor History of Mathematics Archive, University of St Andrews Walker, Rosanne (25 May 2006) [2001], "Simon, Leon (1945 – )", Encyclopedia of Australian Science, Melbourne: eScholarship Research Centre. == External links == Leon M. Simon at the Mathematics Genealogy Project
Wikipedia:Leon Takhtajan#0
Leon Armenovich Takhtajan (Armenian: Լևոն Թախտաջյան; Russian: Леон Арменович Тахтаджян, born 1 October 1950, Yerevan) is a Russian (and formerly Soviet) mathematical physicist of Armenian descent, currently a professor of mathematics at Stony Brook University, Stony Brook, NY, and a leading researcher at the Euler International Mathematical Institute, Saint Petersburg, Russia. == Biography == Leon Armenovich Takhtajan was born in Yerevan, Soviet Union, in 1950, son of the Armenian Russian botanist Armen Takhtajan. === Education === Takhtajan received his Ph.D. (Russian candidate degree) in 1975 from the Steklov Institute (Leningrad Department) under Ludvig Faddeev, with the thesis Complete Integrability of the Equation u_tt − u_xx + sin(u) = 0. He was then employed at the Steklov Institute (Leningrad Department) and in 1982 received his D.S. degree (doctor of science, 2nd degree in Russia) with the thesis Completely integrable models of field theory and statistical mechanics. === Career === Since 1992 he has been a professor at Stony Brook University, where he was the chair of the mathematics department in 2009–2013. == Research == His research is on integrable systems of mathematical physics (such as the theory of solitons) and applications of quantum field theories and models of string theory to algebraic geometry and complex analysis and includes quantum field theories on algebraic curves and associated reciprocity laws, two-dimensional quantum gravity and Weil–Petersson geometry of moduli spaces, the Kähler geometry of universal Teichmüller space, and trace formulas. His major contributions are in theory of classical and quantum integrable systems, quantum groups and Weil–Petersson geometry of moduli spaces. Together with Ludvig Faddeev and Evgeny Sklyanin he formulated the algebraic Bethe ansatz and quantum inverse scattering method. 
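The equation in the title of Takhtajan's doctoral thesis, u_tt − u_xx + sin(u) = 0, is the sine-Gordon equation, a canonical completely integrable model. As an illustrative sketch (not drawn from the source), its well-known single-kink soliton u(x, t) = 4 arctan(exp((x − vt)/√(1 − v²))) can be verified numerically with central finite differences:

```python
import math

def kink(x, t, v=0.5):
    """Single-kink soliton of the sine-Gordon equation u_tt - u_xx + sin(u) = 0."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz-type factor; requires |v| < 1
    return 4.0 * math.atan(math.exp(gamma * (x - v * t)))

def residual(x, t, v=0.5, h=1e-3):
    """Evaluate u_tt - u_xx + sin(u) by central second differences (~0 for a true solution)."""
    u = kink(x, t, v)
    u_tt = (kink(x, t + h, v) - 2.0 * u + kink(x, t - h, v)) / h**2
    u_xx = (kink(x + h, t, v) - 2.0 * u + kink(x - h, t, v)) / h**2
    return u_tt - u_xx + math.sin(u)

# The residual stays at the level of the discretization error at every sample point
print(max(abs(residual(x, 0.7)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0)))
```

The residual vanishes up to discretization error for any speed |v| < 1, reflecting that the kink is an exact travelling-wave solution.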
Together with Ludvig Faddeev and Nicolai Reshetikhin he proposed a method of quantization of Lie groups and algebras, the FRT construction. In 1983 he was an invited speaker at the International Congress of Mathematicians in Warsaw and gave a talk titled Integrable models in classical and quantum field theory. == Selected publications == === Articles === Sklyanin, E. K.; Takhtadzhyan, L. A.; Faddeev, L. D. (1980). "Quantum inverse problem method I". Theoretical and Mathematical Physics. 40 (2): 688. Bibcode:1979TMP....40..688S. doi:10.1007/BF01018718. S2CID 120710212. Takhtadzhan, L. A.; Faddeev, Lyudvig D. (1979). "The quantum method of the inverse problem and the XYZ Heisenberg model". Russian Mathematical Surveys. 34 (5): 11. Bibcode:1979RuMaS..34...11T. doi:10.1070/RM1979v034n05ABEH003909. S2CID 250867355. Reshetikhin, N. Yu.; Takhtajan, L. A.; Faddeev, L. D. "Quantization of Lie Groups and Lie Algebras" [in Russian]. Algebra i Analiz. 1:1 (1989). Eng. translation: Faddeev, L. D.; Reshetikhin, N. Yu.; Takhtajan, L. A. (1990). "Quantization of Lie Groups and Lie Algebras". Leningrad Mathematical Journal. 1 (1): 193–225. MR 1015339. Faddeev, L. D.; Reshetikhin, N. Yu.; Takhtajan, L. A. (1988). "Quantization of Lie Groups and Lie Algebras". Algebraic Analysis: Papers Dedicated to Professor Mikio Sato on the Occasion of His Sixtieth Birthday. Academic Press. ISBN 9781483268026. MR 0992450. Faddeev, L. D.; Reshetikhin, N. Yu.; Takhtajan, L. A. (1990). "Quantization of Lie Groups and Lie Algebras". In Jimbo, Michio (ed.). Yang-Baxter Equation in Integrable Systems. Advanced Series in Mathematical Physics. Vol. 10. World Scientific. pp. 299–309. Bibcode:1990ybei.book..299F. doi:10.1142/9789812798336_0016. ISBN 978-981-02-0120-3. Takhtajan, Leon (1994). "On foundation of the generalized Nambu mechanics". Communications in Mathematical Physics. 160 (2): 295–315. arXiv:hep-th/9301111. Bibcode:1994CMaPh.160..295T. doi:10.1007/BF02103278. S2CID 119137896. Zograf, P. G.; Takhtadzhyan, L. A. (1988). 
"On uniformization of Riemann surfaces and the Weil–Petersson metric on Teichmüller and Schottky spaces". Mathematics of the USSR-Sbornik. 60 (2): 297. Bibcode:1988SbMat..60..297Z. doi:10.1070/SM1988v060n02ABEH003170. === Books === Faddeev, Ludwig; Takhtajan, Leon (2007) [First published 1987]. Hamiltonian methods in the theory of solitons (2nd ed.). Springer Verlag. ISBN 9783540699699. Weil–Petersson Metric on the Universal Teichmuller Space. Vol. 183. Memoirs of the Amer. Math. Soc. 2006. MR 2251887. Quantum mechanics for mathematicians. American Mathematical Society. 2008. MR 2433906. == References == == External links == mathnet.ru
Wikipedia:Leonhard Euler#0
Leonhard Euler (OY-lər; Swiss Standard German: [ˈleːɔnhard ˈɔʏlər]; German: [ˈleːɔnhaʁt ˈɔʏlɐ]; 15 April 1707 – 18 September 1783) was a Swiss polymath who was active as a mathematician, physicist, astronomer, logician, geographer, and engineer. He founded the studies of graph theory and topology and made influential discoveries in many other branches of mathematics, such as analytic number theory, complex analysis, and infinitesimal calculus. He also introduced much of modern mathematical terminology and notation, including the notion of a mathematical function. He is known for his work in mechanics, fluid dynamics, optics, astronomy, and music theory. Euler has been called a "universal genius" who "was fully equipped with almost unlimited powers of imagination, intellectual gifts and extraordinary memory". He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. Euler is credited with popularizing the Greek letter π (lowercase pi) to denote the ratio of a circle's circumference to its diameter, as well as first using the notation f(x) for the value of a function, the letter i to express the imaginary unit √−1, the Greek letter Σ (capital sigma) to express summations, the Greek letter Δ (capital delta) for finite differences, and lowercase letters to represent the sides of a triangle while representing the angles as capital letters. He gave the current definition of the constant e, the base of the natural logarithm, now known as Euler's number. Euler made contributions to applied mathematics and engineering: his study of ships helped navigation, his three volumes on optics contributed to the design of microscopes and telescopes, and he studied the bending of beams and the critical load of columns. 
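The constant e that Euler defined can be recovered either from the limit (1 + 1/n)^n or from the reciprocal-factorial series 1/0! + 1/1! + 1/2! + ⋯; a minimal numerical check (illustrative only, not from the source):

```python
import math

# Euler's number via the limit definition (1 + 1/n)^n with a large n ...
limit_approx = (1.0 + 1.0 / 1_000_000) ** 1_000_000

# ... and via the reciprocal-factorial series 1/0! + 1/1! + 1/2! + ...
series_approx = sum(1.0 / math.factorial(k) for k in range(20))

print(limit_approx, series_approx, math.e)
```

The series converges far faster: twenty terms already agree with math.e to machine precision, while the limit form with n = 10⁶ is only accurate to about e/(2n) ≈ 10⁻⁶.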
Euler is credited with being the first to develop graph theory (partly as a solution for the problem of the Seven Bridges of Königsberg, which is also considered the first practical application of topology). He also became famous for, among many other accomplishments, solving several unsolved problems in number theory and analysis, including the famous Basel problem. Euler has also been credited for discovering that the sum of the numbers of vertices and faces minus the number of edges of a polyhedron equals 2, a number now commonly known as the Euler characteristic. In physics, Euler reformulated Isaac Newton's laws of motion into new laws in his two-volume work Mechanica to better explain the motion of rigid bodies. He contributed to the study of elastic deformations of solid objects. Euler formulated the partial differential equations for the motion of inviscid fluid, and laid the mathematical foundations of potential theory. Euler is regarded as arguably the most prolific contributor in the history of mathematics and science, and the greatest mathematician of the 18th century. His 866 publications and his correspondence are being collected in the Opera Omnia Leonhard Euler which, when completed, will consist of 81 quartos. Several great mathematicians who worked after Euler's death have recognised his importance in the field: Pierre-Simon Laplace said, "Read Euler, read Euler, he is the master of us all"; Carl Friedrich Gauss wrote: "The study of Euler's works will remain the best school for the different fields of mathematics, and nothing else can replace it." == Early life == Leonhard Euler was born in Basel on 15 April 1707 to Paul III Euler, a pastor of the Reformed Church, and Marguerite (née Brucker), whose ancestors include a number of well-known scholars in the classics. He was the oldest of four children, with two younger sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. 
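The polyhedron formula credited to Euler above, now written V − E + F = 2, can be checked directly against the standard vertex, edge, and face counts of the five Platonic solids (a small illustrative script):

```python
# Standard vertex/edge/face counts (V, E, F) of the five Platonic solids
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    # Euler characteristic V - E + F; equals 2 for every convex polyhedron
    print(f"{name}: V - E + F = {v - e + f}")
```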
Soon after Leonhard's birth, the Eulers moved from Basel to Riehen, Switzerland, where his father became pastor in the local church and Leonhard spent most of his childhood. From a young age, Euler received schooling in mathematics from his father, who had taken courses from Jacob Bernoulli some years earlier at the University of Basel. Around the age of eight, Euler was sent to live at his maternal grandmother's house and enrolled in the Latin school in Basel. In addition, he received private tutoring from Johannes Burckhardt, a young theologian with a keen interest in mathematics. In 1720, at age 13, Euler enrolled at the University of Basel. Attending university at such a young age was not unusual at the time. The course on elementary mathematics was given by Johann Bernoulli, the younger brother of the deceased Jacob Bernoulli, who had taught Euler's father. Johann Bernoulli and Euler soon got to know each other better. Euler described Bernoulli in his autobiography: the famous professor Johann Bernoulli [...] made it a special pleasure for himself to help me along in the mathematical sciences. Private lessons, however, he refused because of his busy schedule. However, he gave me a far more salutary advice, which consisted in myself getting a hold of some of the more difficult mathematical books and working through them with great diligence, and should I encounter some objections or difficulties, he offered me free access to him every Saturday afternoon, and he was gracious enough to comment on the collected difficulties, which was done with such a desired advantage that, when he resolved one of my objections, ten others at once disappeared, which certainly is the best method of making happy progress in the mathematical sciences. During this time, Euler, backed by Bernoulli, obtained his father's consent to become a mathematician instead of a pastor. 
In 1723, Euler received a Master of Philosophy with a dissertation that compared the philosophies of René Descartes and Isaac Newton. Afterwards, he enrolled in the theological faculty of the University of Basel. In 1726, Euler completed a dissertation on the propagation of sound titled De Sono, with which he unsuccessfully attempted to obtain a position at the University of Basel. In 1727, he entered the Paris Academy prize competition (offered annually and later biennially by the academy beginning in 1720) for the first time. The problem posed that year was to find the best way to place the masts on a ship. Pierre Bouguer, who became known as "the father of naval architecture", won and Euler took second place. Over the years, Euler entered this competition 15 times, winning 12 of them. == Career == === Saint Petersburg === Johann Bernoulli's two sons, Daniel and Nicolaus, entered into service at the Imperial Russian Academy of Sciences in Saint Petersburg in 1725, leaving Euler with the assurance they would recommend him to a post when one was available. On 31 July 1726, Nicolaus died of appendicitis after spending less than a year in Russia. When Daniel assumed his brother's position in the mathematics/physics division, he recommended that the post in physiology that he had vacated be filled by his friend Euler. In November 1726, Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he unsuccessfully applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg in May 1727. He was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli with whom he worked in close collaboration. Euler mastered Russian, settled into life in Saint Petersburg and took on an additional job as a medic in the Russian Navy. 
The academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia and to close the scientific gap with Western Europe. As a result, it was made especially attractive to foreign scholars like Euler. The academy's benefactress, Catherine I, who had continued the progressive policies of her late husband, died before Euler's arrival to Saint Petersburg. The Russian conservative nobility then gained power upon the ascension of the twelve-year-old Peter II. The nobility, suspicious of the academy's foreign scientists, cut funding for Euler and his colleagues and prevented the entrance of foreign and non-aristocratic students into the Gymnasium and universities. Conditions improved slightly after the death of Peter II in 1730 and the German-influenced Anna of Russia assumed power. Euler swiftly rose through the ranks in the academy and was made a professor of physics in 1731. He also left the Russian Navy, refusing a promotion to lieutenant. Two years later, Daniel Bernoulli, fed up with the censorship and hostility he faced at Saint Petersburg, left for Basel. Euler succeeded him as the head of the mathematics department. In January 1734, he married Katharina Gsell (1707–1773), a daughter of Georg Gsell. Frederick II had made an attempt to recruit the services of Euler for his newly established Berlin Academy in 1740, but Euler initially preferred to stay in St Petersburg. But after Empress Anna died and Frederick II agreed to pay 1600 ecus (the same as Euler earned in Russia) he agreed to move to Berlin. In 1741, he requested permission to leave for Berlin, arguing he was in need of a milder climate for his eyesight. The Russian academy gave its consent and would pay him 200 rubles per year as one of its active members. === Berlin === Concerned about the continuing turmoil in Russia, Euler left St. Petersburg in June 1741 to take up a post at the Berlin Academy, which he had been offered by Frederick the Great of Prussia. 
He lived for 25 years in Berlin, where he wrote several hundred articles. In 1748 his text on functions called the Introductio in analysin infinitorum was published and in 1755 a text on differential calculus called the Institutiones calculi differentialis was published. In 1755, he was elected a foreign member of the Royal Swedish Academy of Sciences and of the French Academy of Sciences. Notable students of Euler in Berlin included Stepan Rumovsky, later considered as the first Russian astronomer. In 1748 he declined an offer from the University of Basel to succeed the recently deceased Johann Bernoulli. In 1753 he bought a house in Charlottenburg, in which he lived with his family and widowed mother. Euler became the tutor for Friederike Charlotte of Brandenburg-Schwedt, the Princess of Anhalt-Dessau and Frederick's niece. He wrote over 200 letters to her in the early 1760s, which were later compiled into a volume entitled Letters of Euler on different Subjects in Natural Philosophy Addressed to a German Princess. This work contained Euler's exposition on various subjects pertaining to physics and mathematics and offered valuable insights into Euler's personality and religious beliefs. It was translated into multiple languages, published across Europe and in the United States, and became more widely read than any of his mathematical works. The popularity of the Letters testifies to Euler's ability to communicate scientific matters effectively to a lay audience, a rare ability for a dedicated research scientist. Despite Euler's immense contribution to the academy's prestige and having been put forward as a candidate for its presidency by Jean le Rond d'Alembert, Frederick II named himself as its president. The Prussian king had a large circle of intellectuals in his court, and he found the mathematician unsophisticated and ill-informed on matters beyond numbers and figures. 
Euler was a simple, devoutly religious man who never questioned the existing social order or conventional beliefs. He was, in many ways, the polar opposite of Voltaire, who enjoyed a high place of prestige at Frederick's court. Euler was not a skilled debater and often made it a point to argue subjects that he knew little about, making him the frequent target of Voltaire's wit. Frederick also expressed disappointment with Euler's practical engineering abilities, stating: I wanted to have a water jet in my garden: Euler calculated the force of the wheels necessary to raise the water to a reservoir, from where it should fall back through channels, finally spurting out in Sanssouci. My mill was carried out geometrically and could not raise a mouthful of water closer than fifty paces to the reservoir. Vanity of vanities! Vanity of geometry! However, the disappointment was almost surely unwarranted from a technical perspective. Euler's calculations look likely to be correct, even if Euler's interactions with Frederick and those constructing his fountain may have been dysfunctional. Throughout his stay in Berlin, Euler maintained a strong connection to the academy in St. Petersburg and also published 109 papers in Russia. He also assisted students from the St. Petersburg academy and at times accommodated Russian students in his house in Berlin. In 1760, with the Seven Years' War raging, Euler's farm in Charlottenburg was sacked by advancing Russian troops. Upon learning of this event, General Ivan Petrovich Saltykov paid compensation for the damage caused to Euler's estate, with Empress Elizabeth of Russia later adding a further payment of 4000 rubles—an exorbitant amount at the time. Euler decided to leave Berlin in 1766 and return to Russia. During his Berlin years (1741–1766), Euler was at the peak of his productivity. He wrote 380 works, 275 of which were published. This included 125 memoirs in the Berlin Academy and over 100 memoirs sent to the St. 
Petersburg Academy, which had retained him as a member and paid him an annual stipend. Euler's Introductio in Analysin Infinitorum was published in two parts in 1748. In addition to his own research, Euler supervised the library, the observatory, the botanical garden, and the publication of calendars and maps from which the academy derived income. He was even involved in the design of the water fountains at Sanssouci, the King's summer palace. === Return to Russia === The political situation in Russia stabilized after Catherine the Great's accession to the throne, so in 1766 Euler accepted an invitation to return to the St. Petersburg Academy. His conditions were quite exorbitant—a 3000 ruble annual salary, a pension for his wife, and the promise of high-ranking appointments for his sons. At the university he was assisted by his student Anders Johan Lexell. While living in St. Petersburg, a fire in 1771 destroyed his home. == Personal life == On 7 January 1734, Euler married Katharina Gsell, daughter of Georg Gsell, a painter at the Academy Gymnasium in Saint Petersburg. The couple bought a house by the Neva River. Of their 13 children, five survived childhood, three sons and two daughters. Their first son was Johann Albrecht Euler, whose godfather was Christian Goldbach. Three years after his wife's death in 1773, Euler married her half-sister, Salome Abigail Gsell. This marriage lasted until his death in 1783. His brother Johann Heinrich settled in St. Petersburg in 1735 and was employed as a painter at the academy. Early in his life, Euler memorized Virgil's Aeneid, and by old age, he could recite the poem and give the first and last sentence on each page of the edition from which he had learnt it. Euler knew the first hundred prime numbers and could give each of their powers up to the sixth degree. Euler was known as a generous and kind person, not neurotic as seen in some geniuses, keeping his good-natured disposition even after becoming entirely blind. 
=== Eyesight deterioration === Euler's eyesight worsened throughout his mathematical career. In 1738, three years after nearly dying of fever, he became almost blind in his right eye. Euler blamed the cartography he performed for the St. Petersburg Academy for his condition, but the cause of his blindness remains the subject of speculation. Euler's vision in that eye worsened throughout his stay in Germany, to the extent that Frederick called him "Cyclops". Euler said of his loss of vision, "Now I will have fewer distractions." In 1766 a cataract in his left eye was discovered. Though couching of the cataract temporarily improved his vision, complications rendered him almost totally blind in the left eye as well. His condition appeared to have little effect on his productivity. With the aid of his scribes, Euler's productivity in many areas of study increased; in 1775, he produced, on average, one mathematical paper per week. === Death === In St. Petersburg on 18 September 1783, after a lunch with his family, Euler was discussing the newly discovered planet Uranus and its orbit with Anders Johan Lexell when he collapsed and died of a brain hemorrhage. Jacob von Staehlin wrote a short obituary for the Russian Academy of Sciences and Russian mathematician Nicolas Fuss, one of Euler's disciples, wrote a more detailed eulogy, which he delivered at a memorial meeting. In his eulogy for the French Academy, French mathematician and philosopher Marquis de Condorcet wrote: il cessa de calculer et de vivre— ... he ceased to calculate and to live. Euler was buried next to Katharina at the Smolensk Lutheran Cemetery on Vasilievsky Island. In 1837, the Russian Academy of Sciences installed a new monument, replacing his overgrown grave plaque. In 1957, to commemorate the 250th anniversary of his birth, his tomb was moved to the Lazarevskoe Cemetery at the Alexander Nevsky Monastery. 
== Contributions to mathematics and physics == Euler worked in almost all areas of mathematics, including geometry, infinitesimal calculus, trigonometry, algebra, and number theory, as well as continuum physics, lunar theory, and other areas of physics. He is a seminal figure in the history of mathematics; if printed, his works, many of which are of fundamental interest, would occupy between 60 and 80 quarto volumes. Euler's name is associated with a large number of topics. Euler's output averaged 800 pages a year from 1725 to 1783. He also wrote over 4500 letters and hundreds of manuscripts. It has been estimated that Leonhard Euler was the author of a quarter of the combined output in mathematics, physics, mechanics, astronomy, and navigation in the 18th century, while other researchers credit Euler with a third of the mathematical output of that century. === Mathematical notation === Euler introduced and popularized several notational conventions through his numerous and widely circulated textbooks. Most notably, he introduced the concept of a function and was the first to write f(x) to denote the function f applied to the argument x. He also introduced the modern notation for the trigonometric functions, the letter e for the base of the natural logarithm (now also known as Euler's number), the Greek letter Σ for summations and the letter i to denote the imaginary unit. The use of the Greek letter π to denote the ratio of a circle's circumference to its diameter was also popularized by Euler, although it originated with Welsh mathematician William Jones. === Analysis === The development of infinitesimal calculus was at the forefront of 18th-century mathematical research, and the Bernoullis—family friends of Euler—were responsible for much of the early progress in the field. Thanks to their influence, studying calculus became the major focus of Euler's work.
While some of Euler's proofs are not acceptable by modern standards of mathematical rigour (in particular his reliance on the principle of the generality of algebra), his ideas led to many great advances. Euler is well known in analysis for his frequent use and development of power series, the expression of functions as sums of infinitely many terms, such as {\displaystyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}=\lim _{n\to \infty }\left({\frac {1}{0!}}+{\frac {x}{1!}}+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{n}}{n!}}\right).} Euler's use of power series enabled him to solve the Basel problem, finding the sum of the reciprocals of squares of every natural number, in 1735 (he provided a more elaborate argument in 1741). The Basel problem was originally posed by Pietro Mengoli in 1644, and by the 1730s was a famous open problem, popularized by Jacob Bernoulli and unsuccessfully attacked by many of the leading mathematicians of the time. Euler found that: {\displaystyle \sum _{n=1}^{\infty }{1 \over n^{2}}=\lim _{n\to \infty }\left({\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots +{\frac {1}{n^{2}}}\right)={\frac {\pi ^{2}}{6}}.} Euler introduced the constant {\displaystyle \gamma =\lim _{n\rightarrow \infty }\left(1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+\cdots +{\frac {1}{n}}-\ln(n)\right)\approx 0.5772,} now known as Euler's constant or the Euler–Mascheroni constant, and studied its relationship with the harmonic series, the gamma function, and values of the Riemann zeta function. Euler introduced the use of the exponential function and logarithms in analytic proofs.
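The three series displayed above lend themselves to quick numerical spot checks. The following sketch is illustrative only; the truncation depths are arbitrary choices, not from the article:

```python
import math

# Truncated power series for e^x at x = 1 (first 20 terms)
x, term, e_series = 1.0, 1.0, 0.0
for n in range(20):
    e_series += term
    term *= x / (n + 1)

N = 100_000

# Basel problem: partial sum of reciprocal squares, which tends to pi^2/6
basel = sum(1 / n**2 for n in range(1, N + 1))

# Euler-Mascheroni constant: harmonic partial sum minus ln(N)
gamma = sum(1 / n for n in range(1, N + 1)) - math.log(N)

print(e_series)  # close to e = 2.71828...
print(basel)     # close to pi^2/6 = 1.64493...
print(gamma)     # close to 0.57721...
```

The partial sums converge slowly for the Basel problem and the harmonic difference (the error is of order 1/N), which is why a large N is used.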
He discovered ways to express various logarithmic functions using power series, and he successfully defined logarithms for negative and complex numbers, thus greatly expanding the scope of mathematical applications of logarithms. He also defined the exponential function for complex numbers and discovered its relation to the trigonometric functions. For any real number φ (taken to be radians), Euler's formula states that the complex exponential function satisfies {\displaystyle e^{i\varphi }=\cos \varphi +i\sin \varphi ,} which was called "the most remarkable formula in mathematics" by Richard Feynman. A special case of the above formula is known as Euler's identity, {\displaystyle e^{i\pi }+1=0.} Euler elaborated the theory of higher transcendental functions by introducing the gamma function and introduced a new method for solving quartic equations. He found a way to calculate integrals with complex limits, foreshadowing the development of modern complex analysis. He invented the calculus of variations and formulated the Euler–Lagrange equation for reducing optimization problems in this area to the solution of differential equations. Euler pioneered the use of analytic methods to solve number theory problems. In doing so, he united two disparate branches of mathematics and introduced a new field of study, analytic number theory. In breaking ground for this new field, Euler created the theory of hypergeometric series, q-series, hyperbolic trigonometric functions, and the analytic theory of continued fractions. For example, he proved the infinitude of primes using the divergence of the harmonic series, and he used analytic methods to gain some understanding of the way prime numbers are distributed. Euler's work in this area led to the development of the prime number theorem. === Number theory === Euler's interest in number theory can be traced to the influence of Christian Goldbach, his friend in the St. Petersburg Academy.
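Two of the results described in this section — Euler's refutation, via the factor 641, of Fermat's conjecture that every number of the form 2^(2^n) + 1 is prime, and Euler's theorem built on the totient function φ(n) — can be illustrated concretely. A brief sketch (the values n = 10 and a = 3 are arbitrary examples):

```python
from math import gcd

# Fermat conjectured that all numbers 2^(2^n) + 1 are prime; Euler
# showed that the case n = 5 is divisible by 641.
f5 = 2 ** (2 ** 5) + 1
print(f5 % 641)  # 0, so f5 = 4294967297 is composite

# Euler's totient function: count of 1 <= k <= n coprime to n
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Euler's theorem: a^phi(n) is congruent to 1 modulo n when gcd(a, n) = 1
n, a = 10, 3
print(phi(n), pow(a, phi(n), n))  # 4 1
```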
Much of Euler's early work on number theory was based on the work of Pierre de Fermat. Euler developed some of Fermat's ideas and disproved some of his conjectures, such as his conjecture that all numbers of the form {\textstyle 2^{2^{n}}+1} (Fermat numbers) are prime. Euler linked the nature of prime distribution with ideas in analysis. He proved that the sum of the reciprocals of the primes diverges. In doing so, he discovered the connection between the Riemann zeta function and prime numbers; this is known as the Euler product formula for the Riemann zeta function. Euler invented the totient function φ(n), the number of positive integers less than or equal to the integer n that are coprime to n. Using properties of this function, he generalized Fermat's little theorem to what is now known as Euler's theorem. He contributed significantly to the theory of perfect numbers, which had fascinated mathematicians since Euclid. He proved that the relationship shown between even perfect numbers and Mersenne primes (which he had earlier proved) was one-to-one, a result otherwise known as the Euclid–Euler theorem. Euler also conjectured the law of quadratic reciprocity. The concept is regarded as a fundamental theorem within number theory, and his ideas paved the way for the work of Carl Friedrich Gauss, particularly Disquisitiones Arithmeticae. By 1772 Euler had proved that {\textstyle 2^{31}-1} = 2,147,483,647 is a Mersenne prime. It may have remained the largest known prime until 1867. Euler also contributed major developments to the theory of partitions of an integer. === Graph theory === In 1735, Euler presented a solution to the problem known as the Seven Bridges of Königsberg. The city of Königsberg, Prussia was set on the Pregel River, and included two large islands that were connected to each other and the mainland by seven bridges. The problem is to decide whether it is possible to follow a path that crosses each bridge exactly once.
Euler showed that it is not possible: there is no Eulerian path. This solution is considered to be the first theorem of graph theory. Euler also discovered the formula {\displaystyle V-E+F=2} relating the number of vertices, edges, and faces of a convex polyhedron, and hence of a planar graph. The constant in this formula is now known as the Euler characteristic for the graph (or other mathematical object), and is related to the genus of the object. The study and generalization of this formula, specifically by Cauchy and L'Huilier, is at the origin of topology. === Physics, astronomy, and engineering === Some of Euler's greatest successes were in solving real-world problems analytically, and in describing numerous applications of the Bernoulli numbers, Fourier series, Euler numbers, the constants e and π, continued fractions, and integrals. He integrated Leibniz's differential calculus with Newton's Method of Fluxions, and developed tools that made it easier to apply calculus to physical problems. He made great strides in improving the numerical approximation of integrals, inventing what are now known as the Euler approximations. The most notable of these approximations are Euler's method and the Euler–Maclaurin formula. Euler helped develop the Euler–Bernoulli beam equation, which became a cornerstone of engineering. Besides successfully applying his analytic tools to problems in classical mechanics, Euler applied these techniques to celestial problems. His work in astronomy was recognized by multiple Paris Academy Prizes over the course of his career. His accomplishments include determining with great accuracy the orbits of comets and other celestial bodies, understanding the nature of comets, and calculating the parallax of the Sun. His calculations contributed to the development of accurate longitude tables. Euler made important contributions in optics.
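Euler's Königsberg argument comes down to counting how many bridges touch each land mass: a walk crossing every bridge exactly once can exist only if zero or two land masses meet an odd number of bridges. A small sketch of that parity check (the labels A–D for the four land masses are introduced here for illustration):

```python
# Seven bridges of Königsberg as a multigraph on four land masses:
# A = the central island (Kneiphof), B and C = the two river banks,
# D = the eastern land mass.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

# Degree of each land mass = number of bridge endpoints touching it
degree = {}
for u, v in bridges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

odd = sorted(v for v, d in degree.items() if d % 2 == 1)
print(degree)  # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(odd)     # all four land masses have odd degree
```

Since more than two land masses have odd degree, no walk crossing each bridge exactly once is possible, which is exactly Euler's conclusion.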
He disagreed with Newton's corpuscular theory of light, which was the prevailing theory of the time. His 1740s papers on optics helped ensure that the wave theory of light proposed by Christiaan Huygens would become the dominant mode of thought, at least until the development of the quantum theory of light. In fluid dynamics, Euler was the first to predict the phenomenon of cavitation, in 1754, long before its first observation in the late 19th century, and the Euler number used in fluid flow calculations comes from his related work on the efficiency of turbines. In 1757 he published an important set of equations for inviscid flow in fluid dynamics that are now known as the Euler equations. Euler is well known in structural engineering for his formula giving Euler's critical load, the critical buckling load of an ideal strut, which depends only on its length and flexural stiffness. === Logic === Euler is credited with using closed curves to illustrate syllogistic reasoning (1768). These diagrams have become known as Euler diagrams. An Euler diagram is a diagrammatic means of representing sets and their relationships. Euler diagrams consist of simple closed curves (usually circles) in the plane that depict sets. Each Euler curve divides the plane into two regions or "zones": the interior, which symbolically represents the elements of the set, and the exterior, which represents all elements that are not members of the set. The sizes or shapes of the curves are not important; the significance of the diagram is in how they overlap. The spatial relationships between the regions bounded by each curve (overlap, containment or neither) correspond to set-theoretic relationships (intersection, subset, and disjointness). Curves whose interior zones do not intersect represent disjoint sets.
Two curves whose interior zones intersect represent sets that have common elements; the zone inside both curves represents the set of elements common to both sets (the intersection of the sets). A curve that is contained completely within the interior zone of another represents a subset of it. Euler diagrams (and their refinement to Venn diagrams) were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have come into wide use as a way of visualizing combinations of characteristics. === Music === One of Euler's more unusual interests was the application of mathematical ideas in music. In 1739 he wrote the Tentamen novae theoriae musicae (Attempt at a New Theory of Music), hoping to eventually incorporate musical theory as part of mathematics. This part of his work, however, did not receive wide attention and was once described as too mathematical for musicians and too musical for mathematicians. Even when dealing with music, Euler's approach is mainly mathematical, for instance, his introduction of binary logarithms as a way of numerically describing the subdivision of octaves into fractional parts. His writings on music are not particularly numerous (a few hundred pages, in his total production of about thirty thousand pages), but they reflect an early preoccupation and one that remained with him throughout his life. A first point of Euler's musical theory is the definition of "genres", i.e. of possible divisions of the octave using the prime numbers 3 and 5. Euler describes 18 such genres, with the general definition 2^m·A, where A is the "exponent" of the genre (i.e. the sum of the exponents of 3 and 5) and 2^m (where "m is an indefinite number, small or large, so long as the sounds are perceptible"), expresses that the relation holds independently of the number of octaves concerned.
The first genre, with A = 1, is the octave itself (or its duplicates); the second genre, 2^m·3, is the octave divided by the fifth (fifth + fourth, C–G–C); the third genre is 2^m·5, major third + minor sixth (C–E–C); the fourth is 2^m·3^2, two-fourths and a tone (C–F–B♭–C); the fifth is 2^m·3·5 (C–E–G–B–C); etc. Genres 12 (2^m·3^3·5), 13 (2^m·3^2·5^2) and 14 (2^m·3·5^3) are corrected versions of the diatonic, chromatic and enharmonic, respectively, of the Ancients. Genre 18 (2^m·3^3·5^2) is the "diatonico-chromatic", "used generally in all compositions", and which turns out to be identical with the system described by Johann Mattheson. Euler later envisaged the possibility of describing genres including the prime number 7. Euler devised a specific graph, the Speculum musicum, to illustrate the diatonico-chromatic genre, and discussed paths in this graph for specific intervals, recalling his interest in the Seven Bridges of Königsberg (see above). The device drew renewed interest as the Tonnetz in Neo-Riemannian theory (see also Lattice (music)). Euler further used the principle of the "exponent" to propose a derivation of the gradus suavitatis (degree of suavity, of agreeableness) of intervals and chords from their prime factors – one must keep in mind that he considered just intonation, i.e. 1 and the prime numbers 3 and 5 only. Formulas have been proposed extending this system to any number of prime numbers, e.g. in the form {\displaystyle ds=\sum _{i}(k_{i}p_{i}-k_{i})+1,} where p_i are prime numbers and k_i their exponents. == Personal philosophy and religious beliefs == Euler was religious throughout his life. Much of what is known of his religious beliefs can be deduced from his Letters to a German Princess and an earlier work, Rettung der Göttlichen Offenbahrung gegen die Einwürfe der Freygeister (Defense of the Divine Revelation against the Objections of the Freethinkers).
These show that Euler was a devout Christian who believed the Bible to be inspired; the Rettung was primarily an argument for the divine inspiration of scripture. Euler opposed the concepts of Leibniz's monadism and the philosophy of Christian Wolff. He insisted that knowledge is founded in part on the basis of precise quantitative laws, something that monadism and Wolffian science were unable to provide. Euler called Wolff's ideas "heathen and atheistic". There is a legend inspired by Euler's arguments with secular philosophers over religion, which is set during Euler's second stint at the St. Petersburg Academy. The French philosopher Denis Diderot was visiting Russia on Catherine the Great's invitation. The Empress was alarmed that Diderot's arguments for atheism were influencing members of her court, and so Euler was asked to confront him. Diderot was informed that a learned mathematician had produced a proof of the existence of God: he agreed to view the proof as it was presented in court. Euler appeared, advanced toward Diderot, and in a tone of perfect conviction announced this non sequitur: "Sir, {\displaystyle {\frac {a+b^{n}}{n}}=x}, hence God exists – reply!" Diderot, to whom (says the story) all mathematics was gibberish, stood dumbstruck as peals of laughter erupted from the court. Embarrassed, he asked to leave Russia, a request Catherine granted. However amusing the anecdote may be, it is apocryphal, given that Diderot himself did research in mathematics. The legend was apparently first told by Dieudonné Thiébault with embellishment by Augustus De Morgan. == Legacy == === Recognition === Euler is widely recognized as one of the greatest mathematicians of all time, and more likely than not the most prolific contributor to mathematics and science. Mathematician and physicist John von Neumann called Euler "the greatest virtuoso of the period".
Mathematician François Arago said, "Euler calculated without any apparent effort, just as men breathe and as eagles sustain themselves in air". He is generally ranked right below Carl Friedrich Gauss, Isaac Newton, and Archimedes among the greatest mathematicians of all time, while some rank him as equal with them. Physicist and mathematician Henri Poincaré called Euler the "god of mathematics". French mathematician André Weil noted that Euler stood above his contemporaries and more than anyone else was able to cement himself as the leading force of his era's mathematics: "No mathematician ever attained such a position of undisputed leadership in all branches of mathematics, pure and applied, as Euler did for the best part of the eighteenth century." Swiss mathematician Nicolas Fuss noted Euler's extraordinary memory and breadth of knowledge, saying: Knowledge that we call erudition was not inimical to him. He had read all the best Roman writers, knew perfectly the ancient history of mathematics, held in his memory the historical events of all times and peoples, and could without hesitation adduce by way of examples the most trifling of historical events. He knew more about medicine, botany, and chemistry than might be expected of someone who had not worked especially in those sciences. === Commemorations === Euler was featured on both the sixth and seventh series of the Swiss 10-franc banknote and on numerous Swiss, German, and Russian postage stamps. In 1782 he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. The asteroid 2002 Euler was named in his honour. == Selected bibliography == Euler has an extensive bibliography.
His books include:

Mechanica (1736)
Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, sive solutio problematis isoperimetrici latissimo sensu accepti (1744) (A method for finding curved lines enjoying properties of maximum or minimum, or solution of isoperimetric problems in the broadest accepted sense)
Introductio in analysin infinitorum (1748) (Introduction to Analysis of the Infinite)
Institutiones calculi differentialis (1755) (Foundations of differential calculus)
Vollständige Anleitung zur Algebra (1765) (Elements of Algebra)
Institutiones calculi integralis (1768–1770) (Foundations of integral calculus)
Letters to a German Princess (1768–1772)
Dioptrica, published in three volumes beginning in 1769

It took until 1830 for the bulk of Euler's posthumous works to be individually published, with an additional batch of 61 unpublished works discovered by Paul Heinrich von Fuss (Euler's great-grandson and Nicolas Fuss's son) and published as a collection in 1862. A chronological catalog of Euler's works was compiled by Swedish mathematician Gustaf Eneström and published from 1910 to 1913. The catalog, known as the Eneström index, numbers Euler's works from E1 to E866. The Euler Archive was started at Dartmouth College before moving to the Mathematical Association of America and, most recently, to University of the Pacific in 2017. In 1907, the Swiss Academy of Sciences created the Euler Commission and charged it with the publication of Euler's complete works. After several delays in the 19th century, the first volume of the Opera Omnia was published in 1911. However, the discovery of new manuscripts continued to increase the magnitude of this project. Fortunately, the publication of Euler's Opera Omnia has made steady progress, with over 70 volumes (averaging 426 pages each) published by 2006 and 80 volumes published by 2022. These volumes are organized into four series.
The first series compiles the works on analysis, algebra, and number theory; it consists of 29 volumes and numbers over 14,000 pages. The 31 volumes of Series II, amounting to 10,660 pages, contain the works on mechanics, astronomy, and engineering. Series III contains 12 volumes on physics. Series IV, which contains the massive amount of Euler's correspondence, unpublished manuscripts, and notes, only began compilation in 1967. After publishing 8 print volumes in Series IV, the project decided in 2022 to publish its remaining projected volumes in Series IV in online format only. == Notes == == References == === Sources === == Further reading == == External links ==

Leonhard Euler at the Mathematics Genealogy Project
The Euler Archive: Composition of Euler works with translations into English
Opera-Bernoulli-Euler (compiled works of Euler, Bernoulli family, and contemporary peers)
Euler Tercentenary 2007
The Euler Society
Euleriana at the Berlin-Brandenburg Academy of Sciences and Humanities
Euler Family Tree
Euler's Correspondence with Frederick the Great, King of Prussia
Works by Leonhard Euler at LibriVox (public domain audiobooks)
O'Connor, John J.; Robertson, Edmund F. "Leonhard Euler". MacTutor History of Mathematics Archive. University of St Andrews.
Dunham, William (24 September 2009). "An Evening with Leonhard Euler". YouTube. Muhlenberg College: philoctetesctr (published 9 November 2009). (talk given by William Dunham at )
Dunham, William (14 October 2008). "A Tribute to Euler – William Dunham". YouTube. Muhlenberg College: PoincareDuality (published 23 November 2011).
Leonid Berlyand
Leonid Berlyand is a Soviet and American mathematician, a professor at Penn State University. He is known for his works on homogenization, Ginzburg–Landau theory, mathematical modeling of active matter and mathematical foundations of deep learning. == Life and career == Berlyand was born in Kharkov on September 20, 1957. His father, Viktor Berlyand, was a mechanical engineer, and his mother, Mayya Genkina, an electronics engineer. Upon his graduation in 1979 from the department of mathematics and mechanics at the National University of Kharkov, he began his doctoral studies at the same university and earned a Ph.D. in 1984. His Ph.D. thesis studied the homogenization of elasticity problems. He worked at the Semenov Institute of Chemical Physics in Moscow. In 1991 he moved to the United States and started working at Pennsylvania State University, where he has served as a full professor since 2003. He has held long-term visiting positions at Princeton University, the California Institute of Technology, the University of Chicago, the Max Planck Institute for Mathematics in the Sciences, and Argonne and Los Alamos National Laboratories. His research has drawn support from the National Science Foundation (NSF), NIH/NIGMS, the Applied Mathematics Program of the DOE Office of Sciences, BSF (the Bi-National Science Foundation USA-Israel) and the NATO Science for Peace and Security Section. Berlyand has authored roughly 100 works on homogenization theory and PDE/variational problems in biology and materials science. He has organized a number of professional conferences and serves as a co-director of the Center for Mathematics of Living and Mimetic Matter at Penn State University. He has supervised 17 graduate students and ten postdoctoral fellows. == Research == Drawing upon fundamental works in classical homogenization theory, Berlyand advanced the methods of homogenization in many versatile applications.
He obtained mathematical results applicable to diverse scientific areas including biology, fluid mechanics, superconductivity, elasticity, and materials science. His mathematical modeling explains striking experimental results in the collective swimming of bacteria. His homogenization approach to multi-scale problems was transformed into a practical computational tool by introducing the concept of polyharmonic homogenization, which led to a new type of multiscale finite elements. Together with H. Owhadi, he introduced a "transfer-of-approximation" modeling concept, based on the similarity of the asymptotic behavior of the errors of Galerkin solutions for two elliptic PDEs. He also contributed to mathematical aspects of the Ginzburg–Landau theory of superconductivity/superfluidity by introducing a new class of semi-stiff boundary problems. == Awards and honors == C. I. Noll Award for Excellence in Teaching, Penn State University (2004). Honorary professor of Moscow State University "for his important contribution to Applied Mathematics and Mathematical Physics" (2017) Humboldt Prize (2021) == Membership in professional associations == Society for Industrial and Applied Mathematics (since 1993) Society for Mathematical Biology (since 2012) == Editorship == Managing Editor of Networks and Heterogeneous Media Associate Editor of SIAM/ASA Journal on Uncertainty Quantification (2013–2016) Member of Editorial board of International Journal for Multiscale Computational Engineering == Books (author) == "Introduction to Network Approximation for Materials Modeling" (with A. Kolpakov and A. Novikov), Cambridge University Press, 2012. "Getting Acquainted with Homogenization and Multiscale" (with V. Rybalko), part of the Compact Textbooks in Mathematics book series, Springer, 2018. "Mathematics of Deep Learning. An Introduction" (with P.-E. Jabin), De Gruyter, in the series De Gruyter Textbook, 2023.
== Selected publications == "Stability in the Training of Deep Neural Networks and Other Classifiers" (with P.-E. Jabin and C. A. Safsten), Mathematical Models and Methods in Applied Sciences (M3AS)}, v. 31(11), pp. 2345-2390 (2021) [3] "Phase-Field Model of Cell Motility: Traveling Waves and Sharp Interface Limit" (with M. Potomkin and V. Rybalko), Comptes Rendus Mathématique, 354(10), pp. 986–992 (2016) [4] "Rayleigh Approximation for ground states of the Bose and Coulomb glasses" (with S. D. Ryan, V. Mityushev, and V. M. Vinokur), Scientific Reports: Nature Publishing Group, 5, 7821 (2015) [5] "Flexibility of bacterial flagella in external shear results in complex swimming trajectories" (with M. Tournus, A. Kirshtein, and I. Aranson), Journal of the Royal Society Interface 12 (102) (2014) [6] "Vortex phase separation in mesoscopic superconductors" (with O. Iaroshenko, V. Rybalko, V. M. Vinokur), Scientific Reports: Nature Publishing Group 3 (2013) [7] "Effective viscosity of bacterial suspensions: A three-dimensional PDE model with stochastic torque" (with B.M. Haines, I.S. Aranson, D.A. Karpeev), Comm. Pure Appl. Anal., v. 11(1), pp. 19–46 (2012) [8] "Flux norm approach to finite dimensional homogenization approximations with non-separated scales and high contrast" (with H. Owhadi), Arch. Rat. Mech. Anal., v. 198, n. 2, pp. 677–721 (2010) [9] "Solutions with Vortices of a Semi-Stiff Boundary Value Problem for the Ginzburg-Landau Equation" (with V. Rybalko), J. European Math. Society v. 12 n. 6, pp. 1497–1531 (2009) [10] "Fictitious Fluid Approach and Anomalous Blow-up of the Dissipation Rate in a 2D Model of Concentrated Suspensions" (with Y. Gorb and A. Novikov), Arch. Rat. Mech. Anal., v. 193, n. 3, pp. 585–622, (2009), DOI:10.1007/s00205-008-0152-2 [11] "Effective Viscosity of Dilute Bacterial Suspensions: A Two-Dimensional Model" (with B. Haines, I. Aronson, and D. 
Karpeev), Physical Biology, 5:4, 046003 (9pp) (2008) [12] "Ginzburg-Landau minimizers with prescribed degrees. Capacity of the domain and emergence of vortices" (with P. Mironescu), Journal of Functional Analysis, v. 239, n. 1, pp. 76–99 (2006) [13] "Network Approximation in the Limit of Small Interparticle Distance of the Effective Properties of a High-Contrast Random Dispersed Composite" (with A. Kolpakov), Archive for Rational Mechanics and Analysis, 159, pp. 179–227 (2001) [14] "Non-Gaussian Limiting Behavior of the Percolation Threshold in a Large System" (with J.Wehr), Communications in Mathematical Physics, 185, 73–92 (1997), pdf. "Large Time Asymptotics of Solutions to a Model Combustion System with Critical Nonlinearity" (with J. Xin), Nonlinearity, 8:161–178 (1995) [15] "Asymptotics of the Homogenized Moduli for the Elastic Chess-Board Composite" (with S. Kozlov), Archive for Rational Mechanics and Analysis, 118, 95–112 (1992) [16] == References == == External links == Berlyand's page at the site of the Penn State University Press release of Berlyand's research on Coulomb glasses A conference in honor of Leonid Berlyand's 60th birthday
Leonid Bunimovich
Leonid Abramowich Bunimovich (born August 1, 1947) is a Soviet and American mathematician who has made fundamental contributions to the theory of dynamical systems, statistical physics, and their various applications. Bunimovich received his bachelor's degree in 1967, master's degree in 1969 and PhD in 1973 from the University of Moscow. His master's and PhD thesis advisor was Yakov G. Sinai. In 1986 (after Perestroika started) he finally received the Doctor of Sciences degree in "Theoretical and Mathematical Physics". Bunimovich is a Regents' Professor of Mathematics at the Georgia Institute of Technology. Bunimovich is a Fellow of the Institute of Physics and was awarded the Humboldt Prize in Physics. == Biography == His master's thesis proved that some classes of quadratic maps of an interval have an absolutely continuous invariant measure and strong stochastic properties. Bunimovich is mostly known for the discovery of a fundamental mechanism of chaos in dynamical systems called the mechanism of defocusing. This discovery came as a striking surprise not only to the mathematics community but to the physics community as well. Physicists could not believe that such a (physical!) phenomenon was possible (even though a rigorous mathematical proof was provided) until they conducted massive numerical experiments. The most famous chaotic dynamical systems of this type are focusing chaotic billiards such as the Bunimovich stadium ("Bunimovich flowers", elliptic flowers, etc.). Later Bunimovich proved that his mechanism of defocusing works in all dimensions despite the phenomenon of astigmatism. Bunimovich introduced absolutely focusing mirrors, a new notion in geometric optics, and proved that only such mirrors can be the focusing parts of chaotic billiards. He also constructed the so-called Bunimovich mushrooms, which are visual examples of billiards with mixed regular and chaotic dynamics.
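The chaotic quadratic interval maps mentioned above can be illustrated with the standard logistic map x ↦ 4x(1 − x), whose absolutely continuous invariant measure has density 1/(π√(x(1 − x))). The specific map, seed, and interval below are textbook choices made for illustration, not details drawn from Bunimovich's thesis:

```python
import math

# Iterate the logistic map x -> 4x(1 - x) and measure how often the
# orbit visits [0, 0.1]. For the invariant density 1/(pi*sqrt(x(1-x)))
# the expected fraction is (2/pi)*asin(sqrt(0.1)), about 0.205.
x = 0.123456789
hits, n_iter = 0, 200_000
for _ in range(n_iter):
    x = 4.0 * x * (1.0 - x)
    if x < 0.1:
        hits += 1

expected = (2 / math.pi) * math.asin(math.sqrt(0.1))
print(hits / n_iter, expected)  # the two numbers should be close
```

The empirical visit frequency agreeing with the analytic density is exactly the "absolutely continuous invariant measure with strong stochastic properties" behavior described above.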
Physical realizations of Bunimovich stadia have been constructed for both classical and quantum investigations. Although the discovery of the defocusing mechanism formed only part of Bunimovich's PhD thesis, he could not find any job after finishing graduate school because of antisemitic policies in the Soviet Union, and he was never able to work as a mathematician there. Even after he finally found a position, he was not allowed to publish mathematical papers, because the authorities at his workplaces refused to certify that his papers contained no state secrets; likewise, for a long time he could not attend mathematical conferences even within the Soviet Union. Although trained as a pure mathematician, Bunimovich proved able to work on applications in biomedical studies and in oceanology. Notably, he introduced and investigated hierarchical models of human populations, which clarified the laws of distribution of hereditary diseases and explained data on migration in industrial parts of developed countries. He demonstrated that the lengths of remissions in schizophrenia form a Markov process, i.e. that the length of a remission depends only upon the length of the previous remission. He also showed that some types of schizophrenia attacks are more (or less) likely to occur after certain other types, whereas previously it had been assumed that the same type of attack essentially always recurs. Bunimovich discovered traps for internal waves in non-homogeneous stratified fluids and analyzed wave dynamics in such traps, which in particular explained some surprising observations of internal waves in the oceans. One of the most fundamental problems in statistical physics is to derive deterministic time-irreversible macro-dynamics from deterministic time-reversible Newtonian micro-dynamics.
This problem, previously considered mathematically intractable, was settled in a paper by Bunimovich and Ya. G. Sinai for the diffusion of mass in the periodic Lorentz gas. In an earlier paper they had constructed the first infinite Markov partition for chaotic systems with singularities, which allowed this deterministic problem to be transformed into a probabilistic one. In a subsequent paper by Bunimovich and H. Spohn, the shear and bulk viscosities of a deterministic periodic fluid were rigorously derived. Another paper by Bunimovich and Ya. G. Sinai pioneered rigorous studies of space-time chaos; until then there was not even an exact definition of this widely observed phenomenon. That paper gave such a definition and proved that space-time chaos is realized in weakly interacting time-chaotic systems. Together with Ben Webb, Bunimovich introduced and developed the theory of isospectral transformations for the analysis of multi-dimensional systems and networks. This approach yields various types of visualization of networks and uncovers their hierarchical structure and hidden symmetries. Bunimovich pioneered a rigorous theory of finite-time dynamics and finite-time predictions for strongly chaotic and random systems. Together with Skums and Khudyakov, Bunimovich discovered the phenomenon of local immunodeficiency, which demonstrates how viruses can cooperate to overcome the human immune response. This discovery clarified numerous unexplained phenomena in the evolution of hepatitis C and serves as a new tool for studying any disease with cross-immunoreactivity. == References == == External links == Personal webpage Leonid Bunimovich at the Mathematics Genealogy Project American Mathematical Society on Stadium Billiards Wolfram MathWorld: Stadium Billiards
Wikipedia:Leonid Levin#0
Leonid Anatolievich Levin ( LAY-oh-NEED LEV-in; Russian: Леони́д Анато́льевич Ле́вин [lʲɪɐˈnʲit ɐnɐˈtolʲjɪvʲɪtɕ ˈlʲevʲɪn]; Ukrainian: Леоні́д Анато́лійович Ле́він [leoˈn⁽ʲ⁾id ɐnɐˈtɔl⁽ʲ⁾ijowɪtʃ ˈlɛwin]; born November 2, 1948) is a Soviet-American mathematician and computer scientist. He is known for his work in randomness in computing, algorithmic complexity and intractability, average-case complexity, foundations of mathematics and computer science, algorithmic probability, theory of computation, and information theory. He obtained his master's degree at Moscow University in 1970 where he studied under Andrey Kolmogorov and completed the Candidate Degree academic requirements in 1972. He and Stephen Cook independently discovered the existence of NP-complete problems. This NP-completeness theorem, often called the Cook–Levin theorem, was a basis for one of the seven Millennium Prize Problems declared by the Clay Mathematics Institute with a $1,000,000 prize offered. The Cook–Levin theorem was a breakthrough in computer science and an important step in the development of the theory of computational complexity. Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness and the development of average-case complexity. He is a member of the US National Academy of Sciences and a fellow of the American Academy of Arts and Sciences. == Biography == He obtained his master's degree at Moscow University in 1970 where he studied under Andrey Kolmogorov and completed the Candidate Degree academic requirements in 1972. After researching algorithmic problems of information theory at the Moscow Institute of Information Transmission of the National Academy of Sciences in 1972–1973, and a position as senior research scientist at the Moscow National Research Institute of Integrated Automation for the Oil/Gas Industry in 1973–1977, he emigrated to the U.S. in 1978 and also earned a Ph.D. at the Massachusetts Institute of Technology (MIT) in 1979. 
His advisor at MIT was Albert R. Meyer. He is well known for his work in randomness in computing, algorithmic complexity and intractability, average-case complexity, foundations of mathematics and computer science, algorithmic probability, theory of computation, and information theory. His life is described in a chapter of the book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. Levin and Stephen Cook independently discovered the existence of NP-complete problems. This NP-completeness theorem, often called the Cook–Levin theorem, was a basis for one of the seven Millennium Prize Problems declared by the Clay Mathematics Institute with a $1,000,000 prize offered. The Cook–Levin theorem was a breakthrough in computer science and an important step in the development of the theory of computational complexity. Levin's journal article on this theorem was published in 1973; he had lectured on the ideas in it for some years before that time (see Trakhtenbrot's survey), though complete formal writing of the results took place after Cook's publication. Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness and the development of average-case complexity. He is currently a professor of computer science at Boston University, where he began teaching in 1980. == Notes == == References == "Leonid A. Levin". Mathematics Genealogy Project. == External links == Levin's home page at Boston University. 2012 Knuth Prize to Leonid Levin
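The content of the Cook–Levin theorem, that every problem in NP can be encoded as Boolean satisfiability, can be made concrete on a toy scale. The sketch below uses a standard textbook encoding of graph 3-colorability (a problem in NP) into CNF, checked by exhaustive search; it illustrates the reduction idea and is not Levin's own construction.

```python
from itertools import product

def coloring_to_cnf(n, edges, k=3):
    """Encode k-colorability of an n-vertex graph as a CNF formula.
    Variable v*k + c + 1 (DIMACS-style signed literals) asserts
    'vertex v receives color c'. Returns (number of variables, clauses)."""
    def var(v, c):
        return v * k + c + 1
    clauses = []
    for v in range(n):
        clauses.append([var(v, c) for c in range(k)])        # at least one color
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var(v, c1), -var(v, c2)])   # at most one color
    for u, w in edges:
        for c in range(k):
            clauses.append([-var(u, c), -var(w, c)])         # endpoints differ
    return n * k, clauses

def brute_force_sat(num_vars, clauses):
    """Exponential-time satisfiability test; fine for toy instances only."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

A triangle is 3-colorable, so its encoding is satisfiable, while the encoding of the complete graph K4 is not; the reduction preserves yes/no answers exactly, which is the defining property of the reductions in the Cook–Levin theorem.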
Wikipedia:Leonid Manevitch#0
Leonid Isakovich Manevitch (Russian: Леонид Исакович Маневич; 2 April 1938 – 20 August 2020) was a Soviet and Russian physicist, mechanical engineer, and mathematician. He made fundamental contributions to areas of nonlinear dynamics, composite and polymer physics, and asymptotology. == Biography == Manevitch was born on 2 April 1938 in Mogilev (USSR, now Belarus). He received his M.S. with great distinction in mechanics (1959) and his Candidate of Sciences (PhD, 1961) and Doctor of Sciences (1970) from Dnipro National University. From 1959 to 1964, Manevitch worked on missile design as an aerospace engineer and head of the Stress Analysis Team under Mikhail Yangel at the Yuzhnoye Design Office. In 1964, he became an associate professor at Dnipro National University. His doctoral thesis was devoted to asymptotic and group-theory methods in the mechanics of deformable solids. He was promoted in 1973 to Full Professor in the Department of Applied Theory of Elasticity. After moving to Moscow in 1976, he worked as a senior research fellow and later as head of the Division of Polymer Physics and Mechanics at the Semenov Institute of Chemical Physics of the Russian Academy of Sciences. In 1984, he was appointed a professor of polymer physics and mechanics at the Department of Molecular and Chemical Physics of the Moscow Institute of Physics and Technology. == Scientific activity == Several of his works are devoted to the connections between physics and mathematics, and in particular, asymptotology. Manevitch made significant contributions to the theory of nonlinear normal oscillations in essentially nonlinear systems, to nonstationary dynamics of nonlinear oscillatory systems; to molecular dynamics and physics of polymers and composite materials. His research has numerous applications in various fields of mechanical science and engineering, polymer physics, and nanotechnology. 
Under his leadership, the Division of Polymer Physics and Mechanics became one of the world's leading research teams in its field. His team actively collaborated with leading research centers in the USA, Italy, France, Israel, and Germany. A detailed review of the scientific activities of Prof. L.I. Manevitch can be found in the references. == Publications == L.I. Manevitch was an active participant at many Russian and international symposia, conferences and congresses. As a guest speaker he repeatedly appeared at seminars of famous universities in the USA, European countries and Israel. His scientific results are presented in 20 monographs and in more than 400 publications. === Books === Problems of Nonlinear Mechanics and Physics of Materials (book) Manevitch L.I.: Interaction of Physics and Mathematics. Moscow-Izhevsk: Izhevsk Institute of Computer Researches (2018) (in Russian). Manevitch L.I., Kovaleva A.S., Smirnov V.V., Starosvetsky Yu.: Nonstationary Resonant Dynamics of Oscillatory Chains and Nanostructures. Singapore: Springer Nature (2017). Manevitch L.I., Gendelman O.V.: Analytically Solvable Models of Solid Mechanics. Moscow-Izhevsk: Izhevsk Institute of Computer Researches (2016) (in Russian). Manevitch L.I., Gendelman O.V.: Tractable Models of Solid Mechanics. Formulation, Analysis and Interpretation. Berlin, Heidelberg, London, New York, Springer (2011). Manevitch L.I., Smirnov V.V.: Solitons in Macromolecular Systems. New York, Nova Science Publishers (2008). Manevich A.I., Manevitch L.I.: The Mechanics of Nonlinear Systems with Internal Resonances. Imperial College Press, London (2005). Andrianov I.V., Awrejcewicz J., Manevitch L.I.: Asymptotical Mechanics of Thin-Walled Structures. Berlin, Heidelberg, New York, Springer (2004). Andrianov I.V., Barantsev R.G., Manevitch L.I.: Asymptotical Mathematics and Synergetics, Moscow, URSS (2004) (in Russian). Manevitch L.I., Andrianov I.V., Oshmyan V.G.: Mechanics of Periodically
Heterogeneous Structures, Berlin, Heidelberg, New York, Springer (2002). Andrianov I.V., Manevitch L.I.: Asymptotology. Ideas, Methods, and Applications. Dordrecht, Boston, London. Kluwer Academic Publishers (2002). Awrejcewicz J., Andrianov I., Manevitch L.: Asymptotic Approaches in Nonlinear Dynamics: New Trends and Applications. Berlin-Heidelberg –New York, Springer-Verlag. (1998). Vakakis A.F., Manevitch L.I., Mikhlin Yu.V., Pilipchuk V.N., Zevin A.A.: Normal Modes and Localization in Nonlinear Systems. New York: Wiley (1996). Andrianov I.V., Manevich L.I.: Asymptotology: Ideas, Methods, Results, M., ASLAN (1994) (in Russian). Manevitch L.I., Pavlenko A.V.: Asymptotic Method in Micromechanics of Composite Materials. Kiev, Vyshchaya Shkola (High School) (1991) (in Russian). Andrianov I.V., Manevitch L.I.: Asymptotic Methods and Physical Theories. Moscow: Znanie (1989) (in Russian). Manevitch L.I., Mikhlin Yu.V., Pilipchuk V.N.: The Method of Normal Oscillations for Essentially Nonlinear Systems. Moscow: Nauka (1989) (in Russian). Andrianov I.V., Lesnichaya V.A., Loboda V.V., Manevitch L.I.: Investigation of Strength of Reinforced Shells of Engineering Structures. Kiev-Donetsk, Vyshchaya Shkola (High school) (1986) (in Russian). Andrianov I.V., Lesnichaya V.A., Manevitch L.I.: The Averaging Method in Statics and Dynamics of Ribbed Shells. Moscow: Nauka (1985) (in Russian). Manevitch L.I., Pavlenko A.V., Koblik S.G.: Asymptotic Methods in the Theory of Orthotropic Solids. Kiev: Vysshaya Shkola (High School) (1982) (in Russian). Mossakovskii V.I., Manevitch L.I., Mil’tzin A.M.: Modeling of Strength of Thin Shells. Kiev: Naukova Dumka (1977) (in Russian). == References == == External links == Leonid Manevitch publications indexed by Google Scholar Leonid Manevitch on Google Scholar (in Russian) Leonid Manevitch on MathNet Leonid Manevitch on MathSciNet Leonid Manevitch on ZBMath Leonid Manevitch on Scopus
Wikipedia:Leonid Polterovich#0
Leonid Polterovich (Hebrew: לאוניד פולטרוביץ'; Russian: Леонид В. Полтерович; born 30 August 1963) is a Russian-Israeli mathematician at Tel Aviv University. His research fields include symplectic geometry and dynamical systems. A native of Moscow, Polterovich earned his undergraduate degree at Moscow State University in 1984. He moved to Israel after the collapse of communism, earning his doctorate from Tel Aviv University in 1990. In 1996, he was awarded the EMS Prize, in 1998 the Erdős Prize, and in 2003 the Michael Bruno Memorial Award by Yad Hanadiv. In 1998, he was an Invited Speaker of the International Congress of Mathematicians in Berlin. In 2016, he gave a plenary lecture at the 7th European Congress of Mathematics in Berlin. He was a member of the faculty of the University of Chicago. He was elected to the Academia Europaea in 2024. == References == == External links == Website at Tel Aviv University Leonid Polterovich at the Mathematics Genealogy Project
Wikipedia:Leonid Sedov#0
Leonid Ivanovich Sedov (Russian: Леонид Иванович Седов; 14 November 1907, Rostov-on-Don – 5 September 1999, Moscow) was a Russian physicist who worked as an engineer in the former Soviet space program. In 1930 Sedov graduated from Moscow State University, where he had been a student of Sergey Chaplygin; he went on to earn the degree of Doctor of Physics and Mathematical Sciences and later became a professor at the university. During World War II, he devised the so-called Sedov Similarity Solution for a blast wave. In 1947 he was awarded the Chaplygin Prize. He was the first chairman of the USSR Space Exploration program and broke the first news of its existence in 1955. He was president of the International Astronautical Federation (IAF) from 1959 to 1961. For many years it was thought that Sedov was the principal engineer behind the Soviet Sputnik project. == Awards and honors == Order of the Badge of Honour (1943) Two Orders of the Red Banner of Labour (1945, 1961) Stalin Prize, 2nd class (1952) Six Orders of Lenin (1954, 1963, 1967, 1975, 1980, 1987) Hero of Socialist Labour (1967) Allan D. Emil Memorial Award (1981) Order "For Merit to the Fatherland", 4th class (1998) == See also == Taylor–von Neumann–Sedov blast wave == References == == Bibliography == Reference to 1955 announcement Obituary notice in Minutes of General Assembly Meetings, 2000 section Sedov, L. I., 1959, Similarity and Dimensional Methods in Mechanics, 4th edn. Academic. L.I. Sedov, A course in continuum mechanics. Volumes. I-IV. Wolters-Noordhoff Publishing, Netherlands, 1971. Sedov, L. I., "Propagation of strong shock waves," Journal of Applied Mathematics and Mechanics, Vol. 10, pages 241–250 (1946). (See also: Barber–Layden–Power effect) Reference to confusion with Ukrainian physicist Sergei Korolyov.
Wikipedia:Leonid Vaserstein#0
Leonid Nisonovich Vaserstein (Russian: Леонид Нисонович Васерштейн) is a Russian-American mathematician, currently Professor of Mathematics at Penn State University. His research is focused on algebra and dynamical systems. He is well known for providing a simple proof of the Quillen–Suslin theorem, a result in commutative algebra, first conjectured by Jean-Pierre Serre in 1955, and then proved by Daniel Quillen and Andrei Suslin in 1976. Leonid Vaserstein received his master's degree and doctorate from Moscow State University, where he remained until 1978. He then moved to Europe and the United States. Alternate forms of the last name: Vaseršteĭn, Vasershtein, Wasserstein. The Wasserstein metric was named after him by R.L. Dobrushin in 1970. == Biography == Leonid Vaserstein grew up in the Soviet Union. In secondary school he won the second prize in the All-Russian High School Mathematical Olympiad. Vaserstein got his undergraduate, master's (1966), and doctoral degrees (1969) in mathematics from Moscow State University, where he worked as a lecturer concurrently with his doctoral research. After his doctoral graduation he worked for the Moscow State University-associated "Informelectro" Institute, a Federal State Unitary Enterprise focused on ways to develop industries in Russia with emphases on electrical engineering, energy efficiency, and environmental technologies like greenhouse gas mitigation. He started as a senior researcher for Informelectro and continued working there until 1978, eventually becoming head of his department. In 1978 and 1979 he made his way to the United States of America by way of Europe, taking a series of visiting professor positions at the University of Bielefeld, Institut des Hautes Études Scientifiques, University of Chicago, and Cornell University. In 1979, Vaserstein took a full-time position as a professor in the Department of Mathematics at Penn State University.
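The metric that carries the "Wasserstein" spelling of his name measures the minimal cost of transporting one probability distribution onto another. A small illustration of the one-dimensional case using SciPy's implementation (an aside for context, not drawn from Vaserstein's own papers):

```python
from scipy.stats import wasserstein_distance

# Moving a unit point mass from 0 to 3 costs exactly 3.
d_point = wasserstein_distance([0.0], [3.0])

# For equal-size empirical samples, the 1-D distance equals the mean
# absolute difference of the sorted samples, so a uniform shift by 1
# costs exactly 1.
d_shift = wasserstein_distance([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

In one dimension the optimal transport plan simply matches the sorted samples, which is why both answers can be read off by hand.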
Vaserstein's research interests extend across the areas of topology, algebra, and number theory, and the applications of these areas, including classical groups over rings, algebraic K-theory, systems with local interactions, and optimization and planning. Additionally, Vaserstein maintains the Penn State University Math Department's website on Algebra and Number Theory. == Selected publications == Vaserstein, Leonid N. (1969). "Markov processes over denumerable products of spaces describing large systems of automata". Problemy Peredači Informacii. 5 (3): 64–72. Vaserstein, Leonid N. (1986). "On normal subgroups of Chevalley groups over commutative rings". Tohoku Math. J. 38 (2): 219–230. doi:10.2748/tmj/1178228489. Vaserstein, Leonid N. (1986). "Vector bundles and projective modules". Trans. Amer. Math. Soc. 294 (2): 749–755. doi:10.1090/s0002-9947-1986-0825734-3. MR 0825734. Vaserstein, L. N. (1986). "Normal subgroups of the general linear groups over von Neumann regular rings". Proc. Amer. Math. Soc. 96 (2): 209–214. doi:10.1090/s0002-9939-1986-0818445-7. MR 0818445. Vaserstein, L. N. (1986). "An answer to a question of M. Newman on matrix completion". Proc. Amer. Math. Soc. 97 (2): 189–196. doi:10.1090/s0002-9939-1986-0835863-1. MR 0835863. Vaserstein, L. N. (1988). "Reduction of a matrix depending on parameters to a diagonal form by addition operations". Proc. Amer. Math. Soc. 103 (3): 741–746. doi:10.1090/s0002-9939-1988-0947649-x. MR 0947649. Vaserstein, L. N. (1988). "Normal subgroups of orthogonal groups over commutative rings". Amer. J. Math. 110 (5): 955–973. doi:10.2307/2374699. JSTOR 2374699. Vaserstein, L. N. (1991). "Sums of cubes in polynomial rings". Math. Comp. 56 (193): 349–357. Bibcode:1991MaCom..56..349V. doi:10.1090/s0025-5718-1991-1052104-3. MR 1052104. == See also == List of Russian mathematicians == References == == External links == Leonid Vaserstein at the Mathematics Genealogy Project A web page about Leonid N. 
Vaserstein's publications Leonid N. Vaserstein home page
Wikipedia:Leonidas Alaoglu#0
Leonidas (Leon) Alaoglu (Greek: Λεωνίδας Αλάογλου; March 19, 1914 – August 1981) was a mathematician best known for Alaoglu's theorem on the weak-star compactness of the closed unit ball in the dual of a normed space, also known as the Banach–Alaoglu theorem. == Life and work == Alaoglu was born in Red Deer, Alberta to Greek parents. He received his BS in 1936, Master's in 1937, and PhD in 1938 (at the age of 24), all from the University of Chicago. His dissertation, written under the direction of Lawrence M. Graves, was on Weak topologies of normed linear spaces and establishes Alaoglu's theorem. The Bourbaki–Alaoglu theorem is a generalization of this result by Bourbaki to dual topologies. After some years teaching at Pennsylvania State College, Harvard University and Purdue University, in 1944 he became an operations analyst for the United States Air Force. From 1953 to 1981, he worked as a senior scientist in operations research at the Lockheed Corporation in Burbank, California, where he wrote numerous research reports, some of them classified. During the Lockheed years, he took an active part in seminars and other mathematical activities at Caltech, UCLA and USC. After his death in 1981, a Leonidas Alaoglu Memorial Lecture Series was established at Caltech. Speakers have included Paul Erdős, Irving Kaplansky, Paul Halmos and Hugh Woodin. == See also == Axiom of Choice – The Banach–Alaoglu theorem is not provable from ZF without use of the Axiom of Choice. Banach–Alaoglu theorem Gelfand representation List of functional analysis topics Superabundant number – Article explains the 1944 results of Alaoglu and Erdős on this topic Tychonoff's theorem Weak topology – Leads to the weak-star topology to which the Banach–Alaoglu theorem applies. == Publications == Alaoglu, Leonidas (M.S. thesis, U. of Chicago, 1937). "The asymptotic Waring problem for fifth and sixth powers" (24 pages).
Advisor: Leonard Eugene Dickson Alaoglu, Leonidas (Ph.D. thesis, U. of Chicago, 1938). "Weak topologies of normed linear spaces" Advisor: Lawrence Graves Alaoglu, Leonidas (1940). "Weak topologies of normed linear spaces". Annals of Mathematics. 41 (2): 252–267. doi:10.2307/1968829. JSTOR 1968829. MR 0001455. Alaoglu, Leonidas; J. H. Giese (1946). "Uniform isohedral tori". American Mathematical Monthly. 53 (1): 14–17. doi:10.2307/2306079. JSTOR 2306079. MR 0014230. Alaoglu, Leonidas; Paul Erdős (1944). "On highly composite and similar numbers" (PDF). Transactions of the American Mathematical Society. 56 (3): 448–469. doi:10.2307/1990319. JSTOR 1990319. MR 0011087. Alaoglu, Leonidas; Paul Erdős (1944). "A conjecture in elementary number theory". Bulletin of the American Mathematical Society. 50 (12): 881–882. doi:10.1090/S0002-9904-1944-08257-8. MR 0011086. Alaoglu, Leonidas; Garrett Birkhoff (1940). "General ergodic theorems". Annals of Mathematics. 41 (2): 252–267. doi:10.2307/1969004. JSTOR 1969004. MR 0002026. PMC 1077986. PMID 16588311. == References == Mac Lane, Saunders (December 1996). "Letter to the editor" (PDF). Notices of the American Mathematical Society: 1469–1471. == External links == Leonidas Alaoglu at the Mathematics Genealogy Project
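The superabundant numbers of Alaoglu and Erdős's 1944 paper are the integers n whose abundancy ratio σ(n)/n exceeds that of every smaller positive integer. A direct, unoptimized enumeration of the defining property (an illustration only, not their method):

```python
def sigma(n):
    """Sum of all positive divisors of n, by trial division up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d   # count the complementary divisor once
        d += 1
    return total

def superabundant_upto(limit):
    """Integers n <= limit with sigma(n)/n strictly larger than for every m < n."""
    best, records = 0.0, []
    for n in range(1, limit + 1):
        ratio = sigma(n) / n
        if ratio > best:
            best = ratio
            records.append(n)
    return records
```

The first few superabundant numbers are 1, 2, 4, 6, 12, 24, 36, 48, 60, each setting a new record for σ(n)/n.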
Wikipedia:Leopoldo Nachbin#0
Leopoldo Nachbin (7 January 1922 – 3 April 1993) was a Brazilian mathematician of Jewish origins who dealt with topology and harmonic analysis. Nachbin was born in Recife, and is best known for Nachbin's theorem. He died, aged 71, in Rio de Janeiro. He went to primary school in Recife with Brazilian literary giant Clarice Lispector. He is featured in one of her short stories, called “As Grandes Punições” (The Great Punishments) in her book “Aprendendo a Viver,” by the Rocco publishing house. Nachbin was a Ph.D. student of Laurent Schwartz at the University of Chicago. His Ph.D. students include Francisco Antônio Dória and Seán Dineen. His monographs Topology and Order and The Haar Integral, both published by Van Nostrand-Reinhold in 1965, are regarded as exceptional expositions of the subjects. Overall he authored ten books, most of which were published internationally, and edited a dozen tomes of the prestigious North-Holland Mathematical Studies series between 1970 and the early 1980s. He is best known for a Tauberian-type theorem (Nachbin's theorem) on the growth rate of analytic functions, and for the so-called Hewitt–Nachbin space, a topological linear space that is bornological in the compact-open topology. He was an invited speaker at the International Congress of Mathematicians (ICM) of 1962 in Stockholm ("Résultats récents et problèmes de nature algébrique en théorie de l'approximation"), being the first Brazilian speaker at the ICM. == Bibliography == Topology and Order (Van Nostrand-Reinhold, 1965) The Haar Integral (Van Nostrand-Reinhold, 1965; Krieger 1976) Elements of Approximation Theory (Van Nostrand-Reinhold, 1967; Krieger 1976) Introdução à Álgebra (McGraw-Hill, 1971, in Portuguese) == References == Ralph A. Raimi - Leopoldo Nachbin, 1922-1993. Archived 2015-07-15 at the Wayback Machine Candido Lima da Silva Dias, Chaim Samuel Hönig, Luis Adauto da Justa Medeiros - Leopoldo Nachbin. J. Horváth. The life and works of Leopoldo Nachbin. 
== External links == Leopoldo Nachbin at the Mathematics Genealogy Project Leopoldo Nachbin, 1922-1993 Archived 2015-07-15 at the Wayback Machine Os trabalhos de Leopoldo Nachbin (1922-1993), by Jorge Mujica, in Portuguese, free translation: "The Works of Leopoldo Nachbin (1922-1993)"
Wikipedia:Leopoldo Penna Franca#0
Leopoldo Penna Franca (April 7, 1959 – September 19, 2012, in Rio de Janeiro, Brazil) was a Brazilian-American mathematician. He received his PhD in 1987 from Stanford University in engineering under Thomas J. R. Hughes. After graduation, he worked at the Laboratório Nacional de Computação Científica (LNCC) in Brazil. From 1993 to 2011, he was a full professor and researcher of mathematics at the University of Colorado Denver. From 2008 to 2010 he was a visiting professor and researcher of mathematics at the Civil Engineering Department of the Federal University of Rio de Janeiro (UFRJ/COPPE), in collaboration with Alvaro Coutinho. From 2011 until 2012, he worked for IBM Research Brazil. He was known for his work on stabilized finite element methods. In 1999 he received the Gallagher Young Investigator Award of the United States Association for Computational Mechanics (USACM) for "outstanding accomplishments in computational mechanics, particularly in the published literature, by a researcher 40 years old or younger". He was listed as an ISI Highly Cited Author in Engineering by the ISI Web of Knowledge, Thomson Scientific Company. == References == == External links == Leopoldo Penna Franca at the Mathematics Genealogy Project Leopoldo Penna Franca publications indexed by Google Scholar
Wikipedia:Lerche–Newberger sum rule#0
The Lerche–Newberger, or Newberger, sum rule, discovered by B. S. Newberger in 1982, finds the sum of certain infinite series involving Bessel functions Jα of the first kind. It states that if μ is any non-integer complex number, γ ∈ (0, 1], and Re(α + β) > −1, then

\sum_{n=-\infty}^{\infty} \frac{(-1)^n J_{\alpha-\gamma n}(z)\, J_{\beta+\gamma n}(z)}{n+\mu} = \frac{\pi}{\sin \mu\pi}\, J_{\alpha+\gamma\mu}(z)\, J_{\beta-\gamma\mu}(z).

Newberger's formula generalizes a formula of this type proven by Lerche in 1966; Newberger discovered it independently. Lerche's formula has γ = 1; both extend a standard rule for the summation of Bessel functions, and are useful in plasma physics. == References ==
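The identity can be checked numerically. The sketch below (an illustration using SciPy's Bessel functions, not part of the article's references) truncates the left-hand side and compares it with the closed form; the special case α = β = 0, γ = 1 tested here reduces to the sum Σ Jₙ(z)²/(n+μ) = π J_μ(z) J_{−μ}(z)/sin μπ that appears in hot-plasma dispersion calculations.

```python
import numpy as np
from scipy.special import jv  # Bessel function J of real order

def newberger_lhs(alpha, beta, gamma, mu, z, nmax=60):
    """Truncated left-hand side of the sum rule. For integer Bessel
    orders the factors decay super-exponentially in n, so a modest
    cutoff already gives machine precision."""
    n = np.arange(-nmax, nmax + 1)
    terms = (-1.0) ** n * jv(alpha - gamma * n, z) * jv(beta + gamma * n, z) / (n + mu)
    return terms.sum()

def newberger_rhs(alpha, beta, gamma, mu, z):
    """Closed-form right-hand side of the sum rule."""
    return np.pi / np.sin(mu * np.pi) * jv(alpha + gamma * mu, z) * jv(beta - gamma * mu, z)
```

With μ non-integer the factor 1/sin μπ is finite, and both sides agree to roughly machine precision for moderate z.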
Wikipedia:Lesley Cormack#0
Lesley B. Cormack (born 1957) is a Canadian historian of science and academic administrator specializing in the history of mathematics and of geography. She is the Deputy Vice-Chancellor and Principal of the University of British Columbia's Okanagan Campus. == Education and career == Cormack obtained her BA from the University of Calgary and her MA from the University of Toronto before earning her Ph.D. at the University of Toronto in 1988. She was a faculty member at the University of Alberta until 2007, when she moved to Simon Fraser University as Dean of the Faculty of Arts and Social Sciences. She returned to Alberta as Dean of the Faculty of Arts in 2010. While there, she increased the Indigenous student population by 100 percent. She moved to UBC Okanagan as Deputy Vice-Chancellor and Principal in 2020. Cormack specializes in 16th century English geography and mathematics. == Recognition == Cormack became a corresponding member of the International Academy of the History of Science in 2010, and a full member in 2015. == Other Work == Cormack served as director of the Citadel Theatre for nine years. == Books == Cormack is the author or editor of books including: Charting an Empire: Geography at the English Universities 1580-1620 (University of Chicago Press, 1997) Making Contact: Maps, Identity, and Travel (edited with Glenn Burger, Jonathan Hart, and Natalia Pylypiuk, University of Alberta Press, 2003) A History of Science in Society: From Philosophy to Utility (with Andrew Ede, 2 vols., Broadview Press, 2004; 2nd ed., University of Toronto Press, 2012; 3rd ed., 2016 and 2017) Mathematical Practitioners and the Transformation of Natural Knowledge in Early Modern Europe (edited with Stephen A. Walton and John A. Schuster, Springer, 2017) == References ==
Wikipedia:Lesley Ward#0
Lesley Ann Ward is an Australian mathematician specializing in harmonic analysis, complex analysis, and industrial applications of mathematics. She is a professor in the School of Information Technology and Mathematical Sciences of the University of South Australia, director of the Mathematics Clinic at the university, and former chair of the Women in Mathematics Group of the Australian Mathematical Society. == Education and career == Ward earned a bachelor's degree in mathematics with first class honours from Australian National University (ANU) in 1987; at ANU, she served as president of the Students' Association for 1984–1985. She went to Yale University for her graduate studies, also visiting the Mittag-Leffler Institute as a student, and completed her doctorate at Yale in 1994. Her dissertation, Fuchsian Groups, Quasiconformal Groups, and Conical Limit Sets, was supervised by Peter Jones. She worked as a G. C. Evans Instructor at Rice University from 1994 to 1997, overlapping with a postdoctoral fellowship at the Mathematical Sciences Research Institute in 1995. She then joined the Harvey Mudd College mathematics faculty, and in 2006 moved to the University of South Australia. == Selected publications == With Cristina Pereyra, Ward is the author of Harmonic Analysis: from Fourier to Wavelets (Student Mathematical Library 63, American Mathematical Society, 2012). She has also published highly cited work on the HITS algorithm for using link structure to rate web pages. == Recognition == Ward won the Henry L. Alder Award of the Mathematical Association of America in 2006 for distinguished teaching as a beginning undergraduate teacher. She was included in the 2019 class of fellows of the Association for Women in Mathematics "for her enduring commitment to supporting women in the mathematical sciences; for her mentoring in research; for her work on inclusivity; and for her leadership of the Women in Mathematics Special Interest Group in Australia".
== References == == External links == Lesley Ward publications indexed by Google Scholar
Wikipedia:Leslie Fox#0
Leslie Fox (30 September 1918 – 1 August 1992) was a British mathematician noted for his contribution to numerical analysis. == Overview == Fox studied mathematics as a scholar of Christ Church, Oxford, graduating with a first in 1939, and continued to undertake research in the engineering department. While working on his D.Phil. in computational and engineering mathematics under the supervision of Sir Richard Southwell he was also engaged in highly secret war work. He worked on the numerical solution of partial differential equations at a time when numerical linear algebra was performed on a desk calculator. Computational efficiency and accuracy were thus even more important than in the days of electronic computers. Some of this work was published after the end of the Second World War jointly with his supervisor Richard Southwell. On gaining his doctorate in 1942, Fox joined the Admiralty Computing Service. Following World War II, in 1945 he went to work in the mathematics division of the National Physical Laboratory. He left the National Physical Laboratory in 1956 and spent a year at the University of California. In 1957 Fox took up an appointment at Oxford University, where he set up the Oxford University Computing Laboratory. In 1963, Fox was appointed as Professor of Numerical Analysis at Oxford and Fellow of Balliol College, Oxford. Fox's laboratory at Oxford was one of the founding organisations of the Numerical Algorithms Group (NAG), and Fox was also a dedicated supporter of the Institute of Mathematics and its Applications (IMA). The Leslie Fox Prize for Numerical Analysis of the IMA is named in his honour. == Mathematical work == A detailed description of Fox's mathematical research can be found in obituaries and is summarised here. His early work with Southwell was concerned with the numerical solution of partial differential equations arising in engineering problems that, due to the complexity of their geometry, did not have analytical solutions.
Southwell's group developed efficient and accurate relaxation methods, which could be implemented on desk calculators. Fox's contributions were particularly notable because he combined practical skills with theoretical advances in relaxation methods, which were to become important areas of research in numerical analysis. During the 1950s, automatic electronic computers were replacing manual electro-mechanical devices. This led to different problems in the implementation of numerical algorithms; however, the approach of approximating a partial differential equation by the finite difference method, and thus reducing the problem to a system of linear equations, was the same. Careful analysis of the errors was a theme of many of Fox's early papers. His work at the Admiralty Computing Service and the National Physical Laboratory led to an interest in the computation of special functions, and his calculations were used in published tables. The techniques applied to the computation of special functions had much wider applicability, including interpolation, the stability of recurrence relations and asymptotic behaviour. During the 1950s, the group at the National Physical Laboratory worked on numerical linear algebra, which led to the publication of algorithms by Wilkinson and others. While not directly involved in the development of numerical software, Fox supported others in this endeavour. Fox worked on procedures for solving differential equations in which the accuracy of the solution is estimated using asymptotic estimates. His paper on this in 1947 led to the work of Victor Pereyra on error-correcting algorithms for boundary-value problems and Stetter's results on defect correction and the resulting order of convergence. Fox was also interested in the treatment of singularities in partial differential equations, the Stefan problem and other cases of free and moving boundaries. 
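The approach described above — replacing derivatives by finite differences so that a differential equation reduces to a system of linear equations — can be sketched for a simple two-point boundary-value problem of the kind treated in Fox's 1957 book. The specific problem and code below are illustrative only, not taken from Fox's work: solve u''(x) = −1 on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 − x)/2.

```python
import numpy as np

def solve_bvp(n):
    """Solve u''(x) = -1, u(0) = u(1) = 0 by central finite differences.

    The second derivative at each of the n interior grid points is replaced
    by (u[i-1] - 2*u[i] + u[i+1]) / h**2, turning the differential equation
    into a tridiagonal linear system A u = b.
    """
    h = 1.0 / (n + 1)
    # Tridiagonal matrix representing the discrete second derivative.
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    b = -np.ones(n)               # right-hand side f(x) = -1 at interior points
    return np.linalg.solve(A, b)  # interior values of u

x = np.linspace(0, 1, 12)[1:-1]   # 10 interior grid points
u = solve_bvp(10)
exact = x * (1 - x) / 2
# Central differences are exact for this quadratic solution (up to rounding).
```

Because the exact solution is a quadratic, the truncation error of the central-difference formula vanishes here; for general problems the discretisation error is O(h²), and careful analysis of such errors was, as noted above, a theme of Fox's early papers.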
Many of these problems arose from his collaboration with mathematicians in industry through the Oxford Study Groups. == Fox's wider influence == While Fox influenced the development of numerical analysis through his undergraduate teaching, postgraduate supervision (he supervised around 19 doctoral students) and industrial collaboration, he also made significant contributions to course material for the Open University. He lectured widely on 'meaningless answers', describing some of the pitfalls of numerical computation that arise from the uncritical use of simple methods. Fox played a significant part in the early days of the Numerical Algorithms Group (NAG), which set out as a collaborative venture between Oxford, Nottingham and Manchester to provide a reliable and well-tested mathematical subroutine library. The Oxford University Computing Laboratory was one of the founder members of NAG when it started in 1970; Fox supported it strongly and became a member of its council when the Group was incorporated in 1976, continuing in this capacity until 1984. Fox was an active member of the Institute of Mathematics and its Applications from its beginnings, as a member of the Council and as an editor, first of the main IMA Journal and later of the specialised Journal of Numerical Analysis, started in 1981. The IMA marked his retirement from Oxford in 1983 by a special symposium on 'The contributions of Leslie Fox to numerical analysis'. His interests extended to mathematics in schools: he participated in the development of the School Mathematics Project and was active in the local branch of the Mathematical Association, of which he was President in 1964. The first winner of the IMA's Leslie Fox Prize for Numerical Analysis in 1985, Lloyd N. Trefethen, went on to be appointed to the chair in Numerical Analysis at Oxford that had been created for Leslie Fox in 1963. == Personal life == Leslie Fox's mother was Annie Vincent and his father was Job Senior Fox, who was a coalminer. 
Leslie Fox won a scholarship to Wheelwright Grammar School in Dewsbury, which produced several notable scientists from the same period as Fox. Fox was a keen sportsman and played football for the university football club as well as for Oxford City Football Club. At the National Physical Laboratory he was club tennis champion and captain of the cricket team; he also distinguished himself as a sprinter in the civil service championships. Fox, who had enjoyed good health up to 1981, suffered from heart problems during his retirement and died from a ruptured aneurysm in the John Radcliffe Hospital, Oxford, in 1992. == Selected publications == Leslie Fox, The Numerical Solution of Two-Point Boundary Problems in Ordinary Differential Equations, 1957, reprinted by Dover, 1990. ISBN 0-486-66495-3 L. Fox, An introduction to numerical linear algebra, 1964, Oxford University Press, Oxford, England. ISBN 0-19-500325-X L. Fox, D.F. Mayers, Numerical solution of ordinary differential equations. Chapman & Hall, London, 1987. ISBN 0-412-22650-2 == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Leslie Fox", MacTutor History of Mathematics Archive, University of St Andrews Leslie Fox at the Mathematics Genealogy Project Fox Prize in Numerical Analysis Obituary by D.F. Mayers (University of Oxford) and J.E. Walsh (University of Manchester), Bulletin of the London Mathematical Society 31 (1999), 241–7, including a list of ninety publications
Wikipedia:Leslie Leland Locke#0
Leslie Leland Locke (1875–1943) was an American mathematician, historian, and educator, best known for his work towards deciphering ancient Andean knot records called quipus. Locke's most prominent work, The Ancient Quipu or Peruvian Knot Record (1923), demonstrated how the Inca tied knots on quipu cords using a base-10 positional number system. In addition to his work on quipus, Locke is also recognized for his research on the history of mathematics and mathematical instruments, particularly his research and collection of calculating machines. == Education == Locke earned both his B.A. (1896) and M.A. (1900) from Grove City College. He went on to study mathematics at Pennsylvania State University; Cornell University; and eventually at Teachers College at Columbia University, where he studied under Professor David Eugene Smith. As a graduate student studying the history of mathematics, Locke assisted Smith and Yoshio Mikami with their 1914 book, The History of Japanese Mathematics, by taking the many photographs used throughout the book. == Career == Early in his career, Locke held several short-term teaching positions, including at West Sunbury Academy in West Sunbury, Pennsylvania; a high school in Fredonia, Pennsylvania; and at Michigan State University in East Lansing, Michigan. In 1902, he moved to Brooklyn, New York, where he began teaching at Adelphi College, a position he held for six years. In 1906, he transitioned to the Maxwell Training School for Teachers, also in Brooklyn. In 1933, Locke joined Brooklyn Technical High School as a mechanical drawing instructor, a role he held until his retirement in 1942. Concurrently, from 1917 to 1938, he served as a professor of mathematics, teaching evening sessions at Brooklyn College. Aside from teaching, Locke authored several scientific publications (see section on selected publications), often writing under the abbreviated name "L. Leland Locke." 
He was a "Foundation Member" of the History of Science Society (HSS) and served as the society's Secretary at one point. Additionally, he was a member of several other academic organizations, including the American Mathematical Society (AMS), the National Council of Teachers of Mathematics (NCTM), and the Mathematical Association of America (MAA). === Quipu research === Under the guidance of Professor David Eugene Smith, Locke began studying Andean quipus, drawing on Smith's extensive collection of rare books on South America and his access to specimens housed at the American Museum of Natural History. Notably, an accession card for quipu B/8715 in the museum's collection indicates that the specimen was lent to Smith in November 1911, likely for Locke's research. Locke's first major work on the Andean quipu was published in 1912 as an article in American Anthropologist, titled "The Ancient Quipu, a Peruvian Knot Record". In this seminal work, Locke outlined a basic working model for how Inca quipus recorded numbers using three types of knots: the overhand knot, the figure-eight knot, and eight types of long knots. He showed that a knot's distance from the quipu's main cord was used to denote its value in a decimal system. He argued that quipus were not used directly for counting or calculating—as an abacus is—but rather solely to record information. Finally, he strongly believed quipu knots were used purely for numerical purposes. Locke later expanded his initial 1912 article into a full-length book, publishing The Ancient Quipu or Peruvian Knot Record through the American Museum of Natural History in 1923. Early reviews hailed the book as "the first serious attempt to elucidate the quipu mystery" and noted that "the conclusions reached by Professor Locke are very important." 
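Locke's positional model can be illustrated with a minimal sketch (the list representation and function name below are modern illustrations, not notation from Locke's work): each cluster of knots along a pendant cord contributes one decimal digit, with the cluster nearest the main cord carrying the highest place value.

```python
def decode_pendant(knot_clusters):
    """Decode a quipu pendant cord under Locke's base-10 positional model.

    `knot_clusters` gives the knot count in each cluster, ordered from the
    cluster nearest the main cord (highest place value) down to the cord's
    end (units). A count of 0 marks a position left empty on the cord.
    """
    value = 0
    for count in knot_clusters:
        value = value * 10 + count  # shift left one decimal place, add digit
    return value

# Clusters of 3, 0, and 5 knots, read from the main cord outward, record 305.
print(decode_pendant([3, 0, 5]))
```

This captures only the numerical reading Locke demonstrated; the distinct knot types he identified (overhand, figure-eight, long) additionally marked where a number ends and disambiguated the units position on the physical cord.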
In the preface to one of his own works on quipus, Erland Nordenskiöld—a leading expert in South American archaeology and anthropology of the early 20th century—praised Locke as "the founder of the modern study of the quipu". === Calculating machines === After his work on quipus, Locke became interested in the history of the calculating machine. He soon became an avid collector of these devices and amassed a collection of well over 100 items, at least one of which was thought to have been the first of its kind. Several of the rarer pieces in Locke's collection included: The first direct multiplication machine (designed by Ramon Verea in 1878) A lever-set barrel calculating machine (patented by George B. Grant in 1887) A cylindrical slide rule (invented by George Fuller in 1878) In 1939, Locke donated his large collection to the Smithsonian Institution. According to the Smithsonian, Locke had initially intended his collection to go to the Museums of the Peaceful Arts in New York, but the museum closed before he could do so. === Personal library === Over the course of his life, Locke amassed an extensive personal library reflecting his interests in mathematics and other topics. Many of the books in Locke's collection featured a personalized bookplate with a circular design surrounded by the text "EX LIBRIS" at the top and "LESLIE LELAND LOCKE" at the bottom, meaning "From the library of Leslie Leland Locke." The central image depicts a figure, possibly seated or standing, engaged in scholarly or artistic activity, with a detailed rectangular object below that may represent a table, altar, or chest. Locke's collection included notable mathematical works such as: Numerorum Mysteria (1591) by Pietro Bongo. Barlow's Tables of Squares, Cubes, Square Roots, Cube Roots, Reciprocals of All Integer Numbers up to 10,000 (1840) by Peter Barlow. 
Mathematical Tables, Automatic Arithmetic: A New System for Multiplication and Division Without Mental Labour and Without the Use of Logarithms (1878) by John Sawyer. Along with his books, Locke collected various copies of tests and examinations, including (but not limited to) a test for a fifth-grade algebra class, college entrance examinations, and New York Training School certificate examinations. He also preserved lecture notes from his time as both a student and a mathematics teacher, providing insight into early 20th-century educational practices. Locke's library also revealed his other interests beyond mathematics. Among these were Tricks With Cards (1893) by Professor Hoffmann, a book on sleight of hand card tricks, and On the Economy of Machinery and Manufactures (1832) by Charles Babbage, a treatise on industrial efficiency and the application of scientific principles to manufacturing. Following his death, Locke left his collection of books on mathematics to his alma mater, Grove City College. However, one source also notes that Locke donated "valuable early American text-books" to the University of Michigan during his lifetime. The collection given to Grove City College was later donated by the college to the Smithsonian Libraries and Archives. == Death == Locke died at his home at 950 St. John's Place in Brooklyn, New York, on August 28, 1943. Some sources describe his death as sudden, while others report that he died "after a long illness". == Selected publications == Locke, L. Leland. 1909. "Pure Mathematics." The Science-History of the Universe, 8:1–187. New York: Current Literature Publishing Company. Locke, L. Leland. 1912. "The Ancient Quipu, a Peruvian Knot Record." American Anthropologist 14 (2): 325–32. Locke, L. Leland. 1923. The Ancient Quipu or Peruvian Knot Record. American Museum of Natural History. Locke, L. Leland. 1924. "The History of Modern Calculating Machines, An American Contribution." 
The American Mathematical Monthly 31 (9): 422–29. Locke, L. Leland. 1924. "Mathematics of the Calculating Machine." The Mathematics Teacher 17 (2): 78–86. Locke, L. Leland. 1926. "The First Direct-Multiplication Machine." Typewriter Topics, November: 16–18. Locke, L. Leland. 1927. "A Peruvian Quipu." In Contributions from the Museum of the American Indian, Heye Foundation, 7: 3–11. New York: Museum of the American Indian, Heye Foundation. Locke, L. Leland. 1928. "Supplementary Notes on the Quipus in the American Museum of Natural History." Anthropological Papers of the American Museum of Natural History 30 (3): 43–73. == Notes ==
Wikipedia:Lester Skaggs#0
Lester Skaggs, Ph.D. (21 November 1911 – 3 April 2009) was a pioneer in the field of medical physics and radiation therapy, a teacher, and an innovator. == Life and times == Skaggs was born on 21 November 1911 in Trenton, Missouri. He grew up on a farm in northern Missouri, attended a one-room schoolhouse, and had to ride a horse to get to high school. Skaggs was the oldest of three children, and his father planned for him to become a farmer. Skaggs, however, had other interests: he enjoyed tinkering, designing and building contraptions, and made plans for a career in science. He attended the University of Missouri and completed a B.S. in chemistry with a minor in mathematics in 1933 and an M.S. in physics in 1934. He moved to Chicago in 1935, entered the University of Chicago and was accepted into the graduate program in nuclear physics. In 1939, Skaggs was awarded a Ph.D. in nuclear physics. At the University of Chicago, Skaggs had a post-doctoral fellowship in nuclear physics and secured part-time work in radiation oncology at the Tumor Clinic at Michael Reese Hospital. From 1941 to 1943, the war effort took him to Washington, D.C., where he served at the Carnegie Institution of Washington in the Department of Terrestrial Magnetism. Skaggs worked with physicist Nicholas Smith to design an airplane proximity detection system that utilized radio waves to locate and detonate anti-aircraft shells. == Manhattan Project == In 1943, he was sent to the Manhattan Project at Los Alamos, New Mexico, working under Robert Oppenheimer to develop the atomic bomb. At Los Alamos, Skaggs was charged with the task of adapting the anti-aircraft detection system into a failsafe "fuse" for the first bomb that would be used against Japan. From a distance of 20 miles, Skaggs witnessed the first test at Alamogordo, New Mexico. 
Skaggs immediately understood that the current plans left the bombardiers too little time to escape safely from the drop zone. He developed a fuse with two back-up systems that allowed additional time for the plane and crew to make a secure exit from the skies over Japan. == Michael Reese Hospital == When World War II ended, Skaggs returned home to Chicago and went back to work on the medical applications of radiation. He resumed his work at Michael Reese Hospital and went on assignment to the physics department at the University of Illinois to team with Donald Kerst on a physics research project, beginning what was to become a classic collaboration between the two. The goal was to extract an electron beam for medical use from a betatron, a machine Kerst had invented for physics experiments. As often happens in medical and scientific developments, serendipity—accompanied here by a dose of irony—intervened: a member of the team, a promising physics graduate student, was diagnosed with a brain tumor, a glioblastoma multiforme, for which no treatment options were then available. The first clinical use of high-energy betatron radiation for medical therapy effectively reduced the mass of the tumor, yet was not sufficient to eliminate it and provide a cure. == University of Chicago == In 1948, Skaggs accepted a faculty appointment as assistant professor of radiology at the University of Chicago. In 1949, he was promoted to associate professor with responsibility for the development of radiation therapy equipment and facilities at Argonne Cancer Research Hospital (ACRH). The Atomic Energy Commission program titled "Atoms for Peace" funded the facilities at ACRH. In 1953, the ACRH was among the initial list of hospitals dedicated to radiation therapy for cancer treatment. 
At ACRH, the next project for the Skaggs and Lanzl team was the design of a cobalt treatment unit, which was built, for the most part, in the machine shops of ACRH and the University of Illinois. In the 1950s, Skaggs and Lanzl began another project: to develop and establish a graduate program in medical physics, perhaps the first in the United States. In the 1960s, a doctorate program was launched that would award a Ph.D. degree in medical physics. In 1956, Skaggs received a promotion to full professor. He designed and built an analog computer to calculate the radiation dose to tissue, to be utilized in treatment plans for radiation therapy. The 'computer' was finally running by 1963, and its components occupied a small room. In the 1970s, Franca T. Kuchnir and Skaggs developed a method to produce neutrons for radiation therapy, at perhaps the first fast-neutron therapy facility in the United States. == References ==
Wikipedia:Leticia Brambila Paz#0
Gloria Leticia Brambila Paz (born 1953) is a Mexican mathematician specializing in algebraic geometry and the moduli of algebraic curves. She is a professor in the Centro de Investigación en Matemáticas (CIMAT) in Guanajuato, Mexico. == Education and career == Brambila was born on 26 January 1953. She went to Swansea University in the United Kingdom for doctoral study in mathematics, completing her PhD in 1986 with the dissertation Homomorphisms of Vector Bundles over Compact Riemann Surface supervised by Alan Thomas. She worked as an assistant professor at the National Autonomous University of Mexico from 1973 to 1976, and as a professor of mathematics at UAM Iztapalapa in Mexico City from 1983 to 1998, when she moved to her present position at CIMAT. She also became a life fellow of Clare Hall, Cambridge in 2011. == Recognition == Brambila was elected to the Mexican Academy of Sciences in 2001. == References ==