Wikipedia:Frederi Viens#0
Frederi G. Viens is an American statistician, mathematician, and academic. He is a professor in the Department of Statistics at Rice University, a founding member of the Diverse Rotations Improve Valuable Ecosystem Services Project, a senior research contributor to the Sustainability of Agrarian Societies in the Lake Chad Basin initiative, and moderator of the long-term scientific committee for the Seminar on Stochastic Processes conference series. Viens' primary research areas are probability theory, stochastic processes, quantitative finance, and Bayesian statistics. His research collaborations have spanned a range of areas, including climate change, agro-ecology, agricultural economics, development economics, nuclear physics, and human medicine. He was named a Franklin Fellow at the US State Department in 2010 and a Fellow of the Institute of Mathematical Statistics in 2013. His work has been published in journals including the Annals of Probability, the Annals of Statistics, and the American Journal of Agricultural Economics.

== Education ==

Viens earned a Master's degree in Mathematics from the University of California at Irvine and a Master's degree in Pure Mathematics from the University of Paris, both in 1991. He went on to complete his PhD in Mathematics at the University of California at Irvine in 1996.

== Career ==

In 1997, Viens became an assistant professor of mathematics at the University of North Texas, holding this post until 2000. In 2000, he joined Purdue University's Departments of Statistics and Mathematics as an assistant professor; he was promoted to associate professor there in 2003 and to professor in 2008. In 2016, he was appointed professor in Michigan State University's Department of Statistics and Probability. In 2022, he joined Rice University, where he has been serving as a full professor in its Department of Statistics, in the School of Engineering and Computing.
Viens was a science adviser for the Bureau of African Affairs at the US State Department from 2010 until 2011. Between 2015 and 2016, he served as a program director in the Division of Mathematical Sciences at the US National Science Foundation. He also served as chairperson of the Department of Statistics and Probability at Michigan State University between 2016 and 2020. At MSU, he was the director of the BS Program in Actuarial Science and Quantitative Risk Analysis from 2017 to 2022.

== Research ==

Viens has conducted research on stochastic processes, focusing on the existence and regularity properties of random processes in the context of stochastic differential equations; evaluated the predictive power of mathematical models to improve the quantification of nuclear binding and the nuclear saturation point; and demonstrated the consistency of statistical estimators in linear and nonlinear stochastic equations with long-memory noise. His research in quantitative finance has focused on improving the modeling of risk uncertainty in insurance and on estimating stochastic volatility for stock option pricing. His findings have offered insights for managing systemic risk in financial markets and for improving risk management and policymaking in financial sectors. His research in agricultural and development economics has advanced mathematical models for estimating economic risks, using Bayesian hierarchical modeling to analyze the impact of U.S. agricultural R&D on productivity, assess uncertainties in R&D lag structures, and explore long-term policy implications. He has also collaborated with climate scientists to reconstruct Earth's global mean surface temperature over the past two millennia, including principled uncertainty quantification using Bayesian statistics.

== Personal life ==

Viens is married to Carolyn Johnston, a history professor at Michigan State University. They have a daughter.
He and Johnston operate a small-scale sheep farm in Laingsburg, Michigan, which received a humane farming grant from the Food Animal Concerns Trust in 2019.

== Awards and honors ==

2010 – Franklin Fellow, U.S. Department of State
2013 – Fellow, Institute of Mathematical Statistics
2013 – Inaugural College of Science Research Award, Purdue University
2021 – IMS Annals Quadfecta Twenty-Three Recognition, Institute of Mathematical Statistics
2024 – Wolfson Fellowship Senior Investigator, British Academy

== Bibliography ==

=== Book ===

David, Claire; Mustapha, Sami; Viens, Frederi; Capron, Nathalie (2014). Mathématiques Pour Les Sciences de La Vie - Tout Le Cours En Fiches: 140 Fiches de Cours, 200 Exercices Corrigés Et Exemples D'Applications (in French). Dunod. ISBN 978-2-10-059977-6.

=== Selected articles ===

Tindel, S.; Tudor, C.A.; Viens, F. (October 2003). "Stochastic evolution equations with fractional Brownian motion". Probability Theory and Related Fields. 127 (2): 186–204. doi:10.1007/s00440-003-0282-2.
Tudor, Ciprian A.; Viens, Frederi G. (July 2007). "Statistical aspects of the fractional stochastic calculus". The Annals of Statistics. 35 (3). doi:10.1214/009053606000001541.
Chronopoulou, Alexandra; Viens, Frederi G. (May 2012). "Estimation and pricing under long-memory stochastic volatility". Annals of Finance. 8 (2–3): 379–403. doi:10.1007/s10436-010-0156-4.
Barboza, Luis; Li, Bo; Tingley, Martin P.; Viens, Frederi G. (December 2014). "Reconstructing past temperatures from natural proxies and estimated climate forcings using short- and long-memory models". The Annals of Applied Statistics. 8 (4). doi:10.1214/14-AOAS785.
Neufcourt, Léo; Cao, Yuchen; Nazarewicz, Witold; Viens, Frederi (24 September 2018). "Bayesian approach to model-based extrapolation of nuclear observables". Physical Review C. 98 (3): 034318. arXiv:1806.00552. Bibcode:2018PhRvC..98c4318N. doi:10.1103/PhysRevC.98.034318.

== References ==
Wikipedia:Frederic Wan#0
Frederic Yui-Ming Wan is a Chinese-American applied mathematician, academic, author, and consultant. He is a Professor Emeritus of Mathematics at the University of California, Irvine (UCI), and an Affiliate Professor of Applied Mathematics at the University of Washington (UW). Wan is best known for his research in applied mathematics, theoretical mechanics, resource economics, and biomathematics. He is the author of more than 150 archival journal research publications and six books. These, together with some of his educational and service programs, have been recognized by his election as a Fellow of the American Academy of Mechanics (AAM), the American Society of Mechanical Engineers (ASME), the American Association for the Advancement of Science (AAAS), and the Society for Industrial and Applied Mathematics (SIAM). There are two lecture series (at UCI and UW, respectively) in honor of him and his wife Julia, and a conference room in his name in Lewis Hall at UW, which houses the Department of Applied Mathematics.

== Early life and education ==

Wan was born in 1936 in Shanghai, China, to Olga Jung Wan and Wai-nam Wan. His parents relocated to Paris, France, that same year to work in the Chinese Embassy, while Wan grew up in the care of his grandparents and went to school in Saigon and Cholon before leaving for Seattle in 1954 as a derived citizen of an American mother. Wan graduated from Garfield High School of Seattle in 1955 and headed for undergraduate study at the Massachusetts Institute of Technology (MIT). In his freshman year, Wan pledged and was initiated into the Theta Deuteron chapter of the Theta Delta Chi fraternity at MIT. He received an S.B. degree in mathematics in 1959 and earned his S.M. and Ph.D. degrees in mathematics at the same institute in 1963 and 1965, respectively. His doctoral dissertation, "Twisting and Stretching of Helicoidal Shells", was supervised by E. Reissner.
== Career ==

After receiving his SB at MIT, Wan served as a Research Staff Member at the MIT Lincoln Laboratory from 1959 to 1965. Upon receiving his Ph.D. in mathematics, he held a postdoctoral appointment as an instructor of mathematics at MIT, and was promoted to assistant professor of Applied Mathematics in 1967 and to associate professor in 1969. From 1974 until 1983, he served as a professor of mathematics at the University of British Columbia (UBC), but left in 1983 for the University of Washington (UW) as Professor of Applied Mathematics. In 1995, he moved to the University of California, Irvine (UCI), as a Professor of Mathematics with a joint appointment as Professor of Mechanical and Aerospace Engineering. From 1999 until 2005, he also held an appointment as Professor of Civil and Environmental Engineering. He retired from his regular faculty position at UCI in 2017 and became Professor Emeritus of Mathematics.

Along with his academic appointments, Wan has held a number of administrative positions in his career. On relocating to UBC in 1974, Wan also accepted appointment as the first Director of the new Institute of Applied Mathematics and Statistics. While in Canada, he helped establish the Canadian Applied Mathematics Society and served as its President in 1981–83. He also served as a member (1980–82) and Chair (1982–83) of the Committee of Pure and Applied Mathematics of the Natural Sciences and Engineering Research Council (NSERC), the counterpart of the American National Science Foundation (NSF). In 1983, Wan moved to the University of Washington as the founding chair of its new Department of Applied Mathematics. In 1988, he assumed the Divisional Deanship of the Natural and Mathematical Sciences of the College of Arts and Sciences at UW and served until his temporary assignment in 1992 as Director of the Division of Mathematical Sciences of the NSF.
By assuming that position, Wan became the only person to have headed the civilian government funding agency for basic research in pure and applied mathematics in both Canada and the United States. In 1995, he was appointed Vice Chancellor for Research and Dean of Graduate Studies at UCI. Upon completing his five-year term in these appointments and returning to full-time faculty status in Mathematics in 2000, Wan led a team of research collaborators to develop an Interdisciplinary Gateway Graduate Program in Mathematical and Computational Biology (MCB) in 2007. He then extended similar educational opportunities to undergraduates (through the MCB for Undergraduate Program in 2011) and post-doctoral researchers (through a National Short Course on Systems Biology in 2010). He served as the Founding Director of these programs with funding support from the Howard Hughes Medical Institute (HHMI), the National Institutes of Health (NIH), and the National Science Foundation (NSF). Wan has been engaged periodically as a consultant to industry and government on new problems involving mechanical structures. The most impactful of these was a design study leading to the flexible lid that provides an air-tight seal for the household Tupperware container.

== Research ==

Wan's research interests span fields of applied mathematics and their applications. He has more than 150 research articles and five books in the fields of theoretical and applied mechanics, resource economics, neurosciences, viral dynamics, and developmental and cell biology. His first publications on exhaustible resource economics (with R. M. Solow), the foundations of plate and shell theory (with R. D. Gregory), and morphogen transport (with A. D. Lander and Q. Nie) constitute his seminal work in these three areas, respectively.
=== Applied mathematics ===

In the broad area of applied mathematics, Wan's research generally pertains to variational and perturbation methods for exact and asymptotic solutions of problems in ordinary and partial differential equations. By applying matched asymptotic expansion methods to residential land use and fishery problems, critical-point analysis to exhaustible resource management, and optimal control techniques to forest harvest rotations and the life cycle of the infectious Chlamydia bacterium, he introduced a number of mathematical methods originally developed for the physical sciences and engineering into scientific areas outside these fields.

=== Theoretical and applied mechanics ===

Wan's interest in theoretical mechanics started with his work on the large Haystack 37m Radio Antenna Project at the Lincoln Laboratory of MIT, determining the distortion of a thin paraboloidal shell of revolution. His research in this area centers on the elasto-statics of beam, plate, and shell structures: the solution of specific boundary value problems (BVPs) by one- and two-dimensional theories for these three-dimensional structures, and how they relate to the three-dimensional theory of elasticity (the latter known as the foundations of beam, plate, and shell theories). A principal contribution in the former category is the reduction of a large number of equations (as many as 30) for shell theories to two simultaneous equations of the Reissner and Marguerre types for the more restricted theories of axisymmetric deformations of shells of revolution and shallow shells. A fundamental contribution in the latter category is the determination of the interior (or outer asymptotic expansion) solution for a BVP (for a plate or shell structure) independently of the boundary layer solution components of the corresponding exact solution. While the result reduces to the well-known Saint-Venant's principle for stress BVPs, his work (jointly with R.D.
Gregory) extends this well-known principle to cover problems with mixed and pure displacement boundary data.

=== Mathematical life sciences ===

Wan had worked with the economist Robert M. Solow on the economics of exhaustible resources while at MIT. That area of his activities was broadened to include the economics and management of renewable resources, such as fishery and forestry, as well as neuroscience, when he relocated to UBC. After relocating to UCI, Wan committed fully to research and education in mathematical projects for the life sciences. He initiated research collaborations with Qing Nie of Mathematics and Arthur Lander and J. Lawrence Marsh of Developmental and Cell Biology. Their initial project addressed a controversy over diffusion as a transport mechanism for morphogens in the extracellular space, leading to the first of his many publications in this area of the life sciences. Among his other publications in mathematical biology, a recent work relating (by the Maximum Principle in optimal control theory) the life cycle of the infectious disease Chlamydia to the bacteria's drive for competitive survival shows the utility of optimal control theory in the bio-theoretics of natural selection.

== Books ==

Wan is the author of six books. His book Mathematical Models and Their Analysis, originally published in 1989, was reviewed by W.J. Satzer, who wrote that "One of the real strengths of the book is the depth of experience teaching mathematical modeling that the author displays." The book was republished in the SIAM Classics in Applied Mathematics series in 2018. In his 1995 book Introduction to the Calculus of Variations and Its Applications, Wan provides a detailed introduction to the calculus of variations and optimal control. R.
Grinnell is of the view that "the author [of the book] is [a] distinguished applied mathematician and his experience in pedagogy is realized through a style of exposition that is lively, personable, and very clear. This is definitely a book to be read and enjoyed." At the end of a lengthy review of Dynamical System Models in the Life Sciences and Their Underlying Scientific Issues, reviewer J. Ibbotson found himself "... somewhat 'out of breath' at the finish line — this is a compressed and driven journey through a large amount of mathematics." Wan's 2019 book Stochastic Models in the Life Sciences and their Methods of Analysis was described by a review for CHOICE as "impressively accessible" and "approachable for biologists at all levels, including those interested in deepening their skills in mathematical modeling and those who seek an overview to aid them in communicating with collaborators in mathematics and statistics." His latest book, Spatial Dynamics Models in the Life Sciences and the Role of Feedback in Robust Developments, was published by World Scientific in 2023.

== Personal life ==

Wan has been married to Julia Y.S. Chang since 1960.
== Awards and honors ==

1981 – Fellow, American Academy of Mechanics (AAM) (elected Secretary of the Fellows in 1984 and President of the Academy in 1992)
1987 – Fellow, American Society of Mechanical Engineers (ASME)
1991 – The Arthur Beaumont Distinguished Service Award, Canadian Applied Mathematics Society
1994 – Certificate of Recognition, National Science Foundation
1994 – Silver Anniversary Honor for Service as Academy President, American Academy of Mechanics
1995 – Fellow, American Association for the Advancement of Science (AAAS)
1999 – Foreign Member, Russian Academy of Natural Sciences
2004 – Visiting Chair Professorship, Zhou Pei-Yuan Center for Applied Mathematics, Tsinghua University, Beijing
2004 – Teaching Excellence, Division of Undergraduate Education (DUE), UCI
2005–2010 – President of UCI Chapter, Sigma Xi (elected Associate Member in 1963 and Member in 1965, MIT Chapter)
2006 – UCI Chancellor's Award for Excellence in Fostering Undergraduate Research
2006 – Outstanding Contributions to Undergraduate Education award, School of Physical Sciences, UCI
2010 – Fellow, Society for Industrial and Applied Mathematics (SIAM)

== Bibliography ==

=== Selected articles ===

Solow, R. M., & Wan, F. Y. (1976). Extraction costs in the theory of exhaustible resources. The Bell Journal of Economics, 359–370.
Lander, A. D., Nie, Q., & Wan, F. Y. (2002). Do morphogen gradients arise by diffusion? Developmental Cell, 2(6), 785–796.
Mizutani, C. M., Nie, Q., Wan, F. Y., Zhang, Y. T., Vilmos, P., Sousa-Neves, R., ... & Lander, A. D. (2005). Formation of the BMP activity gradient in the Drosophila embryo. Developmental Cell, 8(6), 915–924.
Lo, W. C., Chou, C. S., Gokoffski, K. K., Wan, F. Y. M., Lander, A. D., Calof, A. L., & Nie, Q. (2009). Feedback regulation in multistage cell lineages. Mathematical Biosciences and Engineering: MBE, 6(1), 59.
Lander, A. D., Gokoffski, K. K., Wan, F. Y. M., Nie, Q., & Calof, A. L. (2009).
Cell lineages and the logic of proliferative control. PLoS Biology, 7(1), e1000015.
Enciso, G. A., Sütterlin, C., Tan, M., & Wan, F. Y. M. (2022). Stochastic Chlamydia dynamics and optimal spread. Bulletin of Mathematical Biology, 83: 24.

== References ==
Wikipedia:Frederick Lincoln Emory#0
Frederick Lincoln Emory (April 10, 1867 – December 31, 1919) was an American college football coach and professor of mechanics and applied mathematics. He served as the first head football coach at West Virginia University, coaching one game in 1891. The single game that he coached was played on November 28, 1891, against Washington and Jefferson. The West Virginia Mountaineers lost by a score of 72 to 0, the second-worst loss in the history of the program. Emory died in 1919 from heart-related problems.

== Head coaching record ==

== References ==

== External links ==

Frederick Lincoln Emory at Find a Grave
Wikipedia:Frederick Valentine Atkinson#0
Frederick Valentine "Derick" Atkinson (25 January 1916 – 13 November 2002) was a British mathematician, formerly of the University of Toronto, Canada, where he spent most of his career. Atkinson's theorem and the Atkinson–Wilcox theorem are named after him. His PhD advisor at Oxford was Edward Charles Titchmarsh.

== Early life and education ==

The following synopsis is condensed (with permission) from Mingarelli's tribute to Atkinson. He attended St Paul's School, London, from 1929 to 1934. The High Master of St Paul's once wrote of Atkinson: "Extremely promising: He should make a brilliant mathematician"! Atkinson entered The Queen's College, Oxford, on a scholarship in 1934. During his stay at Queen's, he was secretary of the Chinese Student Society and a member of the Indian Student Society. An autodidact when it came to languages, he taught himself Latin, Ancient Greek, Urdu, German, Hungarian, and Russian to fluency, with some proficiency in Spanish, Italian, and French. His dissertation at Oxford in 1939 established, among other such results, asymptotic formulae for the average value of the square of the Riemann zeta function on the critical line. His final examining board at Oxford consisted of G.H. Hardy, J.E. Littlewood, and E.C. Titchmarsh.

== Career ==

His first academic appointment was at Magdalen College, Oxford, in 1939–1940, followed by a commission (1940) in the Government Code and Cypher School at Bletchley Park. At this time he met Dusja Haas, later to become his wife. He then took a position as Lecturer at Christ Church, Oxford. From 1948 to 1955 he was Full Professor in Mathematics (Chair, and Dean of Arts) at University College, Ibadan, in Nigeria. He joined Canberra University College (now part of the Australian National University) in 1955 as Head of its Department of Mathematics.
He left for the University of Toronto, in Toronto, Canada, in 1960, where he was Professor until his retirement in 1982 and Professor Emeritus until his death in 2002.

== Honours ==

His honours include:

Fellow of the Royal Society of Canada (1967)
U.K. Science Research Council Visiting Fellow at the University of Dundee and at the University of Sussex (1970)
British Council Lecturer to U.K. universities (1973)
Honorary Fellow of the Royal Society of Edinburgh (1975)
Royal Society of Edinburgh's Makdougall-Brisbane Prize (1974–1976)
29th President of the Canadian Mathematical Society (1989–1991)
Winner of an Alexander von Humboldt Research Award (1992)

== Bibliography ==

Atkinson was the author of three books (one of them posthumous, with Angelo B. Mingarelli) and more than 130 papers. He is best remembered for his classic text "Discrete and Continuous Boundary Problems" (1964) and for his seminal contributions to differential equations.

== External links ==

O'Connor, John J.; Robertson, Edmund F., "Frederick Valentine Atkinson", MacTutor History of Mathematics Archive, University of St Andrews
Frederick (Derick) Valentine Atkinson, by Angelo B. Mingarelli (archived 1 June 2016 at the Wayback Machine)
A glimpse into the life and times of F.V. Atkinson, by Angelo B. Mingarelli

== References ==
Wikipedia:Fredholm alternative#0
In mathematics, the Fredholm alternative, named after Ivar Fredholm, is one of Fredholm's theorems and is a result in Fredholm theory. It may be expressed in several ways: as a theorem of linear algebra, as a theorem about integral equations, or as a theorem on Fredholm operators. Part of the result states that a non-zero complex number in the spectrum of a compact operator is an eigenvalue.

== Linear algebra ==

If V is an n-dimensional vector space and T : V → V is a linear transformation, then exactly one of the following holds:

For each vector v in V there is a vector u in V such that T(u) = v. In other words, T is surjective (and so also bijective, since V is finite-dimensional).
dim(ker(T)) > 0.

A more elementary formulation, in terms of matrices, is as follows. Given an m×n matrix A and an m×1 column vector b, exactly one of the following must hold:

Either: Ax = b has a solution x.
Or: Aᵀy = 0 has a solution y with yᵀb ≠ 0.

In other words, Ax = b has a solution (b ∈ Im(A)) if and only if, for any y such that Aᵀy = 0, it follows that yᵀb = 0 (i.e., b ∈ ker(Aᵀ)⊥).

== Integral equations ==

Let K(x, y) be an integral kernel, and consider the homogeneous equation, the Fredholm integral equation,

λφ(x) − ∫_a^b K(x, y) φ(y) dy = 0,

and the inhomogeneous equation

λφ(x) − ∫_a^b K(x, y) φ(y) dy = f(x).

The Fredholm alternative is the statement that, for every non-zero fixed complex number λ ∈ ℂ, either the first equation has a non-trivial solution, or the second equation has a solution for every f(x). A sufficient condition for this statement to be true is for K(x, y) to be square-integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity). The integral operator defined by such a K is called a Hilbert–Schmidt integral operator.

== Functional analysis ==

Results about Fredholm operators generalize these results to complete normed vector spaces of infinite dimension, that is, Banach spaces. The integral equation can be reformulated in operator notation as follows. Write (somewhat informally)

T = λ − K

to mean

T(x, y) = λ δ(x − y) − K(x, y),

with δ(x − y) the Dirac delta function, considered as a distribution (generalized function) in two variables. Then, by convolution, T induces a linear operator V → V on a Banach space V of functions φ(x), given by φ ↦ ψ with

ψ(x) = ∫_a^b T(x, y) φ(y) dy = λφ(x) − ∫_a^b K(x, y) φ(y) dy.

In this language, the Fredholm alternative for integral equations is seen to be analogous to the Fredholm alternative for finite-dimensional linear algebra.
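The matrix form of the alternative is easy to probe numerically. The NumPy sketch below is illustrative only; the helper name `fredholm_branch` and the example matrix are invented here. It checks both branches for a singular 2×2 system: exactly one of them holds for each right-hand side.

```python
import numpy as np

def fredholm_branch(A, b, tol=1e-10):
    """For a square system A x = b, report (solvable, obstruction), where
    'solvable' means b lies in Im(A), and 'obstruction' means some
    y in ker(A^T) has y^T b != 0.  The alternative says exactly one is True."""
    # b is in Im(A) iff the least-squares residual vanishes.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    solvable = bool(np.linalg.norm(A @ x - b) < tol)
    # Left singular vectors of A with (numerically) zero singular value span ker(A^T).
    U, s, _ = np.linalg.svd(A)
    kernel_AT = U[:, s < tol]
    obstruction = bool(kernel_AT.size and np.linalg.norm(kernel_AT.T @ b) > tol)
    return solvable, obstruction

# Rank-1 singular matrix, so ker(A^T) is non-trivial.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(fredholm_branch(A, np.array([1.0, 2.0])))  # b in Im(A): (True, False)
print(fredholm_branch(A, np.array([1.0, 0.0])))  # b not in Im(A): (False, True)
```

For an invertible A the first branch holds for every b, since ker(Aᵀ) is trivial; the dichotomy only becomes interesting for singular (or, in infinite dimensions, non-surjective) operators.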
The operator K given by convolution with an L² kernel, as above, is known as a Hilbert–Schmidt integral operator. Such operators are always compact. More generally, the Fredholm alternative is valid whenever K is any compact operator. It may then be restated in the following form: a nonzero λ is either an eigenvalue of K, or it lies in the domain of the resolvent

R(λ; K) = (K − λ Id)⁻¹.

== Elliptic partial differential equations ==

The Fredholm alternative can be applied to solving linear elliptic boundary value problems. The basic result is: if the equation and the appropriate Banach spaces have been set up correctly, then either

(1) the homogeneous equation has a nontrivial solution, or
(2) the inhomogeneous equation can be solved uniquely for each choice of data.

The argument goes as follows. A typical simple-to-understand elliptic operator L would be the Laplacian plus some lower-order terms. Combined with suitable boundary conditions and expressed on a suitable Banach space X (which encodes both the boundary conditions and the desired regularity of the solution), L becomes an unbounded operator from X to itself, and one attempts to solve

Lu = f,  u ∈ dom(L) ⊆ X,

where f ∈ X is some function serving as data for which we want a solution. The Fredholm alternative, together with the theory of elliptic equations, enables us to organize the solutions of this equation.
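The compact-operator restatement can be checked numerically by discretizing a concrete Hilbert–Schmidt kernel and verifying that a λ away from the spectrum lies in the domain of the resolvent. In this sketch the kernel K(x, y) = min(x, y) and all numerical choices are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Discretize the Hilbert-Schmidt kernel K(x, y) = min(x, y) on [0, 1]
# with the midpoint rule; the matrix (h * K) approximates the integral operator.
n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
K = np.minimum.outer(x, x) * h

# The nonzero eigenvalues cluster near 0; lam = 1 is well away from all of them.
eigs = np.linalg.eigvals(K)
lam = 1.0
assert np.min(np.abs(eigs - lam)) > 1e-3

# Since lam is not an eigenvalue, lam*phi - K phi = f is solvable for every f.
f = np.sin(np.pi * x)
phi = np.linalg.solve(lam * np.eye(n) - K, f)
print(np.linalg.norm(lam * phi - K @ phi - f))  # ~ 0
```

The same linear solve would fail (the matrix becomes nearly singular) if lam were moved onto one of the eigenvalues, which is exactly the first branch of the alternative.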
A concrete example would be an elliptic boundary-value problem like

(∗)  Lu := −Δu + h(x)u = f  in Ω,

supplemented with the boundary condition

(∗∗)  u = 0  on ∂Ω,

where Ω ⊆ ℝⁿ is a bounded open set with smooth boundary and h(x) is a fixed coefficient function (a potential, in the case of a Schrödinger operator). The function f ∈ X is the variable data for which we wish to solve the equation. Here one would take X to be the space L²(Ω) of all square-integrable functions on Ω, and dom(L) is then the Sobolev space W^{2,2}(Ω) ∩ W_0^{1,2}(Ω), which amounts to the set of all square-integrable functions on Ω whose weak first and second derivatives exist and are square-integrable, and which satisfy a zero boundary condition on ∂Ω. If X has been selected correctly (as it has in this example), then for μ₀ ≫ 0 the operator L + μ₀ is positive, and, employing elliptic estimates, one can prove that L + μ₀ : dom(L) → X is a bijection and that its inverse is a compact, everywhere-defined operator K from X to X with image equal to dom(L). We fix one such μ₀, but its value is not important, as it is only a tool. We may then transform the Fredholm alternative, stated above for compact operators, into a statement about the solvability of the boundary-value problem (∗)–(∗∗). The Fredholm alternative, as stated above, asserts: for each λ ∈ ℝ, either λ is an eigenvalue of K, or the operator K − λ is bijective from X to itself.

Let us explore the two alternatives as they play out for the boundary-value problem. Suppose λ ≠ 0. Then either

(A) λ is an eigenvalue of K ⇔ there is a solution h ∈ dom(L) of (L + μ₀)h = λ⁻¹h ⇔ −μ₀ + λ⁻¹ is an eigenvalue of L, or
(B) the operator K − λ : X → X is a bijection ⇔ (K − λ)(L + μ₀) = Id − λ(L + μ₀) : dom(L) → X is a bijection ⇔ L + μ₀ − λ⁻¹ : dom(L) → X is a bijection.
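The dichotomy for the boundary-value problem can be illustrated in one dimension, where L = −d²/dx² on (0, 1) with zero Dirichlet data has eigenvalues k²π². The following finite-difference sketch (grid size and test data are arbitrary choices, not from the article) shows (L − λ) invertible for λ off the spectrum and nearly singular at λ ≈ π²:

```python
import numpy as np

# Discretize L = -d^2/dx^2 on (0, 1) with zero Dirichlet boundary values,
# using central differences on a uniform interior grid.
n = 400
hstep = 1.0 / (n + 1)
x = np.linspace(hstep, 1.0 - hstep, n)
main = 2.0 * np.ones(n) / hstep**2
off = -1.0 * np.ones(n - 1) / hstep**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

f = x * (1.0 - x)  # some data

# lam = 5 is not an eigenvalue (the spectrum is near (k*pi)^2 = 9.87, 39.5, ...),
# so the inhomogeneous equation (L - lam) u = f has a unique solution.
lam = 5.0
u = np.linalg.solve(L - lam * np.eye(n), f)
print(np.linalg.norm((L - lam * np.eye(n)) @ u - f))  # ~ 0

# At lam close to pi^2 the homogeneous equation has the nontrivial solution
# sin(pi x), and the matrix becomes nearly singular.
lam_eig = np.pi**2
print(np.linalg.cond(L - lam_eig * np.eye(n)))  # very large
```

The blow-up of the condition number at λ ≈ π² is the discrete shadow of branch (A); for every other λ the solve succeeds, which is branch (B).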
Replacing −μ₀ + λ⁻¹ by λ, and treating the case λ = −μ₀ separately, this yields the following Fredholm alternative for an elliptic boundary-value problem: for each λ ∈ ℝ, either the homogeneous equation (L − λ)u = 0 has a nontrivial solution, or the inhomogeneous equation (L − λ)u = f possesses a unique solution u ∈ dom(L) for each given datum f ∈ X. The latter function u solves the boundary-value problem (∗)–(∗∗) introduced above. This is the dichotomy claimed in (1)–(2) above. By the spectral theorem for compact operators, one also obtains that the set of λ for which solvability fails is a discrete subset of ℝ (the eigenvalues of L). The eigenfunctions associated with these eigenvalues can be thought of as "resonances" that block the solvability of the equation.

== See also ==

Spectral theory of compact operators
Farkas' lemma

== References ==

Fredholm, E. I. (1903). "Sur une classe d'équations fonctionnelles". Acta Mathematica. 27: 365–390. doi:10.1007/bf02421317.
Ramm, A. G. (2001). "A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators". American Mathematical Monthly. 108: 855.
Khvedelidze, B. V. (2001) [1994]. "Fredholm theorems". Encyclopedia of Mathematics. EMS Press.
"Fredholm alternative". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Wikipedia:Fredholm's theorem#0
In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of Fredholm operators on Banach spaces. The Fredholm alternative is one of the Fredholm theorems.

== Linear algebra ==

Fredholm's theorem in linear algebra is as follows: if M is a matrix, then the orthogonal complement of the row space of M is the null space of M:

(row M)⊥ = ker M.

Similarly, the orthogonal complement of the column space of M is the null space of the adjoint:

(col M)⊥ = ker M*.

== Integral equations ==

Fredholm's theorem for integral equations is expressed as follows. Let K(x, y) be an integral kernel, and consider the homogeneous equation

∫_a^b K(x, y) φ(y) dy = λφ(x)

and its complex adjoint

∫_a^b ψ(x) K̄(x, y) dx = λ̄ψ(y).

Here, λ̄ denotes the complex conjugate of the complex number λ, and similarly K̄(x, y) denotes the complex conjugate of K(x, y). Then Fredholm's theorem states that, for any fixed value of λ, these equations have either only the trivial solution ψ(x) = φ(x) = 0 or the same number of linearly independent solutions φ₁(x), …, φₙ(x) and ψ₁(y), …, ψₙ(y).
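In the matrix case, the equal counts of independent solutions follow from rank(M) = rank(M*). This can be checked with a short NumPy sketch; the rank-deficient test matrix and all numerical choices here are illustrative assumptions:

```python
import numpy as np

def null_dim(A, tol=1e-10):
    """Number of (numerically) zero singular values = dimension of the null space."""
    return int(np.sum(np.linalg.svd(A, compute_uv=False) < tol))

# A non-symmetric 5x5 matrix of rank 3, playing the role of the kernel K.
rng = np.random.default_rng(0)
K = (rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))).astype(complex)

lam = 0.0  # 0 is an eigenvalue, since K is singular
n_hom = null_dim(K - lam * np.eye(5))                    # K phi = lam * phi
n_adj = null_dim(K.conj().T - np.conj(lam) * np.eye(5))  # adjoint equation
print(n_hom, n_adj)  # the two counts agree
```

The agreement is not an accident of this example: for any square matrix, the null spaces of K − λ Id and of its conjugate transpose minus λ̄ Id have the same dimension, which is the finite-dimensional content of the theorem.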
A sufficient condition for this theorem to hold is for K ( x , y ) {\displaystyle K(x,y)} to be square integrable on the rectangle [ a , b ] × [ a , b ] {\displaystyle [a,b]\times [a,b]} (where a and/or b may be minus or plus infinity). Here, the integral is expressed as a one-dimensional integral on the real number line. In Fredholm theory, this result generalizes to integral operators on multi-dimensional spaces, including, for example, Riemannian manifolds. == Existence of solutions == One of Fredholm's theorems, closely related to the Fredholm alternative, concerns the existence of solutions to the inhomogeneous Fredholm equation λ ϕ ( x ) − ∫ a b K ( x , y ) ϕ ( y ) d y = f ( x ) . {\displaystyle \lambda \phi (x)-\int _{a}^{b}K(x,y)\phi (y)\,dy=f(x).} Solutions to this equation exist if and only if the function f ( x ) {\displaystyle f(x)} is orthogonal to the complete set of solutions { ψ n ( x ) } {\displaystyle \{\psi _{n}(x)\}} of the corresponding homogeneous adjoint equation: ∫ a b ψ n ( x ) ¯ f ( x ) d x = 0 {\displaystyle \int _{a}^{b}{\overline {\psi _{n}(x)}}f(x)\,dx=0} where ψ n ( x ) ¯ {\displaystyle {\overline {\psi _{n}(x)}}} is the complex conjugate of ψ n ( x ) {\displaystyle \psi _{n}(x)} and the former is one of the complete set of solutions to λ ψ ( y ) ¯ − ∫ a b ψ ( x ) ¯ K ( x , y ) d x = 0. {\displaystyle \lambda {\overline {\psi (y)}}-\int _{a}^{b}{\overline {\psi (x)}}K(x,y)\,dx=0.} A sufficient condition for this theorem to hold is for K ( x , y ) {\displaystyle K(x,y)} to be square integrable on the rectangle [ a , b ] × [ a , b ] {\displaystyle [a,b]\times [a,b]} . == References == E.I. Fredholm, "Sur une classe d'equations fonctionnelles", Acta Math., 27 (1903) pp. 365–390. Weisstein, Eric W. "Fredholm's Theorem". MathWorld. B.V. Khvedelidze (2001) [1994], "Fredholm theorems", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Fredrik Lange-Nielsen#0
|
Fredrik Lange-Nielsen (13 May 1891 – 16 May 1980) was a Norwegian mathematician and insurance company manager. He chaired the Norwegian Students' Society, edited Norsk matematisk Tidsskrift, and lectured at the University of Oslo. He was chief executive of the insurance company Norske Liv for nearly twenty years, was an elected member of several governmental commissions, and was a member of the Norwegian Academy for Language and Literature from its establishment in 1953. == Personal life == Lange-Nielsen was born in Eivindvik in Gulen, the son of physician Johan Fredrik Nielsen and Christine Lange. He married Laura Stang Lund in 1918. He was the father of judge Trygve Lange-Nielsen, and father-in-law of novelist Sissel Lange-Nielsen. He died in Oslo in 1980. == Career == Lange-Nielsen finished his secondary education in 1908, and attended the Norwegian Military Academy in 1909. He actively took part in social academic life. He was among the editors of the Norwegian Students' Society's magazine Samfundsbladet, and he was a board member of the society in the autumn of 1913, at the society's Centennial Anniversary. He chaired the Norwegian Students' Society in 1916. He graduated candidatus realium in 1917, and subsequently studied mathematics in Lund and in Paris. He was manager of the statistics department of De norske Livsforsikringsselskaper from 1920 to 1938, and edited the journal Norsk matematisk Tidsskrift from 1924 to 1929. He also lectured in mathematical subjects at the University of Oslo. From 1938 to 1945 he worked for the Norwegian Public Service Pension Fund, but was arrested by the Germans in December 1941 because of his participation in the Norwegian resistance movement, and did not return to the Pension Fund until 1945. He was imprisoned at Møllergata 19, and then incarcerated at the Grini concentration camp from January 1942 to May 1943. 
While at Grini, he took part in what has been described as the first actual political discussions in the camp, along with Erling Bühring-Dehli, Olaf Solumsmoen and others. From 1945 to 1964 he was chief executive of the insurance company Norske Liv. He was elected to several governmental commissions, including Pensjonslovkomiteen of 1935, Krigpensjoneringsutvalget of 1940, and Livsforsikringskomiteen of 1947. He was among the first members of the Norwegian Academy for Language and Literature from its foundation in 1953. He was awarded the Grand Cross of Den Gyldne Gris, and was decorated as a Knight, First Class of the Order of St. Olav, Knight of the Danish Order of Dannebrog, and Knight of the Swedish Order of the Polar Star. == References ==
|
Wikipedia:Free object#0
|
In mathematics, the idea of a free object is one of the basic concepts of abstract algebra. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure. Examples include free groups, tensor algebras, or free lattices. The concept is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms. == Definition == Free objects are the direct generalization to categories of the notion of basis in a vector space. A linear function u : E1 → E2 between vector spaces is entirely determined by its values on a basis of the vector space E1. The following definition translates this to any category. A concrete category is a category that is equipped with a faithful functor to Set, the category of sets. Let C be a concrete category with a faithful functor U : C → Set. Let X be a set (that is, an object in Set), which will be the basis of the free object to be defined. A free object on X is a pair consisting of an object A {\displaystyle A} in C and an injection i : X → U ( A ) {\displaystyle i:X\to U(A)} (called the canonical injection), that satisfies the following universal property: For any object B in C and any map between sets g : X → U ( B ) {\displaystyle g:X\to U(B)} , there exists a unique morphism f : A → B {\displaystyle f:A\to B} in C such that g = U ( f ) ∘ i {\displaystyle g=U(f)\circ i} . That is, the following diagram commutes: If free objects exist in C, the universal property implies every map between two sets induces a unique morphism between the free objects built on them, and this defines a functor F : S e t → C {\displaystyle F:\mathbf {Set} \to \mathbf {C} } . 
It follows that, if free objects exist in C, the functor F, called the free functor is a left adjoint to the faithful functor U; that is, there is a bijection Hom S e t ( X , U ( B ) ) ≅ Hom C ( F ( X ) , B ) . {\displaystyle \operatorname {Hom} _{\mathbf {Set} }(X,U(B))\cong \operatorname {Hom} _{\mathbf {C} }(F(X),B).} == Examples == The creation of free objects proceeds in two steps. For algebras that conform to the associative law, the first step is to consider the collection of all possible words formed from an alphabet. Then one imposes a set of equivalence relations upon the words, where the relations are the defining relations of the algebraic object at hand. The free object then consists of the set of equivalence classes. Consider, for example, the construction of the free group in two generators. One starts with an alphabet consisting of the five letters { e , a , b , a − 1 , b − 1 } {\displaystyle \{e,a,b,a^{-1},b^{-1}\}} . In the first step, there is not yet any assigned meaning to the "letters" a − 1 {\displaystyle a^{-1}} or b − 1 {\displaystyle b^{-1}} ; these will be given later, in the second step. Thus, one could equally well start with the alphabet in five letters that is S = { a , b , c , d , e } {\displaystyle S=\{a,b,c,d,e\}} . In this example, the set of all words or strings W ( S ) {\displaystyle W(S)} will include strings such as aebecede and abdc, and so on, of arbitrary finite length, with the letters arranged in every possible order. In the next step, one imposes a set of equivalence relations. The equivalence relations for a group are that of multiplication by the identity, g e = e g = g {\displaystyle ge=eg=g} , and the multiplication of inverses: g g − 1 = g − 1 g = e {\displaystyle gg^{-1}=g^{-1}g=e} . 
Applying these relations to the strings above, one obtains a e b e c e d e = a b a − 1 b − 1 , {\displaystyle aebecede=aba^{-1}b^{-1},} where it was understood that c {\displaystyle c} is a stand-in for a − 1 {\displaystyle a^{-1}} , and d {\displaystyle d} is a stand-in for b − 1 {\displaystyle b^{-1}} , while e {\displaystyle e} is the identity element. Similarly, one has a b d c = a b b − 1 a − 1 = e . {\displaystyle abdc=abb^{-1}a^{-1}=e.} Denoting the equivalence relation or congruence by ∼ {\displaystyle \sim } , the free object is then the collection of equivalence classes of words. Thus, in this example, the free group in two generators is the quotient F 2 = W ( S ) / ∼ . {\displaystyle F_{2}=W(S)/\sim .} This is often written as F 2 = W ( S ) / E {\displaystyle F_{2}=W(S)/E} where W ( S ) = { a 1 a 2 … a n | a k ∈ S ; n ∈ N } {\displaystyle W(S)=\{a_{1}a_{2}\ldots a_{n}\,\vert \;a_{k}\in S\,;\,n\in \mathbb {N} \}} is the set of all words, and E = { a 1 a 2 … a n | e = a 1 a 2 … a n ; a k ∈ S ; n ∈ N } {\displaystyle E=\{a_{1}a_{2}\ldots a_{n}\,\vert \;e=a_{1}a_{2}\ldots a_{n}\,;\,a_{k}\in S\,;\,n\in \mathbb {N} \}} is the equivalence class of the identity, after the relations defining a group are imposed. A simpler example is the free monoid. The free monoid on a set X is the monoid of all finite strings using X as alphabet, with concatenation of strings as the operation. The identity is the empty string. In essence, the free monoid is simply the set of all words, with no equivalence relations imposed. This example is developed further in the article on the Kleene star. === General case === In the general case, the algebraic relations need not be associative, in which case the starting point is not the set of all words, but rather, strings punctuated with parentheses, which are used to indicate the non-associative groupings of letters. 
Such a string may equivalently be represented by a binary tree or a free magma; the leaves of the tree are the letters from the alphabet. The algebraic relations may then be general arities or finitary relations on the leaves of the tree. Rather than starting with the collection of all possible parenthesized strings, it can be more convenient to start with the Herbrand universe. Properly describing or enumerating the contents of a free object can be easy or difficult, depending on the particular algebraic object in question. For example, the free group in two generators is easily described. By contrast, little or nothing is known about the structure of free Heyting algebras in more than one generator. The problem of determining if two different strings belong to the same equivalence class is known as the word problem. As the examples suggest, free objects look like constructions from syntax; one may reverse that to some extent by saying that major uses of syntax can be explained and characterised as free objects, in a way that makes apparently heavy 'punctuation' explicable (and more memorable). == Free universal algebras == Let S {\displaystyle S} be a set and A {\displaystyle A} be an algebraic structure of type ρ {\displaystyle \rho } generated by S {\displaystyle S} . The underlying set of this algebraic structure A {\displaystyle A} , often called its universe, is denoted by A {\displaystyle A} . Let ψ : S → A {\displaystyle \psi :S\to A} be a function. 
We say that ( A , ψ ) {\displaystyle (A,\psi )} (or informally just A {\displaystyle A} ) is a free algebra of type ρ {\displaystyle \rho } on the set S {\displaystyle S} of free generators if the following universal property holds: For every algebra B {\displaystyle B} of type ρ {\displaystyle \rho } and every function τ : S → B {\displaystyle \tau :S\to B} , where B {\displaystyle B} is the universe of B {\displaystyle B} , there exists a unique homomorphism σ : A → B {\displaystyle \sigma :A\to B} such that the following diagram commutes: S → ψ A ↘ τ ↓ σ B {\displaystyle {\begin{array}{ccc}S&\xrightarrow {\psi } &A\\&\searrow _{\tau }&\downarrow ^{\sigma }\\&&B\ \end{array}}} This means that σ ∘ ψ = τ {\displaystyle \sigma \circ \psi =\tau } . == Free functor == The most general setting for a free object is in category theory, where one defines a functor, the free functor, that is the left adjoint to the forgetful functor. Consider a category C of algebraic structures; the objects can be thought of as sets plus operations, obeying some laws. This category has a functor, U : C → S e t {\displaystyle U:\mathbf {C} \to \mathbf {Set} } , the forgetful functor, which maps objects and morphisms in C to Set, the category of sets. The forgetful functor is very simple: it just ignores all of the operations. The free functor F, when it exists, is the left adjoint to U. That is, F : S e t → C {\displaystyle F:\mathbf {Set} \to \mathbf {C} } takes sets X in Set to their corresponding free objects F(X) in the category C. The set X can be thought of as the set of "generators" of the free object F(X). For the free functor to be a left adjoint, one must also have a Set-morphism η X : X → U ( F ( X ) ) {\displaystyle \eta _{X}:X\to U(F(X))\,\!} . 
More explicitly, F is, up to isomorphisms in C, characterized by the following universal property: Whenever B is an algebra in C, and g : X → U ( B ) {\displaystyle g\colon X\to U(B)} is a function (a morphism in the category of sets), then there is a unique C-morphism f : F ( X ) → B {\displaystyle f\colon F(X)\to B} such that g = U ( f ) ∘ η X {\displaystyle g=U(f)\circ \eta _{X}} . Concretely, this sends a set into the free object on that set; it is the "inclusion of a basis". Abusing notation, X → F ( X ) {\displaystyle X\to F(X)} (this abuses notation because X is a set, while F(X) is an algebra; correctly, it is X → U ( F ( X ) ) {\displaystyle X\to U(F(X))} ). The natural transformation η : id S e t → U F {\displaystyle \eta :\operatorname {id} _{\mathbf {Set} }\to UF} is called the unit; together with the counit ε : F U → id C {\displaystyle \varepsilon :FU\to \operatorname {id} _{\mathbf {C} }} , one may construct a T-algebra, and so a monad. The cofree functor is the right adjoint to the forgetful functor. === Existence === There are general existence theorems that apply; the most basic of them guarantees that Whenever C is a variety, then for every set X there is a free object F(X) in C. Here, a variety is a synonym for a finitary algebraic category, thus implying that the set of relations are finitary, and algebraic because it is monadic over Set. === General case === Other types of forgetfulness also give rise to objects quite like free objects, in that they are left adjoint to a forgetful functor, not necessarily to sets. For example, the tensor algebra construction on a vector space is the left adjoint to the functor on associative algebras that ignores the algebra structure. It is therefore often also called a free algebra. Likewise the symmetric algebra and exterior algebra are free symmetric and anti-symmetric algebras on a vector space. 
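The universal property and the adjunction Hom(X, U(B)) ≅ Hom(F(X), B) can be checked by hand in the simplest case, the free monoid. The following sketch takes C to be the category of monoids, with B the monoid of non-negative integers under addition; the generator values assigned by g are our own illustrative choice.

```python
# Free monoid on X = {'a', 'b'}: finite strings over X under concatenation;
# the canonical injection i sends each letter to the one-letter string.
# Any set map g : X -> B into a monoid B extends to a unique monoid
# homomorphism f : F(X) -> B with g = U(f) ∘ i.  Here B = (N, +, 0).
g = {'a': 2, 'b': 5}  # an arbitrary set map on the generators

def f(word):
    # the unique homomorphic extension: empty word -> 0, concatenation -> +
    return sum(g[c] for c in word)

u, v = "abba", "bab"
assert f(u + v) == f(u) + f(v)        # f respects the monoid operation
assert f("") == 0                     # f sends the identity to the identity
assert all(f(x) == g[x] for x in g)   # f extends g through the injection i
```

Uniqueness is visible here too: a homomorphism is forced on every word once its values on the one-letter strings are fixed.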
== List of free objects == Specific kinds of free objects include: free algebra free associative algebra free commutative algebra free category free strict monoidal category free group free abelian group free partially commutative group free Kleene algebra free lattice free Boolean algebra free distributive lattice free Heyting algebra free modular lattice free Lie algebra free magma free module, and in particular, vector space free monoid free commutative monoid free partially commutative monoid free ring free semigroup free semiring free commutative semiring free theory term algebra discrete space == See also == Generating set == Notes == == External links == In nLab: free functor, free object, vector space
|
Wikipedia:Free presentation#0
|
In algebra, a free presentation of a module M over a commutative ring R is an exact sequence of R-modules: ⨁ i ∈ I R → f ⨁ j ∈ J R → g M → 0. {\displaystyle \bigoplus _{i\in I}R\ {\overset {f}{\to }}\ \bigoplus _{j\in J}R\ {\overset {g}{\to }}\ M\to 0.} Note that the image under g of the standard basis generates M. In particular, if J is finite, then M is a finitely generated module. If I and J are finite sets, then the presentation is called a finite presentation; a module is called finitely presented if it admits a finite presentation. Since f is a module homomorphism between free modules, it can be visualized as an (infinite) matrix with entries in R, with M as its cokernel. A free presentation always exists: any module is a quotient of a free module: F → g M → 0 {\displaystyle F\ {\overset {g}{\to }}\ M\to 0} , but then the kernel of g is again a quotient of a free module: F ′ → f ker g → 0 {\displaystyle F'\ {\overset {f}{\to }}\ \ker g\to 0} . The combination of f and g is a free presentation of M. Now, one can obviously keep "resolving" the kernels in this fashion; the result is called a free resolution. Thus, a free presentation is the early part of the free resolution. A presentation is useful for computation. For example, since tensoring is right-exact, tensoring the above presentation with a module, say N, gives: ⨁ i ∈ I N → f ⊗ 1 ⨁ j ∈ J N → M ⊗ R N → 0. {\displaystyle \bigoplus _{i\in I}N\ {\overset {f\otimes 1}{\to }}\ \bigoplus _{j\in J}N\to M\otimes _{R}N\to 0.} This says that M ⊗ R N {\displaystyle M\otimes _{R}N} is the cokernel of f ⊗ 1 {\displaystyle f\otimes 1} . If N is also a ring (and hence an R-algebra), then this is the presentation of the N-module M ⊗ R N {\displaystyle M\otimes _{R}N} ; that is, the presentation extends under base extension. 
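A worked instance of this computation, with moduli of our own choosing: take M = Z/4 with presentation Z →(·4) Z → M → 0 and tensor with N = Z/6. The tensored presentation exhibits M ⊗ N as the cokernel of multiplication by 4 on Z/6, recovering the classical isomorphism Z/4 ⊗ Z/6 ≅ Z/gcd(4, 6).

```python
from math import gcd

# Presentation of M = Z/4:   Z --(mult by 4)--> Z --> M --> 0.
# Tensoring with N = Z/6:  Z/6 --(mult by 4)--> Z/6 --> M ⊗ N --> 0,
# so M ⊗ N is the cokernel of multiplication by 4 on Z/6.
n, k = 6, 4  # illustrative values

image = {(k * x) % n for x in range(n)}   # image of f ⊗ 1 inside Z/n
cokernel_order = n // len(image)          # |cokernel| = |Z/n| / |image|

# agrees with the classical formula Z/k ⊗ Z/n ≅ Z/gcd(k, n)
assert cokernel_order == gcd(k, n)
print(f"Z/{k} ⊗ Z/{n} ≅ Z/{cokernel_order}")
```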
For left-exact functors, there is for example the following result: if F and G are left-exact contravariant functors from the category of modules over a commutative ring R to abelian groups, and θ : F → G is a natural transformation such that θM is an isomorphism whenever M is a finitely generated free module, then θM is an isomorphism for every finitely presented module M. Proof: Applying F to a finite presentation R ⊕ n → R ⊕ m → M → 0 {\displaystyle R^{\oplus n}\to R^{\oplus m}\to M\to 0} results in 0 → F ( M ) → F ( R ⊕ m ) → F ( R ⊕ n ) . {\displaystyle 0\to F(M)\to F(R^{\oplus m})\to F(R^{\oplus n}).} This can be trivially extended to 0 → 0 → F ( M ) → F ( R ⊕ m ) → F ( R ⊕ n ) . {\displaystyle 0\to 0\to F(M)\to F(R^{\oplus m})\to F(R^{\oplus n}).} The same thing holds for G {\displaystyle G} . Now apply the five lemma. ◻ {\displaystyle \square } == See also == Coherent module Finitely related module Fitting ideal Quasi-coherent sheaf == References == Eisenbud, David, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Mathematics, 150, Springer-Verlag, 1995, ISBN 0-387-94268-8.
|
Wikipedia:Free product of associative algebras#0
|
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication. A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring. In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital. Every ring is an associative algebra over its center and over the integers. == Definition == Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies r ⋅ ( x y ) = ( r ⋅ x ) y = x ( r ⋅ y ) {\displaystyle r\cdot (xy)=(r\cdot x)y=x(r\cdot y)} for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.) 
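A minimal sketch checking the compatibility axiom for a standard example, 2×2 integer matrices as a Z-algebra; the particular matrices and scalar are our own test data.

```python
# 2x2 integer matrices form an associative Z-algebra: check that scalar
# multiplication is compatible with ring multiplication,
#     r·(xy) = (r·x)y = x(r·y).

def mat_mul(x, y):
    # ordinary 2x2 matrix multiplication
    return [[sum(x[i][t] * y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def scal(r, x):
    # scalar multiplication, entrywise
    return [[r * e for e in row] for row in x]

x = [[1, 2], [3, 4]]
y = [[0, 1], [5, -2]]
r = 7

assert scal(r, mat_mul(x, y)) == mat_mul(scal(r, x), y) == mat_mul(x, scal(r, y))
```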
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is (r, x) ↦ f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by r ↦ r ⋅ 1A. (See also § From ring homomorphisms below). Every ring is an associative Z-algebra, where Z denotes the ring of the integers. A commutative algebra is an associative algebra that is also a commutative ring. === As a monoid object in the category of modules === The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules. Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map m : A ⊗ R A → A {\displaystyle m:A\otimes _{R}A\to A} . The associativity then refers to the identity: m ∘ ( id ⊗ m ) = m ∘ ( m ⊗ id ) . {\displaystyle m\circ ({\operatorname {id} }\otimes m)=m\circ (m\otimes \operatorname {id} ).} === From ring homomorphisms === An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining r ⋅ x = η ( r ) x {\displaystyle r\cdot x=\eta (r)x} for all r ∈ R and x ∈ A. 
If A is an R-algebra, taking x = 1, the same formula in turn defines a ring homomorphism η : R → A whose image lies in the center. If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism η : R → A. The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A for a fixed R, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms A → A′ that are under R; i.e., R → A → A′ is R → A′ (i.e., the coslice category of the category of commutative rings under R.) The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R. How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring. == Algebra homomorphisms == A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if φ ( r ⋅ x ) = r ⋅ φ ( x ) φ ( x + y ) = φ ( x ) + φ ( y ) φ ( x y ) = φ ( x ) φ ( y ) φ ( 1 ) = 1 {\displaystyle {\begin{aligned}\varphi (r\cdot x)&=r\cdot \varphi (x)\\\varphi (x+y)&=\varphi (x)+\varphi (y)\\\varphi (xy)&=\varphi (x)\varphi (y)\\\varphi (1)&=1\end{aligned}}} The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg. The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings. == Examples == The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. 
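The role of the center can be made concrete with the quaternions: the reals are central, so H is an R-algebra, but the copy of C spanned by 1 and i is not central, so H is not a C-algebra. The sketch below hard-codes the standard Hamilton product on 4-tuples; the sample elements are our own.

```python
# Quaternion a + bi + cj + dk encoded as the 4-tuple (a, b, c, d),
# multiplied with the standard Hamilton product.

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

# i and j do not commute: ij = k but ji = -k, so the subring spanned by
# 1 and i is not contained in the center.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)

# real scalars (a, 0, 0, 0) do commute with everything:
r = (3, 0, 0, 0)
x = (0, 1, 2, 0)
assert qmul(r, x) == qmul(x, r)
```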
Other examples abound both from algebra and other fields of mathematics. === Algebra === Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent. Any ring of characteristic n is a (Z/nZ)-algebra in the same way. Given an R-module M, the endomorphism ring of M, denoted EndR(M), is an R-algebra by defining (r·φ)(x) = r·φ(x). Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely generated free R-module. In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K. The complex numbers form a 2-dimensional commutative algebra over the real numbers. The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions). Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}. The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E. The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure). Given a module M over a commutative ring R, the direct sum of modules R ⊕ M has a structure of an R-algebra by thinking of M as consisting of infinitesimal elements; i.e., the multiplication is given as (a + x)(b + y) = ab + ay + bx. 
The notion is sometimes called the algebra of dual numbers. A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field. === Representation theory === The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra. If G is a group and R is a commutative ring, the set of all functions from G to R with finite support forms an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups. If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A. A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph. === Analysis === Given any Banach space X, the continuous linear operators A : X → X form an associative algebra (using composition of operators as multiplication); this is a Banach algebra. Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise. The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t≥0, P) forms a ring under stochastic integration. The Weyl algebra An Azumaya algebra === Geometry and combinatorics === The Clifford algebras, which are useful in geometry and physics. Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics. The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra. A differential graded algebra is an associative algebra together with a grading and a differential. 
For example, the de Rham algebra Ω ( M ) = ⨁ p = 0 n Ω p ( M ) {\textstyle \Omega (M)=\bigoplus _{p=0}^{n}\Omega ^{p}(M)} , where Ω p ( M ) {\textstyle \Omega ^{p}(M)} consists of differential p-forms on a manifold M, is a differential graded algebra. === Mathematical physics === A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket {,} satisfies the Leibniz rule; i.e., {fg, h} = f{g, h} + g{f, h}. Given a Poisson algebra a {\displaystyle {\mathfrak {a}}} , consider the vector space a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} of formal power series over a {\displaystyle {\mathfrak {a}}} . If a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} has a structure of an associative algebra with multiplication ∗ {\displaystyle *} such that, for f , g ∈ a {\displaystyle f,g\in {\mathfrak {a}}} , f ∗ g = f g − 1 2 { f , g } u + ⋯ , {\displaystyle f*g=fg-{\frac {1}{2}}\{f,g\}u+\cdots ,} then a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} is called a deformation quantization of a {\displaystyle {\mathfrak {a}}} . A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see § Dual of an associative algebra) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group. Gerstenhaber algebra == Constructions == Subalgebras A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A. Quotient algebras Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r · x = (r1A)x. This gives the quotient ring A / I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra. 
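A concrete quotient-algebra computation, as a sketch: in R[x]/(x² + 1), arithmetic is polynomial arithmetic with x² replaced by −1. Encoding a class a + bx as the pair (a, b) is our own illustrative representation.

```python
# In the quotient algebra R[x]/(x^2 + 1), reduce products modulo x^2 + 1,
# i.e. replace x^2 by -1.  A class a + b*x is encoded as the pair (a, b).

def mul_mod(p, q):
    a, b = p
    c, d = q
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, and x^2 ≡ -1
    return (a * c - b * d, a * d + b * c)

# the class of x squares to -1: the quotient behaves like the complex numbers
assert mul_mod((0, 1), (0, 1)) == (-1, 0)
# (1 + 2x)(3 - x) = 3 + 5x - 2x^2 = 5 + 5x in the quotient
assert mul_mod((1, 2), (3, -1)) == (5, 5)
```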
Direct products The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication. Free products One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras. Tensor products The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details. Given a commutative ring R and any ring A, the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = (rs ⊗ a). The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings. Free algebra A free algebra is an algebra generated by symbols. If one imposes commutativity, i.e., takes the quotient by commutators, then one gets a polynomial algebra. == Dual of an associative algebra == Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra. For example, take A to be the ring of continuous functions on a compact group G. Then, not only is A an associative algebra, but it also comes with the co-multiplication Δ(f)(g, h) = f(gh) and co-unit ε(f) = f(1). The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit in the algebra axiom. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see § Representations below). 
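The free algebra mentioned above can be sketched directly: elements are formal Z-linear combinations of words in the generating symbols, and multiplication concatenates words with no commutation relations imposed. The dict-of-words encoding below is our own illustration.

```python
from collections import defaultdict

# Free algebra on the symbols {'x', 'y'} over Z: an element is a dict
# mapping words (tuples of symbols) to integer coefficients; multiplication
# concatenates words and multiplies coefficients.

def mul(p, q):
    out = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] += c1 * c2
    return {w: c for w, c in out.items() if c}  # drop zero terms

x = {('x',): 1}
y = {('y',): 1}

# xy and yx are distinct basis words: the algebra is genuinely noncommutative
assert mul(x, y) == {('x', 'y'): 1}
assert mul(y, x) == {('y', 'x'): 1}
assert mul(x, y) != mul(y, x)
```

Imposing the relations xy ∼ yx (quotienting by commutators) would identify these words and recover the polynomial algebra Z[x, y].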
== Enveloping algebra == Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra A ⊗R Aop or Aop ⊗R A, depending on the author. Note that a bimodule over A is exactly a left module over Ae. == Separable algebra == Let A be an algebra over a commutative ring R. Then the algebra A is a right module over Ae := Aop ⊗R A with the action x ⋅ (a ⊗ b) = axb. Then, by definition, A is said to be separable if the multiplication map A ⊗R A → A : x ⊗ y ↦ xy splits as an Ae-linear map, where A ⊗ A is an Ae-module by (x ⊗ y) ⋅ (a ⊗ b) = ax ⊗ yb. Equivalently, A is separable if it is a projective module over Ae; thus, the Ae-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability. == Finite-dimensional algebra == Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring. === Commutative case === As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field, and thus the following are equivalent: A {\displaystyle A} is separable. A ⊗ k ¯ {\displaystyle A\otimes {\overline {k}}} is reduced, where k ¯ {\displaystyle {\overline {k}}} is some algebraic closure of k. A ⊗ k ¯ = k ¯ n {\displaystyle A\otimes {\overline {k}}={\overline {k}}^{n}} for some n. dim k A {\displaystyle \dim _{k}A} is the number of k {\displaystyle k} -algebra homomorphisms A → k ¯ {\displaystyle A\to {\overline {k}}} . Let Γ = Gal ( k s / k ) = lim ← Gal ( k ′ / k ) {\displaystyle \Gamma =\operatorname {Gal} (k_{s}/k)=\varprojlim \operatorname {Gal} (k'/k)} , the profinite group of finite Galois extensions of k. 
Then A ↦ X A = { k -algebra homomorphisms A → k s } {\displaystyle A\mapsto X_{A}=\{k{\text{-algebra homomorphisms }}A\to k_{s}\}} is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous Γ {\displaystyle \Gamma } -actions. === Noncommutative case === Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A = Mn(D). More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), the fact known as the Artin–Wedderburn theorem. The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.) The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A / I as a module over the enveloping algebra (A / I)e is at most one, then the natural surjection p : A → A / I splits; i.e., A contains a subalgebra B such that p|B : B ~→ A / I is an isomorphism. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras. == Lattices and orders == Let R be a Noetherian integral domain with field of fractions K (for example, they can be Z, Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗R K = V. Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. 
In general, there are a lot fewer orders than lattices; e.g., 1/2Z is a lattice in Q but not an order (since it is not an algebra). A maximal order is an order that is maximal among all the orders. == Related concepts == === Coalgebras === An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map (i.e., morphism in the category of K-vector spaces) A ⊗ A → A (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra. There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above. == Representations == A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V). If A and B are two algebras, and ρ : A → End(V) and τ : B → End(W) are two representations, then there is a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. 
However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below. === Motivation for a Hopf algebra === Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that ρ ( x ) ( v ⊗ w ) = ( σ ( x ) ( v ) ) ⊗ ( τ ( x ) ( w ) ) . {\displaystyle \rho (x)(v\otimes w)=(\sigma (x)(v))\otimes (\tau (x)(w)).} However, such a map would not be linear, since one would have ρ ( k x ) = σ ( k x ) ⊗ τ ( k x ) = k σ ( x ) ⊗ k τ ( x ) = k 2 ( σ ( x ) ⊗ τ ( x ) ) = k 2 ρ ( x ) {\displaystyle \rho (kx)=\sigma (kx)\otimes \tau (kx)=k\sigma (x)\otimes k\tau (x)=k^{2}(\sigma (x)\otimes \tau (x))=k^{2}\rho (x)} for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as ρ = ( σ ⊗ τ ) ∘ Δ . {\displaystyle \rho =(\sigma \otimes \tau )\circ \Delta .} Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. 
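The scaling failure above is easy to see with concrete matrices. The following sketch checks it numerically; the particular 2×2 matrices standing in for σ(x) and τ(x) are arbitrary illustrative choices, not tied to any specific algebra:

```python
import numpy as np

# Arbitrary illustrative 2x2 matrices standing in for sigma(x) and tau(x).
sx = np.array([[1.0, 2.0], [0.0, 1.0]])
tx = np.array([[0.0, 1.0], [1.0, 0.0]])

def naive(s, t):
    """Naive tensor product representation: x -> sigma(x) (x) tau(x)."""
    return np.kron(s, t)

def additive(s, t):
    """x -> sigma(x) (x) Id + Id (x) tau(x), which IS linear in x."""
    return np.kron(s, np.eye(2)) + np.kron(np.eye(2), t)

k = 3.0
# naive(k x) = k^2 * naive(x): the naive map is not linear in x
assert np.allclose(naive(k * sx, k * tx), k**2 * naive(sx, tx))
assert not np.allclose(naive(k * sx, k * tx), k * naive(sx, tx))
# the additive combination scales correctly with k
assert np.allclose(additive(k * sx, k * tx), k * additive(sx, tx))
```

The additive combination checked at the end is the one that reappears in the Lie-algebra motivation further below; as the article explains, it restores linearity but fails to preserve multiplication.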
A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups). === Motivation for a Lie algebra === One can try to be more clever in defining a tensor product. Consider, for example, x ↦ ρ ( x ) = σ ( x ) ⊗ Id W + Id V ⊗ τ ( x ) {\displaystyle x\mapsto \rho (x)=\sigma (x)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)} so that the action on the tensor product space is given by ρ ( x ) ( v ⊗ w ) = ( σ ( x ) v ) ⊗ w + v ⊗ ( τ ( x ) w ) {\displaystyle \rho (x)(v\otimes w)=(\sigma (x)v)\otimes w+v\otimes (\tau (x)w)} . This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication: ρ ( x y ) = σ ( x ) σ ( y ) ⊗ Id W + Id V ⊗ τ ( x ) τ ( y ) {\displaystyle \rho (xy)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)} . But, in general, this does not equal ρ ( x ) ρ ( y ) = σ ( x ) σ ( y ) ⊗ Id W + σ ( x ) ⊗ τ ( y ) + σ ( y ) ⊗ τ ( x ) + Id V ⊗ τ ( x ) τ ( y ) {\displaystyle \rho (x)\rho (y)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+\sigma (x)\otimes \tau (y)+\sigma (y)\otimes \tau (x)+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)} . This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra. == Non-unital algebras == Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital. One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero. 
Another example is the vector space of continuous periodic functions, together with the convolution product. == See also == Abstract algebra Algebraic structure Algebra over a field Sheaf of algebras, a sort of an algebra over a ringed space Deligne's conjecture on Hochschild cohomology == Notes == == Citations == == References ==
|
Wikipedia:Free-standing Mathematics Qualifications#0
|
Free-standing Mathematics Qualifications (FSMQ) are a suite of mathematical qualifications available at levels 1 to 3 in the National Qualifications Framework – Foundation, Intermediate and Advanced. == Educational standard == They bridge a gap between GCSE and A-Level Mathematics. The advanced course is especially suitable for pupils who do not find GCSE maths particularly challenging and who often have extra time in their second year of GCSEs, having taken their Maths GCSE a year early. The qualification is commonly offered in private schools and is useful in allowing pupils to determine whether or not to pursue maths in subsequent stages of their schooling. The highest grade achievable is an A. An FSMQ unit at Advanced level is roughly equivalent to a single AS module, with candidates receiving 10 UCAS points for an A grade. Intermediate level is equivalent to a GCSE in Mathematics. Coursework is often a key part of the FSMQ, but is sometimes omitted depending on the examining board. == Exam boards == The only examining board currently offering FSMQs is OCR. Edexcel withdrew the qualification, the last exam being held in June 2004. AQA also withdrew the pilot advanced level FSMQ, the last exam being in June 2018, with a final re-sit opportunity in June 2019. == Examples == Additional Mathematics/AdMaths (OCR) (No coursework) == References == == External links == Edexcel Oxford, Cambridge and RSA (OCR) Assessment and Qualifications Alliance (AQA) Qualifications and Curriculum Authority (QCA)
|
Wikipedia:Freidlin–Wentzell theorem#0
|
In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion. == Statement == Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form { d X t ε = b ( X t ε ) d t + ε d B t , X 0 ε = 0 , {\displaystyle {\begin{cases}dX_{t}^{\varepsilon }=b(X_{t}^{\varepsilon })\,dt+{\sqrt {\varepsilon }}\,dB_{t},\\X_{0}^{\varepsilon }=0,\end{cases}}} where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||⋅||∞, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by I ( ω ) = 1 2 ∫ 0 T | ω ˙ t − b ( ω t ) | 2 d t {\displaystyle I(\omega )={\frac {1}{2}}\int _{0}^{T}|{\dot {\omega }}_{t}-b(\omega _{t})|^{2}\,dt} if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0, lim sup ε ↓ 0 ( ε log P [ X ε ∈ F ] ) ≤ − inf ω ∈ F I ( ω ) {\displaystyle \limsup _{\varepsilon \downarrow 0}{\big (}\varepsilon \log \mathbf {P} {\big [}X^{\varepsilon }\in F{\big ]}{\big )}\leq -\inf _{\omega \in F}I(\omega )} and lim inf ε ↓ 0 ( ε log P [ X ε ∈ G ] ) ≥ − inf ω ∈ G I ( ω ) . {\displaystyle \liminf _{\varepsilon \downarrow 0}{\big (}\varepsilon \log \mathbf {P} {\big [}X^{\varepsilon }\in G{\big ]}{\big )}\geq -\inf _{\omega \in G}I(\omega ).} == References == Freidlin, Mark I.; Wentzell, Alexander D. (1998). 
Random perturbations of dynamical systems. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 260 (Second ed.). New York: Springer-Verlag. pp. xii+430. ISBN 0-387-98362-7. MR1652127 Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR1619036 (See chapter 5.6)
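The small-noise asymptotics in the theorem can be illustrated numerically with an Euler–Maruyama simulation. This is only a sketch, not drawn from the references above: the drift b(x) = −x, the time horizon, step count, path count, and exceedance level are all illustrative choices.

```python
import numpy as np

def exceed_prob(eps, T=1.0, steps=400, paths=4000, level=0.5, seed=0):
    """Monte Carlo estimate of P[ sup_{t<=T} |X_t^eps| > level ] for
    dX = b(X) dt + sqrt(eps) dB with b(x) = -x and X_0 = 0,
    using the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.zeros(paths)
    hit = np.zeros(paths, dtype=bool)
    for _ in range(steps):
        x = x - x * dt + np.sqrt(eps * dt) * rng.standard_normal(paths)
        hit |= np.abs(x) > level
    return hit.mean()

p_big, p_small = exceed_prob(0.2), exceed_prob(0.02)
# large deviations: the probability of straying far from the mean path
# (here, the constant path 0) collapses as eps decreases
assert p_small < p_big
```

The rate function would predict that the decay is in fact exponential in 1/ε; the simulation above only exhibits the monotone collapse, since resolving exponentially small probabilities by plain Monte Carlo is impractical.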
|
Wikipedia:Fresh variable#0
|
In formal reasoning, in particular in mathematical logic, computer algebra, and automated theorem proving, a fresh variable is a variable that did not occur in the context considered so far. The concept is often used without explanation. Fresh variables may be used to replace other variables, to eliminate variable shadowing or capture. For instance, in alpha-conversion, the processing of terms in the lambda calculus into equivalent terms with renamed variables, replacing variables with fresh variables can be helpful as a way to avoid accidentally capturing variables that should be free. Another use for fresh variables involves the development of loop invariants in formal program verification, where it is sometimes useful to replace constants by newly introduced fresh variables. == Example == For example, in term rewriting, before applying a rule l → r {\displaystyle l\to r} to a given term t {\displaystyle t} , each variable in l → r {\displaystyle l\to r} should be replaced by a fresh one to avoid clashes with variables occurring in t {\displaystyle t} . Given the rule a p p e n d ( c o n s ( x , y ) , z ) → c o n s ( x , a p p e n d ( y , z ) ) {\displaystyle append(cons(x,y),z)\to cons(x,append(y,z))} and the term a p p e n d ( c o n s ( x , c o n s ( y , n i l ) ) , c o n s ( 3 , n i l ) ) {\displaystyle append(cons(x,cons(y,nil)),cons(3,nil))} , attempting to find a matching substitution of the rule's left-hand side, a p p e n d ( c o n s ( x , y ) , z ) {\displaystyle append(cons(x,y),z)} , within a p p e n d ( c o n s ( x , c o n s ( y , n i l ) ) , c o n s ( 3 , n i l ) ) {\displaystyle append(cons(x,cons(y,nil)),cons(3,nil))} will fail, since y {\displaystyle y} cannot match c o n s ( y , n i l ) {\displaystyle cons(y,nil)} . 
However, if the rule is replaced by a fresh copy a p p e n d ( c o n s ( v 1 , v 2 ) , v 3 ) → c o n s ( v 1 , a p p e n d ( v 2 , v 3 ) ) {\displaystyle append(cons(v_{1},v_{2}),v_{3})\to cons(v_{1},append(v_{2},v_{3}))} before, matching will succeed with the answer substitution { v 1 ↦ x , v 2 ↦ c o n s ( y , n i l ) , v 3 ↦ c o n s ( 3 , n i l ) } {\displaystyle \{v_{1}\mapsto x,\;v_{2}\mapsto cons(y,nil),\;v_{3}\mapsto cons(3,nil)\}} . == Notes == == References ==
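The renaming step can be made concrete with a small matching sketch in Python. The term encoding (tuples for function applications and constants, bare strings for variables) and all helper names are ad-hoc choices for this illustration, not from any standard library; matching the raw rule fails on the variable clash described above, while a freshly renamed copy succeeds:

```python
import itertools

_fresh = itertools.count(1)

def is_var(t):
    """Variables are bare strings; compound terms are tuples (f, arg1, ...)."""
    return isinstance(t, str)

def rename(term, mapping):
    """Replace every variable in `term` by a fresh one, consistently."""
    if is_var(term):
        if term not in mapping:
            mapping[term] = f"v{next(_fresh)}"
        return mapping[term]
    return (term[0],) + tuple(rename(a, mapping) for a in term[1:])

def occurs(v, t):
    return v == t if is_var(t) else any(occurs(v, a) for a in t[1:])

def match(pat, term, subst):
    """Extend `subst` so that pat instantiated by subst equals term;
    return None on failure."""
    if is_var(pat):
        if pat in subst:
            return subst if subst[pat] == term else None
        if occurs(pat, term):   # clash: pat would be bound to a term containing it
            return None
        return {**subst, pat: term}
    if is_var(term) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

nil = ("nil",)
lhs = ("append", ("cons", "x", "y"), "z")                 # rule left-hand side
term = ("append", ("cons", "x", ("cons", "y", nil)),
        ("cons", ("3",), nil))

assert match(lhs, term, {}) is None                       # y vs cons(y, nil): clash
s = match(rename(lhs, {}), term, {})                      # fresh copy v1, v2, v3
assert s == {"v1": "x", "v2": ("cons", "y", nil), "v3": ("cons", ("3",), nil)}
```

The final substitution is exactly the answer substitution given in the text, with the fresh variables v1, v2, v3 in place of x, y, z.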
|
Wikipedia:Freshman's dream#0
|
The freshman's dream is a name given to the erroneous equation ( x + y ) n = x n + y n {\displaystyle (x+y)^{n}=x^{n}+y^{n}} , where n {\displaystyle n} is a real number (usually a positive integer greater than 1) and x , y {\displaystyle x,y} are non-zero real numbers. Beginning students commonly make this error in computing the power of a sum of real numbers, falsely assuming powers distribute over sums. When n = 2, it is easy to see why this is incorrect: (x + y)2 can be correctly computed as x2 + 2xy + y2 using distributivity (commonly known by students in the United States as the FOIL method). For larger positive integer values of n, the correct result is given by the binomial theorem. The name "freshman's dream" also sometimes refers to the theorem that says that for a prime number p, if x and y are members of a commutative ring of characteristic p, then (x + y)p = xp + yp. In this more exotic type of arithmetic, the "mistake" actually gives the correct result, since p divides all the binomial coefficients apart from the first and the last, making all the intermediate terms equal to zero. The identity is also actually true in the context of tropical geometry, where multiplication is replaced with addition, and addition is replaced with minimum. == Examples == ( 1 + 4 ) 2 = 5 2 = 25 {\displaystyle (1+4)^{2}=5^{2}=25} , but 1 2 + 4 2 = 17 {\displaystyle 1^{2}+4^{2}=17} . x 2 + y 2 {\displaystyle {\sqrt {x^{2}+y^{2}}}} does not equal x 2 + y 2 = | x | + | y | {\displaystyle {\sqrt {x^{2}}}+{\sqrt {y^{2}}}=|x|+|y|} . For example, 9 + 16 = 25 = 5 {\displaystyle {\sqrt {9+16}}={\sqrt {25}}=5} , which does not equal 3 + 4 = 7. In this example, the error is being committed with the exponent n = 1/2. == Prime characteristic == When p {\displaystyle p} is a prime number and x {\displaystyle x} and y {\displaystyle y} are members of a commutative ring of characteristic p {\displaystyle p} , then ( x + y ) p = x p + y p {\displaystyle (x+y)^{p}=x^{p}+y^{p}} . 
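Both halves of the prime-characteristic statement are easy to check computationally. A short sketch (the primes 7 and 13 and the composite 4 are illustrative choices):

```python
from math import comb

def freshman_holds(p):
    """Check (a + b)^p == a^p + b^p in Z/pZ for all residues a, b."""
    return all(pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
               for a in range(p) for b in range(p))

assert freshman_holds(7) and freshman_holds(13)

# The intermediate binomial coefficients C(p, n), 0 < n < p, vanish mod a prime p...
assert all(comb(7, n) % 7 == 0 for n in range(1, 7))
# ...but not mod a composite exponent: C(4, 2) = 6 is not divisible by 4,
# and indeed (1 + 1)^4 = 16 differs from 1^4 + 1^4 = 2 in Z/4Z.
assert comb(4, 2) % 4 != 0
assert pow(2, 4, 4) != 2 % 4
```

The exhaustive check over all residue pairs is feasible because Z/pZ is finite; for real numbers the identity fails already for n = 2, as the examples above show.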
This can be seen by examining the prime factors of the binomial coefficients: the nth binomial coefficient is ( p n ) = p ! n ! ( p − n ) ! . {\displaystyle {\binom {p}{n}}={\frac {p!}{n!(p-n)!}}.} The numerator is p factorial (p!), which is divisible by p. However, when 0 < n < p, both n! and (p − n)! are coprime with p since all the factors are less than p and p is prime. Since a binomial coefficient is always an integer, the nth binomial coefficient is divisible by p and hence equal to 0 in the ring. We are left with the zeroth and pth coefficients, which both equal 1, yielding the desired equation. Thus in characteristic p the freshman's dream is a valid identity. This result demonstrates that exponentiation by p produces an endomorphism, known as the Frobenius endomorphism of the ring. The demand that the characteristic p be a prime number is central to the truth of the freshman's dream. A related theorem states that if p is prime then (x + 1)p ≡ xp + 1 in the polynomial ring Z p [ x ] {\displaystyle \mathbb {Z} _{p}[x]} . This theorem is a key fact in modern primality testing. == History and alternate names == In 1938, Harold Willard Gleason published a poem titled «"Dark and Bloody Ground---" (The Freshman's Dream)» in The New York Sun on September 6, which was subsequently reprinted in various other newspapers and magazines. It consists of two stanzas, each containing 8 lines with alternating indentation; it has an ABCB rhyming scheme. Words and phrases that hint that it might be related to this concept include: "Algebra", "Wild corollaries twine", "surds", "of plus and minus sign", "binomial", "quadratic", "parenthesis", "exponents", "in terms of x and y", "remove the brackets, radicals, and do so with discretion", and "factor cubes". The history of the term "freshman's dream" is somewhat unclear. 
In a 1940 article on modular fields, Saunders Mac Lane quotes Stephen Kleene's remark that a knowledge of (a + b)2 = a2 + b2 in a field of characteristic 2 would corrupt freshman students of algebra. This may be the first connection between "freshman" and binomial expansion in fields of positive characteristic. Since then, authors of undergraduate algebra texts took note of the common error. The first actual attestation of the phrase "freshman's dream" seems to be in Hungerford's graduate algebra textbook (1974), where he states that the name is "due to" Vincent O. McBrien. Alternative terms include "freshman exponentiation", used in Fraleigh (1998). The term "freshman's dream" itself, in non-mathematical contexts, is recorded since the 19th century. Since the expansion of (x + y)n is correctly given by the binomial theorem, the freshman's dream is also known as the "child's binomial theorem" or "schoolboy binomial theorem". == See also == Pons asinorum Primality test Sophomore's dream Frobenius endomorphism == References ==
|
Wikipedia:Fridrikh Karpelevich#0
|
Fridrikh Israilevich Karpelevich (Russian: Фридрих Израилевич Карпелевич; 2 October 1927 – 5 July 2000) was a Russian mathematician known for his work on semisimple Lie algebras, geometry, and probability theory. Together with Simon Gindikin, he discovered the Gindikin–Karpelevich formula. == Notes == == References == Dynkin, E. B. (2003), "Friedrich Karpelevich: his early years in mathematics", in Gindikin, S. G. (ed.), Lie groups and symmetric spaces. In memory of F. I. Karpelevich, American Mathematical Society Translations, Series 2, vol. 210, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3472-5, MR 2021622 Dynkin, E B; Gelfand, I M; Gindikin, S G (2001), "Fridrikh Izrailevich Karpelevich (obituary)", Russian Mathematical Surveys, 56 (1): 141–147, Bibcode:2001RuMaS..56..141D, doi:10.1070/RM2001v056n01ABEH000359, ISSN 0042-1316, MR 1845645, S2CID 250881771 Gindikin, S. G.; Karpelevič, F. I. (1962), "Plancherel measure for symmetric Riemannian spaces of non-positive curvature", Dokl. Akad. Nauk SSSR (in Russian), 145: 252–255, MR 0150239 Knapp, A.W. (2003), "The Gindikin-Karpelevic formula and intertwining operators", Lie groups and symmetric spaces. In memory of F. I. Karpelevich, American Mathematical Society Translations, Series 2, vol. 210, Providence, R.I.: American Mathematical Society, pp. 145–159, ISBN 978-0-8218-3472-5, MR 2018359
|
Wikipedia:Friedrich Karl Schmidt#0
|
Adolf Friedrich Karl Schmidt (July 23, 1860 – October 17, 1944) was a German geophysicist who studied geomagnetism. He was involved both in experimental studies and in theoretical work on the subject. He also designed a magnetometer which goes by his name. He was a member of the International Commission for Terrestrial Magnetism and Atmospheric Electricity from 1898. == Life and work == Schmidt was born in Breslau to the engineer Friedrich and Mathilde Eckstein. He studied mathematics, physics, and English in Breslau and received a doctorate in 1882 with a thesis on Cremona transformations, especially those of the fourth order. In 1882–83, he worked at the Breslau Observatory, taking magnetic measurements for the International Polar Year. He later attended the International Polar Year Commission meeting for 1932–33 in Copenhagen. He taught at the Gymnasium Ernestinum in Gotha until 1902, when he moved to the Potsdam Magnetic Observatory to fill the position held by the late Max Eschenhagen. From 1907, he was chair of meteorology at Berlin University. His first major contribution was a mathematical method for tracing magnetic potentials that took into account the shape of the Earth. He introduced ordinary harmonic analysis in 1894 and spherical harmonic analysis later. He also introduced statistical improvements to measurement and designed and improved several instruments. One of his inventions made the recording of magnetic variation more economical and prevented the instruments from running out of recording paper during intense storms. Schmidt's design of the magnetic field-balances for separating vertical and horizontal components became a standard. He began an "Archiv des Erdmagnetismus" in 1903 at Gotha, which kept systematic results of geomagnetic observations. 
In this Archiv he quoted Kant to answer the question of why he undertook the work when he could have spent his time on more attractive studies: "To make projects is often a luxurious, boastful occupation, whereby one gets the appearance of a creative genius by demanding what one cannot achieve oneself, criticising what one cannot do better, and proposing what one does not know oneself where it may be found." He tried to make his papers more accessible and was even a promoter of Esperanto for the cause. From 1917, his eyesight began to decline and he became nearly blind in 1922. == References == == External links == Biography Schmidt Magnetometer Roob, Helmut; Schmidt, Peter (1985). Adolf Schmidt, 1860–1944: handschriftlicher Nachlass des Geomagnetikers und Bibliographie seiner Veröffentlichungen (in German). Gotha.
|
Wikipedia:Frigyes Riesz#0
|
Frigyes Riesz (Hungarian: Riesz Frigyes, pronounced [ˈriːs ˈfriɟɛʃ], sometimes known in English and French as Frederic Riesz; 22 January 1880 – 28 February 1956) was a Hungarian mathematician who made fundamental contributions to functional analysis, as did his younger brother Marcel Riesz. == Life and career == He was born into a Jewish family in Győr, Austria-Hungary and died in Budapest, Hungary. Between 1911 and 1919 he was a professor at the Franz Joseph University in Kolozsvár, Austria-Hungary. The post-WW1 Treaty of Trianon transferred former Austro-Hungarian territory including Kolozsvár to the Kingdom of Romania, whereupon Kolozsvár's name changed to Cluj and the University of Kolozsvár moved to Szeged, Hungary, becoming the University of Szeged. Riesz then served as rector and professor at the University of Szeged, and was a member of the Hungarian Academy of Sciences and the Polish Academy of Learning. He was the older brother of the mathematician Marcel Riesz. Riesz did some of the fundamental work in developing functional analysis and his work has had a number of important applications in physics. He established the spectral theory for bounded symmetric operators in a form very much like that now regarded as standard. He also made many contributions to other areas, including ergodic theory and topology, and he gave an elementary proof of the mean ergodic theorem. Together with Alfréd Haar, Riesz founded the Acta Scientiarum Mathematicarum journal. He had an uncommon method of giving lectures: he entered the lecture hall with an assistant and a docent. The docent then began reading the proper passages from Riesz's handbook and the assistant wrote the appropriate equations on the blackboard, while Riesz himself stood aside, nodding occasionally. The Swiss-American mathematician Edgar Lorch spent 1934 in Szeged working under Riesz and wrote a reminiscence about his time there, including his collaboration with Riesz. 
The corpus of his bibliography was compiled by the mathematician Pál Medgyessy. == Publications == == See also == Proximity space Rising sun lemma Denjoy–Riesz theorem F. and M. Riesz theorem Riesz representation theorem Riesz-Fischer theorem Riesz groups Riesz's lemma Riesz projector Riesz sequence Riesz space Radon-Riesz property == References == == External links == Media related to Frigyes Riesz at Wikimedia Commons Frigyes Riesz at the Mathematics Genealogy Project Hersh, Reuben; John-Steiner, Vera (1993). "A Visit to Hungarian Mathematics" (PDF). Mathematical Intelligencer. 15 (2): 13–26. doi:10.1007/bf03024187. S2CID 122827181. Archived from the original (PDF) on 27 June 2013. Retrieved 22 August 2012.
|
Wikipedia:Frithiof Nevanlinna#0
|
Rolf Herman Nevanlinna (né Neovius; 22 October 1895 – 28 May 1980) was a Finnish mathematician who made significant contributions to complex analysis. == Background == Nevanlinna was born Rolf Herman Neovius, becoming Nevanlinna in 1906 when his father changed the family name. The Neovius-Nevanlinna family contained many mathematicians: Edvard Engelbert Neovius (Rolf's grandfather) taught mathematics and topography at a military academy; Edvard Rudolf Neovius (Rolf's uncle) was a professor of mathematics at the University of Helsinki from 1883 to 1900; Lars Theodor Neovius-Nevanlinna (Rolf's uncle) was an author of mathematical textbooks; and Otto Wilhelm Neovius-Nevanlinna (Rolf's father) was a physicist, astronomer and mathematician. After Otto obtained his Ph.D. in physics from the University of Helsinki, he studied at the Pulkovo Observatory with the German astronomer Herman Romberg, whose daughter, Margarete Henriette Louise Romberg, he married in 1892. Otto and Margarete then settled in Joensuu, where Otto taught physics, and there their four children were born: Frithiof (born 1894; also a mathematician), Rolf (born 1895), Anna (born 1896) and Erik (born 1901). == Education == Nevanlinna began his formal education at the age of 7. Having already been taught to read and write by his parents, he went straight into the second grade but still found the work boring and soon refused to attend the school. He was then homeschooled before being sent to a grammar school in 1903 when the family moved to Helsinki, where his father took up a new post as a teacher at Helsinki High School. At the new school, Nevanlinna studied French and German in addition to the languages he already spoke: Finnish and Swedish. He also attended an orchestra school and had a love of music, which was encouraged by his mother: Margarete was an excellent pianist and Frithiof and Rolf would lie under the piano and listen to her playing. 
At 13 they went to orchestra school and became accomplished musicians – Frithiof on the cello and Rolf on the violin. Through free tickets from the orchestra school they got to know and love the music of the great composers, Bach, Beethoven, Brahms, Schubert, Schumann, Chopin and Liszt, as well as the early symphonies of Sibelius. Rolf first met Sibelius's music in 1907, when he heard his Third Symphony. Although later he met Hilbert, Einstein, Thomas Mann and other famous people, Rolf said that none had had such a strong effect on him as Sibelius. The boys played trios with their mother and their love of music – in particular of chamber music – lasted all their lives. Nevanlinna then progressed onto the Helsinki High School, where his main interests were classics and mathematics. He was taught by a number of teachers during this time but the best of them all was his own father, who taught him physics and mathematics. He graduated in 1913 having performed very well, although he was not the top student of his year. He then went beyond the school syllabus in the summer of 1913 when he read Ernst Leonard Lindelöf's Introduction to Higher Analysis; from that time on, Nevanlinna had an enthusiastic interest in mathematical analysis. (Lindelöf was also a cousin of Nevanlinna's father, and so a part of the Neovius-Nevanlinna mathematical family.) Nevanlinna began his studies at the University of Helsinki in 1913, and received his Master of Philosophy in mathematics in 1917. Lindelöf taught at the university and Nevanlinna was further influenced by him. During his time at the University of Helsinki, World War I was underway and Nevanlinna wanted to join the 27th Jäger Battalion, but his parents convinced him to continue with his studies. He did however join the White Guard in the Finnish Civil War, but did not see active military action. 
In 1919, Nevanlinna presented his thesis, entitled Über beschränkte Funktionen die in gegebenen Punkten vorgeschriebene Werte annehmen ("On bounded functions that assume prescribed values at given points"), to Lindelöf, his doctoral advisor. The thesis, which was on complex analysis, was of high quality and Nevanlinna was awarded his Doctor of Philosophy on 2 June 1919. == Career == When Nevanlinna earned his doctorate in 1919, there were no university posts available, so he became a school teacher. His brother, Frithiof, had received his doctorate in 1918 but likewise was unable to take up a post at a university, and instead began working as a mathematician for an insurance company. Frithiof recruited Rolf to the company, and Nevanlinna worked for the company and as a school teacher until he was appointed a Docent of Mathematics at the University of Helsinki in 1922. During this time, he had been contacted by Edmund Landau and asked to move to Germany to work at the University of Göttingen, but did not accept. After his appointment as Docent of Mathematics, he gave up his insurance job but did not resign his position as school teacher until he received a newly created full professorship at the university in 1926. Despite this heavy workload, it was between 1922 and 1925 that he developed what would come to be known as Nevanlinna theory. From 1947 Nevanlinna had a chair at the University of Zurich, which he held on a half-time basis after receiving in 1948 a permanent position as one of the 12 salaried Academicians in the newly created Academy of Finland. Rolf Nevanlinna's most important mathematical achievement is the value distribution theory of meromorphic functions. The roots of the theory go back to the result of Émile Picard in 1879, showing that a non-constant complex-valued function which is analytic in the entire complex plane assumes all complex values save at most one. 
In the early 1920s Rolf Nevanlinna, partly in collaboration with his brother Frithiof, extended the theory to cover meromorphic functions, i.e. functions analytic in the plane except for isolated points in which the Laurent series of the function has a finite number of terms with a negative power of the variable. Nevanlinna's value distribution theory, or Nevanlinna theory, is crystallised in its two Main Theorems. Qualitatively, the first one states that if a value is assumed less frequently than average, then the function comes close to that value more often than average. The Second Main Theorem, more difficult than the first one, states roughly that there are relatively few values which the function assumes less often than average. Rolf Nevanlinna's article Zur Theorie der meromorphen Funktionen, which contains the Main Theorems, was published in 1925 in the journal Acta Mathematica. Hermann Weyl has called it "one of the few great mathematical events of the [twentieth] century." Nevanlinna gave a fuller account of the theory in the monographs Le théorème de Picard–Borel et la théorie des fonctions méromorphes (1929) and Eindeutige analytische Funktionen (1936). Nevanlinna theory also touches on a class of functions called the Nevanlinna class, or functions of "bounded type". When the Winter War broke out (1939), Nevanlinna was invited to join the Finnish Army's Ballistics Office to assist in improving artillery firing tables. These tables had been based on a calculation technique developed by General Vilho Petter Nenonen, but Nevanlinna now came up with a new method which made them considerably faster to compile. In recognition of his work he was awarded the Order of the Cross of Liberty, Second Class, and throughout his life he held this honour in especial esteem. 
Among Rolf Nevanlinna's later interests in mathematics were the theory of Riemann surfaces (the monograph Uniformisierung in 1953) and functional analysis (Absolute analysis in 1959, written in collaboration with his brother Frithiof). Nevanlinna also published in Finnish a book on the foundations of geometry and a semipopular account of the Theory of Relativity. His Finnish textbook on the elements of complex analysis, Funktioteoria (1963), written together with Veikko Paatero, has appeared in German, English and Russian translations. Rolf Nevanlinna supervised at least 28 doctoral theses. His first and most famous doctoral student was Lars Ahlfors, one of the first two Fields Medal recipients. The research for which Ahlfors was awarded the prize (proving the Denjoy Conjecture, now known as the Denjoy–Carleman–Ahlfors theorem) was strongly based on Nevanlinna's work. Nevanlinna's work was recognised with honorary degrees from the universities of Heidelberg, Bucharest, Giessen, Glasgow, Uppsala, Istanbul and Jyväskylä, and from the Free University of Berlin. He was an honorary member of several learned societies, among them the London Mathematical Society and the Hungarian Academy of Sciences. The main-belt asteroid 1679 Nevanlinna is named after him. == Administrative activities == From 1954, Rolf Nevanlinna chaired the committee which initiated the first computer project in Finland. Rolf Nevanlinna served as President of the International Mathematical Union (IMU) from 1959 to 1963 and as President of the International Congress of Mathematicians (ICM) in 1962. In 1964, Nevanlinna's connections with President Urho Kekkonen were instrumental in bringing about a total reorganization of the Academy of Finland. From 1965 to 1970 Nevanlinna was Chancellor of the University of Turku. 
== Political activities == Although Nevanlinna did not participate actively in politics, he was known to sympathise with the right-wing Patriotic People's Movement and, partly because of his half-German parentage, was also sympathetic towards Nazi Germany; with many mathematics professors fired in the 1930s due to the Nuremberg Laws, mathematicians sympathetic to the Nazi policies were sought as replacements, and Nevanlinna accepted a position as professor at the University of Göttingen in 1936 and 1937. His sympathy towards the Nazis led to his removal from his position as Rector of the University of Helsinki after Finland made peace with the Soviet Union in 1944. In the spring of 1941, Finland contributed a Volunteer Battalion to the Waffen-SS. In 1942, a committee was established for the Volunteer Battalion to take care of the battalion's somewhat strained relations with its German commanders, and Nevanlinna was chosen to be the chairman of the committee, as he was a person respected in Germany but loyal to Finland. He stated in his autobiography that he accepted this role due to a "sense of duty". Nevanlinna's collaboration with Nazi Germany did not prevent mathematical contacts with Allied countries; after World War II, the Soviet mathematical community was isolated from the Western mathematical community, and the International Colloquium on Function Theory in Helsinki in 1957, directed by Nevanlinna, was one of the first post-war occasions when Soviet mathematicians could contact their Western colleagues in person. In 1965, Nevanlinna was an honorary guest at a function theory congress in Soviet Armenia. == IMU Abacus Medal (formerly Nevanlinna Prize) == When the IMU decided in 1981 to create a prize in theoretical computer science, similar to the Fields Medal, and funding for the prize was secured from Finland, the Union decided to give Nevanlinna's name to the prize; the Rolf Nevanlinna Prize was awarded every four years at the ICM. 
In 2018, the General Assembly of the IMU approved a resolution to remove Nevanlinna's name from the prize. Starting in 2022 the prize has been called the IMU Abacus Medal. == See also == Harmonic measure Nevanlinna theory Nevanlinna class (functions of bounded type) Nevanlinna function Nevanlinna invariant Nevanlinna–Pick interpolation Nevanlinna's criterion Nevanlinna Prize == References == == Sources == Lehto, Olli (2008). Erhabene Welten: Das Leben Rolf Nevanlinnas [High Worlds: The life of Rolf Nevanlinna] (in German). Translated by Manfred Stern. Birkhäuser. ISBN 978-3-7643-7701-4. == External links == Media related to Rolf Nevanlinna at Wikimedia Commons Rolf Nevanlinna at the Mathematics Genealogy Project Nevanlinna, Rolf. National Biography of Finland.
|
Wikipedia:Fritz Carlson#0
|
Fritz David Carlson (23 July 1888 – 28 November 1952) was a Swedish mathematician whose work on analytic functions and geometry left a lasting mark on twentieth-century mathematics. After the death of Torsten Carleman, he headed the Mittag-Leffler Institute. == Life and career == Born in Vimmerby on 23 July 1888, Fritz David Carlson completed his secondary schooling at Linköping in 1907 and went on to earn his doctorate at Uppsala University in 1914 with a thesis on a class of Taylor series whose coefficients vary analytically with the index. He was appointed professor of descriptive geometry at the Royal Institute of Technology in Stockholm in 1920 and in 1928 took up the chair of higher analysis at the Stockholm College of Advanced Studies. From 1930 he served on the editorial board of Acta Mathematica, and after the death of Torsten Carleman in early 1949 he was entrusted with the administration of the Mittag-Leffler Institute at Djursholm. Carlson's research ranged from the arithmetic properties of power series to Dirichlet series (an infinite series with applications in number theory) and the Riemann zeta function (a function closely tied to the distribution of prime numbers), yielding theorems that remain standard references. He also authored a three-volume Swedish textbook series on elementary and spatial geometry (published 1943–48) and for thirty years acted as examiner for the Swedish secondary-school baccalaureate examination. Hans Rådström, Germund Dahlquist, and Tord Ganelius were among his students. Carlson's contributions to analysis include Carlson's theorem, the Pólya–Carlson theorem on rational functions, and Carlson's inequality: ( ∑ n = 1 ∞ | a n | ) 4 ≤ π 2 ∑ n = 1 ∞ | a n | 2 ∑ n = 1 ∞ n 2 | a n | 2 . {\displaystyle \left(\sum _{n=1}^{\infty }|a_{n}|\right)^{4}\leq \pi ^{2}\sum _{n=1}^{\infty }|a_{n}|^{2}\,\sum _{n=1}^{\infty }n^{2}|a_{n}|^{2}~.} == References ==
|
Wikipedia:Fritz Gesztesy#0
|
Friedrich "Fritz" Gesztesy (born 5 November 1953 in Austria) is an Austrian-American mathematical physicist and Professor of Mathematics at Baylor University, known for his contributions to spectral theory, functional analysis, nonrelativistic quantum mechanics (particularly, Schrödinger operators), ordinary and partial differential operators, and completely integrable systems (soliton equations). He has authored more than 300 publications on mathematics and physics. == Career == After studying physics at the University of Graz, he continued with his PhD in theoretical physics. His 1976 dissertation, supervised by Heimo Latal and Ludwig Streit, was titled Renormalization, Nelson's symmetry and energy densities in a field theory with quadratic interaction. After working at the Institute for Theoretical Physics of the University of Graz (1977–82) and several stays abroad at Bielefeld University (Alexander von Humboldt Scholarship 1980–81 and 1983–84) and at the California Institute of Technology (Max Kade Scholarship 1987–88), he was appointed Professor at the University of Missouri in 1988 and Houchins Distinguished Professor there in 2002. In 2016 he joined the faculty of Baylor University as Storm Professor of Mathematics. In 1983 he received the Austrian Theodor Körner Award in Natural Sciences, and in 1987 the Ludwig Boltzmann Prize of the Austrian Physical Society. In 2002 he was elected to the Royal Norwegian Society of Sciences and Letters. In 2013 he became a Fellow of the American Mathematical Society. In 2022 he received an honorary doctorate from the Graz University of Technology. Among his students are Gerald Teschl, Karl Unterkofler, Selim Sukhtaiev, and Maxim Zinchenko. == Selected publications == with Sergio Albeverio, Raphael Høegh-Krohn and Helge Holden: "Solvable Models in Quantum Mechanics", 2nd edition, AMS-Chelsea Series, Amer. Math. 
Soc., 2005 with Helge Holden: Soliton Equations and their Algebro-Geometric Solutions, Vol. 1 (1+1 dimensional continuous models), Cambridge Studies in Advanced Mathematics Vol. 79, Cambridge University Press 2003 with Helge Holden, Johanna Michor, and Gerald Teschl: Soliton Equations and their Algebro-Geometric Solutions, Vol. 2 (1+1 dimensional discrete models), Cambridge Studies in Advanced Mathematics Vol. 114, Cambridge University Press 2008 with Barry Simon, The xi function, Acta Math. 176 (1996), 49–71 with Rudi Weikard, Picard potentials and Hill's equation on a torus, Acta Math. 176 (1996), 73–107 with Rudi Weikard, A characterization of all elliptic algebro-geometric solutions of the AKNS hierarchy, Acta Math. 181 (1998), 63–108 with Barry Simon, A new approach to inverse spectral theory. II. General real potentials and the connection to the spectral measure, Ann. of Math. (2) 152 (2000), 593–643 == Literature == Spectral Analysis, Differential Equations and Mathematical Physics: A Festschrift in Honor of Fritz Gesztesy's 60th Birthday, H. Holden, B. Simon and G. Teschl (eds), Proceedings of Symposia in Pure Mathematics 87, Amer. Math. Soc., 2013 (Preface) == References == == External links == Official Webpage
|
Wikipedia:Frobenius determinant theorem#0
|
In mathematics, the Frobenius determinant theorem was a conjecture made in 1896 by the mathematician Richard Dedekind, who wrote a letter to F. G. Frobenius about it (reproduced in (Dedekind 1968), with an English translation in (Curtis 2003, p. 51)). If one takes the multiplication table of a finite group G and replaces each entry g with the variable xg, and subsequently takes the determinant, then the determinant factors as a product of n irreducible polynomials, where n is the number of conjugacy classes. Moreover, each polynomial is raised to a power equal to its degree. Frobenius proved this surprising conjecture, and it became known as the Frobenius determinant theorem. == Formal statement == Let a finite group G {\displaystyle G} have elements g 1 , g 2 , … , g n {\displaystyle g_{1},g_{2},\dots ,g_{n}} , and let x g i {\displaystyle x_{g_{i}}} be associated with each element of G {\displaystyle G} . Define the matrix X G {\displaystyle X_{G}} with entries a i j = x g i g j {\displaystyle a_{ij}=x_{g_{i}g_{j}}} . Then det X G = ∏ j = 1 r P j ( x g 1 , x g 2 , … , x g n ) deg P j {\displaystyle \det X_{G}=\prod _{j=1}^{r}P_{j}(x_{g_{1}},x_{g_{2}},\dots ,x_{g_{n}})^{\deg P_{j}}} where the P j {\displaystyle P_{j}} 's are pairwise non-proportional irreducible polynomials and r {\displaystyle r} is the number of conjugacy classes of G. == References == Curtis, Charles W. (2003), Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer, History of Mathematics, Providence, R.I.: American Mathematical Society, doi:10.1090/S0273-0979-00-00867-3, ISBN 978-0-8218-2677-5, MR 1715145 Dedekind, Richard (1968) [1931], Fricke, Robert; Noether, Emmy; Ore, Øystein (eds.), Gesammelte mathematische Werke. Bände I–III, New York: Chelsea Publishing Co., JFM 56.0024.05, MR 0237282 Etingof, Pavel (2005). "Lectures on Representation Theory" (PDF). Frobenius, Ferdinand Georg (1968), Serre, J.-P. (ed.), Gesammelte Abhandlungen. 
Bände I, II, III, Berlin, New York: Springer-Verlag, ISBN 978-3-540-04120-7, MR 0235974
|
Wikipedia:Frobenius formula#0
|
In mathematics, specifically in representation theory, the Frobenius formula, introduced by G. Frobenius, computes the characters of irreducible representations of the symmetric group Sn. Among other applications, the formula can be used to derive the hook length formula. == Statement == Let χ λ {\displaystyle \chi _{\lambda }} be the character of an irreducible representation of the symmetric group S n {\displaystyle S_{n}} corresponding to a partition λ {\displaystyle \lambda } of n: n = λ 1 + ⋯ + λ k {\displaystyle n=\lambda _{1}+\cdots +\lambda _{k}} and ℓ j = λ j + k − j {\displaystyle \ell _{j}=\lambda _{j}+k-j} . For each partition μ {\displaystyle \mu } of n, let C ( μ ) {\displaystyle C(\mu )} denote the conjugacy class in S n {\displaystyle S_{n}} corresponding to it (cf. the example below), and let i j {\displaystyle i_{j}} denote the number of times j appears in μ {\displaystyle \mu } (so ∑ j i j j = n {\displaystyle \sum _{j}i_{j}j=n} ). Then the Frobenius formula states that the constant value of χ λ {\displaystyle \chi _{\lambda }} on C ( μ ) , {\displaystyle C(\mu ),} χ λ ( C ( μ ) ) , {\displaystyle \chi _{\lambda }(C(\mu )),} is the coefficient of the monomial x 1 ℓ 1 … x k ℓ k {\displaystyle x_{1}^{\ell _{1}}\dots x_{k}^{\ell _{k}}} in the homogeneous polynomial in k {\displaystyle k} variables ∏ i < j k ( x i − x j ) ∏ j P j ( x 1 , … , x k ) i j , {\displaystyle \prod _{i<j}^{k}(x_{i}-x_{j})\;\prod _{j}P_{j}(x_{1},\dots ,x_{k})^{i_{j}},} where P j ( x 1 , … , x k ) = x 1 j + ⋯ + x k j {\displaystyle P_{j}(x_{1},\dots ,x_{k})=x_{1}^{j}+\dots +x_{k}^{j}} is the j {\displaystyle j} -th power sum. Example: Take n = 4 {\displaystyle n=4} . Let λ : 4 = 2 + 2 = λ 1 + λ 2 {\displaystyle \lambda :4=2+2=\lambda _{1}+\lambda _{2}} and hence k = 2 {\displaystyle k=2} , ℓ 1 = 3 {\displaystyle \ell _{1}=3} , ℓ 2 = 2 {\displaystyle \ell _{2}=2} . 
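The coefficient extraction in the statement can be mechanized. The following SymPy sketch (the helper name frobenius_char is our own, not a library routine) computes χλ(C(μ)) directly from the formula; its outputs agree with the hand computations worked out below.

```python
import sympy as sp

def frobenius_char(lam, mu):
    # Character value chi_lambda(C(mu)) as the coefficient of x1^l1 ... xk^lk
    # in the product of the Vandermonde factor and the power sums P_j.
    k = len(lam)
    xs = sp.symbols(f'x1:{k + 1}')
    ell = [lam[j] + k - (j + 1) for j in range(k)]   # l_j = lambda_j + k - j
    vandermonde = sp.Mul(*[xs[i] - xs[j]
                           for i in range(k) for j in range(i + 1, k)])
    power_sums = sp.Mul(*[sum(x**j for x in xs) for j in mu])  # one P_j per part
    poly = sp.Poly(sp.expand(vandermonde * power_sums), *xs)
    return poly.coeff_monomial(sp.Mul(*[x**e for x, e in zip(xs, ell)]))

print(frobenius_char([2, 2], [1, 1, 1, 1]))  # 2
print(frobenius_char([2, 2], [3, 1]))        # -1
```

The two printed values are exactly the character values of the partition λ = (2, 2) on the classes μ = (1, 1, 1, 1) and μ = (3, 1) computed by hand in the example.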
If μ : 4 = 1 + 1 + 1 + 1 {\displaystyle \mu :4=1+1+1+1} ( i 1 = 4 {\displaystyle i_{1}=4} ), which corresponds to the class of the identity element, then χ λ ( C ( μ ) ) {\displaystyle \chi _{\lambda }(C(\mu ))} is the coefficient of x 1 3 x 2 2 {\displaystyle x_{1}^{3}x_{2}^{2}} in ( x 1 − x 2 ) P 1 ( x 1 , x 2 ) 4 = ( x 1 − x 2 ) ( x 1 + x 2 ) 4 {\displaystyle (x_{1}-x_{2})P_{1}(x_{1},x_{2})^{4}=(x_{1}-x_{2})(x_{1}+x_{2})^{4}} which is 2. Similarly, if μ : 4 = 3 + 1 {\displaystyle \mu :4=3+1} (the class of a 3-cycle times a 1-cycle) and i 1 = i 3 = 1 {\displaystyle i_{1}=i_{3}=1} , then χ λ ( C ( μ ) ) {\displaystyle \chi _{\lambda }(C(\mu ))} , given by ( x 1 − x 2 ) P 1 ( x 1 , x 2 ) P 3 ( x 1 , x 2 ) = ( x 1 − x 2 ) ( x 1 + x 2 ) ( x 1 3 + x 2 3 ) , {\displaystyle (x_{1}-x_{2})P_{1}(x_{1},x_{2})P_{3}(x_{1},x_{2})=(x_{1}-x_{2})(x_{1}+x_{2})(x_{1}^{3}+x_{2}^{3}),} is −1. For the identity representation, k = 1 {\displaystyle k=1} and λ 1 = n = ℓ 1 {\displaystyle \lambda _{1}=n=\ell _{1}} . The character χ λ ( C ( μ ) ) {\displaystyle \chi _{\lambda }(C(\mu ))} will be equal to the coefficient of x 1 n {\displaystyle x_{1}^{n}} in ∏ j P j ( x 1 ) i j = ∏ j x 1 i j j = x 1 ∑ j i j j = x 1 n {\displaystyle \prod _{j}P_{j}(x_{1})^{i_{j}}=\prod _{j}x_{1}^{i_{j}j}=x_{1}^{\sum _{j}i_{j}j}=x_{1}^{n}} , which is 1 for any μ {\displaystyle \mu } as expected. == Analogues == Arun Ram gives a q-analog of the Frobenius formula. == See also == Representation theory of symmetric groups == References == Ram, Arun (1991). "A Frobenius formula for the characters of the Hecke algebras". Inventiones Mathematicae. 106 (1): 461–488. Bibcode:1991InMat.106..461R. doi:10.1007/BF01243921. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Macdonald, I. G. 
Symmetric functions and Hall polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp. ISBN 0-19-853489-2 MR1354144
|
Wikipedia:Frobenius normal form#0
|
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius. Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant. == Motivation == When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. 
For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, it has various drawbacks as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks. But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images as long as they are linearly independent. 
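The basis-building step just described (take v and its successive images until they become linearly dependent) can be sketched in a few lines of SymPy; the function name cyclic_basis is our own, an illustrative sketch rather than a library routine.

```python
import sympy as sp

def cyclic_basis(A, v):
    # Return [v, Av, A^2 v, ...], stopping as soon as the next image
    # already lies in the span of the previous ones.
    basis = [v]
    while True:
        w = A * basis[-1]
        M = sp.Matrix.hstack(*basis, w)
        if M.rank() == len(basis):   # w is dependent on the collected vectors
            return basis
        basis.append(w)

A = sp.Matrix([[0, -1], [1, 0]])     # rotation by 90 degrees, over Q
v = sp.Matrix([1, 0])
print(len(cyclic_basis(A, v)))       # 2: v and Av already span Q^2
```

Since the rank computation is exact over the rationals, the stopping test is reliable; for this rotation matrix the cyclic subspace generated by v is all of Q², with minimal polynomial x² + 1.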
The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, a notion analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace. A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However, it is possible that cyclic subspaces allow a decomposition as a direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden, to exclude trivial cyclic subspaces). The resulting polynomials are called the invariant factors of (the K[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form. == Example == Consider the following matrix A, over Q: A = ( − 1 3 − 1 0 − 2 0 0 − 2 − 1 − 1 1 1 − 2 − 1 0 − 1 − 2 − 6 4 3 − 8 − 4 − 2 1 − 1 8 − 3 − 1 5 2 3 − 3 0 0 0 0 0 0 0 1 0 0 0 0 − 1 0 0 0 1 0 0 0 2 0 0 0 0 0 0 0 4 0 1 0 ) . 
{\displaystyle \scriptstyle A={\begin{pmatrix}-1&3&-1&0&-2&0&0&-2\\-1&-1&1&1&-2&-1&0&-1\\-2&-6&4&3&-8&-4&-2&1\\-1&8&-3&-1&5&2&3&-3\\0&0&0&0&0&0&0&1\\0&0&0&0&-1&0&0&0\\1&0&0&0&2&0&0&0\\0&0&0&0&4&0&1&0\end{pmatrix}}.} A has minimal polynomial μ = X 6 − 4 X 4 − 2 X 3 + 4 X 2 + 4 X + 1 {\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1} , so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is χ = X 8 − X 7 − 5 X 6 + 2 X 5 + 10 X 4 + 2 X 3 − 7 X 2 − 5 X − 1 {\displaystyle \chi =X^{8}-X^{7}-5X^{6}+2X^{5}+10X^{4}+2X^{3}-7X^{2}-5X-1} , which is a multiple of the minimal polynomial by a factor X 2 − X − 1 {\displaystyle X^{2}-X-1} . There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vector e 1 {\displaystyle e_{1}} does so: the vectors A k ( e 1 ) {\displaystyle A^{k}(e_{1})} for k = 0 , 1 , … , 5 {\displaystyle k=0,1,\ldots ,5} are linearly independent and span a cyclic subspace with minimal polynomial μ {\displaystyle \mu } . There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectors v = ( 3 , 4 , 8 , 0 , − 1 , 0 , 2 , − 1 ) ⊤ {\displaystyle v=(3,4,8,0,-1,0,2,-1)^{\top }} and w = ( 5 , 4 , 5 , 9 , − 1 , 1 , 1 , − 2 ) ⊤ {\displaystyle w=(5,4,5,9,-1,1,1,-2)^{\top }} is an example. In fact one has A ⋅ v = w {\displaystyle A\cdot v=w} , so the complementary subspace is a cyclic subspace generated by v {\displaystyle v} ; it has minimal polynomial X 2 − X − 1 {\displaystyle X^{2}-X-1} . 
Since μ {\displaystyle \mu } is the minimal polynomial of the whole space, it is clear that X 2 − X − 1 {\displaystyle X^{2}-X-1} must divide μ {\displaystyle \mu } (and it is easily checked that it does), and we have found the invariant factors X 2 − X − 1 {\displaystyle X^{2}-X-1} and μ = X 6 − 4 X 4 − 2 X 3 + 4 X 2 + 4 X + 1 {\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1} of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely C = ( 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 − 1 0 0 1 0 0 0 0 − 4 0 0 0 1 0 0 0 − 4 0 0 0 0 1 0 0 2 0 0 0 0 0 1 0 4 0 0 0 0 0 0 1 0 ) . {\displaystyle \scriptstyle C=\left({\begin{array}{cc|cccccc}0&1&0&0&0&0&0&0\\1&1&0&0&0&0&0&0\\\hline 0&0&0&0&0&0&0&-1\\0&0&1&0&0&0&0&-4\\0&0&0&1&0&0&0&-4\\0&0&0&0&1&0&0&2\\0&0&0&0&0&1&0&4\\0&0&0&0&0&0&1&0\end{array}}\right).} A basis on which this form is attained is formed by the vectors v , w {\displaystyle v,w} above, followed by A k ( e 1 ) {\displaystyle A^{k}(e_{1})} for k = 0 , 1 , … , 5 {\displaystyle k=0,1,\ldots ,5} ; explicitly this means that for P = ( 3 5 1 − 1 0 0 − 4 0 4 4 0 − 1 − 1 − 2 − 3 − 5 8 5 0 − 2 − 5 − 2 − 11 − 6 0 9 0 − 1 3 − 2 0 0 − 1 − 1 0 0 0 1 − 1 4 0 1 0 0 0 0 − 1 1 2 1 0 1 − 1 0 2 − 6 − 1 − 2 0 0 1 − 1 4 − 2 ) {\displaystyle \scriptstyle P={\begin{pmatrix}3&5&1&-1&0&0&-4&0\\4&4&0&-1&-1&-2&-3&-5\\8&5&0&-2&-5&-2&-11&-6\\0&9&0&-1&3&-2&0&0\\-1&-1&0&0&0&1&-1&4\\0&1&0&0&0&0&-1&1\\2&1&0&1&-1&0&2&-6\\-1&-2&0&0&1&-1&4&-2\end{pmatrix}}} , one has A = P C P − 1 . {\displaystyle A=PCP^{-1}.} == General case and theory == Fix a base field F and a finite-dimensional vector space V over F. Given a polynomial P ∈ F[X], there is associated to it a companion matrix CP whose characteristic polynomial and minimal polynomial are both equal to P. Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. 
Then V (viewed as an F[X]-module with the action of X given by A) admits a F[X]-module isomorphism V ≅ F[X]/f1 ⊕ … ⊕ F[X]/fk where the fi ∈ F[X] may be taken to be monic polynomials of positive degree (so they are non-units in F[X]) that satisfy the relations f1 | f2 | … | fk (where "a | b" is notation for "a divides b"); with these conditions the list of polynomials fi is unique. Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[X]-module. The structure theorem provides a decomposition into cyclic factors, each of which is a quotient of F[X] by a proper ideal; the zero ideal cannot be present since the resulting free module would be infinite-dimensional as F vector space, while V is finite-dimensional. For the polynomials fi one then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for the fi. See [DF] for details. Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form do not exist over F[X], so the invariant factors fi as given above must be used instead. The last of these factors fk is then the minimal polynomial, which all the invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially the Cayley-Hamilton theorem), and that every irreducible factor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity). For each invariant factor fi one takes its companion matrix Cfi, and the block diagonal matrix formed from these blocks yields the rational canonical form of A. 
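As a concrete check, the 8×8 worked example above can be verified mechanically. The SymPy sketch below re-enters the matrices A, C and P from that example and confirms AP = PC, which for invertible P is equivalent to A = PCP^(-1).

```python
import sympy as sp

# The matrix A from the worked example above.
A = sp.Matrix([
    [-1,  3, -1,  0, -2,  0,  0, -2],
    [-1, -1,  1,  1, -2, -1,  0, -1],
    [-2, -6,  4,  3, -8, -4, -2,  1],
    [-1,  8, -3, -1,  5,  2,  3, -3],
    [ 0,  0,  0,  0,  0,  0,  0,  1],
    [ 0,  0,  0,  0, -1,  0,  0,  0],
    [ 1,  0,  0,  0,  2,  0,  0,  0],
    [ 0,  0,  0,  0,  4,  0,  1,  0]])
# Rational canonical form: block diagonal of the companion matrices of the
# invariant factors x^2 - x - 1 and x^6 - 4x^4 - 2x^3 + 4x^2 + 4x + 1.
C = sp.Matrix([
    [0, 1, 0, 0, 0, 0, 0,  0],
    [1, 1, 0, 0, 0, 0, 0,  0],
    [0, 0, 0, 0, 0, 0, 0, -1],
    [0, 0, 1, 0, 0, 0, 0, -4],
    [0, 0, 0, 1, 0, 0, 0, -4],
    [0, 0, 0, 0, 1, 0, 0,  2],
    [0, 0, 0, 0, 0, 1, 0,  4],
    [0, 0, 0, 0, 0, 0, 1,  0]])
# Change-of-basis matrix P from the example.
P = sp.Matrix([
    [ 3,  5, 1, -1,  0,  0,  -4,  0],
    [ 4,  4, 0, -1, -1, -2,  -3, -5],
    [ 8,  5, 0, -2, -5, -2, -11, -6],
    [ 0,  9, 0, -1,  3, -2,   0,  0],
    [-1, -1, 0,  0,  0,  1,  -1,  4],
    [ 0,  1, 0,  0,  0,  0,  -1,  1],
    [ 2,  1, 0,  1, -1,  0,   2, -6],
    [-1, -2, 0,  0,  1, -1,   4, -2]])
assert P.det() != 0
assert A * P == P * C   # equivalent to A == P * C * P**-1
```

Checking AP = PC instead of forming P^(-1) explicitly avoids an unnecessary (and slower) exact matrix inversion.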
When the minimal polynomial is identical to the characteristic polynomial (the case k = 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form. == A rational normal form generalizing the Jordan normal form == The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial. There is another way to define a normal form, that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form is sometimes called the generalized Jordan normal form, or primary rational canonical form. 
It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the lemme des noyaux, the kernel lemma), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[x]-modules (as is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form called a generalized Jordan block in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form ( C 0 ⋯ 0 U C ⋯ 0 ⋮ ⋱ ⋱ ⋮ 0 ⋯ U C ) {\displaystyle \scriptstyle {\begin{pmatrix}C&0&\cdots &0\\U&C&\cdots &0\\\vdots &\ddots &\ddots &\vdots \\0&\cdots &U&C\end{pmatrix}}} where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right-hand corner. For the case of a linear irreducible factor P = x − λ, these blocks are reduced to single entries C = λ and U = 1, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. 
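The block structure just described can be assembled mechanically. In the SymPy sketch below (the helper names companion and generalized_jordan_block are ours), we build the generalized Jordan block for the irreducible polynomial P = x² + 1 over Q with two diagonal blocks; the assertions check that the characteristic polynomial is P² while P(J) is nonzero with P(J)² = 0, so the minimal polynomial is P² as well, which is precisely the effect of the U blocks.

```python
import sympy as sp

x = sp.symbols('x')

def companion(P):
    # Companion matrix of a monic polynomial: ones on the subdiagonal,
    # negated coefficients c_0..c_{d-1} in the last column.
    c = sp.Poly(P, x).all_coeffs()[:0:-1]   # [c_0, ..., c_{d-1}]
    d = len(c)
    C = sp.zeros(d, d)
    for i in range(1, d):
        C[i, i - 1] = 1
    for i in range(d):
        C[i, d - 1] = -c[i]
    return C

def generalized_jordan_block(P, m):
    # m copies of companion(P) on the diagonal; each U block below the
    # diagonal has a single 1 in its upper right-hand corner.
    d = int(sp.degree(P, x))
    J = sp.zeros(m * d, m * d)
    for b in range(m):
        J[b * d:(b + 1) * d, b * d:(b + 1) * d] = companion(P)
        if b > 0:
            J[b * d, b * d - 1] = 1   # the corner 1 of U, just below the diagonal
    return J

P = x**2 + 1
J = generalized_jordan_block(P, 2)
assert sp.expand(J.charpoly(x).as_expr() - P**2) == 0   # char poly is P^2
assert (J**2 + sp.eye(4)) != sp.zeros(4, 4)             # P(J) != 0 ...
assert (J**2 + sp.eye(4))**2 == sp.zeros(4, 4)          # ... but P(J)^2 = 0
```

Without the U blocks the same block diagonal matrix would satisfy P(J) = 0, i.e. its minimal polynomial would be only P; the corner entries are what raise the multiplicity.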
A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by Pk−1(A) where the minimal polynomial of the cyclic module is Pk), and taking as basis v , A ( v ) , A 2 ( v ) , … , A d − 1 ( v ) , P ( A ) ( v ) , A ( P ( A ) ( v ) ) , … , A d − 1 ( P ( A ) ( v ) ) , P 2 ( A ) ( v ) , … , P k − 1 ( A ) ( v ) , … , A d − 1 ( P k − 1 ( A ) ( v ) ) {\displaystyle v,A(v),A^{2}(v),\ldots ,A^{d-1}(v),~P(A)(v),A(P(A)(v)),\ldots ,A^{d-1}(P(A)(v)),~P^{2}(A)(v),\ldots ,~P^{k-1}(A)(v),\ldots ,A^{d-1}(P^{k-1}(A)(v))} where d = deg P. == See also == Smith normal form == References == [DF] David S. Dummit and Richard M. Foote. Abstract Algebra. 2nd Edition, John Wiley & Sons. pp. 442, 446, 452-458. ISBN 0-471-36857-1. == External links == Rational Canonical Form (Mathworld) === Algorithms === An O(n3) Algorithm for Frobenius Normal Form An Algorithm for the Frobenius Normal Form (pdf) A rational canonical form Algorithm (pdf)
|
Wikipedia:Frobenius reciprocity theorem#0
|
In mathematics, and in particular representation theory, Frobenius reciprocity is a theorem expressing a duality between the process of restricting and inducting. It can be used to leverage knowledge about representations of a subgroup to find and classify representations of "large" groups that contain them. It is named for Ferdinand Georg Frobenius, the inventor of the representation theory of finite groups. == Statement == === Character theory === The theorem was originally stated in terms of character theory. Let G be a finite group with a subgroup H, let Res H G {\displaystyle \operatorname {Res} _{H}^{G}} denote the restriction of a character, or more generally, class function of G to H, and let Ind H G {\displaystyle \operatorname {Ind} _{H}^{G}} denote the induced class function of a given class function on H. For any finite group A, there is an inner product ⟨ − , − ⟩ A {\displaystyle \langle -,-\rangle _{A}} on the vector space of class functions A → C {\displaystyle A\to \mathbb {C} } (described in detail in the article Schur orthogonality relations). Now, for any class functions ψ : H → C {\displaystyle \psi :H\to \mathbb {C} } and φ : G → C {\displaystyle \varphi :G\to \mathbb {C} } , the following equality holds: ⟨ Ind H G ψ , φ ⟩ G = ⟨ ψ , Res H G φ ⟩ H . {\displaystyle \langle \operatorname {Ind} _{H}^{G}\psi ,\varphi \rangle _{G}=\langle \psi ,\operatorname {Res} _{H}^{G}\varphi \rangle _{H}.} In other words, Ind H G {\displaystyle \operatorname {Ind} _{H}^{G}} and Res H G {\displaystyle \operatorname {Res} _{H}^{G}} are Hermitian adjoint. === Module theory === As explained in the section Representation theory of finite groups#Representations, modules and the convolution algebra, the theory of the representations of a group G over a field K is, in a certain sense, equivalent to the theory of modules over the group algebra K[G]. Therefore, there is a corresponding Frobenius reciprocity theorem for K[G]-modules. 
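The character-theoretic statement can be verified numerically on a small example. The sketch below (an illustrative addendum, not from the article) takes G = S₃ and H = A₃ ≅ C₃, ψ a nontrivial character of H, and φ the sign character of G, and checks ⟨Ind ψ, φ⟩_G = ⟨ψ, Res φ⟩_H using the standard induction formula Ind ψ(g) = (1/|H|) Σ_{x ∈ G, x⁻¹gx ∈ H} ψ(x⁻¹gx):

```python
# Illustrative numerical check of Frobenius reciprocity for G = S3, H = A3.
import cmath
from itertools import permutations

G = list(permutations(range(3)))                 # S3 as permutation tuples
def mul(p, q): return tuple(p[q[i]] for i in range(3))   # composition p∘q
def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p): r[pi] = i
    return tuple(r)

c = (1, 2, 0)                                    # a 3-cycle generating A3
H = [(0, 1, 2), c, mul(c, c)]
w = cmath.exp(2j * cmath.pi / 3)
psi = {H[0]: 1, H[1]: w, H[2]: w * w}            # nontrivial character of A3

def sign(p):                                     # phi = sign character of S3 (real-valued)
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]: s = -s
    return s

def induced(g):                                  # (Ind psi)(g) by the induction formula
    return sum(psi[mul(inv(x), mul(g, x))]
               for x in G if mul(inv(x), mul(g, x)) in psi) / len(H)

# inner products <a,b> = (1/|A|) sum a(g) * conj(b(g)); sign is real, so no conj needed
lhs = sum(induced(g) * sign(g) for g in G) / len(G)   # <Ind psi, phi>_G
rhs = sum(psi[h] * sign(h) for h in H) / len(H)       # <psi, Res phi>_H
assert abs(lhs - rhs) < 1e-12
```

Here Ind ψ is the 2-dimensional irreducible character of S₃ (values 2, −1, 0 on the three conjugacy classes), and both sides of the identity come out to 0.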
Let G be a group with subgroup H, let M be an H-module, and let N be a G-module. In the language of module theory, the induced module K [ G ] ⊗ K [ H ] M {\displaystyle K[G]\otimes _{K[H]}M} corresponds to the induced representation Ind H G {\displaystyle \operatorname {Ind} _{H}^{G}} , whereas the restriction of scalars K [ H ] N {\displaystyle {_{K[H]}}N} corresponds to the restriction Res H G {\displaystyle \operatorname {Res} _{H}^{G}} . Accordingly, the statement is as follows: The following sets of module homomorphisms are in bijective correspondence: Hom K [ G ] ( K [ G ] ⊗ K [ H ] M , N ) ≅ Hom K [ H ] ( M , K [ H ] N ) {\displaystyle \operatorname {Hom} _{K[G]}(K[G]\otimes _{K[H]}M,N)\cong \operatorname {Hom} _{K[H]}(M,{_{K[H]}}N)} . As noted below in the section on category theory, this result applies to modules over all rings, not just modules over group algebras. === Category theory === Let G be a group with a subgroup H, and let Res H G , Ind H G {\displaystyle \operatorname {Res} _{H}^{G},\operatorname {Ind} _{H}^{G}} be defined as above. For any group A and field K let Rep A K {\displaystyle {\textbf {Rep}}_{A}^{K}} denote the category of linear representations of A over K. There is a forgetful functor Res H G : Rep G ⟶ Rep H ( V , ρ ) ⟼ Res H G ( V , ρ ) {\displaystyle {\begin{aligned}\operatorname {Res} _{H}^{G}:{\textbf {Rep}}_{G}&\longrightarrow {\textbf {Rep}}_{H}\\(V,\rho )&\longmapsto \operatorname {Res} _{H}^{G}(V,\rho )\end{aligned}}} This functor acts as the identity on morphisms. There is a functor going in the opposite direction: Ind H G : Rep H ⟶ Rep G ( W , τ ) ⟼ Ind H G ( W , τ ) {\displaystyle {\begin{aligned}\operatorname {Ind} _{H}^{G}:{\textbf {Rep}}_{H}&\longrightarrow {\textbf {Rep}}_{G}\\(W,\tau )&\longmapsto \operatorname {Ind} _{H}^{G}(W,\tau )\end{aligned}}} These functors form an adjoint pair Ind H G ⊣ Res H G {\displaystyle \operatorname {Ind} _{H}^{G}\dashv \operatorname {Res} _{H}^{G}} . 
In the case of finite groups, they are actually both left- and right-adjoint to one another. This adjunction gives rise to a universal property for the induced representation (for details, see Induced representation#Properties). In the language of module theory, the corresponding adjunction is an instance of the more general relationship between restriction and extension of scalars. == See also == See Restricted representation and Induced representation for definitions of the processes to which this theorem applies. See Representation theory of finite groups for a broad overview of the subject of group representations. See Selberg trace formula and the Arthur-Selberg trace formula for generalizations to discrete cofinite subgroups of certain locally compact groups. == Notes == == References ==
|
Wikipedia:Frostman lemma#0
|
Frostman's lemma provides a convenient tool for estimating the Hausdorff dimension of sets in mathematics, and more specifically, in the theory of fractal dimensions. == Lemma == Lemma: Let A be a Borel subset of Rn, and let s > 0. Then the following are equivalent: Hs(A) > 0, where Hs denotes the s-dimensional Hausdorff measure. There is an (unsigned) Borel measure μ on Rn satisfying μ(A) > 0, and such that μ ( B ( x , r ) ) ≤ r s {\displaystyle \mu (B(x,r))\leq r^{s}} holds for all x ∈ Rn and r>0. Otto Frostman proved this lemma for closed sets A as part of his PhD dissertation at Lund University in 1935. The generalization to Borel sets is more involved, and requires the theory of Suslin sets. A useful corollary of Frostman's lemma requires the notions of the s-capacity of a Borel set A ⊂ Rn, which is defined by C s ( A ) := sup { ( ∫ A × A d μ ( x ) d μ ( y ) | x − y | s ) − 1 : μ is a Borel measure and μ ( A ) = 1 } . {\displaystyle C_{s}(A):=\sup {\Bigl \{}{\Bigl (}\int _{A\times A}{\frac {d\mu (x)\,d\mu (y)}{|x-y|^{s}}}{\Bigr )}^{-1}:\mu {\text{ is a Borel measure and }}\mu (A)=1{\Bigr \}}.} (Here, we take inf ∅ = ∞ and 1⁄∞ = 0. As before, the measure μ {\displaystyle \mu } is unsigned.) It follows from Frostman's lemma that for Borel A ⊂ Rn d i m H ( A ) = sup { s ≥ 0 : C s ( A ) > 0 } . {\displaystyle \mathrm {dim} _{H}(A)=\sup\{s\geq 0:C_{s}(A)>0\}.} == Web pages == Illustrating Frostman measures == References == == Further reading == Mattila, Pertti (1995), Geometry of sets and measures in Euclidean spaces, Cambridge Studies in Advanced Mathematics, vol. 44, Cambridge University Press, ISBN 978-0-521-65595-8, MR 1333890
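As a numerical illustration of how the lemma bounds Hausdorff dimension from below (an addendum, not from the source): the natural measure on the middle-thirds Cantor set gives every level-n construction interval, of length 3⁻ⁿ, the mass 2⁻ⁿ, and 2⁻ⁿ = (3⁻ⁿ)^s exactly when s = log 2 / log 3. So the Frostman condition μ(B(x, r)) ≤ C·r^s holds at these scales, witnessing dim_H = log 2 / log 3 for the Cantor set.

```python
# Illustrative check of the Cantor-measure scaling mu(interval) = (length)^s
# with s = log 2 / log 3, the Frostman exponent of the middle-thirds Cantor set.
import math

s = math.log(2) / math.log(3)
for n in range(1, 20):
    r = 3.0 ** -n        # length of a level-n construction interval
    mass = 2.0 ** -n     # Cantor measure of that interval
    assert abs(mass - r ** s) < 1e-12 * mass
```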
|
Wikipedia:Frucht's theorem#0
|
Frucht's theorem is a result in algebraic graph theory, conjectured by Dénes Kőnig in 1936 and proved by Robert Frucht in 1939. It states that every finite group is the group of symmetries of a finite undirected graph. More strongly, for any finite group G there exist infinitely many non-isomorphic simple connected graphs such that the automorphism group of each of them is isomorphic to G. == Proof idea == The main idea of the proof is to observe that the Cayley graph of G, with the addition of colors and orientations on its edges to distinguish the generators of G from each other, has the desired automorphism group. Therefore, if each of these edges is replaced by an appropriate subgraph, such that each replacement subgraph is itself asymmetric and two replacements are isomorphic if and only if they replace edges of the same color, then the undirected graph created by performing these replacements will also have G as its symmetry group. == Graph size == With three exceptions – the cyclic groups of orders 3, 4, and 5 – every group can be represented as the symmetries of a graph whose vertices have only two orbits. Therefore, the number of vertices in the graph is at most twice the order of the group. With a larger set of exceptions, most finite groups can be represented as the symmetries of a vertex-transitive graph, with a number of vertices equal to the order of the group. == Special families of graphs == There are stronger versions of Frucht's theorem that show that certain restricted families of graphs still contain enough graphs to realize any symmetry group. Frucht proved that in fact countably many 3-regular graphs with the desired property exist; for instance, the Frucht graph, a 3-regular graph with 12 vertices and 18 edges, has no nontrivial symmetries, providing a realization of this type for the trivial group. 
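The Cayley-graph idea can be seen on the smallest interesting case. As an illustrative sketch (the group Z₄ and the brute-force search are our choices), the directed Cayley graph of Z₄ with generator 1 is a directed 4-cycle, and its automorphism group is exactly Z₄ — the four rotations, since reversals flip edge orientations:

```python
# Illustrative sketch: automorphisms of the Cayley digraph of Z4, by brute force.
from itertools import permutations

n = 4
edges = {(i, (i + 1) % n) for i in range(n)}    # directed Cayley graph of Z4
autos = [p for p in permutations(range(n))
         if all((p[u], p[v]) in edges for (u, v) in edges)]
assert len(autos) == n                          # exactly the 4 rotations
```

Replacing each directed edge by a small asymmetric undirected gadget, as the proof describes, then yields an undirected graph with the same automorphism group.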
Gert Sabidussi showed that any group can be realized as the symmetry groups of countably many distinct k-regular graphs, k-vertex-connected graphs, or k-chromatic graphs, for all positive integer values k (with k ≥ 3 {\displaystyle k\geq 3} for regular graphs and k ≥ 2 {\displaystyle k\geq 2} for k-chromatic graphs). From the facts that every graph can be reconstructed from the containment partial order of its edges and vertices, that every finite partial order is equivalent by Birkhoff's representation theorem to a finite distributive lattice, it follows that every finite group can be realized as the symmetries of a distributive lattice, and of the graph of the lattice, a median graph. It is possible to realize every finite group as the group of symmetries of a strongly regular graph. Every finite group can also be realized as the symmetries of a graph with distinguishing number two: one can (improperly) color the graph with two colors so that none of the symmetries of the graph preserve the coloring. However, some important classes of graphs are incapable of realizing all groups as their symmetries. Camille Jordan characterized the symmetry groups of trees as being the smallest set of finite groups containing the trivial group and closed under direct products with each other and wreath products with symmetric groups; in particular, the cyclic group of order three is not the symmetry group of a tree. Planar graphs are also not capable of realizing all groups as their symmetries; for instance, the only finite simple groups that are symmetries of planar graphs are the cyclic groups and the alternating group A 5 {\displaystyle A_{5}} . More generally, every minor-closed graph family is incapable of representing all finite groups by the symmetries of its graphs. László Babai conjectures, more strongly, that each minor-closed family can represent only finitely many non-cyclic finite simple groups. 
== Infinite graphs and groups == Izbicki extended these results in 1959 and showed that there were uncountably many infinite graphs realizing any finite symmetry group. Finally, Johannes de Groot and Sabidussi in 1959/1960 independently proved that any group (dropping the assumption that the group be finite, but with the assumption of axiom of choice) could be realized as the group of symmetries of an infinite graph. == References == === Sources === Babai, László (1995), "Automorphism groups, isomorphism, reconstruction" (PDF), in Graham, Ronald L.; Grötschel, Martin; Lovász, László (eds.), Handbook of Combinatorics, vol. II, North-Holland, pp. 1447–1540. de Groot, Johannes (1959), "Groups represented by homeomorphism groups", Mathematische Annalen, 138: 80–102, doi:10.1007/BF01369667, hdl:10338.dmlcz/101909, ISSN 0025-5831, MR 0119193. Frucht, Robert (1939), "Herstellung von Graphen mit vorgegebener abstrakter Gruppe.", Compositio Mathematica (in German), 6: 239–250, ISSN 0010-437X, Zbl 0020.07804. Frucht, Robert (1949), "Graphs of degree three with a given abstract group", Canadian Journal of Mathematics, 1 (4): 365–378, doi:10.4153/CJM-1949-033-6, ISSN 0008-414X, MR 0032987. Izbicki, Herbert (1959), "Unendliche Graphen endlichen Grades mit vorgegebenen Eigenschaften", Monatshefte für Mathematik, 63 (3): 298–301, doi:10.1007/BF01295203, MR 0105372. Kőnig, Dénes (1936), Theorie der endlichen und unendlichen Graphen, Leipzig: Akademische Verlagsgesellschaft, p. 5. As cited by Babai (1995). Sabidussi, Gert (1957), "Graphs with given group and given graph-theoretical properties", Canadian Journal of Mathematics, 9: 515–525, doi:10.4153/cjm-1957-060-7, ISSN 0008-414X, MR 0094810. Sabidussi, Gert (1960), "Graphs with given infinite group", Monatshefte für Mathematik, 64: 64–67, doi:10.1007/BF01319053, MR 0115935.
|
Wikipedia:Frédérique Lenger#0
|
Frédérique Papy-Lenger (August 12, 1921 – January 9, 2005) was a Belgian mathematician and mathematics educator active in the New Math movement of the 1960s and 1970s. == Early life and education == Frédérique Lenger was born on August 12, 1921, in Arlon, Belgium, one of three daughters of a lawyer. After studying classics in the Lycée Royal d’Arlon, she studied for a licentiate in mathematics at the Université libre de Bruxelles from 1939 to 1943. The University officially closed in 1941 to prevent its takeover by the German occupation, and her studies continued underground. In 1968, she completed a doctorate with a two-part thesis, one part on mathematics education and the other on geometric transformation groups. == Career == From 1947 to 1950, Lenger taught mathematics at the l’Ecole Decroly, while working as an assistant to mathematician Paul Libois, who suggested that she perform research involving projective geometry and triality. This became a precursor to the work of another student of Libois, Jacques Tits. In 1950, Lenger joined the mathematics faculty of the Lycée Royal d’Arlon; in 1957, she was appointed prefect at Arlon and director of the State Normal School in Arlon. She became a professor of mathematics at the Berkendael State Normal School in Brussels in 1960. In 1961, with several other mathematicians, she became one of the founders of the Centre Belge de Pédagogie de la Mathématique (Belgian Center for the Pedagogy of Mathematics). From 1974 to 1980 she worked in the US, at the Comprehensive School Mathematics Program in St. Louis, Missouri. She returned to Berkendael in 1980. She retired in 1981 but continued to work as a volunteer at the French school in Nivelles until 1992. == Contributions == Lenger began her work on developing a modern school mathematics curriculum in 1958, working with Willy Servais and in consultation with Georges Papy, whom she married in 1960. 
With Madeleine Lepropre, Lenger ran an experimental training program for kindergarten teachers based on the new curriculum in 1958–1959, and was encouraged by the enthusiasm the kindergarten students showed for the material. With Papy, in the mid-1960s, she developed a six-volume high-school mathematics program based on the principles of set theory and abstract algebra. She was an invited plenary speaker at the first International Congress on Mathematical Education, speaking there on the "minicomputer" method for teaching binary number arithmetic to schoolchildren. She became the founding president of the International Research Group in Mathematical Pedagogy in 1971. Her books include L'enfant et les graphes (Didier, 1968), Mathématique moderne (Didier, 1970), Modern mathematics (two vols., Collier, 1968 and 1969), Graph Games (Crowell, 1971), and Graphs and the Child (Harvard University Press, 1979). She also produced many educational booklets through the Belgian Center for the Pedagogy of Mathematics and the Comprehensive School Mathematics Program. == Legacy == The rue Frédérique Lenger in Arlon is named after her. == References ==
|
Wikipedia:Frédérique Oggier#0
|
Frédérique Elise Oggier is a Swiss mathematician and coding theorist who works as an associate professor of physical and mathematical sciences at Nanyang Technological University in Singapore. == Education == After earning bachelor's and master's degrees in mathematics from the University of Geneva, Oggier completed her doctorate at the École Polytechnique Fédérale de Lausanne, in 2005, under the supervision of Eva Bayer-Fluckiger. == Books == Oggier is the author of: Lattices applied to coding for reliable and secure communications (with Costa, Campello, Belfiore, and Viterbo, Springer, 2017) An introduction to central simple algebras and their applications to wireless communication (with Grégory Berhuy, American Mathematical Society, 2013) Coding techniques for repairability in networked distributed storage systems (with Anwitaman Datta, Now Publishers, 2013) Cyclic division algebras: A tool for space-time coding (with Jean-Claude Belfiore and Emanuele Viterbo, Now Publishers, 2007) Algebraic number theory and code design for Rayleigh fading channels (with Emanuele Viterbo, Now Publishers, 2004) == References == == External links == Home page Frédérique Oggier publications indexed by Google Scholar Lecture notes and slides
|
Wikipedia:Fuchs's theorem#0
|
In mathematics, Fuchs's theorem, named after Lazarus Fuchs, states that a second-order differential equation of the form y ″ + p ( x ) y ′ + q ( x ) y = g ( x ) {\displaystyle y''+p(x)y'+q(x)y=g(x)} has a solution expressible by a generalised Frobenius series when p ( x ) {\displaystyle p(x)} , q ( x ) {\displaystyle q(x)} and g ( x ) {\displaystyle g(x)} are analytic at x = a {\displaystyle x=a} or a {\displaystyle a} is a regular singular point. That is, any solution to this second-order differential equation can be written as y = ∑ n = 0 ∞ a n ( x − a ) n + s , a 0 ≠ 0 {\displaystyle y=\sum _{n=0}^{\infty }a_{n}(x-a)^{n+s},\quad a_{0}\neq 0} for some positive real s, or y = y 0 ln ( x − a ) + ∑ n = 0 ∞ b n ( x − a ) n + r , b 0 ≠ 0 {\displaystyle y=y_{0}\ln(x-a)+\sum _{n=0}^{\infty }b_{n}(x-a)^{n+r},\quad b_{0}\neq 0} for some positive real r, where y0 is a solution of the first kind. Its radius of convergence is at least as large as the minimum of the radii of convergence of p ( x ) {\displaystyle p(x)} , q ( x ) {\displaystyle q(x)} and g ( x ) {\displaystyle g(x)} . == See also == Frobenius method == References == Asmar, Nakhlé H. (2005), Partial differential equations with Fourier series and boundary value problems, Upper Saddle River, NJ: Pearson Prentice Hall, ISBN 0-13-148096-0. Butkov, Eugene (1995), Mathematical Physics, Reading, MA: Addison-Wesley, ISBN 0-201-00727-4.
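As a worked illustration of the theorem (an addendum; the specific equation is our choice): Bessel's equation of order 0, x y″ + y′ + x y = 0, has a regular singular point at x = 0. Substituting the Frobenius series y = Σ aₙ x^(n+s) gives the indicial root s = 0 (a double root) and the recurrence aₘ = −aₘ₋₂ / m², which reproduces the classical coefficients a₂ₖ = (−1)ᵏ / (4ᵏ (k!)²) of J₀:

```python
# Illustrative sketch: Frobenius-series coefficients for x y'' + y' + x y = 0,
# computed exactly from the recurrence a_m = -a_{m-2} / m^2 (s = 0, a_1 = 0).
from fractions import Fraction
from math import factorial

a = [Fraction(1), Fraction(0)]        # a_0 = 1; the m = 1 equation forces a_1 = 0
for m in range(2, 11):
    a.append(-a[m - 2] / m ** 2)

for k in range(6):
    expected = Fraction((-1) ** k, 4 ** k * factorial(k) ** 2)
    assert a[2 * k] == expected       # matches the series of the Bessel function J0
```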
|
Wikipedia:Fuchsian group#0
|
In mathematics, a Fuchsian group is a discrete subgroup of PSL(2,R). The group PSL(2,R) can be regarded equivalently as a group of orientation-preserving isometries of the hyperbolic plane, or conformal transformations of the unit disc, or conformal transformations of the upper half plane, so a Fuchsian group can be regarded as a group acting on any of these spaces. There are some variations of the definition: sometimes the Fuchsian group is assumed to be finitely generated, sometimes it is allowed to be a subgroup of PGL(2,R) (so that it contains orientation-reversing elements), and sometimes it is allowed to be a Kleinian group (a discrete subgroup of PSL(2,C)) which is conjugate to a subgroup of PSL(2,R). Fuchsian groups are used to create Fuchsian models of Riemann surfaces. In this case, the group may be called the Fuchsian group of the surface. In some sense, Fuchsian groups do for non-Euclidean geometry what crystallographic groups do for Euclidean geometry. Some Escher graphics are based on them (for the disc model of hyperbolic geometry). General Fuchsian groups were first studied by Henri Poincaré (1882), who was motivated by the paper (Fuchs 1880), and therefore named them after Lazarus Fuchs. == Fuchsian groups on the upper half-plane == Let H = { z ∈ C | Im z > 0 } {\displaystyle H=\{z\in \mathbb {C} |\operatorname {Im} {z}>0\}} be the upper half-plane. Then H {\displaystyle H} is a model of the hyperbolic plane when endowed with the metric d s = 1 y d x 2 + d y 2 . {\displaystyle ds={\frac {1}{y}}{\sqrt {dx^{2}+dy^{2}}}.} The group PSL(2,R) acts on H {\displaystyle H} by linear fractional transformations (also known as Möbius transformations): ( a b c d ) ⋅ z = a z + b c z + d . {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\cdot z={\frac {az+b}{cz+d}}.} This action is faithful, and in fact PSL(2,R) is isomorphic to the group of all orientation-preserving isometries of H {\displaystyle H} . 
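The action just described can be checked directly in code. The sketch below (illustrative; function names are ours) applies the standard generators S = (0, −1; 1, 0) and T = (1, 1; 0, 1) of the modular group PSL(2,Z) and verifies the identity Im(g·z) = Im(z) / |cz + d|², which shows the upper half-plane is preserved:

```python
# Illustrative sketch: the fractional linear action of SL(2,R) on the
# upper half-plane preserves Im z > 0.

def act(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

S = ((0, -1), (1, 0))   # z -> -1/z
T = ((1, 1), (0, 1))    # z -> z + 1

z = 0.3 + 0.7j
for g in (S, T):
    (a, b), (c, d) = g
    w = act(g, z)
    assert abs(w.imag - z.imag / abs(c * z + d) ** 2) < 1e-12
    assert w.imag > 0   # the upper half-plane is preserved
```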
A Fuchsian group Γ {\displaystyle \Gamma } may be defined to be a subgroup of PSL(2,R), which acts discontinuously on H {\displaystyle H} . That is, for every z {\displaystyle z} in H {\displaystyle H} , the orbit Γ z = { γ z : γ ∈ Γ } {\displaystyle \Gamma z=\{\gamma z:\gamma \in \Gamma \}} has no accumulation point in H {\displaystyle H} . An equivalent definition for Γ {\displaystyle \Gamma } to be Fuchsian is that Γ {\displaystyle \Gamma } be a discrete group, which means that: Every sequence γ n {\displaystyle \gamma _{n}} of elements of Γ {\displaystyle \Gamma } converging to the identity in the usual topology of point-wise convergence is eventually constant, i.e. there exists an integer N {\displaystyle N} such that for all n > N {\displaystyle n>N} , γ n = I {\displaystyle \gamma _{n}=I} , where I {\displaystyle I} is the identity matrix. Although discontinuity and discreteness are equivalent in this case, this is not generally true for the case of an arbitrary group of conformal homeomorphisms acting on the full Riemann sphere (as opposed to H {\displaystyle H} ). Indeed, the Fuchsian group PSL(2,Z) is discrete but has accumulation points on the real number line Im z = 0 {\displaystyle \operatorname {Im} z=0} : elements of PSL(2,Z) will carry z = 0 {\displaystyle z=0} to every rational number, and the rationals Q are dense in R. == General definition == A linear fractional transformation defined by a matrix from PSL(2,C) will preserve the Riemann sphere P1(C) = C ∪ ∞, but will send the upper-half plane H to some open disk Δ. Conjugating by such a transformation will send a discrete subgroup of PSL(2,R) to a discrete subgroup of PSL(2,C) preserving Δ. This motivates the following definition of a Fuchsian group. Let Γ ⊂ PSL(2,C) act invariantly on a proper, open disk Δ ⊂ C ∪ ∞, that is, Γ(Δ) = Δ.
Then Γ is Fuchsian if and only if any of the following three equivalent properties hold: Γ is a discrete group (with respect to the standard topology on PSL(2,C)). Γ acts properly discontinuously at each point z ∈ Δ. The set Δ is a subset of the region of discontinuity Ω(Γ) of Γ. That is, any one of these three can serve as a definition of a Fuchsian group, the others following as theorems. The notion of an invariant proper subset Δ is important; the so-called Picard group PSL(2,Z[i]) is discrete but does not preserve any disk in the Riemann sphere. Indeed, even the modular group PSL(2,Z), which is a Fuchsian group, does not act discontinuously on the real number line; it has accumulation points at the rational numbers. Similarly, the idea that Δ is a proper subset of the region of discontinuity is important; when it is not, the subgroup is called a Kleinian group. It is most usual to take the invariant domain Δ to be either the open unit disk or the upper half-plane. == Limit sets == Because of the discrete action, the orbit Γz of a point z in the upper half-plane under the action of Γ has no accumulation points in the upper half-plane. There may, however, be limit points on the real axis. Let Λ(Γ) be the limit set of Γ, that is, the set of limit points of Γz for z ∈ H. Then Λ(Γ) ⊆ R ∪ ∞. The limit set may be empty, or may contain one or two points, or may contain an infinite number. In the latter case, there are two types: A Fuchsian group of the first type is a group for which the limit set is the closed real line R ∪ ∞. This happens if the quotient space H/Γ has finite volume, but there are Fuchsian groups of the first kind of infinite covolume. Otherwise, a Fuchsian group is said to be of the second type. Equivalently, this is a group for which the limit set is a perfect set that is nowhere dense on R ∪ ∞. Since it is nowhere dense, this implies that any limit point is arbitrarily close to an open set that is not in the limit set. 
In other words, the limit set is a Cantor set. The type of a Fuchsian group need not be the same as its type when considered as a Kleinian group: in fact, all Fuchsian groups are Kleinian groups of type 2, as their limit sets (as Kleinian groups) are proper subsets of the Riemann sphere, contained in some circle. == Examples == An example of a Fuchsian group is the modular group, PSL(2,Z). This is the subgroup of PSL(2,R) consisting of linear fractional transformations ( a b c d ) ⋅ z = a z + b c z + d {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\cdot z={\frac {az+b}{cz+d}}} where a, b, c, d are integers. The quotient space H/PSL(2,Z) is the moduli space of elliptic curves. Other Fuchsian groups include the groups Γ(n) for each integer n > 0. Here Γ(n) consists of linear fractional transformations of the above form where the entries of the matrix ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} are congruent to those of the identity matrix modulo n. A co-compact example is the (ordinary, rotational) (2,3,7) triangle group, containing the Fuchsian groups of the Klein quartic and of the Macbeath surface, as well as other Hurwitz groups. More generally, any hyperbolic von Dyck group (the index 2 subgroup of a triangle group, corresponding to orientation-preserving isometries) is a Fuchsian group. All these are Fuchsian groups of the first kind. All hyperbolic and parabolic cyclic subgroups of PSL(2,R) are Fuchsian. Any elliptic cyclic subgroup is Fuchsian if and only if it is finite. Every abelian Fuchsian group is cyclic. No Fuchsian group is isomorphic to Z × Z. Let Γ be a non-abelian Fuchsian group. Then the normalizer of Γ in PSL(2,R) is Fuchsian. == Metric properties == If h is a hyperbolic element, the translation length L of its action in the upper half-plane is related to the trace of h as a 2×2 matrix by the relation | t r h | = 2 cosh L 2 . 
{\displaystyle |\mathrm {tr} \;h|=2\cosh {\frac {L}{2}}.} A similar relation holds for the systole of the corresponding Riemann surface, if the Fuchsian group is torsion-free and co-compact. == See also == Quasi-Fuchsian group Non-Euclidean crystallographic group Schottky group == References == Fuchs, Lazarus (1880), "Ueber eine Klasse von Funktionen mehrerer Variablen, welche durch Umkehrung der Integrale von Lösungen der linearen Differentialgleichungen mit rationalen Coeffizienten entstehen", J. Reine Angew. Math., 89: 151–169 Hershel M. Farkas, Irwin Kra, Theta Constants, Riemann Surfaces and the Modular Group, American Mathematical Society, Providence RI, ISBN 978-0-8218-1392-8 (See section 1.6) Henryk Iwaniec, Spectral Methods of Automorphic Forms, Second Edition, (2002) (Volume 53 in Graduate Studies in Mathematics), America Mathematical Society, Providence, RI ISBN 978-0-8218-3160-1 (See Chapter 2.) Svetlana Katok, Fuchsian Groups (1992), University of Chicago Press, Chicago ISBN 978-0-226-42583-2 David Mumford, Caroline Series, and David Wright, Indra's Pearls: The Vision of Felix Klein, (2002) Cambridge University Press ISBN 978-0-521-35253-6. (Provides an excellent exposition of theory and results, richly illustrated with diagrams.) Peter J. Nicholls, The Ergodic Theory of Discrete Groups, (1989) London Mathematical Society Lecture Note Series 143, Cambridge University Press, Cambridge ISBN 978-0-521-37674-7 Poincaré, Henri (1882), "Théorie des groupes fuchsiens", Acta Mathematica, 1, Springer Netherlands: 1–62, doi:10.1007/BF02592124, ISSN 0001-5962, JFM 14.0338.01 Vinberg, Ernest B. (2001) [1994], "Fuchsian group", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Fuensanta Aroca#0
|
Fuensanta Aroca Bisquert is a Spanish mathematician who works in Mexico as a researcher in the Institute of Mathematics (Oaxaca unit) of the National Autonomous University of Mexico (UNAM). Her mathematical research involves the use of power series to solve differential equations, singularity theory, and tropical geometry. She has also published research on screening for mental health, and has spoken on discrimination and harassment in mathematics. == Education and career == Aroca was an undergraduate at the Autonomous University of Madrid, where she graduated in 1992. She completed a Ph.D. in 2000 at the University of Valladolid; her dissertation, Métodos algebraicos en ecuaciones diferenciales de primer orden en el campo complejo, was supervised by José M. Aroca Hernández Ros. She has been a researcher at the UNAM Institute of Mathematics (Cuernavaca unit) since 2004. == Recognition == Aroca was elected to the Mexican Academy of Sciences in 2022. == References == == External links == Fuensanta Aroca publications indexed by Google Scholar
|
Wikipedia:Function (mathematics)#0
|
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. 
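The notions above — evaluation and the graph as a set of pairs — can be made concrete in a few lines. An illustrative sketch (our naming; f is the article's example f(x) = x² + 1, restricted to a finite domain so the graph is a finite set):

```python
# Illustrative sketch: a function, its evaluation, and its graph as pairs (x, f(x)).
def f(x):
    return x ** 2 + 1

domain = range(-2, 5)
graph = {(x, f(x)) for x in domain}

assert f(4) == 17                # function evaluation at x = 4
assert (4, 17) in graph          # the pair (x, f(x)) lies on the graph
# each domain element appears in exactly one pair of the graph
assert len({x for (x, y) in graph}) == len(graph) == len(domain)
```

Note that f(−2) = f(2) = 5, so distinct domain elements may share a value; what the graph cannot contain is two pairs with the same first coordinate.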
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. 
This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . 
{\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics the term "function" refers to partial functions rather than to ordinary functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself.
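A small Python sketch (illustrative, with hypothetical helper names) of the set-theoretic conditions: a relation R ⊆ X × Y is a partial function when each x is paired with at most one y, and a (total) function when, in addition, every element of X is paired with some y.

```python
# R is given as a set of (x, y) pairs; X and Y are the domain and codomain.
def is_partial_function(R, X, Y):
    if not all(x in X and y in Y for (x, y) in R):
        return False                   # R must lie inside the product X x Y
    xs = [x for (x, _) in R]
    return len(xs) == len(set(xs))     # at most one y per x (uniqueness)

def is_function(R, X, Y):
    # A (total) function additionally assigns some y to every x in X.
    return is_partial_function(R, X, Y) and {x for (x, _) in R} == set(X)

X, Y = {1, 2, 3}, {"a", "b"}
print(is_function({(1, "a"), (2, "b"), (3, "a")}, X, Y))  # True
print(is_function({(1, "a"), (2, "b")}, X, Y))            # False: 3 unassigned
print(is_partial_function({(1, "a"), (1, "b")}, X, Y))    # False: 1 has two images
```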
Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but its multiplicative inverse is not. Similarly, a function of a complex variable is generally a partial function with a domain of definition included in the set C {\displaystyle \mathbb {C} } of the complex numbers. The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples.
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate surface over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) . {\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain. f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. 
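For instance, integer multiplication as a bivariate function can be sketched as follows (a minimal Python illustration, not part of the article):

```python
# Multiplication as a function of two variables: its domain is the set of
# ordered pairs of integers, its codomain the integers.
def mul(x, y):
    return x * y

# Writing mul(2, 3) rather than mul((2, 3)) follows the convention of
# omitting the parentheses around the tuple of arguments.
print(mul(2, 3))   # 6
print(mul(-4, 5))  # -20
```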
=== Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let f ( x ) = sin ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to".
For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } is implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 . {\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. 
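A Python sketch of this partial application (the particular body of f is an illustrative choice, not from the article):

```python
from functools import partial

# A two-variable function f : (x, t) -> x + 10*t (illustrative choice).
def f(x, t):
    return x + 10 * t

t0 = 2
g = lambda x: f(x, t0)    # the map x -> f(x, t0), mirroring arrow notation
h = partial(f, t=t0)      # the same map, built with functools.partial

print(g(3))      # 23
print(h(3))      # 23
print(f(3, t0))  # 23: the value of f at the point (3, t0)
```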
For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . 
There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. 
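The two ways of specifying the example above can be sketched in Python (illustrative only):

```python
# By listing the values: f(1) = 2, f(2) = 3, f(3) = 4.
values = {1: 2, 2: 3, 3: 4}

# By a formula on the same domain: f(n) = n + 1 for n in {1, 2, 3}.
def f(n):
    return n + 1

# Both specifications describe the same function on {1, 2, 3}.
print(all(values[n] == f(n) for n in (1, 2, 3)))  # True
```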
For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). 
For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing selecting such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ± 1, these two values become both equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved in y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible. 
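The two implicit functions defined by the unit circle can be sketched as follows (Python, illustrative):

```python
import math

# The relation x^2 + y^2 = 1 defines two implicit functions on [-1, 1]:
# the upper and the lower semicircle.
def y_upper(x):
    return math.sqrt(1 - x ** 2)

def y_lower(x):
    return -math.sqrt(1 - x ** 2)

x = 0.6
for y in (y_upper(x), y_lower(x)):
    # Both branches satisfy the defining relation (up to rounding).
    print(round(x ** 2 + y ** 2, 12))  # 1.0
```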
For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging further the domain for including almost the whole complex plane. 
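As a numerical sketch, partial sums of this series converge to the exponential (illustrative code with an arbitrary cutoff, not a definition from the article):

```python
import math

# Sum the first `terms` terms x^n / n! of the power series for exp.
def exp_series(x, terms=30):
    total, term = 0.0, 1.0          # term starts at x^0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)         # turn x^n / n! into x^(n+1) / (n+1)!
    return total

print(abs(exp_series(1.0) - math.exp(1.0)) < 1e-12)  # True
```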
This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain is the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. {\displaystyle 0!=1.} == Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well-known parabola.
If the same quadratic function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table. On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might list values rounded to 6 decimal places. Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). == General properties == This section describes general properties of functions, that are independent of specific properties of the domain and the codomain. === Standard functions === There are a number of standard functions that occur frequently: For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X.
The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. 
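A Python sketch of composition (the particular f and g are illustrative choices, not from the article):

```python
# g ∘ f applies f first, then g.
def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x ** 2        # illustrative choices
g = lambda x: x + 1

gf = compose(g, f)          # x -> x^2 + 1
fg = compose(f, g)          # x -> (x + 1)^2

print(gf(3), fg(3))         # 10 16: composition is not commutative
print(gf(0), fg(0))         # 1 1: these two composites agree at x = 0
```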
Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. 
In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . 
{\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . 
{\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (In contrast to the case of surjections, this does not require the axiom of choice; the proof is straightforward.) Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
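On finite sets all of these notions can be checked directly. The following Python sketch (the helper names are ours, and everything assumes an explicitly given finite domain) illustrates preimages, injectivity, and the canonical factorization f = i ∘ s:

```python
def image(f, A):
    """f(A) = {f(x) : x in A}."""
    return {f(x) for x in A}

def preimage(f, B, domain):
    """f^-1(B) over an explicitly given finite domain."""
    return {x for x in domain if f(x) in B}

def is_injective(f, domain):
    """Injective iff distinct arguments give distinct values."""
    return len(image(f, domain)) == len(domain)

square = lambda x: x * x
Z = set(range(-10, 11))

assert preimage(square, {4, 9}, Z) == {-3, -2, 2, 3}
assert not is_injective(square, Z)              # (-2)^2 == 2^2
assert is_injective(square, set(range(11)))     # injective once restricted

# Canonical factorization f = i o s: s surjects X onto f(X), i includes f(X) into Y.
f = lambda x: x % 3
X = set(range(10))
fX = image(f, X)                 # the image f(X) = {0, 1, 2}
s = f                            # corestriction of f to its image: surjective
i = lambda y: y                  # inclusion of f(X) into the codomain: injective
assert image(s, X) == fX
assert all(i(s(x)) == f(x) for x in X)
```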
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated argument, the one-letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical. === Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together.
Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, which allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable.
In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions restricted to intervals on which they are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations.
For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin 0 = 0 , cos 0 = 1 , ∂ sin x ∂ x ( 0 ) = 1 , ∂ cos x ∂ x ( 0 ) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
== Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting point x 0 , {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number x 0 , {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted x 0 , {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted − x 0 . {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of x 3 − 3 x − y = 0 {\displaystyle x^{3}-3x-y=0} (see the figure on the right). For y = 0 one may choose either 0 , 3 , or − 3 {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1].
As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 or y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function x ↦ { x } . {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set.
In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of the set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, ones that perform input/output.
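The if_then_else example above can be sketched in Python with nullary functions (thunks); the names below are illustrative, not part of any standard API:

```python
def if_then_else(cond, then_branch, else_branch):
    """All three arguments are nullary functions; only the selected
    branch is ever evaluated, so the combinator adds no side effects
    beyond those of the branch it chooses."""
    return then_branch() if cond() else else_branch()

evaluated = []

def yes():
    evaluated.append("yes")
    return "yes"

def no():
    evaluated.append("no")
    return "no"

assert if_then_else(lambda: True, yes, no) == "yes"
assert evaluated == ["yes"]   # the untaken branch was never run
```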
There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. For giving a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.), every sequence of symbols may be coded as a sequence of bits, and a bit sequence can be interpreted as the binary representation of an integer.
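The three operators on general recursive functions can be sketched in Python (an illustrative model using ordinary loops; true minimization is a partial operation and may fail to halt):

```python
def compose_n(f, *gs):
    """Composition operator: h(xs) = f(g1(xs), ..., gk(xs))."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def primitive_recursion(base, step):
    """h(0, x) = base(x); h(n+1, x) = step(n, h(n, x), x)."""
    def h(n, x):
        acc = base(x)
        for k in range(n):
            acc = step(k, acc, x)
        return acc
    return h

def minimization(p):
    """mu-operator: the least n with p(n, x) == 0 (loops forever if none exists)."""
    def mu(x):
        n = 0
        while p(n, x) != 0:
            n += 1
        return n
    return mu

successor = lambda n: n + 1

# Addition defined by primitive recursion on the first argument.
add = primitive_recursion(lambda x: x, lambda k, acc, x: successor(acc))
assert add(3, 4) == 7

# Composition: x -> successor(2x).
double_plus_one = compose_n(successor, lambda n: 2 * n)
assert double_plus_one(5) == 11

# Minimization: the least n with n*n >= x, via a zero test.
ceil_sqrt = minimization(lambda n, x: 0 if n * n >= x else 1)
assert ceil_sqrt(10) == 4
```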
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the calculus's axioms (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. == External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
Wikipedia:Function application#0
In mathematics, function application is the act of applying a function to an argument from its domain so as to obtain the corresponding value from its range. In this sense, function application can be thought of as the opposite of function abstraction. == Representation == Function application is usually depicted by juxtaposing the variable representing the function with its argument encompassed in parentheses. For example, the following expression represents the application of the function ƒ to its argument x. f ( x ) {\displaystyle f(x)} In some instances, a different notation is used where the parentheses aren't required, and function application can be expressed just by juxtaposition. For example, the following expression can be considered the same as the previous one: f x {\displaystyle f\;x} The latter notation is especially useful in combination with the currying isomorphism. Given a function f : ( X × Y ) → Z {\displaystyle f:(X\times Y)\to Z} , its application is represented as f ( x , y ) {\displaystyle f(x,y)} by the former notation and f ( x , y ) {\displaystyle f\;(x,y)} (or f ⟨ x , y ⟩ {\displaystyle f\;\langle x,y\rangle } with the argument ⟨ x , y ⟩ ∈ X × Y {\displaystyle \langle x,y\rangle \in X\times Y} written with the less common angle brackets) by the latter. However, functions in curried form f : X → ( Y → Z ) {\displaystyle f:X\to (Y\to Z)} can be represented by juxtaposing their arguments: f x y {\displaystyle f\;x\;y} , rather than f ( x ) ( y ) {\displaystyle f(x)(y)} . This relies on function application being left-associative. U+2061 FUNCTION APPLICATION is a contiguity operator indicating application of a function; it is an invisible zero-width character intended to distinguish concatenation meaning function application from concatenation meaning multiplication.
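The currying isomorphism and left-associative application can be sketched in Python (`curry2` and `uncurry2` are illustrative helper names, not library functions):

```python
def curry2(f):
    """Turn f : (X x Y) -> Z into its curried form X -> (Y -> Z)."""
    return lambda x: lambda y: f(x, y)

def uncurry2(g):
    """The inverse direction of the currying isomorphism."""
    return lambda x, y: g(x)(y)

f = lambda x, y: x ** y
g = curry2(f)

# Juxtaposition "f x y" corresponds to left-associative application g(x)(y).
assert g(2)(10) == f(2, 10) == 1024
assert uncurry2(g)(3, 2) == 9
```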
== Set theory == In axiomatic set theory, especially Zermelo–Fraenkel set theory, a function f : X → Y {\displaystyle f:X\to Y} is often defined as a relation ( f ⊆ X × Y {\displaystyle f\subseteq X\times Y} ) having the property that, for any x ∈ X {\displaystyle x\in X} there is a unique y ∈ Y {\displaystyle y\in Y} such that ( x , y ) ∈ f {\displaystyle (x,y)\in f} . One is usually not content to write " ( x , y ) ∈ f {\displaystyle (x,y)\in f} " to specify that y {\displaystyle y} is the value of f {\displaystyle f} at x {\displaystyle x} , and usually wishes for the more common function notation " f ( x ) = y {\displaystyle f(x)=y} ", thus function application, or more specifically, the notation " f ( x ) {\displaystyle f(x)} ", is defined by an axiom schema. Given any function f {\displaystyle f} with a given domain X {\displaystyle X} and codomain Y {\displaystyle Y} : ∀ x ∈ X , ∀ y ∈ Y ( f ( x ) = y ⟺ {\displaystyle \forall x\in X,\forall y\in Y(f(x)=y\iff } ∃ ! z ∈ Y ( ( x , z ) ∈ f ) ∧ ( x , y ) ∈ f ) {\displaystyle \exists !z\in Y((x,z)\in f)\,\land \,(x,y)\in f)} Stating "For all x {\displaystyle x} in X {\displaystyle X} and y {\displaystyle y} in Y {\displaystyle Y} , f ( x ) {\displaystyle f(x)} is equal to y {\displaystyle y} if and only if there is a unique z {\displaystyle z} in Y {\displaystyle Y} such that ( x , z ) {\displaystyle (x,z)} is in f {\displaystyle f} and ( x , y ) {\displaystyle (x,y)} is in f {\displaystyle f} ". The notation f ( x ) {\displaystyle f(x)} here being defined is a new functional predicate from the underlying logic, where each y is a term in x. Since f {\displaystyle f} , as a functional predicate, must map every object in the language, objects not in the specified domain are chosen to map to an arbitrary object, such as the empty set.
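A finite sketch of this set-theoretic definition (the helper names are ours): a function is a set of ordered pairs with the uniqueness property, and application recovers the unique second coordinate.

```python
f = {(1, "a"), (2, "b"), (3, "a")}   # a function as a set of ordered pairs

def is_function(rel, domain):
    """Check: every x in the domain has exactly one y with (x, y) in rel."""
    return all(sum(1 for (a, _) in rel if a == x) == 1 for x in domain)

def apply_rel(rel, x):
    """f(x): the unique y such that (x, y) is in f."""
    ys = [y for (a, y) in rel if a == x]
    if len(ys) != 1:
        raise ValueError("not a function at this argument")
    return ys[0]

X = {1, 2, 3}
assert is_function(f, X)
assert apply_rel(f, 2) == "b"
assert not is_function(f | {(2, "c")}, X)   # a second pair at 2 breaks uniqueness
```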
== As an operator == Function application can be defined as an operator, called apply or $ {\displaystyle \$} , by the following definition: f $ x = f ( x ) {\displaystyle f\mathop {\,\$\,} x=f(x)} The operator may also be denoted by a backtick (`). If the operator is understood to be of low precedence and right-associative, the application operator can be used to cut down on the number of parentheses needed in an expression. For example, f ( g ( h ( j ( x ) ) ) ) {\displaystyle f(g(h(j(x))))} can be rewritten as: f $ g $ h $ j $ x {\displaystyle f\mathop {\,\$\,} g\mathop {\,\$\,} h\mathop {\,\$\,} j\mathop {\,\$\,} x} However, this is perhaps more clearly expressed by using function composition instead: ( f ∘ g ∘ h ∘ j ) ( x ) {\displaystyle (f\circ g\circ h\circ j)(x)} or even: ( f ∘ g ∘ h ∘ j ∘ x ) ( ) {\displaystyle (f\circ g\circ h\circ j\circ x)()} if one considers x {\displaystyle x} to be a constant function returning x {\displaystyle x} . == Other instances == Function application in the lambda calculus is expressed by β-reduction. The Curry–Howard correspondence relates function application to the logical rule of modus ponens. == See also == Polish notation == References ==
Wikipedia:Function composition#0
In mathematics, the composition operator ∘ {\displaystyle \circ } takes two functions, f {\displaystyle f} and g {\displaystyle g} , and returns a new function h ( x ) := ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle h(x):=(g\circ f)(x)=g(f(x))} . Thus, the function g is applied after applying f to x. ( g ∘ f ) {\displaystyle (g\circ f)} is pronounced "the composition of g and f". Reverse composition, sometimes denoted f ↦ g {\displaystyle f\mapsto g} , applies the operation in the opposite order, applying f {\displaystyle f} first and g {\displaystyle g} second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘ {\displaystyle \circ } . As a result, all properties of composition of relations are true of composition of functions, such as associativity. == Examples == Composition of functions on a finite set: If f = {(1, 1), (2, 3), (3, 1), (4, 2)}, and g = {(1, 2), (2, 3), (3, 1), (4, 2)}, then g ∘ f = {(1, 2), (2, 1), (3, 2), (4, 3)}, as shown in the figure. Composition of functions on an infinite set: If f: R → R (where R is the set of all real numbers) is given by f(x) = 2x + 4 and g: R → R is given by g(x) = x3, then: ( g ∘ f ) ( x ) = ( 2 x + 4 ) 3 {\displaystyle (g\circ f)(x)=(2x+4)^{3}} and ( f ∘ g ) ( x ) = 2 x 3 + 4. {\displaystyle (f\circ g)(x)=2x^{3}+4.} If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t. Functions defined on a finite set which change the order of its elements, such as permutations, can be composed on the same set, this being composition of permutations. == Properties == The composition of functions is always associative, a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted.
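The finite-set example can be checked directly by representing each function as a Python dict from argument to value (a sketch; `compose_maps` is an illustrative name):

```python
f = {1: 1, 2: 3, 3: 1, 4: 2}
g = {1: 2, 2: 3, 3: 1, 4: 2}

def compose_maps(g, f):
    """g o f on finite sets: apply f first, then g."""
    return {x: g[f[x]] for x in f}

# Matches the stated result g o f = {(1, 2), (2, 1), (3, 2), (4, 3)}.
assert compose_maps(g, f) == {1: 2, 2: 1, 3: 2, 4: 3}

# Associativity, inherited from composition of relations:
h = {1: 4, 2: 4, 3: 4, 4: 4}
assert compose_maps(h, compose_maps(g, f)) == compose_maps(compose_maps(h, g), f)
```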
In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter. Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : R → (−∞,+9] defined by f(x) = 9 − x2 and g : [0,+∞) → R defined by g ( x ) = x {\displaystyle g(x)={\sqrt {x}}} can be defined on the interval [−3,+3]. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The picture shows another example. The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)−1 = g−1∘ f−1. Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula. Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative). == Composition monoids == Suppose one has two (or more) functions f: X → X, g: X → X having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure.
One particularly notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. (One can actually define two semigroups depending on how one defines the semigroup operation as the left or right composition of functions.) If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions. The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism). In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup. == Functional powers == If Y ⊆ X, then f : X → Y {\displaystyle f:X\to Y} may compose with itself; this is sometimes denoted as f 2 {\displaystyle f^{2}} . That is, f 2 = f ∘ f . {\displaystyle f^{2}=f\circ f.} More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f n = f ∘ f n−1 = f n−1 ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration. By convention, f 0 is defined as the identity map on f 's domain, idX. If Y = X and f: X → X admits an inverse function f −1, negative functional powers f −n are defined for n > 0 as powers of the inverse function: f −n = (f −1)n. Note: If f takes its values in a ring (in particular for real or complex-valued f ), there is a risk of confusion, as f n could also stand for the n-fold product of f, e.g. f 2(x) = f(x) · f(x).
For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: sin2(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan−1 = arctan ≠ 1/tan. In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f 1/2. More generally, when gn = f has a unique solution for some natural number n > 0, then f m/n can be defined as gm. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems. To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f[n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead. == Alternative notations == Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f. During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. 
Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation). == Composition operator == Given a function g, the composition operator Cg is defined as that operator which maps functions to functions as C g f = f ∘ g . {\displaystyle C_{g}f=f\circ g.} Composition operators are studied in the field of operator theory. == In programming languages == Function composition appears in one form or another in numerous programming languages. == Multivariate functions == Partial composition is possible for multivariate functions. The function resulting when some argument xi of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted f |xi = g f | x i = g = f ( x 1 , … , x i − 1 , g ( x 1 , x 2 , … , x n ) , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor. f | x i = b = f ( x 1 , … , x i − 1 , b , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. 
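The partial composition f |xi = g described above replaces one argument of f by g evaluated on the full argument tuple. A minimal sketch (the helper name and the 0-based index are implementation choices for this example):

```python
def partial_compose(f, i, g):
    """f|_{x_i = g}: the i-th argument of f (0-based here) is replaced
    by g evaluated on the full argument tuple, per the formula above."""
    def h(*xs):
        xs = list(xs)
        xs[i] = g(*xs)
        return f(*xs)
    return h

add = lambda x1, x2: x1 + x2
mul = lambda x1, x2: x1 * x2
```

For instance, partial_compose(add, 0, mul)(2, 3) computes add(mul(2, 3), 3) = 9; fixing an argument to a constant instead yields the restriction (co-factor) mentioned above.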
Given f, an n-ary function, and n m-ary functions g1, ..., gn, the composition of f with g1, ..., gn is the m-ary function h ( x 1 , … , x m ) = f ( g 1 ( x 1 , … , x m ) , … , g n ( x 1 , … , x m ) ) . {\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called the generalized composite or superposition of f with g1, ..., gn. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g1, ..., gn can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition. A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is: f ( g ( a 11 , … , a 1 m ) , … , g ( a n 1 , … , a n m ) ) = g ( f ( a 11 , … , a n 1 ) , … , f ( a 1 m , … , a n m ) ) . {\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic. == Generalizations == Composition can be generalized to arbitrary binary relations.
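For finite relations this generalization, whose formal definition follows, is easy to compute by a brute-force search over the middle element (a sketch assuming relations encoded as sets of ordered pairs):

```python
def compose_relations(R, S):
    """R ∘ S = {(x, z) : there is y with (x, y) in R and (y, z) in S},
    with relations encoded as Python sets of ordered pairs."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

R = {(1, "a"), (2, "b")}
S = {("a", 10), ("b", 20), ("c", 30)}
```

Here compose_relations(R, S) is {(1, 10), (2, 20)}; when R and S happen to be functional relations, this coincides with function composition read left to right.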
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to R ∘ S = { ( x , z ) ∈ X × Z : ( ∃ y ∈ Y ) ( ( x , y ) ∈ R ∧ ( y , z ) ∈ S ) } {\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}} . Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle R∘S has been used for the infix notation of composition of relations, as well as functions. When used to represent composition of functions ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)\ =\ g(f(x))} however, the text sequence is reversed to illustrate the different operation sequences accordingly. The composition is defined in the same way for partial functions and Cayley's theorem has its analogue called the Wagner–Preston theorem. The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula (f ∘ g)−1 = (g−1 ∘ f −1) applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories.The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. 
Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. - Saunders Mac Lane, Mathematics: Form and Function == Typography == The composition symbol ∘ is encoded as U+2218 ∘ RING OPERATOR; see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ. == See also == Cobweb plot – a graphical technique for functional composition Combinatory logic Composition ring, a formal axiomatization of the composition operation Flow (mathematics) Function composition (computer science) Function of random variable, distribution of a function of a random variable Functional decomposition Functional square root Functional equation Higher-order function Infinite compositions of analytic functions Iterated function Lambda calculus == Notes == == References == == External links == "Composite function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Composition of Functions" by Bruce Atwood, the Wolfram Demonstrations Project, 2007.
Wikipedia:Function of a real variable
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers R {\displaystyle \mathbb {R} } , or a subset of R {\displaystyle \mathbb {R} } that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers. Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of R {\displaystyle \mathbb {R} } -vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector space, the set of matrices of real numbers of a given size, or an R {\displaystyle \mathbb {R} } -algebra, such as the complex numbers or the quaternions. The R {\displaystyle \mathbb {R} } -vector space structure of the codomain induces a structure of R {\displaystyle \mathbb {R} } -vector space on the functions. If the codomain has a structure of R {\displaystyle \mathbb {R} } -algebra, the same is true for the functions. The image of a function of a real variable is a curve in the codomain. In this context, a function that defines a curve is called a parametric equation of the curve. When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications. == Real function == A real function is a function from a subset of R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} where R {\displaystyle \mathbb {R} } denotes as usual the set of real numbers. That is, the domain of a real function is a subset of R {\displaystyle \mathbb {R} } , and its codomain is R .
{\displaystyle \mathbb {R} .} It is generally assumed that the domain contains an interval of positive length. === Basic examples === For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of: All polynomial functions, including constant functions and linear functions Sine and cosine functions Exponential function Some functions are defined everywhere, but not continuous at some points. For example The Heaviside step function is defined everywhere, but not continuous at zero. Some functions are defined and continuous everywhere, but not everywhere differentiable. For example The absolute value is defined and continuous everywhere, and is differentiable everywhere, except for zero. The cubic root is defined and continuous everywhere, and is differentiable everywhere, except for zero. Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example: A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator. The tangent function is not defined for π 2 + k π , {\displaystyle {\frac {\pi }{2}}+k\pi ,} where k is any integer. The logarithm function is defined only for positive values of the variable. Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of: The square root is defined only for nonnegative values of the variable, and not differentiable at 0 (it is differentiable for all positive values of the variable). == General definition == A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, for producing another real number, the value of the function, commonly denoted f(x). 
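The failure of differentiability noted above for the absolute value at zero can be seen numerically: the one-sided difference quotients at 0 do not agree (a quick sketch, not a proof):

```python
def right_quotient(f, a, h=1e-6):
    """Forward difference quotient (f(a+h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

def left_quotient(f, a, h=1e-6):
    """Backward difference quotient (f(a) - f(a-h)) / h."""
    return (f(a) - f(a - h)) / h
```

For f = abs at a = 0 the right quotient is +1 and the left quotient is −1, so no single derivative exists there; at a = 2 both sides agree at +1, consistent with differentiability away from zero.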
For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of R {\displaystyle \mathbb {R} } , the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function f : X → R {\displaystyle f:X\to \mathbb {R} } such that its domain X is a subset of R {\displaystyle \mathbb {R} } that contains an interval of positive length. A simple example of a function in one variable could be: f : X → R {\displaystyle f:X\to \mathbb {R} } X = { x ∈ R : x ≥ 0 } {\displaystyle X=\{x\in \mathbb {R} \,:\,x\geq 0\}} f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} which is the square root of x. === Image === The image of a function f ( x ) {\displaystyle f(x)} is the set of all values of f when the variable x runs in the whole domain of f. For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function. The preimage of a given real number y is the set of the solutions of the equation y = f(x). === Domain === The domain of a function of a real variable is a subset of R {\displaystyle \mathbb {R} } that is sometimes explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. This means that it is often not worthwhile to explicitly define the domain of a function of a real variable. === Algebraic structure === The arithmetic operations may be applied to the functions in the following way: For every real number r, the constant function x ↦ r {\displaystyle x\mapsto r} is everywhere defined. For every real number r and every function f, the function r f : x ↦ r f ( x ) {\displaystyle rf:x\mapsto rf(x)} has the same domain as f (or is everywhere defined if r = 0). If f and g are two functions of respective domains X and Y such that X∩Y contains an open subset of R {\displaystyle \mathbb {R} } , then f + g : x ↦ f ( x ) + g ( x ) {\displaystyle f+g:x\mapsto f(x)+g(x)} and f g : x ↦ f ( x ) g ( x ) {\displaystyle f\,g:x\mapsto f(x)\,g(x)} are functions that have a domain containing X∩Y. It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals ( R {\displaystyle \mathbb {R} } -algebras). One may similarly define 1 / f : x ↦ 1 / f ( x ) , {\displaystyle 1/f:x\mapsto 1/f(x),} which is a function only if the set of the points x in the domain of f such that f(x) ≠ 0 contains an open subset of R {\displaystyle \mathbb {R} } . This constraint implies that the above two algebras are not fields. === Continuity and limit === Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces.
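The pointwise operations of the algebraic-structure section above can be modelled by carrying a domain predicate along with each rule, with sums and products living on the intersection of the domains (an illustrative sketch; the class name and representation are choices made for this example):

```python
import math

class PartialFn:
    """A function given by a rule together with a domain predicate."""
    def __init__(self, rule, in_domain=lambda x: True):
        self.rule, self.in_domain = rule, in_domain

    def __call__(self, x):
        if not self.in_domain(x):
            raise ValueError("argument outside the domain")
        return self.rule(x)

    def _combine(self, other, op):
        # The combined function lives on the intersection of the domains.
        return PartialFn(lambda x: op(self.rule(x), other.rule(x)),
                         lambda x: self.in_domain(x) and other.in_domain(x))

    def __add__(self, other):
        return self._combine(other, lambda a, b: a + b)

    def __mul__(self, other):
        return self._combine(other, lambda a, b: a * b)

log_fn = PartialFn(math.log, lambda x: x > 0)   # defined only for x > 0
square = PartialFn(lambda x: x * x)             # everywhere defined
```

Here (log_fn + square)(1.0) returns 1.0, while evaluating the sum at a negative argument raises, reflecting the intersected domain.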
As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. For defining the continuity, it is useful to consider the distance function of R {\displaystyle \mathbb {R} } , which is an everywhere defined function of 2 real variables: d ( x , y ) = | x − y | {\displaystyle d(x,y)=|x-y|} A function f is continuous at a point a {\displaystyle a} which is interior to its domain, if, for every positive real number ε, there is a positive real number φ such that | f ( x ) − f ( a ) | < ε {\displaystyle |f(x)-f(a)|<\varepsilon } for all x {\displaystyle x} such that d ( x , a ) < φ . {\displaystyle d(x,a)<\varphi .} In other words, φ may be chosen small enough for having the image by f of the interval of radius φ centered at a {\displaystyle a} contained in the interval of length 2ε centered at f ( a ) . {\displaystyle f(a).} A function is continuous if it is continuous at every point of its domain. The limit of a real-valued function of a real variable is as follows. Let a be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted L = lim x → a f ( x ) , {\displaystyle L=\lim _{x\to a}f(x),} if the following condition is satisfied: For every positive real number ε > 0, there is a positive real number δ > 0 such that | f ( x ) − L | < ε {\displaystyle |f(x)-L|<\varepsilon } for all x in the domain such that d ( x , a ) < δ . {\displaystyle d(x,a)<\delta .} If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have f ( a ) = lim x → a f ( x ) . {\displaystyle f(a)=\lim _{x\to a}f(x).} When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
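The ε–δ condition above can be spot-checked numerically for a concrete function; here f(x) = x² at a = 3, where the choice δ = ε/7 works whenever δ < 1, since |x² − 9| = |x − 3| |x + 3| and |x + 3| < 7 on that interval (a sketch, not a proof):

```python
def check_continuity(f, a, eps, delta, samples=1001):
    """Sample points with |x - a| < delta and test |f(x) - f(a)| < eps."""
    for k in range(samples):
        x = a - delta + 2 * delta * (k + 0.5) / samples
        if abs(f(x) - f(a)) >= eps:
            return False
    return True

f = lambda x: x * x
```

check_continuity(f, 3.0, 0.1, 0.1/7) succeeds, while an over-large δ such as 0.5 fails for small ε, illustrating that δ must depend on ε.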
== Calculus == One can collect a number of functions each of a real variable, say y 1 = f 1 ( x ) , y 2 = f 2 ( x ) , … , y n = f n ( x ) {\displaystyle y_{1}=f_{1}(x)\,,\quad y_{2}=f_{2}(x)\,,\ldots ,y_{n}=f_{n}(x)} into a vector parametrized by x: y = ( y 1 , y 2 , … , y n ) = [ f 1 ( x ) , f 2 ( x ) , … , f n ( x ) ] {\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})=[f_{1}(x),f_{2}(x),\ldots ,f_{n}(x)]} The derivative of the vector y is the vector of the derivatives of fi(x) for i = 1, 2, ..., n: d y d x = ( d y 1 d x , d y 2 d x , … , d y n d x ) {\displaystyle {\frac {d\mathbf {y} }{dx}}=\left({\frac {dy_{1}}{dx}},{\frac {dy_{2}}{dx}},\ldots ,{\frac {dy_{n}}{dx}}\right)} One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x: ∫ a b y ( x ) ⋅ d r = ∫ a b y ( x ) ⋅ d r ( x ) d x d x {\displaystyle \int _{a}^{b}\mathbf {y} (x)\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {y} (x)\cdot {\frac {d\mathbf {r} (x)}{dx}}dx} where · is the dot product, and x = a and x = b are the start and endpoints of the curve. === Theorems === With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using differentiation under the integral sign. == Implicit functions == A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space R 2 {\displaystyle \mathbb {R} ^{2}} to the zero element in R {\displaystyle \mathbb {R} } (just the ordinary zero 0): ϕ : R 2 → { 0 } {\displaystyle \phi :\mathbb {R} ^{2}\to \{0\}} and ϕ ( x , y ) = 0 {\displaystyle \phi (x,y)=0} is an equation in the variables.
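The line integral written in the Calculus section above can be approximated numerically; for the curve r(x) = (x, x²) and the constant field y = (1, 1) on [0, 1], the integrand is (1, 1) · (1, 2x) = 1 + 2x, so the exact value is 2 (a numeric sketch with hypothetical helper names):

```python
def line_integral(y, r, a, b, n=10000):
    """Midpoint-rule approximation of the integral of y(x) · dr along
    r(x) for a <= x <= b, using a central difference for dr/dx."""
    total, h = 0.0, (b - a) / n
    for k in range(n):
        x = a + (k + 0.5) * h
        dr = [(ri(x + 1e-6) - ri(x - 1e-6)) / 2e-6 for ri in r]
        total += sum(yi(x) * di for yi, di in zip(y, dr)) * h
    return total

y = (lambda x: 1.0, lambda x: 1.0)   # constant field (1, 1)
r = (lambda x: x, lambda x: x * x)   # curve (x, x^2)
```

The approximation agrees with the exact value 2 to high accuracy, since both the central difference and the midpoint rule are exact here up to rounding.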
Implicit functions are a more general way to represent functions, since if: y = f ( x ) {\displaystyle y=f(x)} then we can always define: ϕ ( x , y ) = y − f ( x ) = 0 {\displaystyle \phi (x,y)=y-f(x)=0} but the converse is not always possible, i.e. not all implicit functions have the form of this equation. == One-dimensional space curves in R n {\displaystyle \mathbb {R} ^{n}} == === Formulation === Given the functions r1 = r1(t), r2 = r2(t), ..., rn = rn(t) all of a common variable t, so that: r 1 : R → R r 2 : R → R ⋯ r n : R → R r 1 = r 1 ( t ) r 2 = r 2 ( t ) ⋯ r n = r n ( t ) {\displaystyle {\begin{aligned}r_{1}:\mathbb {R} \rightarrow \mathbb {R} &\quad r_{2}:\mathbb {R} \rightarrow \mathbb {R} &\cdots &\quad r_{n}:\mathbb {R} \rightarrow \mathbb {R} \\r_{1}=r_{1}(t)&\quad r_{2}=r_{2}(t)&\cdots &\quad r_{n}=r_{n}(t)\\\end{aligned}}} or taken together: r : R → R n , r = r ( t ) {\displaystyle \mathbf {r} :\mathbb {R} \rightarrow \mathbb {R} ^{n}\,,\quad \mathbf {r} =\mathbf {r} (t)} then the parametrized n-tuple, r ( t ) = [ r 1 ( t ) , r 2 ( t ) , … , r n ( t ) ] {\displaystyle \mathbf {r} (t)=[r_{1}(t),r_{2}(t),\ldots ,r_{n}(t)]} describes a one-dimensional space curve.
=== Tangent line to curve === At a point r(t = c) = a = (a1, a2, ..., an) for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t: r 1 ( t ) − a 1 d r 1 ( t ) / d t = r 2 ( t ) − a 2 d r 2 ( t ) / d t = ⋯ = r n ( t ) − a n d r n ( t ) / d t {\displaystyle {\frac {r_{1}(t)-a_{1}}{dr_{1}(t)/dt}}={\frac {r_{2}(t)-a_{2}}{dr_{2}(t)/dt}}=\cdots ={\frac {r_{n}(t)-a_{n}}{dr_{n}(t)/dt}}} === Normal plane to curve === The equation of the n-dimensional hyperplane normal to the tangent line at r = a is: ( p 1 − a 1 ) d r 1 ( t ) d t + ( p 2 − a 2 ) d r 2 ( t ) d t + ⋯ + ( p n − a n ) d r n ( t ) d t = 0 {\displaystyle (p_{1}-a_{1}){\frac {dr_{1}(t)}{dt}}+(p_{2}-a_{2}){\frac {dr_{2}(t)}{dt}}+\cdots +(p_{n}-a_{n}){\frac {dr_{n}(t)}{dt}}=0} or in terms of the dot product: ( p − a ) ⋅ d r ( t ) d t = 0 {\displaystyle (\mathbf {p} -\mathbf {a} )\cdot {\frac {d\mathbf {r} (t)}{dt}}=0} where p = (p1, p2, ..., pn) are points in the plane, not on the space curve. === Relation to kinematics === The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector dr(t)/dt|t = c, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. Any vector in this plane (p − a) must be normal to dr(t)/dt|t = c. Similarly, d2r(t)/dt2 is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature. == Matrix valued functions == A matrix can also be a function of a single variable. 
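The tangent and normal-plane formulas above can be checked on a concrete curve, for example the helix r(t) = (cos t, sin t, t) (a numeric sketch; the function names are illustrative):

```python
import math

def r(t):
    """Helix r(t) = (cos t, sin t, t)."""
    return (math.cos(t), math.sin(t), t)

def r_prime(t):
    """Tangent vector r'(t) = (-sin t, cos t, 1)."""
    return (-math.sin(t), math.cos(t), 1.0)

def in_normal_plane(p, c, tol=1e-9):
    """Test the normal-plane equation (p - a) · r'(c) = 0 with a = r(c)."""
    a, d = r(c), r_prime(c)
    return abs(sum((pi - ai) * di for pi, ai, di in zip(p, a, d))) < tol
```

At c = 0 the point a is (1, 0, 0) with tangent (0, 1, 1); the point (1, 1, −1) lies in the normal plane, while (1, 1, 0) does not.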
For example, the rotation matrix in two dimensions: R ( θ ) = [ cos θ − sin θ sin θ cos θ ] {\displaystyle R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}} is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations): Λ ( β ) = [ 1 1 − β 2 − β 1 − β 2 0 0 − β 1 − β 2 1 1 − β 2 0 0 0 0 1 0 0 0 0 1 ] {\displaystyle \Lambda (\beta )={\begin{bmatrix}{\frac {1}{\sqrt {1-\beta ^{2}}}}&-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&0&0\\-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&{\frac {1}{\sqrt {1-\beta ^{2}}}}&0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}} is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant. == Banach and Hilbert spaces and quantum mechanics == Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, addition, scalar multiplication, and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation: i ℏ ∂ ∂ t Ψ = H ^ Ψ {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi ={\hat {H}}\Psi } where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces. == Complex-valued function of a real variable == A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If f(x) is such a complex-valued function, it may be decomposed as f(x) = g(x) + ih(x), where g and h are real-valued functions.
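The rotation matrix R(θ) above is a convenient matrix-valued function to experiment with, since it satisfies the angle-addition identity R(α)R(β) = R(α + β) (a small sketch using plain nested lists):

```python
import math

def rot(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Multiplying rot(0.3) by rot(0.4) reproduces rot(0.7) entrywise up to floating-point rounding, and rot(θ) composed with rot(−θ) gives the identity.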
In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions. == Cardinality of sets of functions of a real variable == The cardinality of the set of real-valued functions of a real variable, R R = { f : R → R } {\displaystyle \mathbb {R} ^{\mathbb {R} }=\{f:\mathbb {R} \to \mathbb {R} \}} , is ℶ 2 = 2 c {\displaystyle \beth _{2}=2^{\mathfrak {c}}} , which is strictly larger than the cardinality of the continuum (i.e., set of all real numbers). This fact is easily verified by cardinal arithmetic: c a r d ( R R ) = c a r d ( R ) c a r d ( R ) = c c = ( 2 ℵ 0 ) c = 2 ℵ 0 ⋅ c = 2 c . {\displaystyle \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=\mathrm {card} (\mathbb {R} )^{\mathrm {card} (\mathbb {R} )}={\mathfrak {c}}^{\mathfrak {c}}=(2^{\aleph _{0}})^{\mathfrak {c}}=2^{\aleph _{0}\cdot {\mathfrak {c}}}=2^{\mathfrak {c}}.} Furthermore, if X {\displaystyle X} is a set such that 2 ≤ c a r d ( X ) ≤ c {\displaystyle 2\leq \mathrm {card} (X)\leq {\mathfrak {c}}} , then the cardinality of the set X R = { f : R → X } {\displaystyle X^{\mathbb {R} }=\{f:\mathbb {R} \to X\}} is also 2 c {\displaystyle 2^{\mathfrak {c}}} , since 2 c = c a r d ( 2 R ) ≤ c a r d ( X R ) ≤ c a r d ( R R ) = 2 c . {\displaystyle 2^{\mathfrak {c}}=\mathrm {card} (2^{\mathbb {R} })\leq \mathrm {card} (X^{\mathbb {R} })\leq \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=2^{\mathfrak {c}}.} However, the set of continuous functions C 0 ( R ) = { f : R → R : f c o n t i n u o u s } {\displaystyle C^{0}(\mathbb {R} )=\{f:\mathbb {R} \to \mathbb {R} :f\ \mathrm {continuous} \}} has a strictly smaller cardinality, the cardinality of the continuum, c {\displaystyle {\mathfrak {c}}} . This follows from the fact that a continuous function is completely determined by its value on a dense subset of its domain. 
Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic: c a r d ( C 0 ( R ) ) ≤ c a r d ( R Q ) = ( 2 ℵ 0 ) ℵ 0 = 2 ℵ 0 ⋅ ℵ 0 = 2 ℵ 0 = c . {\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\leq \mathrm {card} (\mathbb {R} ^{\mathbb {Q} })=(2^{\aleph _{0}})^{\aleph _{0}}=2^{\aleph _{0}\cdot \aleph _{0}}=2^{\aleph _{0}}={\mathfrak {c}}.} On the other hand, since there is a clear bijection between R {\displaystyle \mathbb {R} } and the set of constant functions { f : R → R : f ( x ) ≡ x 0 } {\displaystyle \{f:\mathbb {R} \to \mathbb {R} :f(x)\equiv x_{0}\}} , which forms a subset of C 0 ( R ) {\displaystyle C^{0}(\mathbb {R} )} , c a r d ( C 0 ( R ) ) ≥ c {\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\geq {\mathfrak {c}}} must also hold. Hence, c a r d ( C 0 ( R ) ) = c {\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))={\mathfrak {c}}} . == See also == Real analysis Function of several real variables Complex analysis Function of several complex variables == References == F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2. R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7. N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 354-065-340-6. == External links == Multivariable Calculus L. A. Talman (2007) Differentiability for Multivariable Functions
Wikipedia:Function of several real variables
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of R n {\displaystyle \mathbb {R} ^{n}} for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of R n {\displaystyle \mathbb {R} ^{n}} . == General definition == A real-valued function of n real variables is a function that takes as input n real numbers, commonly represented by the variables x1, x2, …, xn, for producing another real number, the value of the function, commonly denoted f(x1, x2, …, xn). For simplicity, in this article a real-valued function of several real variables will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable are taken in a subset X of Rn, the domain of the function, which is always supposed to contain an open subset of Rn. In other words, a real-valued function of n real variables is a function f : X → R {\displaystyle f:X\to \mathbb {R} } such that its domain X is a subset of Rn that contains a nonempty open set. 
An element of X being an n-tuple (x1, x2, …, xn) (usually delimited by parentheses), the general notation for denoting functions would be f((x1, x2, …, xn)). The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write f(x1, x2, …, xn). It is also common to abbreviate the n-tuple (x1, x2, …, xn) by using a notation similar to that for vectors, like boldface x, underline x, or overarrow x→. This article will use bold. A simple example of a function in two variables could be: V : X → R X = { ( A , h ) ∈ R 2 ∣ A > 0 , h > 0 } V ( A , h ) = 1 3 A h {\displaystyle {\begin{aligned}&V:X\to \mathbb {R} \\&X=\left\{(A,h)\in \mathbb {R} ^{2}\mid A>0,h>0\right\}\\&V(A,h)={\frac {1}{3}}Ah\end{aligned}}} which is the volume V of a cone with base area A and height h measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive. For an example of a function in two variables: z : R 2 → R z ( x , y ) = a x + b y {\displaystyle {\begin{aligned}&z:\mathbb {R} ^{2}\to \mathbb {R} \\&z(x,y)=ax+by\end{aligned}}} where a and b are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the xy plane is the domain R2 and the z axis is the codomain R, one can visualize the image to be a two-dimensional plane, with a slope of a in the positive x direction and a slope of b in the positive y direction. The function is well-defined at all points (x, y) in R2. The previous example can be extended easily to higher dimensions: z : R p → R z ( x 1 , x 2 , … , x p ) = a 1 x 1 + a 2 x 2 + ⋯ + a p x p {\displaystyle {\begin{aligned}&z:\mathbb {R} ^{p}\to \mathbb {R} \\&z(x_{1},x_{2},\ldots ,x_{p})=a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{p}x_{p}\end{aligned}}} for p non-zero real constants a1, a2, …, ap, which describes a p-dimensional hyperplane. 
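The cone-volume example V(A, h) = Ah/3 above, with its domain constraint made explicit, is a direct transcription of the definition:

```python
def V(A, h):
    """Volume of a cone with base area A > 0 and height h > 0."""
    if not (A > 0 and h > 0):
        raise ValueError("(A, h) lies outside the domain A > 0, h > 0")
    return A * h / 3
```

V(3.0, 2.0) returns 2.0, while calling V(-1.0, 2.0) raises, since negative areas and heights are excluded from the domain.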
The Euclidean norm: f ( x ) = ‖ x ‖ = x 1 2 + ⋯ + x n 2 {\displaystyle f({\boldsymbol {x}})=\|{\boldsymbol {x}}\|={\sqrt {x_{1}^{2}+\cdots +x_{n}^{2}}}} is also a function of n variables which is everywhere defined, while g ( x ) = 1 f ( x ) {\displaystyle g({\boldsymbol {x}})={\frac {1}{f({\boldsymbol {x}})}}} is defined only for x ≠ (0, 0, …, 0). For a non-linear example function in two variables: z : X → R X = { ( x , y ) ∈ R 2 : x 2 + y 2 ≤ 8 , x ≠ 0 , y ≠ 0 } z ( x , y ) = 1 2 x y x 2 + y 2 {\displaystyle {\begin{aligned}&z:X\to \mathbb {R} \\&X=\left\{(x,y)\in \mathbb {R} ^{2}\,:\,x^{2}+y^{2}\leq 8\,,\,x\neq 0\,,\,y\neq 0\right\}\\&z(x,y)={\frac {1}{2xy}}{\sqrt {x^{2}+y^{2}}}\end{aligned}}} which takes in all points in X, a disk of radius √8 "punctured" at the origin (x, y) = (0, 0) in the plane R2, and returns a point in R. The domain excludes the origin (x, y) = (0, 0); if it were included, z would be ill-defined at that point. Using a 3D Cartesian coordinate system with the xy-plane as the domain R2, and the z axis the codomain R, the image can be visualized as a curved surface. The function can be evaluated at the point (x, y) = (2, √3) in X: z ( 2 , 3 ) = 1 2 ⋅ 2 ⋅ 3 ( 2 ) 2 + ( 3 ) 2 = 1 4 3 7 , {\displaystyle z\left(2,{\sqrt {3}}\right)={\frac {1}{2\cdot 2\cdot {\sqrt {3}}}}{\sqrt {\left(2\right)^{2}+\left({\sqrt {3}}\right)^{2}}}={\frac {1}{4{\sqrt {3}}}}{\sqrt {7}}\,,} However, the function cannot be evaluated at, say ( x , y ) = ( 65 , 10 ) ⇒ x 2 + y 2 = ( 65 ) 2 + ( 10 ) 2 > 8 {\displaystyle (x,y)=(65,{\sqrt {10}})\,\Rightarrow \,x^{2}+y^{2}=(65)^{2}+({\sqrt {10}})^{2}>8} since these values of x and y do not satisfy the domain's rule. === Image === The image of a function f(x1, x2, …, xn) is the set of all values of f when the n-tuple (x1, x2, …, xn) runs in the whole domain of f. For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value.
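The punctured-disk example can be sketched the same way; the guard clause reproduces the domain X, and the evaluation at (2, √3) agrees with the value computed above (function and variable names are illustrative):

```python
import math

def z(x, y):
    """z(x, y) = sqrt(x**2 + y**2) / (2*x*y) on the punctured disk
    X = {(x, y) : x**2 + y**2 <= 8, x != 0, y != 0}."""
    if x == 0 or y == 0 or x * x + y * y > 8:
        raise ValueError("(x, y) is outside the domain X")
    return math.sqrt(x * x + y * y) / (2 * x * y)

# z(2, sqrt(3)) = sqrt(7) / (4 * sqrt(3)), as computed above:
val = z(2.0, math.sqrt(3.0))
print(abs(val - math.sqrt(7.0) / (4 * math.sqrt(3.0))) < 1e-12)  # True
```

Evaluating z at a point outside the disk, such as (65, √10), raises an error, mirroring the domain restriction.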
In the latter case, the function is a constant function. The preimage of a given real number c is called a level set. It is the set of the solutions of the equation f(x1, x2, …, xn) = c. === Domain === The domain of a function of several real variables is a subset of Rn that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f | Y {\displaystyle f|_{Y}} . In practice, it is often (though not always) harmless to identify f and f | Y {\displaystyle f|_{Y}} , and to omit the restrictor |Y. Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function f, it may be difficult to specify the domain of the function g ( x ) = 1 / f ( x ) . {\displaystyle g({\boldsymbol {x}})=1/f({\boldsymbol {x}}).} If f is a multivariate polynomial (which has R n {\displaystyle \mathbb {R} ^{n}} as a domain), it may even be difficult to test whether the domain of g is also R n {\displaystyle \mathbb {R} ^{n}} . This is equivalent to testing whether a polynomial is always positive, which is the object of an active research area (see Positive polynomial). === Algebraic structure === The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way: For every real number r, the constant function ( x 1 , … , x n ) ↦ r {\displaystyle (x_{1},\ldots ,x_{n})\mapsto r} is everywhere defined. For every real number r and every function f, the function: r f : ( x 1 , … , x n ) ↦ r f ( x 1 , … , x n ) {\displaystyle rf:(x_{1},\ldots ,x_{n})\mapsto rf(x_{1},\ldots ,x_{n})} has the same domain as f (or is everywhere defined if r = 0).
If f and g are two functions of respective domains X and Y such that X ∩ Y contains a nonempty open subset of Rn, then f + g : ( x 1 , … , x n ) ↦ f ( x 1 , … , x n ) + g ( x 1 , … , x n ) {\displaystyle f+g:(x_{1},\ldots ,x_{n})\mapsto f(x_{1},\ldots ,x_{n})+g(x_{1},\ldots ,x_{n})} and f g : ( x 1 , … , x n ) ↦ f ( x 1 , … , x n ) g ( x 1 , … , x n ) {\displaystyle f\,g:(x_{1},\ldots ,x_{n})\mapsto f(x_{1},\ldots ,x_{n})\,g(x_{1},\ldots ,x_{n})} are functions that have a domain containing X ∩ Y. It follows that the functions of n variables that are everywhere defined and the functions of n variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (R-algebras). This is a prototypical example of a function space. One may similarly define 1 / f : ( x 1 , … , x n ) ↦ 1 / f ( x 1 , … , x n ) , {\displaystyle 1/f:(x_{1},\ldots ,x_{n})\mapsto 1/f(x_{1},\ldots ,x_{n}),} which is a function only if the set of the points (x1, …,xn) in the domain of f such that f(x1, …, xn) ≠ 0 contains an open subset of Rn. This constraint implies that the above two algebras are not fields. === Univariable functions associated with a multivariable function === One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if (a1, …, an) is a point of the interior of the domain of the function f, we can fix the values of x2, …, xn to a2, …, an respectively, to get a univariable function x ↦ f ( x , a 2 , … , a n ) , {\displaystyle x\mapsto f(x,a_{2},\ldots ,a_{n}),} whose domain contains an interval centered at a1. This function may also be viewed as the restriction of the function f to the line defined by the equations xi = ai for i = 2, …, n. Other univariable functions may be defined by restricting f to any line passing through (a1, …, an).
These are the functions x ↦ f ( a 1 + c 1 x , a 2 + c 2 x , … , a n + c n x ) , {\displaystyle x\mapsto f(a_{1}+c_{1}x,a_{2}+c_{2}x,\ldots ,a_{n}+c_{n}x),} where the ci are real numbers that are not all zero. In the next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true. === Continuity and limit === Until the second half of the 19th century, only continuous functions were considered by mathematicians. The notion of continuity was elaborated for functions of one or several real variables a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. For defining continuity, it is useful to consider the distance function of Rn, which is an everywhere defined function of 2n real variables: d ( x , y ) = d ( x 1 , … , x n , y 1 , … , y n ) = ( x 1 − y 1 ) 2 + ⋯ + ( x n − y n ) 2 {\displaystyle d({\boldsymbol {x}},{\boldsymbol {y}})=d(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n})={\sqrt {(x_{1}-y_{1})^{2}+\cdots +(x_{n}-y_{n})^{2}}}} A function f is continuous at a point a = (a1, …, an) which is interior to its domain, if, for every positive real number ε, there is a positive real number δ such that |f(x) − f(a)| < ε for all x such that d(x, a) < δ. In other words, δ may be chosen small enough for the image under f of the ball of radius δ centered at a to be contained in the interval of length 2ε centered at f(a). A function is continuous if it is continuous at every point of its domain. If a function is continuous at a, then all the univariate functions that are obtained by fixing all the variables xi except one at the value ai are continuous at a.
The converse is false; this means that all these univariate functions may be continuous for a function that is not continuous at a. As an example, consider the function f such that f(0, 0) = 0, and is otherwise defined by f ( x , y ) = x 2 y x 4 + y 2 . {\displaystyle f(x,y)={\frac {x^{2}y}{x^{4}+y^{2}}}.} The functions x ↦ f(x, 0) and y ↦ f(0, y) are both constant and equal to zero, and are therefore continuous. The function f is not continuous at (0, 0), because, if ε < 1/2 and y = x2 ≠ 0, we have f(x, y) = 1/2, even if |x| is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through (0, 0) are also continuous. In fact, we have f ( x , λ x ) = λ x x 2 + λ 2 {\displaystyle f(x,\lambda x)={\frac {\lambda x}{x^{2}+\lambda ^{2}}}} for λ ≠ 0. The limit at a point of a real-valued function of several real variables is defined as follows. Let a = (a1, a2, …, an) be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted L = lim x → a f ( x ) , {\displaystyle L=\lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}f({\boldsymbol {x}}),} if the following condition is satisfied: For every positive real number ε > 0, there is a positive real number δ > 0 such that | f ( x ) − L | < ε {\displaystyle |f({\boldsymbol {x}})-L|<\varepsilon } for all x in the domain such that d ( x , a ) < δ . {\displaystyle d({\boldsymbol {x}},{\boldsymbol {a}})<\delta .} If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have f ( a ) = lim x → a f ( x ) . {\displaystyle f({\boldsymbol {a}})=\lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}f({\boldsymbol {x}}).} When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
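The behaviour of this counterexample can be checked numerically. The sketch below (plain Python, with illustrative names; x is chosen as a power of two so the arithmetic is exact) evaluates f along a straight line through the origin, where the values tend to 0, and on the parabola y = x², where they stay at 1/2:

```python
def f(x, y):
    """f(0, 0) = 0 and f(x, y) = x**2 * y / (x**4 + y**2) otherwise."""
    if x == 0 and y == 0:
        return 0.0
    return x * x * y / (x ** 4 + y * y)

x = 2.0 ** -20  # a small x, a power of two for exact floating-point arithmetic
# Along the line y = 3x the values tend to f(0, 0) = 0 ...
print(abs(f(x, 3 * x)) < 1e-5)   # True
# ... but on the parabola y = x**2 the value is identically 1/2,
# so f has no limit at (0, 0) and is not continuous there.
print(f(x, x * x))               # 0.5
```

Each univariate restriction to a line is continuous, yet the two-variable function is not: continuity along every line does not imply continuity.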
== Symmetry == A symmetric function is a function f that is unchanged when two variables xi and xj are interchanged: f ( … , x i , … , x j , … ) = f ( … , x j , … , x i , … ) {\displaystyle f(\ldots ,x_{i},\ldots ,x_{j},\ldots )=f(\ldots ,x_{j},\ldots ,x_{i},\ldots )} where i and j are each one of 1, 2, …, n. For example: f ( x , y , z , t ) = t 2 − x 2 − y 2 − z 2 {\displaystyle f(x,y,z,t)=t^{2}-x^{2}-y^{2}-z^{2}} is symmetric in x, y, z since interchanging any pair of x, y, z leaves f unchanged, but is not symmetric in all of x, y, z, t, since interchanging t with x or y or z gives a different function. == Function composition == Suppose the functions ξ 1 = ξ 1 ( x 1 , x 2 , … , x n ) , ξ 2 = ξ 2 ( x 1 , x 2 , … , x n ) , … ξ m = ξ m ( x 1 , x 2 , … , x n ) , {\displaystyle \xi _{1}=\xi _{1}(x_{1},x_{2},\ldots ,x_{n}),\quad \xi _{2}=\xi _{2}(x_{1},x_{2},\ldots ,x_{n}),\ldots \xi _{m}=\xi _{m}(x_{1},x_{2},\ldots ,x_{n}),} or more compactly ξ = ξ(x), are all defined on a domain X. As the n-tuple x = (x1, x2, …, xn) varies in X, a subset of Rn, the m-tuple ξ = (ξ1, ξ2, …, ξm) varies in another region Ξ, a subset of Rm. To restate this: ξ : X → Ξ . {\displaystyle {\boldsymbol {\xi }}:X\to \Xi .} Then, a function ζ of the functions ξ(x) defined on Ξ, ζ : Ξ → R , ζ = ζ ( ξ 1 , ξ 2 , … , ξ m ) , {\displaystyle {\begin{aligned}&\zeta :\Xi \to \mathbb {R} ,\\&\zeta =\zeta (\xi _{1},\xi _{2},\ldots ,\xi _{m}),\end{aligned}}} is a function composition defined on X, in other terms, the mapping ζ : X → R , ζ = ζ ( ξ 1 , ξ 2 , … , ξ m ) = f ( x 1 , x 2 , … , x n ) . {\displaystyle {\begin{aligned}&\zeta :X\to \mathbb {R} ,\\&\zeta =\zeta (\xi _{1},\xi _{2},\ldots ,\xi _{m})=f(x_{1},x_{2},\ldots ,x_{n}).\end{aligned}}} Note that the numbers m and n need not be equal.
For example, the function f ( x , y ) = e x y [ sin 3 ( x − y ) − cos 2 ( x + y ) ] {\displaystyle f(x,y)=e^{xy}[\sin 3(x-y)-\cos 2(x+y)]} defined everywhere on R2 can be rewritten by introducing ( α , β , γ ) = ( α ( x , y ) , β ( x , y ) , γ ( x , y ) ) = ( x y , x − y , x + y ) {\displaystyle (\alpha ,\beta ,\gamma )=(\alpha (x,y),\beta (x,y),\gamma (x,y))=(xy,x-y,x+y)} which is everywhere defined on R2, with values in R3, to obtain f ( x , y ) = ζ ( α ( x , y ) , β ( x , y ) , γ ( x , y ) ) = ζ ( α , β , γ ) = e α [ sin ( 3 β ) − cos ( 2 γ ) ] . {\displaystyle f(x,y)=\zeta (\alpha (x,y),\beta (x,y),\gamma (x,y))=\zeta (\alpha ,\beta ,\gamma )=e^{\alpha }[\sin(3\beta )-\cos(2\gamma )]\,.} Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations. == Calculus == Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus. === Partial derivatives === Partial derivatives can be defined with respect to each variable: ∂ ∂ x 1 f ( x 1 , x 2 , … , x n ) , ∂ ∂ x 2 f ( x 1 , x 2 , … , x n ) , … , ∂ ∂ x n f ( x 1 , x 2 , … , x n ) . {\displaystyle {\frac {\partial }{\partial x_{1}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\quad {\frac {\partial }{\partial x_{2}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\ldots ,{\frac {\partial }{\partial x_{n}}}f(x_{1},x_{2},\ldots ,x_{n}).} Partial derivatives themselves are functions, each of which represents the rate of change of f parallel to one of the x1, x2, …, xn axes at all points in the domain (if the derivatives exist and are continuous—see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease.
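This decomposition can be verified numerically. The sketch below (illustrative names) checks that composing ζ with the inner map (α, β, γ) reproduces f at a sample point:

```python
import math

def f(x, y):
    return math.exp(x * y) * (math.sin(3 * (x - y)) - math.cos(2 * (x + y)))

def inner(x, y):
    # (alpha, beta, gamma) = (x*y, x - y, x + y)
    return (x * y, x - y, x + y)

def zeta(alpha, beta, gamma):
    return math.exp(alpha) * (math.sin(3 * beta) - math.cos(2 * gamma))

# The composition zeta(inner(x, y)) reproduces f(x, y) at every point:
x, y = 0.7, -1.2
print(abs(zeta(*inner(x, y)) - f(x, y)) < 1e-12)  # True
```

Here n = 2 and m = 3, illustrating that the inner and outer functions need not have the same number of arguments.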
Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number. For a real-valued function of a real variable, y = f(x), its ordinary derivative dy/dx is geometrically the gradient of the tangent line to the curve y = f(x) at each point in the domain. Partial derivatives extend this idea to tangent hyperplanes to the hypersurface given by the graph of the function. The second order partial derivatives can be calculated for every pair of variables: ∂ 2 ∂ x 1 2 f ( x 1 , x 2 , … , x n ) , ∂ 2 ∂ x 1 ∂ x 2 f ( x 1 , x 2 , … , x n ) , … , ∂ 2 ∂ x n 2 f ( x 1 , x 2 , … , x n ) . {\displaystyle {\frac {\partial ^{2}}{\partial x_{1}^{2}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\quad {\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\ldots ,{\frac {\partial ^{2}}{\partial x_{n}^{2}}}f(x_{1},x_{2},\ldots ,x_{n}).} Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes. This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization.
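The second order partial derivatives, and hence the Hessian matrix, can be approximated by central differences in a few lines of Python. This is only a numerical sketch (the helper name hessian and the step size h are illustrative choices); for f(x, y) = x²y the exact Hessian is [[2y, 2x], [2x, 0]]:

```python
def hessian(f, a, h=1e-4):
    """Approximate the matrix of second order partial derivatives of f
    at the point a by central differences (h is an illustrative step)."""
    n = len(a)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(di, dj):
                p = list(a)
                p[i] += di
                p[j] += dj
                return f(*p)
            H[i][j] = (shifted(h, h) - shifted(h, -h)
                       - shifted(-h, h) + shifted(-h, -h)) / (4 * h * h)
    return H

H = hessian(lambda x, y: x * x * y, (1.0, 3.0))  # exact Hessian there: [[6, 2], [2, 0]]
expected = [[6.0, 2.0], [2.0, 0.0]]
print(all(abs(H[i][j] - expected[i][j]) < 1e-5
          for i in range(2) for j in range(2)))  # True
```

Note that the computed matrix is (numerically) symmetric, H[0][1] ≈ H[1][0], reflecting the symmetry of second order partial derivatives discussed below.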
In general, partial derivatives of higher order p have the form: ∂ p ∂ x 1 p 1 ∂ x 2 p 2 ⋯ ∂ x n p n f ( x 1 , x 2 , … , x n ) ≡ ∂ p 1 ∂ x 1 p 1 ∂ p 2 ∂ x 2 p 2 ⋯ ∂ p n ∂ x n p n f ( x 1 , x 2 , … , x n ) {\displaystyle {\frac {\partial ^{p}}{\partial x_{1}^{p_{1}}\partial x_{2}^{p_{2}}\cdots \partial x_{n}^{p_{n}}}}f(x_{1},x_{2},\ldots ,x_{n})\equiv {\frac {\partial ^{p_{1}}}{\partial x_{1}^{p_{1}}}}{\frac {\partial ^{p_{2}}}{\partial x_{2}^{p_{2}}}}\cdots {\frac {\partial ^{p_{n}}}{\partial x_{n}^{p_{n}}}}f(x_{1},x_{2},\ldots ,x_{n})} where p1, p2, …, pn are each integers between 0 and p such that p1 + p2 + ⋯ + pn = p, using the definitions of zeroth partial derivatives as identity operators: ∂ 0 ∂ x 1 0 f ( x 1 , x 2 , … , x n ) = f ( x 1 , x 2 , … , x n ) , … , ∂ 0 ∂ x n 0 f ( x 1 , x 2 , … , x n ) = f ( x 1 , x 2 , … , x n ) . {\displaystyle {\frac {\partial ^{0}}{\partial x_{1}^{0}}}f(x_{1},x_{2},\ldots ,x_{n})=f(x_{1},x_{2},\ldots ,x_{n})\,,\quad \ldots ,\,{\frac {\partial ^{0}}{\partial x_{n}^{0}}}f(x_{1},x_{2},\ldots ,x_{n})=f(x_{1},x_{2},\ldots ,x_{n})\,.} The number of possible partial derivatives increases with p, although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some p. === Multivariable differentiability === A function f(x) is differentiable at a point a if there is an n-tuple of numbers, in general depending on a, A(a) = (A1(a), A2(a), …, An(a)), so that: f ( x ) = f ( a ) + A ( a ) ⋅ ( x − a ) + α ( x ) | x − a | {\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+{\boldsymbol {A}}({\boldsymbol {a}})\cdot ({\boldsymbol {x}}-{\boldsymbol {a}})+\alpha ({\boldsymbol {x}})|{\boldsymbol {x}}-{\boldsymbol {a}}|} where α ( x ) → 0 {\displaystyle \alpha ({\boldsymbol {x}})\to 0} as | x − a | → 0 {\displaystyle |{\boldsymbol {x}}-{\boldsymbol {a}}|\to 0} .
This means that if f is differentiable at a point a, then f is continuous at x = a, although the converse is not true: continuity does not imply differentiability. If f is differentiable at a, then the first order partial derivatives exist at a and: ∂ f ( x ) ∂ x i | x = a = A i ( a ) {\displaystyle \left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{i}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}=A_{i}({\boldsymbol {a}})} for i = 1, 2, …, n, which follows from the definitions of the individual partial derivatives. Assuming an n-dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system: ∇ f ( x ) = ( ∂ ∂ x 1 , ∂ ∂ x 2 , … , ∂ ∂ x n ) f ( x ) {\displaystyle \nabla f({\boldsymbol {x}})=\left({\frac {\partial }{\partial x_{1}}},{\frac {\partial }{\partial x_{2}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right)f({\boldsymbol {x}})} used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus. Substituting the gradient ∇f (evaluated at x = a) into the defining relation, with a slight rearrangement, gives: f ( x ) − f ( a ) = ∇ f ( a ) ⋅ ( x − a ) + α | x − a | {\displaystyle f({\boldsymbol {x}})-f({\boldsymbol {a}})=\nabla f({\boldsymbol {a}})\cdot ({\boldsymbol {x}}-{\boldsymbol {a}})+\alpha |{\boldsymbol {x}}-{\boldsymbol {a}}|} where · denotes the dot product. This equation represents the best linear approximation of the function f at points x within a neighborhood of a.
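The gradient and the linear-approximation property can likewise be checked numerically. In the sketch below (illustrative names; central differences with an illustrative step h), f(x, y) = xy + y² has exact gradient (y, x + 2y):

```python
def grad(f, a, h=1e-6):
    """Central-difference approximation of the gradient of f at the point a."""
    g = []
    for i in range(len(a)):
        p, q = list(a), list(a)
        p[i] += h
        q[i] -= h
        g.append((f(*p) - f(*q)) / (2 * h))
    return g

f = lambda x, y: x * y + y * y   # exact gradient: (y, x + 2*y)
a = (1.0, 2.0)
g = grad(f, a)                   # approximately (2.0, 5.0)
print(all(abs(gi - ei) < 1e-6 for gi, ei in zip(g, (2.0, 5.0))))  # True

# Best linear approximation: f(x) - f(a) is close to grad(f, a) . (x - a) near a.
dx, dy = 1e-3, -2e-3
lin = g[0] * dx + g[1] * dy
print(abs((f(a[0] + dx, a[1] + dy) - f(*a)) - lin) < 1e-5)  # True
```

The remainder of the linear approximation shrinks faster than |x − a|, as the defining relation requires.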
For infinitesimal changes in f and x as x → a: d f = ∂ f ( x ) ∂ x 1 | x = a d x 1 + ∂ f ( x ) ∂ x 2 | x = a d x 2 + ⋯ + ∂ f ( x ) ∂ x n | x = a d x n = ∇ f ( a ) ⋅ d x {\displaystyle df=\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{1}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{1}+\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{2}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{2}+\dots +\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{n}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{n}=\nabla f({\boldsymbol {a}})\cdot d{\boldsymbol {x}}} which is defined as the total differential, or simply differential, of f, at a. This expression corresponds to the total infinitesimal change of f, by adding all the infinitesimal changes of f in all the xi directions. Also, df can be construed as a covector with basis vectors as the infinitesimals dxi in each direction and partial derivatives of f as the components. Geometrically, ∇f is perpendicular to the level sets of f, given by f(x) = c, which for a constant c describes an (n − 1)-dimensional hypersurface. Since f is constant on such a level set, its differential vanishes along it: d f = ( ∇ f ) ⋅ d x = 0 {\displaystyle df=(\nabla f)\cdot d{\boldsymbol {x}}=0} in which dx is an infinitesimal change in x in the hypersurface f(x) = c; since the dot product of ∇f and dx is zero, this means ∇f is perpendicular to dx. In arbitrary curvilinear coordinate systems in n dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1.
=== Differentiability classes === If all first order partial derivatives evaluated at a point a in the domain: ∂ ∂ x 1 f ( x ) | x = a , ∂ ∂ x 2 f ( x ) | x = a , … , ∂ ∂ x n f ( x ) | x = a {\displaystyle \left.{\frac {\partial }{\partial x_{1}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}\,,\quad \left.{\frac {\partial }{\partial x_{2}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}\,,\ldots ,\left.{\frac {\partial }{\partial x_{n}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}} exist and are continuous for all a in the domain, f has differentiability class C1. In general, if all order p partial derivatives evaluated at a point a: ∂ p ∂ x 1 p 1 ∂ x 2 p 2 ⋯ ∂ x n p n f ( x ) | x = a {\displaystyle \left.{\frac {\partial ^{p}}{\partial x_{1}^{p_{1}}\partial x_{2}^{p_{2}}\cdots \partial x_{n}^{p_{n}}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}} exist and are continuous, where p1, p2, …, pn, and p are as above, for all a in the domain, then f is differentiable to order p throughout the domain and has differentiability class Cp. If f is of differentiability class C∞, f has continuous partial derivatives of all orders and is called smooth. If f is an analytic function and equals its Taylor series about any point in the domain, the notation Cω denotes this differentiability class.
=== Multiple integration === Definite integration can be extended to multiple integration over several real variables with the notation: ∫ R n ⋯ ∫ R 2 ∫ R 1 f ( x 1 , x 2 , … , x n ) d x 1 d x 2 ⋯ d x n ≡ ∫ R f ( x ) d n x {\displaystyle \int _{R_{n}}\cdots \int _{R_{2}}\int _{R_{1}}f(x_{1},x_{2},\ldots ,x_{n})\,dx_{1}dx_{2}\cdots dx_{n}\equiv \int _{R}f({\boldsymbol {x}})\,d^{n}{\boldsymbol {x}}} where each region R1, R2, …, Rn is a subset of or all of the real line: R 1 ⊆ R , R 2 ⊆ R , … , R n ⊆ R , {\displaystyle R_{1}\subseteq \mathbb {R} \,,\quad R_{2}\subseteq \mathbb {R} \,,\ldots ,R_{n}\subseteq \mathbb {R} ,} and their Cartesian product gives the region to integrate over as a single set: R = R 1 × R 2 × ⋯ × R n , R ⊆ R n , {\displaystyle R=R_{1}\times R_{2}\times \dots \times R_{n}\,,\quad R\subseteq \mathbb {R} ^{n}\,,} an n-dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region R of integration (the result of a definite integral may diverge to infinity for a given region; in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration. The integral of a real-valued function of a real variable y = f(x) with respect to x has geometric interpretation as the area bounded by the curve y = f(x) and the x-axis. Multiple integrals extend the dimensionality of this concept: assuming an n-dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the n-dimensional hypervolume bounded by f(x) and the x1, x2, …, xn axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent). While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space.
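A multiple integral over a rectangular region can be sketched with a simple midpoint-rule sum (the helper name, grid size n, and test integrand are illustrative choices; a serious computation would use an adaptive quadrature routine):

```python
def double_integral(f, x_range, y_range, n=200):
    """Midpoint-rule approximation of the integral of f(x, y) over the
    rectangle R = [x0, x1] x [y0, y1], using an n-by-n grid."""
    (x0, x1), (y0, y1) = x_range, y_range
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
    return total * hx * hy

# The integral of x*y over R = [0, 1] x [0, 2] is (1/2) * 2 = 1:
approx = double_integral(lambda x, y: x * y, (0.0, 1.0), (0.0, 2.0))
print(abs(approx - 1.0) < 1e-6)  # True
```

Here the region R is the Cartesian product [0, 1] × [0, 2], matching the product-of-intervals form described above.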
This has significance in applied mathematics and physics: if f is some scalar density field and x are the position vector coordinates, i.e. some scalar quantity per unit n-dimensional hypervolume, then integrating over the region R gives the total amount of quantity in R. The more formal notion of hypervolume is the subject of measure theory. Above we used the Lebesgue measure; see Lebesgue integration for more on this topic. === Theorems === With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using the theorem on differentiation under the integral sign (the Leibniz integral rule). === Vector calculus === One can collect a number of functions each of several real variables, say y 1 = f 1 ( x 1 , x 2 , … , x n ) , y 2 = f 2 ( x 1 , x 2 , … , x n ) , … , y m = f m ( x 1 , x 2 , … , x n ) {\displaystyle y_{1}=f_{1}(x_{1},x_{2},\ldots ,x_{n})\,,\quad y_{2}=f_{2}(x_{1},x_{2},\ldots ,x_{n})\,,\ldots ,y_{m}=f_{m}(x_{1},x_{2},\ldots ,x_{n})} into an m-tuple, or sometimes as a column vector or row vector, respectively: ( y 1 , y 2 , … , y m ) ↔ [ f 1 ( x 1 , x 2 , … , x n ) f 2 ( x 1 , x 2 , … , x n ) ⋮ f m ( x 1 , x 2 , … , x n ) ] ↔ [ f 1 ( x 1 , x 2 , … , x n ) f 2 ( x 1 , x 2 , … , x n ) ⋯ f m ( x 1 , x 2 , … , x n ) ] {\displaystyle (y_{1},y_{2},\ldots ,y_{m})\leftrightarrow {\begin{bmatrix}f_{1}(x_{1},x_{2},\ldots ,x_{n})\\f_{2}(x_{1},x_{2},\ldots ,x_{n})\\\vdots \\f_{m}(x_{1},x_{2},\ldots ,x_{n})\end{bmatrix}}\leftrightarrow {\begin{bmatrix}f_{1}(x_{1},x_{2},\ldots ,x_{n})&f_{2}(x_{1},x_{2},\ldots ,x_{n})&\cdots &f_{m}(x_{1},x_{2},\ldots ,x_{n})\end{bmatrix}}} all treated on the same footing as an m-component vector field, and use whichever form is
convenient. All the above notations have a common compact notation y = f(x). The calculus of such vector fields is vector calculus. For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus. == Implicit functions == A real-valued implicit function of several real variables is not written in the form "y = f(…)". Instead, the mapping is from the space Rn + 1 to the zero element in R (just the ordinary zero 0): ϕ : R n + 1 → { 0 } ϕ ( x 1 , x 2 , … , x n , y ) = 0 {\displaystyle {\begin{aligned}&\phi :\mathbb {R} ^{n+1}\to \{0\}\\&\phi (x_{1},x_{2},\ldots ,x_{n},y)=0\end{aligned}}} is an equation in all the variables. Implicit functions are a more general way to represent functions, since if: y = f ( x 1 , x 2 , … , x n ) {\displaystyle y=f(x_{1},x_{2},\ldots ,x_{n})} then we can always define: ϕ ( x 1 , x 2 , … , x n , y ) = y − f ( x 1 , x 2 , … , x n ) = 0 {\displaystyle \phi (x_{1},x_{2},\ldots ,x_{n},y)=y-f(x_{1},x_{2},\ldots ,x_{n})=0} but the converse is not always possible, i.e. not all implicit functions have an explicit form. For example, using interval notation, let ϕ : X → { 0 } ϕ ( x , y , z ) = ( x a ) 2 + ( y b ) 2 + ( z c ) 2 − 1 = 0 X = [ − a , a ] × [ − b , b ] × [ − c , c ] = { ( x , y , z ) ∈ R 3 : − a ≤ x ≤ a , − b ≤ y ≤ b , − c ≤ z ≤ c } . {\displaystyle {\begin{aligned}&\phi :X\to \{0\}\\&\phi (x,y,z)=\left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}+\left({\frac {z}{c}}\right)^{2}-1=0\\&X=[-a,a]\times [-b,b]\times [-c,c]=\left\{(x,y,z)\in \mathbb {R} ^{3}\,:\,-a\leq x\leq a,-b\leq y\leq b,-c\leq z\leq c\right\}.\end{aligned}}} Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin (x, y, z) = (0, 0, 0) with semi-axes a, b, c along the x, y and z axes respectively. In the case a = b = c = r, we have a sphere of radius r centered at the origin.
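The ellipsoid example can be sketched as an implicit-function test: ϕ is evaluated at a point, and the point lies on the surface exactly when ϕ vanishes (the semi-axis values 2, 3, 4 are illustrative):

```python
def phi(x, y, z, a=2.0, b=3.0, c=4.0):
    """phi(x, y, z) = (x/a)**2 + (y/b)**2 + (z/c)**2 - 1; the ellipsoid
    with semi-axes a, b, c is the zero set of phi."""
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 - 1.0

# (a, 0, 0) = (2, 0, 0) lies on the surface; the origin does not:
print(abs(phi(2.0, 0.0, 0.0)) < 1e-12)  # True
print(phi(0.0, 0.0, 0.0))               # -1.0
```

With a = b = c the same code tests membership in a sphere, as in the special case mentioned above.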
Other quadric surfaces which can be described similarly include the hyperboloid and paraboloid; more generally, so can any 2D surface in 3D Euclidean space. The above example can be solved for x, y or z; however it is much tidier to write it in an implicit form. For a more sophisticated example: ϕ : R 4 → { 0 } ϕ ( t , x , y , z ) = C t z e t x − y z + A sin ( 3 ω t ) ( x 2 z − B y 6 ) = 0 {\displaystyle {\begin{aligned}&\phi :\mathbb {R} ^{4}\to \{0\}\\&\phi (t,x,y,z)=Ctze^{tx-yz}+A\sin(3\omega t)\left(x^{2}z-By^{6}\right)=0\end{aligned}}} for non-zero real constants A, B, C, ω, this function is well-defined for all (t, x, y, z), but it cannot be solved explicitly for these variables and written as "t =", "x =", etc. The implicit function theorem for functions of more than two real variables deals with the continuity and differentiability of the function, as follows. Let ϕ(x1, x2, …, xn, y) be a continuous function with continuous first order partial derivatives, and let ϕ evaluated at a point (a, b) = (a1, a2, …, an, b) be zero: ϕ ( a , b ) = 0 ; {\displaystyle \phi ({\boldsymbol {a}},b)=0;} and let the first partial derivative of ϕ with respect to y evaluated at (a, b) be non-zero: ∂ ϕ ( x , y ) ∂ y | ( x , y ) = ( a , b ) ≠ 0. {\displaystyle \left.{\frac {\partial \phi ({\boldsymbol {x}},y)}{\partial y}}\right|_{({\boldsymbol {x}},y)=({\boldsymbol {a}},b)}\neq 0.} Then, there is an interval [y1, y2] containing b, and a region R containing a, such that for every x in R there is exactly one value of y in [y1, y2] satisfying ϕ(x, y) = 0, and y is a continuous function of x so that ϕ(x, y(x)) = 0. The total differentials of the functions are: d y = ∂ y ∂ x 1 d x 1 + ∂ y ∂ x 2 d x 2 + ⋯ + ∂ y ∂ x n d x n ; {\displaystyle dy={\frac {\partial y}{\partial x_{1}}}dx_{1}+{\frac {\partial y}{\partial x_{2}}}dx_{2}+\dots +{\frac {\partial y}{\partial x_{n}}}dx_{n};} d ϕ = ∂ ϕ ∂ x 1 d x 1 + ∂ ϕ ∂ x 2 d x 2 + ⋯ + ∂ ϕ ∂ x n d x n + ∂ ϕ ∂ y d y .
{\displaystyle d\phi ={\frac {\partial \phi }{\partial x_{1}}}dx_{1}+{\frac {\partial \phi }{\partial x_{2}}}dx_{2}+\dots +{\frac {\partial \phi }{\partial x_{n}}}dx_{n}+{\frac {\partial \phi }{\partial y}}dy.} Substituting dy into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of y with respect to xi in terms of the derivatives of the original function, each as a solution of the linear equation ∂ ϕ ∂ x i + ∂ ϕ ∂ y ∂ y ∂ x i = 0 {\displaystyle {\frac {\partial \phi }{\partial x_{i}}}+{\frac {\partial \phi }{\partial y}}{\frac {\partial y}{\partial x_{i}}}=0} for i = 1, 2, …, n. == Complex-valued function of several real variables == A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If f(x1, …, xn) is such a complex valued function, it may be decomposed as f ( x 1 , … , x n ) = g ( x 1 , … , x n ) + i h ( x 1 , … , x n ) , {\displaystyle f(x_{1},\ldots ,x_{n})=g(x_{1},\ldots ,x_{n})+ih(x_{1},\ldots ,x_{n}),} where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions. This reduction works for the general properties. However, for an explicitly given function, such as: z ( x , y , α , a , q ) = q 2 π [ ln ( x + i y − a e i α ) − ln ( x + i y + a e − i α ) ] {\displaystyle z(x,y,\alpha ,a,q)={\frac {q}{2\pi }}\left[\ln \left(x+iy-ae^{i\alpha }\right)-\ln \left(x+iy+ae^{-i\alpha }\right)\right]} the computation of the real and the imaginary part may be difficult. == Applications == Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities. 
=== Examples of real-valued functions of several real variables === Examples in continuum mechanics include the local mass density ρ of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), r = (x, y, z), and time t: ρ = ρ ( r , t ) = ρ ( x , y , z , t ) {\displaystyle \rho =\rho (\mathbf {r} ,t)=\rho (x,y,z,t)} Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields. Another example is the velocity field, a vector field, which has components of velocity v = (vx, vy, vz) that are each multivariable functions of spatial coordinates and time similarly: v ( r , t ) = v ( x , y , z , t ) = [ v x ( x , y , z , t ) , v y ( x , y , z , t ) , v z ( x , y , z , t ) ] {\displaystyle \mathbf {v} (\mathbf {r} ,t)=\mathbf {v} (x,y,z,t)=[v_{x}(x,y,z,t),v_{y}(x,y,z,t),v_{z}(x,y,z,t)]} Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields. Another important example is the equation of state in thermodynamics, an equation relating pressure P, temperature T, and volume V of a fluid; in general it has an implicit form: f ( P , V , T ) = 0 {\displaystyle f(P,V,T)=0} The simplest example is the ideal gas law: f ( P , V , T ) = P V − n R T = 0 {\displaystyle f(P,V,T)=PV-nRT=0} where n is the number of moles, constant for a fixed amount of substance, and R the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form. Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function.
The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production. === Examples of complex-valued functions of several real variables === Some "physical quantities" may be actually complex valued - such as complex impedance, complex permittivity, complex permeability, and complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature. In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2d, the complex potential F ( x , y , … ) = φ ( x , y , … ) + i ψ ( x , y , … ) {\displaystyle F(x,y,\ldots )=\varphi (x,y,\ldots )+i\psi (x,y,\ldots )} is a complex valued function of the two spatial coordinates x and y, and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function. 
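The decomposition of a complex potential into its real and imaginary parts can be computed directly. The sketch below uses a single line source at the origin, F(x, y) = (q/2π) ln(x + iy), a standard potential-flow example chosen here for illustration (not the specific flow discussed in the text):

```python
import cmath

def complex_potential(x, y, q=1.0):
    """Complex potential of a line source of strength q at the origin:
    F(x, y) = (q / 2*pi) * ln(x + iy).  (Illustrative choice of flow.)"""
    return q / (2 * cmath.pi) * cmath.log(complex(x, y))

x, y = 1.0, 1.0
F = complex_potential(x, y)
phi = F.real    # velocity potential: (q / 2*pi) * ln r
psi = F.imag    # stream function:    (q / 2*pi) * theta

print(phi, psi)  # at (1, 1): r = sqrt(2), theta = pi/4, so psi = 1/8
```

Splitting F into `phi` and `psi` is exactly the reduction of a complex-valued function of real variables to a pair of real-valued ones described above.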
The spherical harmonics occur in physics and engineering as the solution to Laplace's equation, as well as the eigenfunctions of the z-component angular momentum operator, which are complex-valued functions of real-valued spherical polar angles: Y ℓ m = Y ℓ m ( θ , ϕ ) {\displaystyle Y_{\ell }^{m}=Y_{\ell }^{m}(\theta ,\phi )} In quantum mechanics, the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time t: Ψ = Ψ ( r , t ) = Ψ ( x , y , z , t ) , Φ = Φ ( p , t ) = Φ ( p x , p y , p z , t ) {\displaystyle \Psi =\Psi (\mathbf {r} ,t)=\Psi (x,y,z,t)\,,\quad \Phi =\Phi (\mathbf {p} ,t)=\Phi (p_{x},p_{y},p_{z},t)} where each is related by a Fourier transform. == See also == Real coordinate space Real analysis Complex analysis Function of several complex variables Multivariate interpolation Scalar fields == References == F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2. R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7. W. F. Hughes, J. A. Brighton (1999). Fluid Dynamics. Schaum's outline series (3rd ed.). McGraw Hill. p. 160. ISBN 978-0-07-031118-3. R. Penrose (2005). The Road to Reality. Vintage books. ISBN 978-00994-40680. S. Dineen (2001). Multivariate Calculus and Geometry. Springer Undergraduate Mathematics Series (2 ed.). Springer. ISBN 185-233-472-X. N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 354-065-340-6. M. A. Moskowitz, F. Paliogiannis (2011). Functions of Several Real Variables. World Scientific. ISBN 978-981-429-927-5. W. Fleming (1977). Functions of Several Variables. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-902-066.
|
Wikipedia:Function problem#0
|
In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'. == Definition == A functional problem P {\displaystyle P} is defined by a relation R {\displaystyle R} over strings of an arbitrary alphabet Σ {\displaystyle \Sigma } : R ⊆ Σ ∗ × Σ ∗ . {\displaystyle R\subseteq \Sigma ^{*}\times \Sigma ^{*}.} An algorithm solves P {\displaystyle P} if for every input x {\displaystyle x} such that there exists a y {\displaystyle y} satisfying ( x , y ) ∈ R {\displaystyle (x,y)\in R} , the algorithm produces one such y {\displaystyle y} , and if there are no such y {\displaystyle y} , it rejects. A promise function problem is allowed to do anything (thus may not terminate) if no such y {\displaystyle y} exists. == Examples == A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows: Given a boolean formula φ {\displaystyle \varphi } with variables x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} , find an assignment x i → { TRUE , FALSE } {\displaystyle x_{i}\rightarrow \{{\text{TRUE}},{\text{FALSE}}\}} such that φ {\displaystyle \varphi } evaluates to TRUE {\displaystyle {\text{TRUE}}} or decide that no such assignment exists. In this case the relation R {\displaystyle R} is given by tuples of suitably encoded boolean formulas and satisfying assignments. While a SAT algorithm, fed with a formula φ {\displaystyle \varphi } , only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case. 
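A minimal brute-force sketch of an FSAT algorithm (representing a formula as a Python predicate is our own encoding choice, and the exhaustive search is exponential in n, for illustration only):

```python
from itertools import product

def fsat(formula, n):
    """Brute-force FSAT: return a satisfying assignment (a tuple of booleans
    for x_1..x_n) of `formula`, or None if the formula is unsatisfiable."""
    for assignment in product([False, True], repeat=n):
        if formula(*assignment):
            return assignment        # some satisfying assignment
    return None                      # plays the role of "unsatisfiable"

# phi = (x1 OR x2) AND (NOT x1): the only satisfying assignment is x1=F, x2=T
phi = lambda x1, x2: (x1 or x2) and not x1
print(fsat(phi, 2))                     # (False, True)
print(fsat(lambda x: x and not x, 1))   # None: unsatisfiable
```

Note the contrast with a SAT decider, which would only report True/False for the same inputs.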
Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors. == Relationship to other complexity classes == Consider an arbitrary decision problem L {\displaystyle L} in the class NP. By the definition of NP, each problem instance x {\displaystyle x} that is answered 'yes' has a polynomial-size certificate y {\displaystyle y} which serves as a proof for the 'yes' answer. Thus, the set of these tuples ( x , y ) {\displaystyle (x,y)} forms a relation, representing the function problem "given x {\displaystyle x} in L {\displaystyle L} , find a certificate y {\displaystyle y} for x {\displaystyle x} ". This function problem is called the function variant of L {\displaystyle L} ; it belongs to the class FNP. FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) verified, but not necessarily efficiently found. In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time. == Self-reducibility == Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: An algorithm can first ask whether the formula φ {\displaystyle \varphi } is satisfiable. After that the algorithm can fix variable x 1 {\displaystyle x_{1}} to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keeps x 1 {\displaystyle x_{1}} fixed to TRUE and continues to fix x 2 {\displaystyle x_{2}} , otherwise it decides that x 1 {\displaystyle x_{1}} has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. 
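The self-reduction just described can be sketched directly. The brute-force oracle below is a stand-in for any SAT decision procedure; only the decision answers are used, and the algorithm makes n + 1 oracle calls:

```python
from itertools import product

def sat_oracle(formula, n):
    """Decision oracle for SAT: is `formula` over n Boolean variables
    satisfiable?  (Brute force here; any SAT decider could be swapped in.)"""
    return any(formula(*a) for a in product([False, True], repeat=n))

def fsat_via_sat(formula, n):
    """Solve FSAT with n + 1 oracle calls, as in the text: fix the variables
    one at a time, always keeping the restricted formula satisfiable."""
    if not sat_oracle(formula, n):
        return None                  # no satisfying assignment exists
    fixed = []
    for i in range(n):
        # Try fixing the next variable to True.
        def restricted(*rest, f=formula, pre=tuple(fixed)):
            return f(*pre, True, *rest)
        if sat_oracle(restricted, n - i - 1):
            fixed.append(True)
        else:
            fixed.append(False)      # then False must keep it satisfiable
    return tuple(fixed)

phi = lambda x1, x2, x3: (x1 or x2) and not x1 and x3
print(fsat_via_sat(phi, 3))  # (False, True, True)
```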
In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy), while the integer factorization problem is believed to be hard for a classical computer. There are several (slightly different) notions of self-reducibility. == Reductions and complete problems == Function problems can be reduced much like decision problems: Given function problems Π R {\displaystyle \Pi _{R}} and Π S {\displaystyle \Pi _{S}} we say that Π R {\displaystyle \Pi _{R}} reduces to Π S {\displaystyle \Pi _{S}} if there exist polynomial-time computable functions f {\displaystyle f} and g {\displaystyle g} such that for all instances x {\displaystyle x} of R {\displaystyle R} and possible solutions y {\displaystyle y} of S {\displaystyle S} , it holds that If x {\displaystyle x} has an R {\displaystyle R} -solution, then f ( x ) {\displaystyle f(x)} has an S {\displaystyle S} -solution. ( f ( x ) , y ) ∈ S ⟹ ( x , g ( x , y ) ) ∈ R . {\displaystyle (f(x),y)\in S\implies (x,g(x,y))\in R.} It is therefore possible to define FNP-complete problems analogous to NP-complete problems: A problem Π R {\displaystyle \Pi _{R}} is FNP-complete if every problem in FNP can be reduced to Π R {\displaystyle \Pi _{R}} . The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that P = N P {\displaystyle \mathbf {P} =\mathbf {NP} } if and only if F P = F N P {\displaystyle \mathbf {FP} =\mathbf {FNP} } . 
== Total function problems == The relation R ( x , y ) {\displaystyle R(x,y)} used to define function problems has the drawback of being incomplete: Not every input x {\displaystyle x} has a counterpart y {\displaystyle y} such that ( x , y ) ∈ R {\displaystyle (x,y)\in R} . Therefore the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem it follows that N P = co-NP {\displaystyle \mathbf {NP} ={\textbf {co-NP}}} . == See also == Decision problem Search problem Counting problem (complexity) Optimization problem == References ==
|
Wikipedia:Function series#0
|
In calculus, a function series is a series where each of its terms is a function, not just a real or complex number. == Examples == Examples of function series include ordinary power series, Laurent series, Fourier series, Liouville–Neumann series, formal power series, and Puiseux series. == Convergence == There exist many types of convergence for a function series, such as uniform convergence, pointwise convergence, and convergence almost everywhere. Each type of convergence corresponds to a different metric for the space of functions that are added together in the series, and thus a different type of limit. The Weierstrass M-test is a useful result in studying convergence of function series. == See also == Function space == References == Chun Wa Wong (2013). Introduction to Mathematical Physics: Methods & Concepts. Oxford University Press. p. 655.
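The Weierstrass M-test mentioned above can be illustrated numerically. The series Σ sin(nx)/n² is our own choice of example: its terms are dominated by Mₙ = 1/n², whose sum converges, so the series converges uniformly and the tail bound Σₙ₌ₙ₊₁ 1/n² < 1/N limits the error at every point x at once:

```python
import math

def partial_sum(x, N):
    """Partial sum S_N(x) of the function series sum_{n>=1} sin(n x)/n^2."""
    return sum(math.sin(n * x) / n ** 2 for n in range(1, N + 1))

N = 100
tail_bound = 1.0 / N                      # sum_{n>N} 1/n^2 < 1/N

# Use a much longer partial sum as a stand-in for the limit function.
reference = lambda x: partial_sum(x, 20000)

worst_error = max(abs(reference(x) - partial_sum(x, N))
                  for x in (0.0, 0.5, 1.0, 2.0, 3.0))
print(worst_error <= tail_bound)  # True: a uniform (x-independent) bound
```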
|
Wikipedia:Function space#0
|
In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space. == In linear algebra == Let F be a field and let X be any set. The functions X → F can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → F, any x in X, and any c in F, define ( f + g ) ( x ) = f ( x ) + g ( x ) ( c ⋅ f ) ( x ) = c ⋅ f ( x ) {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}} When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if V and also X itself are vector spaces over F, the set of linear maps X → V form a vector space over F with pointwise operations (often denoted Hom(X,V)). One such space is the dual space of X: the set of linear functionals X → F with addition and scalar multiplication defined pointwise. The cardinal dimension of a function space with no extra structure can be found by the Erdős–Kaplansky theorem. == Examples == Function spaces appear in various areas of mathematics: In set theory, the set of functions from X to Y may be denoted {X → Y} or YX. As a special case, the power set of a set X may be identified with the set of all functions from X to {0, 1}, denoted 2X. The set of bijections from X to Y is denoted X ↔ Y {\displaystyle X\leftrightarrow Y} . The factorial notation X! may be used for permutations of a single set X. 
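The pointwise vector-space operations from the linear-algebra discussion above can be sketched in code (the helper names `add` and `scale` are our own):

```python
def add(f, g):
    """Pointwise sum of two functions X -> F: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scalar multiple: (c * f)(x) = c * f(x)."""
    return lambda x: c * f(x)

f = lambda x: x * x
g = lambda x: 3 * x + 1

h = add(scale(2, f), g)   # the function h(x) = 2x^2 + 3x + 1
print(h(2))               # 2*4 + 6 + 1 = 15
```

Because the operations are defined point by point, the vector-space axioms for the function space follow directly from the axioms of the codomain.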
In functional analysis, the same is seen for continuous linear transformations, including topologies on the vector spaces in the above, and many of the major examples are function spaces carrying a topology; the best known examples include Hilbert spaces and Banach spaces. In functional analysis, the set of all functions from the natural numbers to some set X is called a sequence space. It consists of the set of all possible sequences of elements of X. In topology, one may attempt to put a topology on the space of continuous functions from a topological space X to another one Y, with utility depending on the nature of the spaces. A commonly used example is the compact-open topology, e.g. loop space. Also available is the product topology on the space of set theoretic functions (i.e. not necessarily continuous functions) YX. In this context, this topology is also referred to as the topology of pointwise convergence. In algebraic topology, the study of homotopy theory is essentially that of discrete invariants of function spaces; In the theory of stochastic processes, the basic technical problem is how to construct a probability measure on a function space of paths of the process (functions of time); In category theory, the function space is called an exponential object or map object. It appears in one way as the representation canonical bifunctor; but as (single) functor, of type [ X , − ] {\displaystyle [X,-]} , it appears as an adjoint functor to a functor of type − × X {\displaystyle -\times X} on objects; In functional programming and lambda calculus, function types are used to express the idea of higher-order functions In programming more generally, many higher-order function concepts occur with or without explicit typing, such as closures. In domain theory, the basic idea is to find constructions from partial orders that can model lambda calculus, by creating a well-behaved Cartesian closed category. 
In the representation theory of finite groups, given two finite-dimensional representations V and W of a group G, one can form a representation of G over the vector space of linear maps Hom(V,W) called the Hom representation. == Functional analysis == Functional analysis is organized around adequate techniques to bring function spaces as topological vector spaces within reach of the ideas that would apply to normed spaces of finite dimension. Here we use the real line as an example domain, but the spaces below exist on suitable open subsets Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} C ( R ) {\displaystyle C(\mathbb {R} )} continuous functions endowed with the uniform norm topology C c ( R ) {\displaystyle C_{c}(\mathbb {R} )} continuous functions with compact support B ( R ) {\displaystyle B(\mathbb {R} )} bounded functions C 0 ( R ) {\displaystyle C_{0}(\mathbb {R} )} continuous functions which vanish at infinity C r ( R ) {\displaystyle C^{r}(\mathbb {R} )} continuous functions that have r continuous derivatives. C ∞ ( R ) {\displaystyle C^{\infty }(\mathbb {R} )} smooth functions C c ∞ ( R ) {\displaystyle C_{c}^{\infty }(\mathbb {R} )} smooth functions with compact support (i.e. 
the set of bump functions) C ω ( R ) {\displaystyle C^{\omega }(\mathbb {R} )} real analytic functions L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , for 1 ≤ p ≤ ∞ {\displaystyle 1\leq p\leq \infty } , is the Lp space of measurable functions whose p-norm ‖ f ‖ p = ( ∫ R | f | p ) 1 / p {\textstyle \|f\|_{p}=\left(\int _{\mathbb {R} }|f|^{p}\right)^{1/p}} is finite S ( R ) {\displaystyle {\mathcal {S}}(\mathbb {R} )} , the Schwartz space of rapidly decreasing smooth functions and its continuous dual, S ′ ( R ) {\displaystyle {\mathcal {S}}'(\mathbb {R} )} tempered distributions D ( R ) {\displaystyle D(\mathbb {R} )} compact support in limit topology W k , p {\displaystyle W^{k,p}} Sobolev space of functions whose weak derivatives up to order k are in L p {\displaystyle L^{p}} O U {\displaystyle {\mathcal {O}}_{U}} holomorphic functions linear functions piecewise linear functions continuous functions, compact open topology all functions, space of pointwise convergence Hardy space Hölder space Càdlàg functions, also known as the Skorokhod space Lip 0 ( R ) {\displaystyle {\text{Lip}}_{0}(\mathbb {R} )} , the space of all Lipschitz functions on R {\displaystyle \mathbb {R} } that vanish at zero. == Norm == If y is an element of the function space C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} of all continuous functions that are defined on a closed interval [a, b], the norm ‖ y ‖ ∞ {\displaystyle \|y\|_{\infty }} defined on C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} is the maximum absolute value of y (x) for a ≤ x ≤ b, ‖ y ‖ ∞ ≡ max a ≤ x ≤ b | y ( x ) | where y ∈ C ( a , b ) {\displaystyle \|y\|_{\infty }\equiv \max _{a\leq x\leq b}|y(x)|\qquad {\text{where}}\ \ y\in {\mathcal {C}}(a,b)} is called the uniform norm or supremum norm ('sup norm'). == Bibliography == Kolmogorov, A. N., & Fomin, S. V. (1967). Elements of the theory of functions and functional analysis. Courier Dover Publications. Stein, Elias; Shakarchi, R. (2011). 
Functional Analysis: An Introduction to Further Topics in Analysis. Princeton University Press. == See also == List of mathematical functions Clifford algebra Tensor field Spectral theory Functional determinant == References ==
|
Wikipedia:Functional decomposition#0
|
In engineering, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts. This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e., independence or non-interaction). Interactions between the components (interaction in the statistical sense: a situation in which one causal variable depends on the state of a second causal variable) are critical to the function of the collection. Not all interactions may be observable or measurable, but they can possibly be deduced through repeated observation, synthesis, validation and verification of composite behavior. == Motivation for decomposition == Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable x 1 {\displaystyle x_{1}} only depends directly on variable x 2 {\displaystyle x_{2}} , rather than depending on the entire set of variables. We would say that variable x 2 {\displaystyle x_{2}} screens off variable x 1 {\displaystyle x_{1}} from the rest of the world. Practical examples of this phenomenon surround us. Consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable ( x 1 {\displaystyle {x_{1}}} ) takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}. 
Now, let's say the variable x 1 {\displaystyle {x_{1}}} depends on two other variables, "weather" with values of {"sun", "rain", "snow"}, and "GW Bridge traffic" with values {"10mph", "5mph", "1mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., low pressure system over Canada, butterfly flapping in Japan, etc.) and the Bridge traffic variable (e.g., an accident on I-95, presidential motorcade, etc.) all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables screen off West Side Highway traffic from all other potential influences. That is, all other influences act through them. == Applications == Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems. == Knowledge representation == Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as Logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition. Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold approximately. Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks. == Database theory == See database normalization. == Machine learning == In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. 
This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations. However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution. As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms. == Software architecture == Functional Decomposition is a design method intending to produce a non-implementation, architectural description of a computer program. 
The software architect first establishes a series of functions and types that accomplishes the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives Modules from this activity. == Signal processing == Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, f ( t ) {\displaystyle f(t)} . Then f ( t ) {\displaystyle f(t)} can be decomposed into a linear combination of other functions, called component signals: f ( t ) = a 1 ⋅ g 1 ( t ) + a 2 ⋅ g 2 ( t ) + a 3 ⋅ g 3 ( t ) + ⋯ + a n ⋅ g n ( t ) {\displaystyle f(t)=a_{1}\cdot g_{1}(t)+a_{2}\cdot g_{2}(t)+a_{3}\cdot g_{3}(t)+\dots +a_{n}\cdot g_{n}(t)} Here, { g 1 ( t ) , g 2 ( t ) , g 3 ( t ) , … , g n ( t ) } {\displaystyle \{g_{1}(t),g_{2}(t),g_{3}(t),\dots ,g_{n}(t)\}} are the component signals. Note that { a 1 , a 2 , a 3 , … , a n } {\displaystyle \{a_{1},a_{2},a_{3},\dots ,a_{n}\}} are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let T { } {\displaystyle T\{\}} represent the effect of the system, then the output signal is T { f ( t ) } {\displaystyle T\{f(t)\}} , which can be expressed as: T { f ( t ) } = T { a 1 ⋅ g 1 ( t ) + a 2 ⋅ g 2 ( t ) + a 3 ⋅ g 3 ( t ) + ⋯ + a n ⋅ g n ( t ) } {\displaystyle T\{f(t)\}=T\{a_{1}\cdot g_{1}(t)+a_{2}\cdot g_{2}(t)+a_{3}\cdot g_{3}(t)+\dots +a_{n}\cdot g_{n}(t)\}} = a 1 ⋅ T { g 1 ( t ) } + a 2 ⋅ T { g 2 ( t ) } + a 3 ⋅ T { g 3 ( t ) } + ⋯ + a n ⋅ T { g n ( t ) } {\displaystyle =a_{1}\cdot T\{g_{1}(t)\}+a_{2}\cdot T\{g_{2}(t)\}+a_{3}\cdot T\{g_{3}(t)\}+\dots +a_{n}\cdot T\{g_{n}(t)\}} In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform. 
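The linearity just described can be checked numerically. The sketch below uses discrete convolution as a prototypical LTI system T (the signal and kernel values are arbitrary, chosen only for the demonstration):

```python
def lti_system(signal, kernel=(0.5, 0.25)):
    """A prototypical LTI system T: discrete convolution of the input
    signal with a fixed impulse response (arbitrary kernel values)."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out

g1 = [1.0, 0.0, 2.0]          # component signals
g2 = [0.0, 3.0, -1.0]
a1, a2 = 2.0, -0.5            # constants of the decomposition

f = [a1 * u + a2 * v for u, v in zip(g1, g2)]      # f = a1*g1 + a2*g2

lhs = lti_system(f)                                 # T{f}
rhs = [a1 * u + a2 * v
       for u, v in zip(lti_system(g1), lti_system(g2))]

agrees = all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
print(agrees)  # True: the system acts separately on each component
```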
== Systems engineering == Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher-level system functions. The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description. This exercise forces each part of the system to have a pure function. When a system is designed as a set of pure functions, those functions can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function. For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, an amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD player, it can probably fit the same interfaces. == See also == Bayesian networks Currying Database normalization Function composition (computer science) Inductive inference Knowledge representation == Further reading == Zupan, Blaž; Bohanec, Marko; Bratko, Ivan; Demšar, Janez (July 1997). "Machine learning by function decomposition". In Douglas H. Fisher (ed.). Proceedings of the Fourteenth International Conference on Machine Learning. ICML '97: July 8–12, 1997. San Francisco: Morgan Kaufmann Publishers. pp. 421–429. ISBN 978-1-55860-486-5. A review of other applications and function decomposition. Also presents methods based on information theory and graph theory. == Notes == == References ==
|
Wikipedia:Functional derivative#0
|
In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative) relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends. In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integrand L of a functional, if a function f is varied by adding to it another function δf that is arbitrarily small, and the resulting integrand is expanded in powers of δf, the coefficient of δf in the first order term is called the functional derivative. For example, consider the functional J [ f ] = ∫ a b L ( x , f ( x ) , f ′ ( x ) ) d x , {\displaystyle J[f]=\int _{a}^{b}L(\,x,f(x),f'{(x)}\,)\,dx\,,} where f ′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f +δf, f ′+δf ′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows: δ J = ∫ a b ( ∂ L ∂ f δ f ( x ) + ∂ L ∂ f ′ d d x δ f ( x ) ) d x = ∫ a b ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) δ f ( x ) d x + ∂ L ∂ f ′ ( b ) δ f ( b ) − ∂ L ∂ f ′ ( a ) δ f ( a ) {\displaystyle {\begin{aligned}\delta J&=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}\delta f(x)+{\frac {\partial L}{\partial f'}}{\frac {d}{dx}}\delta f(x)\right)\,dx\,\\[1ex]&=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\delta f(x)\,dx\,+\,{\frac {\partial L}{\partial f'}}(b)\delta f(b)\,-\,{\frac {\partial L}{\partial f'}}(a)\delta f(a)\end{aligned}}} where the variation in the derivative, δf ′ was rewritten as the derivative of the variation (δf) ′, and integration by parts was used in these derivatives. == Definition == In this section, the functional differential (or variation or first variation) is defined. 
Then the functional derivative is defined in terms of the functional differential. === Functional differential === Suppose B {\displaystyle B} is a Banach space and F {\displaystyle F} is a functional defined on B {\displaystyle B} . The differential of F {\displaystyle F} at a point ρ ∈ B {\displaystyle \rho \in B} is the linear functional δ F [ ρ , ⋅ ] {\displaystyle \delta F[\rho ,\cdot ]} on B {\displaystyle B} defined by the condition that, for all ϕ ∈ B {\displaystyle \phi \in B} , F [ ρ + ϕ ] − F [ ρ ] = δ F [ ρ ; ϕ ] + ε ‖ ϕ ‖ {\displaystyle F[\rho +\phi ]-F[\rho ]=\delta F[\rho ;\phi ]+\varepsilon \left\|\phi \right\|} where ε {\displaystyle \varepsilon } is a real number that depends on ‖ ϕ ‖ {\displaystyle \|\phi \|} in such a way that ε → 0 {\displaystyle \varepsilon \to 0} as ‖ ϕ ‖ → 0 {\displaystyle \|\phi \|\to 0} . This means that δ F [ ρ , ⋅ ] {\displaystyle \delta F[\rho ,\cdot ]} is the Fréchet derivative of F {\displaystyle F} at ρ {\displaystyle \rho } . However, this notion of functional differential is so strong it may not exist, and in those cases a weaker notion, like the Gateaux derivative is preferred. In many practical cases, the functional differential is defined as the directional derivative δ F [ ρ , ϕ ] = lim ε → 0 F [ ρ + ε ϕ ] − F [ ρ ] ε = [ d d ε F [ ρ + ε ϕ ] ] ε = 0 . {\displaystyle {\begin{aligned}\delta F[\rho ,\phi ]&=\lim _{\varepsilon \to 0}{\frac {F[\rho +\varepsilon \phi ]-F[\rho ]}{\varepsilon }}\\[1ex]&=\left[{\frac {d}{d\varepsilon }}F[\rho +\varepsilon \phi ]\right]_{\varepsilon =0}.\end{aligned}}} Note that this notion of the functional differential can even be defined without a norm. 
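The directional-derivative form above can be approximated numerically. A sketch, for the hypothetical functional F[ρ] = ∫₀¹ ρ(x)² dx, whose differential works out to δF[ρ, φ] = 2∫₀¹ ρ(x)φ(x) dx:

```python
def integrate(func, a=0.0, b=1.0, n=10000):
    """Midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

# Hypothetical functional F[rho] = integral of rho(x)^2 over [0, 1].
F = lambda rho: integrate(lambda x: rho(x) ** 2)

rho = lambda x: x
phi = lambda x: 1.0 - x

# Directional derivative: (F[rho + eps*phi] - F[rho]) / eps as eps -> 0.
eps = 1e-6
finite_diff = (F(lambda x: rho(x) + eps * phi(x)) - F(rho)) / eps

# Exact differential: 2 * integral of rho*phi = 2 * int x(1-x) dx = 1/3.
exact = 2.0 * integrate(lambda x: rho(x) * phi(x))
print(round(finite_diff, 4), round(exact, 4))
```

The finite difference converges to the exact directional derivative as ε → 0, matching the limit definition.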
=== Functional derivative === In many applications, the domain of the functional F {\displaystyle F} is a space of differentiable functions ρ {\displaystyle \rho } defined on some space Ω {\displaystyle \Omega } and F {\displaystyle F} is of the form F [ ρ ] = ∫ Ω L ( x , ρ ( x ) , D ρ ( x ) ) d x {\displaystyle F[\rho ]=\int _{\Omega }L(x,\rho (x),D\rho (x))\,dx} for some function L ( x , ρ ( x ) , D ρ ( x ) ) {\displaystyle L(x,\rho (x),D\rho (x))} that may depend on x {\displaystyle x} , the value ρ ( x ) {\displaystyle \rho (x)} and the derivative D ρ ( x ) {\displaystyle D\rho (x)} . If this is the case and, moreover, δ F [ ρ , ϕ ] {\displaystyle \delta F[\rho ,\phi ]} can be written as the integral of ϕ {\displaystyle \phi } times another function (denoted δF/δρ) δ F [ ρ , ϕ ] = ∫ Ω δ F δ ρ ( x ) ϕ ( x ) d x {\displaystyle \delta F[\rho ,\phi ]=\int _{\Omega }{\frac {\delta F}{\delta \rho }}(x)\ \phi (x)\ dx} then this function δF/δρ is called the functional derivative of F at ρ. If F {\displaystyle F} is restricted to only certain functions ρ {\displaystyle \rho } (for example, if there are some boundary conditions imposed) then ϕ {\displaystyle \phi } is restricted to functions such that ρ + ε ϕ {\displaystyle \rho +\varepsilon \phi } continues to satisfy these conditions. Heuristically, ϕ {\displaystyle \phi } is the change in ρ {\displaystyle \rho } , so we 'formally' have ϕ = δ ρ {\displaystyle \phi =\delta \rho } , and then this is similar in form to the total differential of a function F ( ρ 1 , ρ 2 , … , ρ n ) {\displaystyle F(\rho _{1},\rho _{2},\dots ,\rho _{n})} , d F = ∑ i = 1 n ∂ F ∂ ρ i d ρ i , {\displaystyle dF=\sum _{i=1}^{n}{\frac {\partial F}{\partial \rho _{i}}}\ d\rho _{i},} where ρ 1 , ρ 2 , … , ρ n {\displaystyle \rho _{1},\rho _{2},\dots ,\rho _{n}} are independent variables. 
Comparing the last two equations, the functional derivative δ F / δ ρ ( x ) {\displaystyle \delta F/\delta \rho (x)} has a role similar to that of the partial derivative ∂ F / ∂ ρ i {\displaystyle \partial F/\partial \rho _{i}} , where the variable of integration x {\displaystyle x} is like a continuous version of the summation index i {\displaystyle i} . One thinks of δF/δρ as the gradient of F at the point ρ, so the value δF/δρ(x) measures how much the functional F will change if the function ρ is changed at the point x. Hence the formula ∫ δ F δ ρ ( x ) ϕ ( x ) d x {\displaystyle \int {\frac {\delta F}{\delta \rho }}(x)\phi (x)\;dx} is regarded as the directional derivative at point ρ {\displaystyle \rho } in the direction of ϕ {\displaystyle \phi } . This is analogous to vector calculus, where the inner product of a vector v {\displaystyle v} with the gradient gives the directional derivative in the direction of v {\displaystyle v} . == Properties == Like the derivative of a function, the functional derivative satisfies the following properties, where F[ρ] and G[ρ] are functionals: Linearity: δ ( λ F + μ G ) [ ρ ] δ ρ ( x ) = λ δ F [ ρ ] δ ρ ( x ) + μ δ G [ ρ ] δ ρ ( x ) , {\displaystyle {\frac {\delta (\lambda F+\mu G)[\rho ]}{\delta \rho (x)}}=\lambda {\frac {\delta F[\rho ]}{\delta \rho (x)}}+\mu {\frac {\delta G[\rho ]}{\delta \rho (x)}},} where λ, μ are constants. Product rule: δ ( F G ) [ ρ ] δ ρ ( x ) = δ F [ ρ ] δ ρ ( x ) G [ ρ ] + F [ ρ ] δ G [ ρ ] δ ρ ( x ) , {\displaystyle {\frac {\delta (FG)[\rho ]}{\delta \rho (x)}}={\frac {\delta F[\rho ]}{\delta \rho (x)}}G[\rho ]+F[\rho ]{\frac {\delta G[\rho ]}{\delta \rho (x)}}\,,} Chain rules: If F is a functional and G another functional, then δ F [ G [ ρ ] ] δ ρ ( y ) = ∫ d x δ F [ G ] δ G ( x ) G = G [ ρ ] ⋅ δ G [ ρ ] ( x ) δ ρ ( y ) . 
{\displaystyle {\frac {\delta F[G[\rho ]]}{\delta \rho (y)}}=\int dx{\frac {\delta F[G]}{\delta G(x)}}_{G=G[\rho ]}\cdot {\frac {\delta G[\rho ](x)}{\delta \rho (y)}}\ .} If G is an ordinary differentiable function (local functional) g, then this reduces to δ F [ g ( ρ ) ] δ ρ ( y ) = δ F [ g ( ρ ) ] δ g [ ρ ( y ) ] d g ( ρ ) d ρ ( y ) . {\displaystyle {\frac {\delta F[g(\rho )]}{\delta \rho (y)}}={\frac {\delta F[g(\rho )]}{\delta g[\rho (y)]}}\ {\frac {dg(\rho )}{d\rho (y)}}\ .} == Determining functional derivatives == A formula to determine functional derivatives for a common class of functionals can be written as the integral of a function and its derivatives. This is a generalization of the Euler–Lagrange equation: indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century). === Formula === Given a functional F [ ρ ] = ∫ f ( r , ρ ( r ) , ∇ ρ ( r ) ) d r , {\displaystyle F[\rho ]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},} and a function ϕ ( r ) {\displaystyle \phi ({\boldsymbol {r}})} that vanishes on the boundary of the region of integration, from a previous section Definition, ∫ δ F δ ρ ( r ) ϕ ( r ) d r = [ d d ε ∫ f ( r , ρ + ε ϕ , ∇ ρ + ε ∇ ϕ ) d r ] ε = 0 = ∫ ( ∂ f ∂ ρ ϕ + ∂ f ∂ ∇ ρ ⋅ ∇ ϕ ) d r = ∫ [ ∂ f ∂ ρ ϕ + ∇ ⋅ ( ∂ f ∂ ∇ ρ ϕ ) − ( ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ] d r = ∫ [ ∂ f ∂ ρ ϕ − ( ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ] d r = ∫ ( ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ( r ) d r . 
{\displaystyle {\begin{aligned}\int {\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}\,\phi ({\boldsymbol {r}})\,d{\boldsymbol {r}}&=\left[{\frac {d}{d\varepsilon }}\int f({\boldsymbol {r}},\rho +\varepsilon \phi ,\nabla \rho +\varepsilon \nabla \phi )\,d{\boldsymbol {r}}\right]_{\varepsilon =0}\\&=\int \left({\frac {\partial f}{\partial \rho }}\,\phi +{\frac {\partial f}{\partial \nabla \rho }}\cdot \nabla \phi \right)d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi +\nabla \cdot \left({\frac {\partial f}{\partial \nabla \rho }}\,\phi \right)-\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi -\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left({\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}\,.\end{aligned}}} The second line is obtained using the total derivative, where ∂f /∂∇ρ is a derivative of a scalar with respect to a vector. The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that ϕ = 0 {\displaystyle \phi =0} on the boundary of the region of integration. Since ϕ {\displaystyle \phi } is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is δ F δ ρ ( r ) = ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ∇ ρ {\displaystyle {\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}} where ρ = ρ(r) and f = f (r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. 
(See the example Coulomb potential energy functional.) The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. The functional would be, F [ ρ ( r ) ] = ∫ f ( r , ρ ( r ) , ∇ ρ ( r ) , ∇ ( 2 ) ρ ( r ) , … , ∇ ( N ) ρ ( r ) ) d r , {\displaystyle F[\rho ({\boldsymbol {r}})]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}),\nabla ^{(2)}\rho ({\boldsymbol {r}}),\dots ,\nabla ^{(N)}\rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},} where the vector r ∈ Rn, and ∇(i) is a tensor whose ni components are partial derivative operators of order i, [ ∇ ( i ) ] α 1 α 2 ⋯ α i = ∂ i ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i where α 1 , α 2 , … , α i = 1 , 2 , … , n . {\displaystyle \left[\nabla ^{(i)}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\qquad \qquad {\text{where}}\quad \alpha _{1},\alpha _{2},\dots ,\alpha _{i}=1,2,\dots ,n\ .} An analogous application of the definition of the functional derivative yields δ F [ ρ ] δ ρ = ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ( ∇ ρ ) + ∇ ( 2 ) ⋅ ∂ f ∂ ( ∇ ( 2 ) ρ ) + ⋯ + ( − 1 ) N ∇ ( N ) ⋅ ∂ f ∂ ( ∇ ( N ) ρ ) = ∂ f ∂ ρ + ∑ i = 1 N ( − 1 ) i ∇ ( i ) ⋅ ∂ f ∂ ( ∇ ( i ) ρ ) . 
{\displaystyle {\begin{aligned}{\frac {\delta F[\rho ]}{\delta \rho }}&{}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial (\nabla \rho )}}+\nabla ^{(2)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(2)}\rho \right)}}+\dots +(-1)^{N}\nabla ^{(N)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(N)}\rho \right)}}\\&{}={\frac {\partial f}{\partial \rho }}+\sum _{i=1}^{N}(-1)^{i}\nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\ .\end{aligned}}} In the last two equations, the ni components of the tensor ∂ f ∂ ( ∇ ( i ) ρ ) {\displaystyle {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}} are partial derivatives of f with respect to partial derivatives of ρ, [ ∂ f ∂ ( ∇ ( i ) ρ ) ] α 1 α 2 ⋯ α i = ∂ f ∂ ρ α 1 α 2 ⋯ α i {\displaystyle \left[{\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}} where ρ α 1 α 2 ⋯ α i ≡ ∂ i ρ ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i {\displaystyle \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}\equiv {\frac {\partial ^{\,i}\rho }{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}} , and the tensor scalar product is, ∇ ( i ) ⋅ ∂ f ∂ ( ∇ ( i ) ρ ) = ∑ α 1 , α 2 , ⋯ , α i = 1 n ∂ i ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i ∂ f ∂ ρ α 1 α 2 ⋯ α i . 
{\displaystyle \nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}=\sum _{\alpha _{1},\alpha _{2},\cdots ,\alpha _{i}=1}^{n}\ {\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\ {\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}\ .} === Examples === ==== Thomas–Fermi kinetic energy functional ==== The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of density-functional theory of electronic structure: T T F [ ρ ] = C F ∫ ρ 5 / 3 ( r ) d r . {\displaystyle T_{\mathrm {TF} }[\rho ]=C_{\mathrm {F} }\int \rho ^{5/3}(\mathbf {r} )\,d\mathbf {r} \,.} Since the integrand of TTF[ρ] does not involve derivatives of ρ(r), the functional derivative of TTF[ρ] is, δ T T F δ ρ ( r ) = C F ∂ ρ 5 / 3 ( r ) ∂ ρ ( r ) = 5 3 C F ρ 2 / 3 ( r ) . {\displaystyle {\frac {\delta T_{\mathrm {TF} }}{\delta \rho ({\boldsymbol {r}})}}=C_{\mathrm {F} }{\frac {\partial \rho ^{5/3}(\mathbf {r} )}{\partial \rho (\mathbf {r} )}}={\frac {5}{3}}C_{\mathrm {F} }\rho ^{2/3}(\mathbf {r} )\,.} ==== Coulomb potential energy functional ==== The electron-nucleus potential energy is V [ ρ ] = ∫ ρ ( r ) | r | d r . {\displaystyle V[\rho ]=\int {\frac {\rho ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}.} Applying the definition of functional derivative, ∫ δ V δ ρ ( r ) ϕ ( r ) d r = [ d d ε ∫ ρ ( r ) + ε ϕ ( r ) | r | d r ] ε = 0 = ∫ ϕ ( r ) | r | d r . {\displaystyle {\begin{aligned}\int {\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}\ \phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}&{}=\left[{\frac {d}{d\varepsilon }}\int {\frac {\rho ({\boldsymbol {r}})+\varepsilon \phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}\right]_{\varepsilon =0}\\[1ex]&{}=\int {\frac {\phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}\,.\end{aligned}}} So, δ V δ ρ ( r ) = 1 | r | . 
{\displaystyle {\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}={\frac {1}{|{\boldsymbol {r}}|}}\ .} The functional derivative of the classical part of the electron-electron interaction (often called Hartree energy) is J [ ρ ] = 1 2 ∬ ρ ( r ) ρ ( r ′ ) | r − r ′ | d r d r ′ . {\displaystyle J[\rho ]={\frac {1}{2}}\iint {\frac {\rho (\mathbf {r} )\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d\mathbf {r} d\mathbf {r} '\,.} From the definition of the functional derivative, ∫ δ J δ ρ ( r ) ϕ ( r ) d r = [ d d ε J [ ρ + ε ϕ ] ] ε = 0 = [ d d ε ( 1 2 ∬ [ ρ ( r ) + ε ϕ ( r ) ] [ ρ ( r ′ ) + ε ϕ ( r ′ ) ] | r − r ′ | d r d r ′ ) ] ε = 0 = 1 2 ∬ ρ ( r ′ ) ϕ ( r ) | r − r ′ | d r d r ′ + 1 2 ∬ ρ ( r ) ϕ ( r ′ ) | r − r ′ | d r d r ′ {\displaystyle {\begin{aligned}\int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}&{}=\left[{\frac {d\ }{d\varepsilon }}\,J[\rho +\varepsilon \phi ]\right]_{\varepsilon =0}\\&{}=\left[{\frac {d\ }{d\varepsilon }}\,\left({\frac {1}{2}}\iint {\frac {[\rho ({\boldsymbol {r}})+\varepsilon \phi ({\boldsymbol {r}})]\,[\rho ({\boldsymbol {r}}')+\varepsilon \phi ({\boldsymbol {r}}')]}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\right)\right]_{\varepsilon =0}\\&{}={\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}}')\phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'+{\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}})\phi ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\\\end{aligned}}} The first and second terms on the right hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. 
Therefore, ∫ δ J δ ρ ( r ) ϕ ( r ) d r = ∫ ( ∫ ρ ( r ′ ) | r − r ′ | d r ′ ) ϕ ( r ) d r {\displaystyle \int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}=\int \left(\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\right)\phi ({\boldsymbol {r}})d{\boldsymbol {r}}} and the functional derivative of the electron-electron Coulomb potential energy functional J[ρ] is, δ J δ ρ ( r ) = ∫ ρ ( r ′ ) | r − r ′ | d r ′ . {\displaystyle {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}=\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\,.} The second functional derivative is δ 2 J [ ρ ] δ ρ ( r ′ ) δ ρ ( r ) = ∂ ∂ ρ ( r ′ ) ( ρ ( r ′ ) | r − r ′ | ) = 1 | r − r ′ | . {\displaystyle {\frac {\delta ^{2}J[\rho ]}{\delta \rho (\mathbf {r} ')\delta \rho (\mathbf {r} )}}={\frac {\partial }{\partial \rho (\mathbf {r} ')}}\left({\frac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\right)={\frac {1}{|\mathbf {r} -\mathbf {r} '|}}.} ==== von Weizsäcker kinetic energy functional ==== In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas-Fermi kinetic energy functional to make it better suit a molecular electron cloud: T W [ ρ ] = 1 8 ∫ ∇ ρ ( r ) ⋅ ∇ ρ ( r ) ρ ( r ) d r = ∫ t W ( r ) d r , {\displaystyle T_{\mathrm {W} }[\rho ]={\frac {1}{8}}\int {\frac {\nabla \rho (\mathbf {r} )\cdot \nabla \rho (\mathbf {r} )}{\rho (\mathbf {r} )}}d\mathbf {r} =\int t_{\mathrm {W} }(\mathbf {r} )\ d\mathbf {r} \,,} where t W ≡ 1 8 ∇ ρ ⋅ ∇ ρ ρ and ρ = ρ ( r ) . 
{\displaystyle t_{\mathrm {W} }\equiv {\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho }}\qquad {\text{and}}\ \ \rho =\rho ({\boldsymbol {r}})\ .} Using a previously derived formula for the functional derivative, δ T W δ ρ = ∂ t W ∂ ρ − ∇ ⋅ ∂ t W ∂ ∇ ρ = − 1 8 ∇ ρ ⋅ ∇ ρ ρ 2 − ( 1 4 ∇ 2 ρ ρ − 1 4 ∇ ρ ⋅ ∇ ρ ρ 2 ) where ∇ 2 = ∇ ⋅ ∇ , {\displaystyle {\begin{aligned}{\frac {\delta T_{\mathrm {W} }}{\delta \rho }}&={\frac {\partial t_{\mathrm {W} }}{\partial \rho }}-\nabla \cdot {\frac {\partial t_{\mathrm {W} }}{\partial \nabla \rho }}\\&=-{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-\left({\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}-{\frac {1}{4}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}\right)\qquad {\text{where}}\ \ \nabla ^{2}=\nabla \cdot \nabla \ ,\end{aligned}}} and the result is, δ T W δ ρ = 1 8 ∇ ρ ⋅ ∇ ρ ρ 2 − 1 4 ∇ 2 ρ ρ . {\displaystyle {\frac {\delta T_{\mathrm {W} }}{\delta \rho }}=\ \ \,{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-{\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}\ .} ==== Entropy ==== The entropy of a discrete random variable is a functional of the probability mass function. H [ p ( x ) ] = − ∑ x p ( x ) log p ( x ) {\displaystyle H[p(x)]=-\sum _{x}p(x)\log p(x)} Thus, ∑ x δ H δ p ( x ) ϕ ( x ) = [ d d ε H [ p ( x ) + ε ϕ ( x ) ] ] ε = 0 = [ − d d ε ∑ x [ p ( x ) + ε ϕ ( x ) ] log [ p ( x ) + ε ϕ ( x ) ] ] ε = 0 = − ∑ x [ 1 + log p ( x ) ] ϕ ( x ) . {\displaystyle {\begin{aligned}\sum _{x}{\frac {\delta H}{\delta p(x)}}\,\phi (x)&{}=\left[{\frac {d}{d\varepsilon }}H[p(x)+\varepsilon \phi (x)]\right]_{\varepsilon =0}\\&{}=\left[-\,{\frac {d}{d\varepsilon }}\sum _{x}\,[p(x)+\varepsilon \phi (x)]\ \log[p(x)+\varepsilon \phi (x)]\right]_{\varepsilon =0}\\&{}=-\sum _{x}\,[1+\log p(x)]\ \phi (x)\,.\end{aligned}}} Thus, δ H δ p ( x ) = − 1 − log p ( x ) . {\displaystyle {\frac {\delta H}{\delta p(x)}}=-1-\log p(x).} ==== Exponential ==== Let F [ φ ( x ) ] = e ∫ φ ( x ) g ( x ) d x . 
{\displaystyle F[\varphi (x)]=e^{\int \varphi (x)g(x)dx}.} Using the delta function as a test function, δ F [ φ ( x ) ] δ φ ( y ) = lim ε → 0 F [ φ ( x ) + ε δ ( x − y ) ] − F [ φ ( x ) ] ε = lim ε → 0 e ∫ ( φ ( x ) + ε δ ( x − y ) ) g ( x ) d x − e ∫ φ ( x ) g ( x ) d x ε = e ∫ φ ( x ) g ( x ) d x lim ε → 0 e ε ∫ δ ( x − y ) g ( x ) d x − 1 ε = e ∫ φ ( x ) g ( x ) d x lim ε → 0 e ε g ( y ) − 1 ε = e ∫ φ ( x ) g ( x ) d x g ( y ) . {\displaystyle {\begin{aligned}{\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}&{}=\lim _{\varepsilon \to 0}{\frac {F[\varphi (x)+\varepsilon \delta (x-y)]-F[\varphi (x)]}{\varepsilon }}\\&{}=\lim _{\varepsilon \to 0}{\frac {e^{\int (\varphi (x)+\varepsilon \delta (x-y))g(x)dx}-e^{\int \varphi (x)g(x)dx}}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon \int \delta (x-y)g(x)dx}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon g(y)}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}g(y).\end{aligned}}} Thus, δ F [ φ ( x ) ] δ φ ( y ) = g ( y ) F [ φ ( x ) ] . {\displaystyle {\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}=g(y)F[\varphi (x)].} This is particularly useful in calculating the correlation functions from the partition function in quantum field theory. ==== Functional derivative of a function ==== A function can be written in the form of an integral like a functional. For example, ρ ( r ) = F [ ρ ] = ∫ ρ ( r ′ ) δ ( r − r ′ ) d r ′ . {\displaystyle \rho ({\boldsymbol {r}})=F[\rho ]=\int \rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')\,d{\boldsymbol {r}}'.} Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is, δ ρ ( r ) δ ρ ( r ′ ) ≡ δ F δ ρ ( r ′ ) = ∂ ∂ ρ ( r ′ ) [ ρ ( r ′ ) δ ( r − r ′ ) ] = δ ( r − r ′ ) . 
{\displaystyle {\frac {\delta \rho ({\boldsymbol {r}})}{\delta \rho ({\boldsymbol {r}}')}}\equiv {\frac {\delta F}{\delta \rho ({\boldsymbol {r}}')}}={\frac {\partial \ \ }{\partial \rho ({\boldsymbol {r}}')}}\,[\rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')]=\delta ({\boldsymbol {r}}-{\boldsymbol {r}}').} ==== Functional derivative of iterated function ==== The functional derivative of the iterated function f ( f ( x ) ) {\displaystyle f(f(x))} is given by: δ f ( f ( x ) ) δ f ( y ) = f ′ ( f ( x ) ) δ ( x − y ) + δ ( f ( x ) − y ) {\displaystyle {\frac {\delta f(f(x))}{\delta f(y)}}=f'(f(x))\delta (x-y)+\delta (f(x)-y)} and δ f ( f ( f ( x ) ) ) δ f ( y ) = f ′ ( f ( f ( x ) ) ) ( f ′ ( f ( x ) ) δ ( x − y ) + δ ( f ( x ) − y ) ) + δ ( f ( f ( x ) ) − y ) {\displaystyle {\frac {\delta f(f(f(x)))}{\delta f(y)}}=f'(f(f(x)))(f'(f(x))\delta (x-y)+\delta (f(x)-y))+\delta (f(f(x))-y)} In general: δ f N ( x ) δ f ( y ) = f ′ ( f N − 1 ( x ) ) δ f N − 1 ( x ) δ f ( y ) + δ ( f N − 1 ( x ) − y ) {\displaystyle {\frac {\delta f^{N}(x)}{\delta f(y)}}=f'(f^{N-1}(x)){\frac {\delta f^{N-1}(x)}{\delta f(y)}}+\delta (f^{N-1}(x)-y)} Putting in N = 0 gives: δ f − 1 ( x ) δ f ( y ) = − δ ( f − 1 ( x ) − y ) f ′ ( f − 1 ( x ) ) {\displaystyle {\frac {\delta f^{-1}(x)}{\delta f(y)}}=-{\frac {\delta (f^{-1}(x)-y)}{f'(f^{-1}(x))}}} == Using the delta function as a test function == In physics, it is common to use the Dirac delta function δ ( x − y ) {\displaystyle \delta (x-y)} in place of a generic test function ϕ ( x ) {\displaystyle \phi (x)} , to obtain the functional derivative at the point y {\displaystyle y} (just as a partial derivative is one component of the gradient, this yields the value of the functional derivative at the single point y {\displaystyle y} ): δ F [ ρ ( x ) ] δ ρ ( y ) = lim ε → 0 F [ ρ ( x ) + ε δ ( x − y ) ] − F [ ρ ( x ) ] ε .
{\displaystyle {\frac {\delta F[\rho (x)]}{\delta \rho (y)}}=\lim _{\varepsilon \to 0}{\frac {F[\rho (x)+\varepsilon \delta (x-y)]-F[\rho (x)]}{\varepsilon }}.} This works in cases when F [ ρ ( x ) + ε f ( x ) ] {\displaystyle F[\rho (x)+\varepsilon f(x)]} can formally be expanded as a series (or at least up to first order) in ε {\displaystyle \varepsilon } . The formula is, however, not mathematically rigorous, since F [ ρ ( x ) + ε δ ( x − y ) ] {\displaystyle F[\rho (x)+\varepsilon \delta (x-y)]} is usually not even defined. The definition given in a previous section is based on a relationship that holds for all test functions ϕ ( x ) {\displaystyle \phi (x)} , so one might think that it should also hold when ϕ ( x ) {\displaystyle \phi (x)} is chosen to be a specific function such as the delta function. However, the latter is not a valid test function (it is not even a proper function). In the definition, the functional derivative describes how the functional F [ ρ ( x ) ] {\displaystyle F[\rho (x)]} changes as a result of a small change in the entire function ρ ( x ) {\displaystyle \rho (x)} . The particular form of the change in ρ ( x ) {\displaystyle \rho (x)} is not specified, but it should stretch over the whole interval on which x {\displaystyle x} is defined. Employing the particular form of the perturbation given by the delta function means that ρ ( x ) {\displaystyle \rho (x)} is varied only at the point y {\displaystyle y} . Except for this point, there is no variation in ρ ( x ) {\displaystyle \rho (x)} . == Notes == == Footnotes == == References == Courant, Richard; Hilbert, David (1953). "Chapter IV. The Calculus of Variations". Methods of Mathematical Physics. Vol. I (First English ed.). New York, New York: Interscience Publishers, Inc. pp. 164–274. ISBN 978-0471504474. MR 0065391. Zbl 0001.00501. Frigyik, Béla A.; Srivastava, Santosh; Gupta, Maya R.
(January 2008), Introduction to Functional Derivatives (PDF), UWEE Tech Report, vol. UWEETR-2008-0001, Seattle, WA: Department of Electrical Engineering at the University of Washington, p. 7, archived from the original (PDF) on 2017-02-17, retrieved 2013-10-23. Gelfand, I. M.; Fomin, S. V. (2000) [1963], Calculus of variations, translated and edited by Richard A. Silverman (Revised English ed.), Mineola, N.Y.: Dover Publications, ISBN 978-0486414485, MR 0160139, Zbl 0127.05402. Giaquinta, Mariano; Hildebrandt, Stefan (1996), Calculus of Variations 1. The Lagrangian Formalism, Grundlehren der Mathematischen Wissenschaften, vol. 310 (1st ed.), Berlin: Springer-Verlag, ISBN 3-540-50625-X, MR 1368401, Zbl 0853.49001. Greiner, Walter; Reinhardt, Joachim (1996), "Section 2.3 – Functional derivatives", Field quantization, With a foreword by D. A. Bromley, Berlin–Heidelberg–New York: Springer-Verlag, pp. 36–38, ISBN 3-540-59179-6, MR 1383589, Zbl 0844.00006. Parr, R. G.; Yang, W. (1989). "Appendix A, Functionals". Density-Functional Theory of Atoms and Molecules. New York: Oxford University Press. pp. 246–254. ISBN 978-0195042795. == External links == "Functional derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
|
Wikipedia:Functional equation (L-function)#0
|
In mathematics, the L-functions of number theory are expected to have several characteristic properties, one of which is that they satisfy certain functional equations. There is an elaborate theory of what these equations should be, much of which is still conjectural. == Introduction == A prototypical example, the Riemann zeta function, has a functional equation relating its value at the complex number s with its value at 1 − s. In every case this relates to some value ζ(s) that is only defined by analytic continuation from the infinite series definition. That is, writing – as is conventional – σ for the real part of s, the functional equation relates the cases σ > 1 and σ < 0, and also changes a case with 0 < σ < 1 in the critical strip to another such case, reflected in the line σ = ½. The functional equation is therefore essential for studying the zeta-function in the whole complex plane. The functional equation in question for the Riemann zeta function takes the simple form Z ( s ) = Z ( 1 − s ) {\displaystyle Z(s)=Z(1-s)\,} where Z(s) is ζ(s) multiplied by a gamma-factor, involving the gamma function. This is now read as an 'extra' factor in the Euler product for the zeta-function, corresponding to the infinite prime. Just the same shape of functional equation holds for the Dedekind zeta function of a number field K, with an appropriate gamma-factor that depends only on the embeddings of K (in algebraic terms, on the tensor product of K with the real field). There is a similar equation for the Dirichlet L-functions, but this time relating them in pairs: Λ ( s , χ ) = ε Λ ( 1 − s , χ ∗ ) {\displaystyle \Lambda (s,\chi )=\varepsilon \Lambda (1-s,\chi ^{*})} with χ a primitive Dirichlet character, χ* its complex conjugate, Λ the L-function multiplied by a gamma-factor, and ε a complex number of absolute value 1, of shape G ( χ ) | G ( χ ) | {\displaystyle G(\chi ) \over {\left|G(\chi )\right\vert }} where G(χ) is a Gauss sum formed from χ.
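The Riemann case can be checked numerically. A sketch using the mpmath arbitrary-precision library (an assumed dependency; the sample point is arbitrary), with the completed zeta function Z(s) = π^(−s/2) Γ(s/2) ζ(s):

```python
from mpmath import mp, mpc, pi, gamma, zeta

# Numerical check of the functional equation Z(s) = Z(1 - s) for the
# completed Riemann zeta function Z(s) = pi^(-s/2) Gamma(s/2) zeta(s).
mp.dps = 30                       # work with 30 significant digits

def Z(s):
    return pi**(-s / 2) * gamma(s / 2) * zeta(s)

s = mpc('0.3', '14.2')            # an arbitrary sample point off the critical line
lhs, rhs = Z(s), Z(1 - s)
print(abs(lhs - rhs) / abs(lhs))  # relative difference is at roundoff level
```

The gamma-factor π^(−s/2) Γ(s/2) here is exactly the 'extra' Euler factor for the infinite prime mentioned above.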
This equation has the same function on both sides if and only if χ is a real character, taking values in {0,1,−1}. Then ε must be 1 or −1, and the case of the value −1 would imply a zero of Λ(s) at s = ½. According to the theory (of Gauss, in effect) of Gauss sums, the value is always 1, so no such simple zero can exist (the function is even about the point). == Theory of functional equations == A unified theory of such functional equations was given by Erich Hecke, and the theory was taken up again in Tate's thesis by John Tate. Hecke found generalised characters of number fields, now called Hecke characters, for which his proof (based on theta functions) also worked. These characters and their associated L-functions are now understood to be strictly related to complex multiplication, as the Dirichlet characters are to cyclotomic fields. There are also functional equations for the local zeta-functions, arising at a fundamental level for the (analogue of) Poincaré duality in étale cohomology. The Euler products of the Hasse–Weil zeta-function for an algebraic variety V over a number field K, formed by reducing modulo prime ideals to get local zeta-functions, are conjectured to have a global functional equation; but this is currently considered out of reach except in special cases. The definition can be read directly out of étale cohomology theory, again; but in general some assumption coming from automorphic representation theory seems required to get the functional equation. The Taniyama–Shimura conjecture was a particular case of this as general theory. By relating the gamma-factor aspect to Hodge theory, and detailed studies of the expected ε factor, the theory as empirical has been brought to quite a refined state, even if proofs are missing. == See also == Explicit formula (L-function) Riemann–Siegel formula (particular approximate functional equation) == References == == External links == Weisstein, Eric W. "Functional Equation". MathWorld.
|
Wikipedia:Functional renormalization group#0
|
In theoretical physics, the functional renormalization group (FRG) is an implementation of the renormalization group (RG) concept which is used in quantum and statistical field theory, especially when dealing with strongly interacting systems. The method combines functional methods of quantum field theory with the intuitive renormalization group idea of Kenneth G. Wilson. This technique makes it possible to interpolate smoothly between the known microscopic laws and the complicated macroscopic phenomena in physical systems. In this sense, it bridges the transition from the simplicity of microphysics to the complexity of macrophysics. Figuratively speaking, FRG acts as a microscope with a variable resolution. One starts with a high-resolution picture of the known microphysical laws and subsequently decreases the resolution to obtain a coarse-grained picture of macroscopic collective phenomena. The method is nonperturbative, meaning that it does not rely on an expansion in a small coupling constant. Mathematically, FRG is based on an exact functional differential equation for a scale-dependent effective action. == The flow equation for the effective action == In quantum field theory, the effective action Γ {\displaystyle \Gamma } is an analogue of the classical action functional S {\displaystyle S} and depends on the fields of a given theory. It includes all quantum and thermal fluctuations. Variation of Γ {\displaystyle \Gamma } yields exact quantum field equations, for example for cosmology or the electrodynamics of superconductors. Mathematically, Γ {\displaystyle \Gamma } is the generating functional of the one-particle irreducible Feynman diagrams. Interesting physics, such as propagators and effective couplings for interactions, can be straightforwardly extracted from it. In a generic interacting field theory the effective action Γ {\displaystyle \Gamma } , however, is difficult to obtain.
FRG provides a practical tool to calculate Γ {\displaystyle \Gamma } employing the renormalization group concept. The central object in FRG is a scale-dependent effective action functional Γ k {\displaystyle \Gamma _{k}} , often called the average action or flowing action. The dependence on the RG sliding scale k {\displaystyle k} is introduced by adding a regulator (infrared cutoff) R k {\displaystyle R_{k}} to the full inverse propagator Γ k ( 2 ) {\displaystyle \Gamma _{k}^{(2)}} . Roughly speaking, the regulator R k {\displaystyle R_{k}} decouples slow modes with momenta q ≲ k {\displaystyle q\lesssim k} by giving them a large mass, while high momentum modes are not affected. Thus, Γ k {\displaystyle \Gamma _{k}} includes all quantum and statistical fluctuations with momenta q ≳ k {\displaystyle q\gtrsim k} . The flowing action Γ k {\displaystyle \Gamma _{k}} obeys the exact functional flow equation k ∂ k Γ k = 1 2 STr k ∂ k R k ( Γ k ( 1 , 1 ) + R k ) − 1 , {\displaystyle k\,\partial _{k}\Gamma _{k}={\frac {1}{2}}{\text{STr}}\,k\,\partial _{k}R_{k}\,(\Gamma _{k}^{(1,1)}+R_{k})^{-1},} derived by Christof Wetterich and Tim R. Morris in 1993. Here ∂ k {\displaystyle \partial _{k}} denotes a derivative with respect to the RG scale k {\displaystyle k} at fixed values of the fields. Furthermore, Γ k ( 1 , 1 ) {\displaystyle \Gamma _{k}^{(1,1)}} denotes the functional derivative of Γ k {\displaystyle \Gamma _{k}} taken once from the left and once from the right, as required by the tensor structure of the equation; in the literature this is often written in simplified form as the second functional derivative of the effective action. The functional differential equation for Γ k {\displaystyle \Gamma _{k}} must be supplemented with the initial condition Γ k → Λ = S {\displaystyle \Gamma _{k\to \Lambda }=S} , where the "classical action" S {\displaystyle S} describes the physics at the microscopic ultraviolet scale k = Λ {\displaystyle k=\Lambda } .
Importantly, in the infrared limit k → 0 {\displaystyle k\to 0} the full effective action Γ = Γ k → 0 {\displaystyle \Gamma =\Gamma _{k\to 0}} is obtained. In the Wetterich equation STr {\displaystyle {\text{STr}}} denotes a supertrace which sums over momenta, frequencies, internal indices, and fields (taking bosons with a plus and fermions with a minus sign). The exact flow equation for Γ k {\displaystyle \Gamma _{k}} has a one-loop structure. This is an important simplification compared to perturbation theory, where multi-loop diagrams must be included. The second functional derivative Γ k ( 2 ) = Γ k ( 1 , 1 ) {\displaystyle \Gamma _{k}^{(2)}=\Gamma _{k}^{(1,1)}} is the full inverse field propagator modified by the presence of the regulator R k {\displaystyle R_{k}} . The renormalization group evolution of Γ k {\displaystyle \Gamma _{k}} can be illustrated in the theory space, which is a multi-dimensional space of all possible running couplings { c n } {\displaystyle \{c_{n}\}} allowed by the symmetries of the problem. As schematically shown in the figure, at the microscopic ultraviolet scale k = Λ {\displaystyle k=\Lambda } one starts with the initial condition Γ k = Λ = S {\displaystyle \Gamma _{k=\Lambda }=S} . As the sliding scale k {\displaystyle k} is lowered, the flowing action Γ k {\displaystyle \Gamma _{k}} evolves in the theory space according to the functional flow equation. The choice of the regulator R k {\displaystyle R_{k}} is not unique, which introduces some scheme dependence into the renormalization group flow. For this reason, different choices of the regulator R k {\displaystyle R_{k}} correspond to the different paths in the figure. At the infrared scale k = 0 {\displaystyle k=0} , however, the full effective action Γ k = 0 = Γ {\displaystyle \Gamma _{k=0}=\Gamma } is recovered for every choice of the cut-off R k {\displaystyle R_{k}} , and all trajectories meet at the same point in the theory space. 
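In zero dimensions, where the "field theory" is an ordinary integral, the flow equation is exact and can be integrated numerically on a grid of field values. The following sketch (all parameter values, the regulator choice R_k = k², and the quartic microscopic action are illustrative assumptions) flows Γ_k from the ultraviolet down to k ≈ 0 and compares Γ″(0) with the exact inverse of ⟨φ²⟩ from direct integration:

```python
import numpy as np

# Zero-dimensional toy model: S(phi) = m^2 phi^2/2 + lam phi^4/24.
# Here the Wetterich equation reduces to a PDE in (k, phi):
#   k dGamma_k/dk = (1/2) k dR_k/dk / (Gamma_k''(phi) + R_k),  with R_k = k^2,
# integrated from k = Lambda down to k ~ 0 with explicit Euler steps
# in t = ln k.  All parameter values are illustrative.
m2, lam = 1.0, 1.0
phi = np.linspace(-6.0, 6.0, 241)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam * phi**4 / 24.0

Gamma = S.copy()                  # initial condition Gamma_Lambda = S
k, k_min, dt = 50.0, 1e-2, 2e-3
while k > k_min:
    Gpp = np.gradient(np.gradient(Gamma, dphi), dphi)   # Gamma_k''(phi)
    Gamma = Gamma - dt * k**2 / (Gpp + k**2)            # step toward the infrared
    k *= np.exp(-dt)

# Exact inverse propagator 1/<phi^2> from the ordinary integral.
w = np.exp(-S)
exact = np.sum(w) / np.sum(phi**2 * w)

frg = np.gradient(np.gradient(Gamma, dphi), dphi)[len(phi) // 2]
print(frg, exact)   # the two should agree to within a few percent
```

The residual mismatch comes from the finite grid, the Euler step, and the finite ultraviolet scale Λ, not from any truncation of the flow equation itself.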
In most cases of interest the Wetterich equation can only be solved approximately. Usually some type of expansion of Γ k {\displaystyle \Gamma _{k}} is performed, which is then truncated at finite order, leading to a finite system of ordinary differential equations. Different systematic expansion schemes (such as the derivative expansion, vertex expansion, etc.) have been developed. The choice of a suitable scheme should be physically motivated and depends on the given problem. The expansions do not necessarily involve a small parameter (like an interaction coupling constant) and thus they are, in general, of a nonperturbative nature. Note, however, that due to multiple choices regarding (prefactor-)conventions and the concrete definition of the effective action, one can find other (equivalent) versions of the Wetterich equation in the literature. == Aspects of functional renormalization == The Wetterich flow equation is an exact equation. However, in practice, the functional differential equation must be truncated, i.e. it must be projected to functions of a few variables or even onto some finite-dimensional sub-theory space. As in every nonperturbative method, the question of error estimation is nontrivial in functional renormalization. One way to estimate the error in FRG is to improve the truncation in successive steps, i.e. to enlarge the sub-theory space by including more and more running couplings. The difference in the flows for different truncations gives a good estimate of the error. Alternatively, one can use different regulator functions R k {\displaystyle R_{k}} in a given (fixed) truncation and determine the difference of the RG flows in the infrared for the respective regulator choices. If bosonization is used, one can check the insensitivity of final results with respect to different bosonization procedures. In FRG, as in all RG methods, a lot of insight about a physical system can be gained from the topology of RG flows. 
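As a schematic illustration of such a truncation (a toy computation, not taken from the cited literature), restricting Γ_k to a scale-dependent potential U_k(ρ) in a local potential approximation reduces the flow to a single partial differential equation, which can be integrated on a grid; the regulator-dependent prefactor of the flow term is set to 1 here purely for illustration.

```python
import numpy as np

# Schematic local-potential-approximation (LPA) truncation of the Wetterich
# equation for one real scalar field in d = 3:
#     k dU_k(rho)/dk = c * k^(d+2) / (k^2 + U' + 2*rho*U''),
# with rho the field invariant; the regulator-dependent constant c is set
# to 1 purely for illustration.
d = 3
rho = np.linspace(0.0, 2.0, 101)
drho = rho[1] - rho[0]
U = -0.1 * rho + 0.5 * rho**2          # "classical" potential at the UV scale

k, dk = 1.0, -1e-4                      # integrate from the UV towards the infrared
for _ in range(5000):
    Up = np.gradient(U, drho)           # U'(rho)
    Upp = np.gradient(Up, drho)         # U''(rho)
    U += (dk / k) * k**(d + 2) / (k**2 + Up + 2.0 * rho * Upp)
    k += dk

# The truncated flow stays regular down to k = 0.5 (in units of Lambda).
assert np.all(np.isfinite(U))
```

Enlarging the grid or keeping more terms of the derivative expansion corresponds to enlarging the sub-theory space, which is how the truncation error is assessed in practice.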
Specifically, identification of fixed points of the renormalization group evolution is of great importance. Near fixed points the flow of running couplings effectively stops and RG β {\displaystyle \beta } -functions approach zero. The presence of (partially) stable infrared fixed points is closely connected to the concept of universality. Universality manifests itself in the observation that some very distinct physical systems have the same critical behavior. For instance, to good accuracy, critical exponents of the liquid–gas phase transition in water and the ferromagnetic phase transition in magnets are the same. In the renormalization group language, different systems from the same universality class flow to the same (partially) stable infrared fixed point. In this way macrophysics becomes independent of the microscopic details of the particular physical model. Compared to perturbation theory, functional renormalization does not make a strict distinction between renormalizable and nonrenormalizable couplings. All running couplings that are allowed by the symmetries of the problem are generated during the FRG flow. However, the nonrenormalizable couplings approach partial fixed points very quickly during the evolution towards the infrared, and thus the flow effectively collapses onto a hypersurface of the dimension given by the number of renormalizable couplings. Taking the nonrenormalizable couplings into account makes it possible to study nonuniversal features that are sensitive to the concrete choice of the microscopic action S {\displaystyle S} and the finite ultraviolet cutoff Λ {\displaystyle \Lambda } . The Wetterich equation can be obtained from the Legendre transformation of the Polchinski functional equation, derived by Joseph Polchinski in 1984. The concept of the effective average action, used in FRG, is, however, more intuitive than the flowing bare action in the Polchinski equation. 
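The statement that β-functions vanish at fixed points can be made concrete with a one-coupling toy flow (illustrative only, not a truncation of any specific theory): dg/dt = −εg + g² with t = ln k has a Gaussian fixed point g* = 0 and an infrared-attractive, Wilson–Fisher-like fixed point g* = ε.

```python
# Toy one-coupling RG flow dg/dt = beta(g), with t = ln k.  The beta
# function beta(g) = -eps*g + g**2 is purely illustrative; its fixed
# points are g = 0 (Gaussian) and g = eps (IR-attractive).
eps = 0.5
beta = lambda g: -eps * g + g**2

g, dt = 0.01, -0.01          # start near the Gaussian fixed point, flow to the IR
for _ in range(10_000):      # t runs from 0 down to -100
    g += beta(g) * dt

# Near the infrared fixed point the flow effectively stops:
assert abs(g - eps) < 1e-9 and abs(beta(g)) < 1e-9
```

Every initial coupling g > 0 in this toy model is driven to the same fixed point in the infrared, mimicking how different microscopic systems in one universality class share their macroscopic behavior.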
In addition, the FRG method has proved to be more suitable for practical calculations. Typically, the low-energy physics of strongly interacting systems is described by macroscopic degrees of freedom (i.e. particle excitations) which are very different from the microscopic high-energy degrees of freedom. For instance, quantum chromodynamics is a field theory of interacting quarks and gluons. At low energies, however, the proper degrees of freedom are baryons and mesons. Another example is the BEC/BCS crossover problem in condensed matter physics. While the microscopic theory is defined in terms of two-component nonrelativistic fermions, at low energies a composite (particle-particle) dimer becomes an additional degree of freedom, and it is advisable to include it explicitly in the model. The low-energy composite degrees of freedom can be introduced in the description by the method of partial bosonization (Hubbard–Stratonovich transformation). This transformation, however, is done once and for all at the UV scale Λ {\displaystyle \Lambda } . In FRG a more efficient way to incorporate macroscopic degrees of freedom was introduced, which is known as flowing bosonization or rebosonization. With the help of a scale-dependent field transformation, this allows the Hubbard–Stratonovich transformation to be performed continuously at all RG scales k {\displaystyle k} . == Functional renormalization-group for Wick-ordered effective interaction == Contrary to the flow equation for the effective action, this scheme is formulated for the effective interaction V [ η , η + ] = − ln Z [ G 0 − 1 η , G 0 − 1 η + ] − η G 0 − 1 η + {\displaystyle {\mathcal {V}}[\eta ,\eta ^{+}]=-\ln Z[G_{0}^{-1}\eta ,G_{0}^{-1}\eta ^{+}]-\eta G_{0}^{-1}\eta ^{+}} which generates n-particle interaction vertices, amputated by the bare propagators G 0 {\displaystyle G_{0}} ; Z [ η , η + ] {\displaystyle Z[\eta ,\eta ^{+}]} is the "standard" generating functional for the n-particle Green functions. 
The Wick ordering of the effective interaction with respect to a Green function D {\displaystyle D} can be defined by W [ η , η + ] = exp ( − Δ D ) V [ η , η + ] {\displaystyle {\mathcal {W}}[\eta ,\eta ^{+}]=\exp(-\Delta _{D}){\mathcal {V}}[\eta ,\eta ^{+}]} , where Δ = D δ 2 / ( δ η δ η + ) {\displaystyle \Delta =D\delta ^{2}/(\delta \eta \delta \eta ^{+})} is the Laplacian in the field space. This operation is similar to normal ordering and excludes from the interaction all terms formed by a convolution of source fields with the respective Green function D. Introducing some cutoff Λ {\displaystyle \Lambda } , the Polchinski equation ∂ ∂ Λ V Λ ( ψ ) = − Δ ˙ G 0 , Λ V Λ ( ψ ) + Δ G ˙ 0 , Λ 12 V Λ ( 1 ) V Λ ( 2 ) {\displaystyle {\frac {\partial }{\partial \Lambda }}{{V}_{\Lambda }}(\psi )=-{{\dot {\Delta }}_{G_{0,\Lambda }}}{{V}_{\Lambda }}(\psi )+\Delta _{{\dot {G}}_{0,\Lambda }}^{12}{\mathcal {V}}_{\Lambda }^{(1)}{\mathcal {V}}_{\Lambda }^{(2)}} takes the form of the Wick-ordered equation ∂ Λ W Λ = − Δ D ˙ Λ + G ˙ 0 , Λ W Λ + e − Δ D Λ 12 Δ G ˙ 0 , Λ 12 W Λ ( 1 ) W Λ ( 2 ) {\displaystyle {\partial _{\Lambda }}{{\mathcal {W}}_{\Lambda }}=-{\Delta _{{{\dot {D}}_{\Lambda }}+{{\dot {G}}_{0,\Lambda }}}}{{\mathcal {W}}_{\Lambda }}+{e^{-\Delta _{D_{\Lambda }}^{12}}}\Delta _{{\dot {G}}_{0,\Lambda }}^{12}{\mathcal {W}}_{\Lambda }^{(1)}{\mathcal {W}}_{\Lambda }^{(2)}} where Δ G ˙ 0 , Λ 12 V Λ ( 1 ) V Λ ( 2 ) = 1 2 ( δ V Λ ( ψ ) δ ψ , G ˙ 0 , Λ δ V Λ ( ψ ) δ ψ ) {\displaystyle \Delta _{{\dot {G}}_{0,\Lambda }}^{12}{\mathcal {V}}_{\Lambda }^{(1)}{\mathcal {V}}_{\Lambda }^{(2)}={\frac {1}{2}}\left({{\frac {\delta {{V}_{\Lambda }}(\psi )}{\delta \psi }},{{\dot {G}}_{0,\Lambda }}{\frac {\delta {{V}_{\Lambda }}(\psi )}{\delta \psi }}}\right)} == Applications == The method has been applied to numerous problems in physics, e.g.: In statistical field theory, FRG provided a unified picture of phase transitions in classical linear O ( N ) {\displaystyle O(N)} -symmetric scalar theories in 
different dimensions d {\displaystyle d} , including critical exponents for d = 3 {\displaystyle d=3} and the Berezinskii–Kosterlitz–Thouless phase transition for d = 2 {\displaystyle d=2} , N = 2 {\displaystyle N=2} . In gauge quantum field theory, FRG was used, for instance, to investigate the chiral phase transition and infrared properties of QCD and its large-flavor extensions. In condensed matter physics, the method has been successfully applied to lattice models (e.g. the Hubbard model or frustrated magnetic systems), the repulsive Bose gas, the BEC/BCS crossover for two-component Fermi gases, the Kondo effect, disordered systems, and nonequilibrium phenomena. Application of FRG to gravity has provided arguments in favor of nonperturbative renormalizability of quantum gravity in four spacetime dimensions, known as the asymptotic safety scenario. In mathematical physics, FRG was used to prove renormalizability of different field theories. == See also == Renormalization group Renormalization Critical phenomena Scale invariance Asymptotic safety in quantum gravity == References == === Papers === Wetterich, C. (1993), "Exact evolution equation for the effective potential", Phys. Lett. B, 301 (1): 90, arXiv:1710.05815, Bibcode:1993PhLB..301...90W, doi:10.1016/0370-2693(93)90726-X, S2CID 119536989 Morris, T. R. (1994), "The Exact renormalization group and approximate solutions", Int. J. Mod. Phys. A, 9 (14): 2411–2449, arXiv:hep-ph/9308265, Bibcode:1994IJMPA...9.2411M, doi:10.1142/S0217751X94000972, S2CID 15749927 Polchinski, J. (1984), "Renormalization and Effective Lagrangians", Nucl. Phys. B, 231 (2): 269, Bibcode:1984NuPhB.231..269P, doi:10.1016/0550-3213(84)90287-6 Reuter, M. (1998), "Nonperturbative evolution equation for quantum gravity", Phys. Rev. D, 57 (2): 971–985, arXiv:hep-th/9605030, Bibcode:1998PhRvD..57..971R, CiteSeerX 10.1.1.263.3439, doi:10.1103/PhysRevD.57.971, S2CID 119454616 === Pedagogic reviews === J. Berges; N. Tetradis; C. 
Wetterich (2002), "Non-perturbative renormalization flow in quantum field theory and statistical mechanics", Phys. Rep., 363 (4–6): 223–386, arXiv:hep-ph/0005122, Bibcode:2002PhR...363..223B, doi:10.1016/S0370-1573(01)00098-9, S2CID 119033356 Polonyi, Janos (2003), "Lectures on the functional renormalization group method", Cent. Eur. J. Phys., 1 (1): 1–71, arXiv:hep-th/0110026, Bibcode:2003CEJPh...1....1P, doi:10.2478/BF02475552, S2CID 53407529 Gies, H. (2006). "Introduction to the functional RG and applications to gauge theories". Renormalization Group and Effective Field Theory Approaches to Many-Body Systems. Lecture Notes in Physics. Vol. 852. pp. 287–348. arXiv:hep-ph/0611146. doi:10.1007/978-3-642-27320-9_6. ISBN 978-3-642-27319-3. S2CID 15127186. Delamotte, B. (2007). "An introduction to the nonperturbative renormalization group". Renormalization Group and Effective Field Theory Approaches to Many-Body Systems. Lecture Notes in Physics. Vol. 852. pp. 49–132. arXiv:cond-mat/0702365. doi:10.1007/978-3-642-27320-9_2. ISBN 978-3-642-27319-3. S2CID 34308305. Salmhofer, Manfred; Honerkamp, Carsten (2001), "Fermionic renormalization group flows: Technique and theory", Prog. Theor. Phys., 105 (1): 1, Bibcode:2001PThPh.105....1S, doi:10.1143/PTP.105.1 Reuter, Martin; Saueressig, Frank (2007). "Functional Renormalization Group Equations, Asymptotic Safety, and Quantum Einstein Gravity". arXiv:0708.1317 [hep-th].
|
Wikipedia:Functional square root#0
|
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. == Notation == Notations expressing that f is a functional square root of g are f = g[1/2] and f = g1/2 (see iterated function), although the latter leaves the usual ambiguity of taking the function to that power in the multiplicative sense, just as f ² = f ∘ f can be misinterpreted as x ↦ f(x)². == History == The functional square root of the exponential function (now known as a half-exponential function) was studied by Hellmuth Kneser in 1950, later providing the basis for extending tetration to non-integer heights in 2017. The solutions of f(f(x)) = x over R {\displaystyle \mathbb {R} } (the involutions of the real numbers) were first studied by Charles Babbage in 1815, and this equation is called Babbage's functional equation. A particular solution is f(x) = (b − x)/(1 + cx) for bc ≠ −1. Babbage noted that for any given solution f, its functional conjugate Ψ−1∘ f ∘ Ψ by an arbitrary invertible function Ψ is also a solution. In other words, the group of all invertible functions on the real line acts on the subset consisting of solutions to Babbage's functional equation by conjugation. == Solutions == A systematic procedure to produce arbitrary functional n-roots (including arbitrary real, negative, and infinitesimal n) of functions g : C → C {\displaystyle g:\mathbb {C} \rightarrow \mathbb {C} } relies on the solutions of Schröder's equation. Infinitely many trivial solutions exist when the domain of a root function f is allowed to be sufficiently larger than that of g. == Examples == f(x) = 2x2 is a functional square root of g(x) = 8x4. 
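The example above, and Babbage's involution from the History section, can be checked numerically (a quick sketch; the parameters b = 3, c = 1/2 are arbitrary choices with bc ≠ −1, and the sample points avoid the pole at x = −1/c):

```python
import numpy as np

xs = np.linspace(-1.5, 1.5, 7)

# f(x) = 2x^2 composed with itself gives g(x) = 8x^4:
f = lambda x: 2 * x**2
assert np.allclose(f(f(xs)), 8 * xs**4)

# Babbage's solutions f(x) = (b - x)/(1 + c x), bc != -1, are involutions,
# i.e. functional square roots of the identity:
b, c = 3.0, 0.5
fb = lambda x: (b - x) / (1 + c * x)
assert np.allclose(fb(fb(xs)), xs)
```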
A functional square root of the nth Chebyshev polynomial, g ( x ) = T n ( x ) {\displaystyle g(x)=T_{n}(x)} , is f ( x ) = cos ( n arccos ( x ) ) {\displaystyle f(x)=\cos {({\sqrt {n}}\arccos(x))}} , which in general is not a polynomial. f ( x ) = x / ( 2 + x ( 1 − 2 ) ) {\displaystyle f(x)=x/({\sqrt {2}}+x(1-{\sqrt {2}}))} is a functional square root of g ( x ) = x / ( 2 − x ) {\displaystyle g(x)=x/(2-x)} . Iterates of the sine function (figure caption, writing rin = sin[1/2] and qin = sin[1/4]): sin[2](x) = sin(sin(x)) [red curve]; sin[1](x) = sin(x) = rin(rin(x)) [blue curve]; sin[1/2](x) = rin(x) = qin(qin(x)) [orange curve], although this solution is not unique, since −rin is also a solution of sin = rin ∘ rin; sin[1/4](x) = qin(x) [black curve above the orange curve]; sin[–1](x) = arcsin(x) [dashed curve]. Using this extension, sin[1/2](1) can be shown to be approximately equal to 0.90871. == See also == == References ==
|
Wikipedia:Fundamental theorem of algebraic K-theory#0
|
In algebra, the fundamental theorem of algebraic K-theory describes the effect on the K-groups of passing from a ring R to the polynomial ring R [ t ] {\displaystyle R[t]} or the Laurent polynomial ring R [ t , t − 1 ] {\displaystyle R[t,t^{-1}]} . The theorem was first proved by Hyman Bass for K 0 , K 1 {\displaystyle K_{0},K_{1}} and was later extended to higher K-groups by Daniel Quillen. == Description == Let G i ( R ) {\displaystyle G_{i}(R)} be the algebraic K-theory of the category of finitely generated modules over a noetherian ring R; explicitly, we can take G i ( R ) = π i ( B + f-gen-Mod R ) {\displaystyle G_{i}(R)=\pi _{i}(B^{+}{\text{f-gen-Mod}}_{R})} , where B + = Ω B Q {\displaystyle B^{+}=\Omega BQ} is given by Quillen's Q-construction. If R is a regular ring (i.e., has finite global dimension), then G i ( R ) = K i ( R ) , {\displaystyle G_{i}(R)=K_{i}(R),} the i-th K-group of R. This is an immediate consequence of the resolution theorem, which compares the K-theories of two different categories (related by inclusion). For a noetherian ring R, the fundamental theorem states: (i) G i ( R [ t ] ) = G i ( R ) , i ≥ 0 {\displaystyle G_{i}(R[t])=G_{i}(R),\,i\geq 0} . (ii) G i ( R [ t , t − 1 ] ) = G i ( R ) ⊕ G i − 1 ( R ) , i ≥ 0 , G − 1 ( R ) = 0 {\displaystyle G_{i}(R[t,t^{-1}])=G_{i}(R)\oplus G_{i-1}(R),\,i\geq 0,\,G_{-1}(R)=0} . The proof of the theorem uses the Q-construction. There is also a version of the theorem for the singular case (for K i {\displaystyle K_{i}} ); this is the version proved in Grayson's paper. == See also == Basic theorems in algebraic K-theory == Notes == == References == Daniel Grayson, Higher algebraic K-theory II [after Daniel Quillen], 1976 Srinivas, V. (2008), Algebraic K-theory, Modern Birkhäuser Classics (Paperback reprint of the 1996 2nd ed.), Boston, MA: Birkhäuser, ISBN 978-0-8176-4736-0, Zbl 1125.19300 Weibel, Charles (2013). "The K-book: An introduction to algebraic K-theory". Graduate Studies in Mathematics. Vol. 145. 
doi:10.1090/gsm/145. ISBN 978-0-8218-9132-2.
|
Wikipedia:Fundamental theorem of finitely generated abelian groups#0
|
In abstract algebra, an abelian group ( G , + ) {\displaystyle (G,+)} is called finitely generated if there exist finitely many elements x 1 , … , x s {\displaystyle x_{1},\dots ,x_{s}} in G {\displaystyle G} such that every x {\displaystyle x} in G {\displaystyle G} can be written in the form x = n 1 x 1 + n 2 x 2 + ⋯ + n s x s {\displaystyle x=n_{1}x_{1}+n_{2}x_{2}+\cdots +n_{s}x_{s}} for some integers n 1 , … , n s {\displaystyle n_{1},\dots ,n_{s}} . In this case, we say that the set { x 1 , … , x s } {\displaystyle \{x_{1},\dots ,x_{s}\}} is a generating set of G {\displaystyle G} or that x 1 , … , x s {\displaystyle x_{1},\dots ,x_{s}} generate G {\displaystyle G} . So, finitely generated abelian groups can be thought of as a generalization of cyclic groups. Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified. == Examples == The integers, ( Z , + ) {\displaystyle \left(\mathbb {Z} ,+\right)} , are a finitely generated abelian group. The integers modulo n {\displaystyle n} , ( Z / n Z , + ) {\displaystyle \left(\mathbb {Z} /n\mathbb {Z} ,+\right)} , are a finite (hence finitely generated) abelian group. Any direct sum of finitely many finitely generated abelian groups is again a finitely generated abelian group. Every lattice forms a finitely generated free abelian group. There are no other examples (up to isomorphism). In particular, the group ( Q , + ) {\displaystyle \left(\mathbb {Q} ,+\right)} of rational numbers is not finitely generated: if x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are rational numbers, pick a natural number k {\displaystyle k} coprime to all the denominators; then 1 / k {\displaystyle 1/k} cannot be generated by x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} . The group ( Q ∗ , ⋅ ) {\displaystyle \left(\mathbb {Q} ^{*},\cdot \right)} of non-zero rational numbers is also not finitely generated. 
The groups of real numbers under addition ( R , + ) {\displaystyle \left(\mathbb {R} ,+\right)} and non-zero real numbers under multiplication ( R ∗ , ⋅ ) {\displaystyle \left(\mathbb {R} ^{*},\cdot \right)} are also not finitely generated. == Classification == The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations. === Primary decomposition === The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form Z n ⊕ Z / q 1 Z ⊕ ⋯ ⊕ Z / q t Z , {\displaystyle \mathbb {Z} ^{n}\oplus \mathbb {Z} /q_{1}\mathbb {Z} \oplus \cdots \oplus \mathbb {Z} /q_{t}\mathbb {Z} ,} where n ≥ 0 is the rank, and the numbers q1, ..., qt are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q1, ..., qt are (up to rearranging the indices) uniquely determined by G, that is, there is one and only one way to represent G as such a decomposition. The proof of this statement uses the basis theorem for finite abelian groups: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then, G/tG is a torsion-free abelian group and thus it is free abelian. tG is a direct summand of G, which means there exists a subgroup F of G such that G = t G ⊕ F {\displaystyle G=tG\oplus F} , where F ≅ G / t G {\displaystyle F\cong G/tG} . Then, F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. 
By the basis theorem for finite abelian groups, tG can be written as a direct sum of primary cyclic groups. === Invariant factor decomposition === We can also write any finitely generated abelian group G as a direct sum of the form Z n ⊕ Z / k 1 Z ⊕ ⋯ ⊕ Z / k u Z , {\displaystyle \mathbb {Z} ^{n}\oplus \mathbb {Z} /{k_{1}}\mathbb {Z} \oplus \cdots \oplus \mathbb {Z} /{k_{u}}\mathbb {Z} ,} where k1 divides k2, which divides k3 and so on up to ku. Again, the rank n and the invariant factors k1, ..., ku are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism. === Equivalence === These statements are equivalent as a result of the Chinese remainder theorem, which implies that Z j k ≅ Z j ⊕ Z k {\displaystyle \mathbb {Z} _{jk}\cong \mathbb {Z} _{j}\oplus \mathbb {Z} _{k}} if and only if j and k are coprime. === History === The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. The finitely presented case is solved by Smith normal form, and hence frequently credited to (Smith 1861), though the finitely generated case is sometimes instead credited to Poincaré in 1900; details follow. Group theorist László Fuchs states: As far as the fundamental theorem on finite abelian groups is concerned, it is not clear how far back in time one needs to go to trace its origin. ... it took a long time to formulate and prove the fundamental theorem in its present form ... 
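Passing between the two decompositions is a finite computation. The sketch below (an illustrative helper with non-standard function names) assembles invariant factors from prime-power orders, merging coprime factors with the Chinese remainder theorem equivalence stated in the Equivalence section:

```python
from collections import defaultdict

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def primary_to_invariant(prime_powers):
    """Turn primary cyclic orders (prime powers) into invariant factors
    k_1 | k_2 | ... | k_u, merging coprime factors via the Chinese
    remainder theorem (Z/jkZ ~ Z/jZ + Z/kZ for coprime j, k)."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        by_prime[smallest_prime_factor(q)].append(q)
    u = max(len(qs) for qs in by_prime.values())
    factors = [1] * u
    for qs in by_prime.values():
        padded = [1] * (u - len(qs)) + sorted(qs)  # align largest with largest
        for i, q in enumerate(padded):
            factors[i] *= q
    return factors

# Z/4 + Z/2 + Z/9 + Z/3 has invariant factor decomposition Z/6 + Z/36:
assert primary_to_invariant([4, 2, 9, 3]) == [6, 36]
```

Pairing the prime powers from largest to largest guarantees the divisibility chain k1 | k2 | ... | ku.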
The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in (Stillwell 2012), 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882. The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in (Smith 1861), as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups. The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of a dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part. Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926. == Corollaries == Stated differently the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of G. The rank of G is defined as the rank of the torsion-free part of G; this is just the number n in the above formulas. 
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: Q {\displaystyle \mathbb {Q} } is torsion-free but not free abelian. Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups. == Non-finitely generated abelian groups == Note that not every abelian group of finite rank is finitely generated; the rank 1 group Q {\displaystyle \mathbb {Q} } is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of Z 2 {\displaystyle \mathbb {Z} _{2}} is another one. == See also == The composition series in the Jordan–Hölder theorem is a non-abelian generalization. == Notes == == References ==
|
Wikipedia:Fundamental theorem of linear programming#0
|
In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them. == Statement == Consider the optimization problem min c T x subject to x ∈ P {\displaystyle \min c^{T}x{\text{ subject to }}x\in P} where P = { x ∈ R n : A x ≤ b } {\displaystyle P=\{x\in \mathbb {R} ^{n}:Ax\leq b\}} . If P {\displaystyle P} is a bounded polyhedron (and thus a polytope) and x ∗ {\displaystyle x^{\ast }} is an optimal solution to the problem, then x ∗ {\displaystyle x^{\ast }} is either an extreme point (vertex) of P {\displaystyle P} , or lies on a face F ⊂ P {\displaystyle F\subset P} of optimal solutions. == Proof == Suppose, for the sake of contradiction, that x ∗ ∈ i n t ( P ) {\displaystyle x^{\ast }\in \mathrm {int} (P)} . Then there exists some ϵ > 0 {\displaystyle \epsilon >0} such that the ball of radius ϵ {\displaystyle \epsilon } centered at x ∗ {\displaystyle x^{\ast }} is contained in P {\displaystyle P} , that is B ϵ ( x ∗ ) ⊂ P {\displaystyle B_{\epsilon }(x^{\ast })\subset P} . Therefore, x ∗ − ϵ 2 c | | c | | ∈ P {\displaystyle x^{\ast }-{\frac {\epsilon }{2}}{\frac {c}{||c||}}\in P} and c T ( x ∗ − ϵ 2 c | | c | | ) = c T x ∗ − ϵ 2 c T c | | c | | = c T x ∗ − ϵ 2 | | c | | < c T x ∗ . {\displaystyle c^{T}\left(x^{\ast }-{\frac {\epsilon }{2}}{\frac {c}{||c||}}\right)=c^{T}x^{\ast }-{\frac {\epsilon }{2}}{\frac {c^{T}c}{||c||}}=c^{T}x^{\ast }-{\frac {\epsilon }{2}}||c||<c^{T}x^{\ast }.} Hence x ∗ {\displaystyle x^{\ast }} is not an optimal solution, a contradiction. Therefore, x ∗ {\displaystyle x^{\ast }} must lie on the boundary of P {\displaystyle P} . If x ∗ {\displaystyle x^{\ast }} is not a vertex itself, it must be a convex combination of vertices of P {\displaystyle P} , say x 1 , . . . 
, x t {\displaystyle x_{1},...,x_{t}} . Then x ∗ = ∑ i = 1 t λ i x i {\displaystyle x^{\ast }=\sum _{i=1}^{t}\lambda _{i}x_{i}} with λ i ≥ 0 {\displaystyle \lambda _{i}\geq 0} and ∑ i = 1 t λ i = 1 {\displaystyle \sum _{i=1}^{t}\lambda _{i}=1} . Observe that 0 = c T ( ( ∑ i = 1 t λ i x i ) − x ∗ ) = c T ( ∑ i = 1 t λ i ( x i − x ∗ ) ) = ∑ i = 1 t λ i ( c T x i − c T x ∗ ) . {\displaystyle 0=c^{T}\left(\left(\sum _{i=1}^{t}\lambda _{i}x_{i}\right)-x^{\ast }\right)=c^{T}\left(\sum _{i=1}^{t}\lambda _{i}(x_{i}-x^{\ast })\right)=\sum _{i=1}^{t}\lambda _{i}(c^{T}x_{i}-c^{T}x^{\ast }).} Since x ∗ {\displaystyle x^{\ast }} is an optimal solution, all terms in the sum are nonnegative. Since the sum is equal to zero, we must have that each individual term is equal to zero. Hence, c T x ∗ = c T x i {\displaystyle c^{T}x^{\ast }=c^{T}x_{i}} for each x i {\displaystyle x_{i}} , so every x i {\displaystyle x_{i}} is also optimal, and therefore all points on the face whose vertices are x 1 , . . . , x t {\displaystyle x_{1},...,x_{t}} are optimal solutions. == References == Bertsekas, Dimitri P. (1995). Nonlinear Programming (1st ed.). Belmont, Massachusetts: Athena Scientific. Proposition B.21(c). ISBN 1-886529-14-0. "The Fundamental Theorem of Linear Programming". WOLFRAM Demonstrations Project. Retrieved 25 September 2024.
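The theorem can be checked numerically on a small instance (an illustrative sketch; the polytope, here the unit square, and the cost vector are arbitrary choices): enumerate the vertices of P = {x : Ax ≤ b} and confirm that no feasible point beats the best vertex.

```python
import numpy as np
from itertools import combinations

# P = {x : A x <= b} is the unit square [0, 1]^2; minimize c^T x over P.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

# Vertices: feasible intersections of two linearly independent constraints.
vertices = []
for i, j in combinations(range(len(A)), 2):
    M, rhs = A[[i, j]], b[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue
    x = np.linalg.solve(M, rhs)
    if np.all(A @ x <= b + 1e-9):
        vertices.append(x)

best = min(c @ v for v in vertices)     # attained at the vertex (1, 1)
assert np.isclose(best, -3.0)

# No feasible point does better than the best vertex (fundamental theorem):
samples = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 2))
assert np.all(samples @ c >= best - 1e-9)
```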
|
Wikipedia:Fusion frame#0
|
In mathematics, a fusion frame of a vector space is a natural extension of a frame. It is an additive construct of several, potentially "overlapping" frames. The motivation for this concept comes from situations in which a signal cannot be acquired by a single sensor alone (a constraint arising from limitations of hardware or data throughput); rather, the partial components of the signal must be collected via a network of sensors, and the partial signal representations are then fused into the complete signal. By construction, fusion frames easily lend themselves to parallel or distributed processing of sensor networks consisting of arbitrary overlapping sensor fields. == Definition == Given a Hilbert space H {\displaystyle {\mathcal {H}}} , let { W i } i ∈ I {\displaystyle \{W_{i}\}_{i\in {\mathcal {I}}}} be closed subspaces of H {\displaystyle {\mathcal {H}}} , where I {\displaystyle {\mathcal {I}}} is an index set. Let { v i } i ∈ I {\displaystyle \{v_{i}\}_{i\in {\mathcal {I}}}} be a set of positive scalar weights. Then { W i , v i } i ∈ I {\displaystyle \{W_{i},v_{i}\}_{i\in {\mathcal {I}}}} is a fusion frame of H {\displaystyle {\mathcal {H}}} if there exist constants 0 < A ≤ B < ∞ {\displaystyle 0<A\leq B<\infty } such that A ‖ f ‖ 2 ≤ ∑ i ∈ I v i 2 ‖ P W i f ‖ 2 ≤ B ‖ f ‖ 2 , ∀ f ∈ H , {\displaystyle A\|f\|^{2}\leq \sum _{i\in {\mathcal {I}}}v_{i}^{2}{\big \|}P_{W_{i}}f{\big \|}^{2}\leq B\|f\|^{2},\quad \forall f\in {\mathcal {H}},} where P W i {\displaystyle P_{W_{i}}} denotes the orthogonal projection onto the subspace W i {\displaystyle W_{i}} . The constants A {\displaystyle A} and B {\displaystyle B} are called the lower and upper bound, respectively. When the lower and upper bounds are equal to each other, { W i , v i } i ∈ I {\displaystyle \{W_{i},v_{i}\}_{i\in {\mathcal {I}}}} becomes an A {\displaystyle A} -tight fusion frame. 
Furthermore, if A = B = 1 {\displaystyle A=B=1} , we can call { W i , v i } i ∈ I {\displaystyle \{W_{i},v_{i}\}_{i\in {\mathcal {I}}}} a Parseval fusion frame. Assume { f i j } i ∈ I , j ∈ J i {\displaystyle \{f_{ij}\}_{i\in {\mathcal {I}},j\in J_{i}}} is a frame for W i {\displaystyle W_{i}} . Then { ( W i , v i , { f i j } j ∈ J i ) } i ∈ I {\displaystyle \{\left(W_{i},v_{i},\{f_{ij}\}_{j\in J_{i}}\right)\}_{i\in {\mathcal {I}}}} is called a fusion frame system for H {\displaystyle {\mathcal {H}}} . === Relation to global frames === Let { W i } i ∈ I {\displaystyle \{W_{i}\}_{i\in {\mathcal {I}}}} be closed subspaces of H {\displaystyle {\mathcal {H}}} with positive weights { v i } i ∈ I {\displaystyle \{v_{i}\}_{i\in {\mathcal {I}}}} . Suppose { f i j } i ∈ I , j ∈ J i {\displaystyle \{f_{ij}\}_{i\in {\mathcal {I}},j\in J_{i}}} is a frame for W i {\displaystyle W_{i}} with frame bounds C i {\displaystyle C_{i}} and D i {\displaystyle D_{i}} . Let C = inf i ∈ I C i {\textstyle C=\inf _{i\in {\mathcal {I}}}C_{i}} and D = sup i ∈ I D i {\textstyle D=\sup _{i\in {\mathcal {I}}}D_{i}} , which satisfy 0 < C ≤ D < ∞ {\displaystyle 0<C\leq D<\infty } . Then { W i , v i } i ∈ I {\displaystyle \{W_{i},v_{i}\}_{i\in {\mathcal {I}}}} is a fusion frame of H {\displaystyle {\mathcal {H}}} if and only if { v i f i j } i ∈ I , j ∈ J i {\displaystyle \{v_{i}f_{ij}\}_{i\in {\mathcal {I}},j\in J_{i}}} is a frame of H {\displaystyle {\mathcal {H}}} . Additionally, if { ( W i , v i , { f i j } j ∈ J i ) } i ∈ I {\displaystyle \{\left(W_{i},v_{i},\{f_{ij}\}_{j\in J_{i}}\right)\}_{i\in {\mathcal {I}}}} is a fusion frame system for H {\displaystyle {\mathcal {H}}} with lower and upper bounds A {\displaystyle A} and B {\displaystyle B} , then { v i f i j } i ∈ I , j ∈ J i {\displaystyle \{v_{i}f_{ij}\}_{i\in {\mathcal {I}},j\in J_{i}}} is a frame of H {\displaystyle {\mathcal {H}}} with lower and upper bounds A C {\displaystyle AC} and B D {\displaystyle BD} . 
Conversely, if \(\{v_i f_{ij}\}_{i\in\mathcal{I},\, j\in J_i}\) is a frame of \(\mathcal{H}\) with lower and upper bounds \(E\) and \(F\), then \(\{(W_i, v_i, \{f_{ij}\}_{j\in J_i})\}_{i\in\mathcal{I}}\) is a fusion frame system for \(\mathcal{H}\) with lower and upper bounds \(E/D\) and \(F/C\).

=== Local frame representation ===

Let \(W \subset \mathcal{H}\) be a closed subspace, and let \(\{x_n\}\) be an orthonormal basis of \(W\). Then the orthogonal projection of \(f \in \mathcal{H}\) onto \(W\) is given by

\[ P_W f = \sum_n \langle f, x_n \rangle x_n. \]

The orthogonal projection of \(f\) onto \(W\) can also be expressed in terms of a given local frame \(\{f_k\}\) of \(W\):

\[ P_W f = \sum_k \langle f, f_k \rangle \tilde{f}_k, \]

where \(\{\tilde{f}_k\}\) is a dual frame of the local frame \(\{f_k\}\).

== Fusion frame operator ==

=== Definition ===

Let \(\{W_i, v_i\}_{i\in\mathcal{I}}\) be a fusion frame for \(\mathcal{H}\), and let \(\left(\sum \oplus W_i\right)_{\ell_2}\) be the representation space for the projections. The analysis operator \(T_W : \mathcal{H} \to \left(\sum \oplus W_i\right)_{\ell_2}\) is defined by

\[ T_W(f) = \{v_i P_{W_i}(f)\}_{i\in\mathcal{I}}. \]
The adjoint, called the synthesis operator \(T_W^{\ast} : \left(\sum \oplus W_i\right)_{\ell_2} \to \mathcal{H}\), is defined by

\[ T_W^{\ast}(g) = \sum_{i\in\mathcal{I}} v_i f_i, \]

where \(g = \{f_i\}_{i\in\mathcal{I}} \in \left(\sum \oplus W_i\right)_{\ell_2}\). The fusion frame operator \(S_W : \mathcal{H} \to \mathcal{H}\) is defined by

\[ S_W(f) = T_W^{\ast} T_W(f) = \sum_{i\in\mathcal{I}} v_i^2 P_{W_i}(f). \]

=== Properties ===

Given the lower and upper bounds \(A\) and \(B\) of the fusion frame \(\{W_i, v_i\}_{i\in\mathcal{I}}\), the fusion frame operator \(S_W\) can be bounded by

\[ A I \leq S_W \leq B I, \]

where \(I\) is the identity operator. Therefore, the fusion frame operator \(S_W\) is positive and invertible.
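In finite dimensions these properties can be checked directly. The following sketch (a hypothetical example; the subspaces, weights, and helper names are invented for illustration) builds \(S_W = \sum_i v_i^2 P_{W_i}\) as a matrix, reads off the optimal bounds \(A\) and \(B\) as its extreme eigenvalues, and uses the invertibility of \(S_W\) to recover \(f\) from its weighted projections:

```python
import numpy as np

def span_projection(basis):
    """Orthogonal projection onto the column span of `basis` (via QR)."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

# A fusion frame for R^3: two coordinate planes and one line, with weights.
subspaces = [
    np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),  # xy-plane
    np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),  # yz-plane
    np.array([[1.0], [0.0], [1.0]]),                 # line spanned by (1, 0, 1)
]
weights = [1.0, 1.0, 0.5]

# Fusion frame operator S_W = sum_i v_i^2 P_{W_i} as a 3x3 matrix.
S = sum(v**2 * span_projection(W) for v, W in zip(weights, subspaces))

# The extreme eigenvalues of S_W are the optimal frame bounds A and B;
# A > 0 confirms that S_W is positive definite, hence invertible.
eigenvalues = np.linalg.eigvalsh(S)
A, B = eigenvalues[0], eigenvalues[-1]
assert 0 < A <= B

# Reconstruction: f = S_W^{-1} ( sum_i v_i^2 P_{W_i} f ).
f = np.array([0.3, -1.2, 2.0])
weighted_sum = sum(v**2 * span_projection(W) @ f
                   for v, W in zip(weights, subspaces))
reconstructed = np.linalg.solve(S, weighted_sum)
assert np.allclose(reconstructed, f)
```

Because the two planes alone already span \(\mathbb{R}^3\), the smallest eigenvalue is strictly positive, which is exactly the lower-bound condition in the definition.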
=== Representation ===

Given a fusion frame system \(\{(W_i, v_i, \mathcal{F}_i)\}_{i\in\mathcal{I}}\) for \(\mathcal{H}\), where \(\mathcal{F}_i = \{f_{ij}\}_{j\in J_i}\), and a dual frame \(\tilde{\mathcal{F}}_i = \{\tilde{f}_{ij}\}_{j\in J_i}\) for \(\mathcal{F}_i\), the fusion frame operator \(S_W\) can be expressed as

\[ S_W = \sum_{i\in\mathcal{I}} v_i^2\, T_{\tilde{\mathcal{F}}_i}^{\ast} T_{\mathcal{F}_i} = \sum_{i\in\mathcal{I}} v_i^2\, T_{\mathcal{F}_i}^{\ast} T_{\tilde{\mathcal{F}}_i}, \]

where \(T_{\mathcal{F}_i}\) and \(T_{\tilde{\mathcal{F}}_i}\) are the analysis operators, and \(T_{\mathcal{F}_i}^{\ast}\) and \(T_{\tilde{\mathcal{F}}_i}^{\ast}\) the synthesis operators, for \(\mathcal{F}_i\) and \(\tilde{\mathcal{F}}_i\) respectively. For finite frames (i.e., \(\dim \mathcal{H} =: N < \infty\) and \(|\mathcal{I}| < \infty\)), the fusion frame operator can be constructed as a matrix. Let \(\{W_i, v_i\}_{i\in\mathcal{I}}\) be a fusion frame for \(\mathcal{H}_N\), and let \(\{f_{ij}\}_{j\in J_i}\) be a frame for the subspace \(W_i\), with \(J_i\) an index set for each \(i\in\mathcal{I}\).
Then the fusion frame operator \(S : \mathcal{H} \to \mathcal{H}\) reduces to an \(N \times N\) matrix, given by

\[ S = \sum_{i\in\mathcal{I}} v_i^2 F_i \tilde{F}_i^{T}, \]

with

\[ F_i = \begin{bmatrix} \vdots & \vdots & & \vdots \\ f_{i1} & f_{i2} & \cdots & f_{i|J_i|} \\ \vdots & \vdots & & \vdots \end{bmatrix}_{N \times |J_i|}, \qquad \tilde{F}_i = \begin{bmatrix} \vdots & \vdots & & \vdots \\ \tilde{f}_{i1} & \tilde{f}_{i2} & \cdots & \tilde{f}_{i|J_i|} \\ \vdots & \vdots & & \vdots \end{bmatrix}_{N \times |J_i|}, \]

where \(\tilde{f}_{ij}\) is the canonical dual frame of \(f_{ij}\). == See also == Hilbert space Frame (linear algebra) == References == == External links == Fusion Frames
|
Wikipedia:Félix Pollaczek#0
|
Félix Pollaczek (1 December 1892 in Vienna – 29 April 1981 in Boulogne-Billancourt) was an Austrian-French engineer and mathematician, known for numerous contributions to number theory, mathematical analysis, mathematical physics and probability theory. He is best known for the Pollaczek–Khinchine formula in queueing theory (1930), and the Meixner–Pollaczek polynomials. == Education and career == Pollaczek studied at the Technical University of Vienna, received an M.Sc. in electrical engineering from the Technical University of Brno (1920), and his Ph.D. in mathematics from the University of Berlin (1922) with a dissertation titled Über die Kreiskörper der l-ten und l²-ten Einheitswurzeln, advised by Issai Schur and based on results first published in 1917. Pollaczek was employed by AEG in Berlin (1921–23) and worked for the Reichspost (1923–33). In 1933, he was fired because he was Jewish. He moved to Paris, where he was a consulting teletraffic engineer to various institutions from 1933 onwards, including the Société d'Études pour Liaisons Téléphoniques et Télégraphiques (SELT) and the French National Centre for Scientific Research (CNRS). In 1977, Pollaczek was awarded the John von Neumann Theory Prize, although his age prevented him from receiving the prize in person. He was posthumously elected to the 2002 class of Fellows of the Institute for Operations Research and the Management Sciences. == Personal life == He married mathematician Hilda Geiringer in 1921, and they had a child, Magda, in 1922. However, their marriage did not last, and Magda was brought up by Hilda. Pollaczek became physicist László Tisza's father-in-law through Magda's marriage. == References ==
|
Wikipedia:G. H. Hardy#0
|
Godfrey Harold Hardy (7 February 1877 – 1 December 1947) was an English mathematician, known for his achievements in number theory and mathematical analysis. In biology, he is known for the Hardy–Weinberg principle, a basic principle of population genetics. G. H. Hardy is usually known by those outside the field of mathematics for his 1940 essay A Mathematician's Apology. Starting in 1914, Hardy was the mentor of the Indian mathematician Srinivasa Ramanujan, a relationship that has become celebrated. Hardy almost immediately recognised Ramanujan's extraordinary albeit untutored brilliance, and Hardy and Ramanujan became close collaborators. In an interview by Paul Erdős, when Hardy was asked what his greatest contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan. In a lecture on Ramanujan, Hardy said that "my association with him is the one romantic incident in my life".: 2 == Biography == G. H. Hardy was born on 7 February 1877, in Cranleigh, Surrey, England, into a teaching family. His father was Bursar and Art Master at Cranleigh School; his mother had been a senior mistress at Lincoln Training College for teachers. Both of his parents were mathematically inclined, though neither had a university education. He and his sister Gertrude "Gertie" Emily Hardy (1878–1963) were brought up by their educationally enlightened parents in a typical Victorian nursery attended by a nurse. At an early age, he argued with his nurse about the existence of Santa Claus and the efficacy of prayer. He read aloud to his sister books such as Don Quixote, Gulliver's Travels, and Robinson Crusoe.: 447 Hardy's own natural affinity for mathematics was perceptible at an early age. When just two years old, he wrote numbers up to millions, and when taken to church he amused himself by factorising the numbers of the hymns. After schooling at Cranleigh, Hardy was awarded a scholarship to Winchester College for his mathematical work. 
In 1896, he entered Trinity College, Cambridge. He was first tutored under Robert Rumsey Webb, but found it unsatisfying, and briefly considered switching to history. He then was tutored by Augustus Love, who recommended him to read Camille Jordan's Cours d'analyse, which taught him for the first time "what mathematics really meant". After only two years of preparation under his coach, Robert Alfred Herman, Hardy was fourth in the Mathematics Tripos examination. Years later, he sought to abolish the Tripos system, as he felt that it was becoming more an end in itself than a means to an end. While at university, Hardy joined the Cambridge Apostles, an elite, intellectual secret society. Hardy cited as his most important influence his independent study of Cours d'analyse de l'École Polytechnique by the French mathematician Camille Jordan, through which he became acquainted with the more precise mathematics tradition in continental Europe. In 1900 he passed part II of the Tripos, and in the same year he was elected to a Prize Fellowship at Trinity College.: 448 In 1903 he earned his M.A., which was the highest academic degree at English universities at that time. When his Prize Fellowship expired in 1906 he was appointed to the Trinity staff as a lecturer in mathematics, where teaching six hours per week left him time for research.: 448 On 16 January 1913, Ramanujan wrote to Hardy, whom Ramanujan knew from studying his Orders of Infinity (1910). Hardy read the letter in the morning and suspected it was a crank or a prank, but thought it over and realized in the evening that it was likely genuine because "great mathematicians are commoner than thieves or humbugs of such incredible skill". He then invited Ramanujan to Cambridge and began "the one romantic incident in my life". In the aftermath of the Bertrand Russell affair during World War I, in 1919 he left Cambridge to take the Savilian Chair of Geometry (and thus become a Fellow of New College) at Oxford.
Hardy spent the academic year 1928–1929 at Princeton University in an academic exchange with Oswald Veblen, who spent the year at Oxford. Hardy gave the Josiah Willard Gibbs lecture for 1928. Hardy left Oxford and returned to Cambridge in 1931, becoming again a fellow of Trinity College and holding the Sadleirian Professorship until 1942.: 453 It is believed that he left Oxford for Cambridge to avoid the compulsory retirement at 65. He was on the governing body of Abingdon School from 1922 to 1935. In 1939, he suffered a coronary thrombosis, which prevented him from playing tennis, squash, etc. He also lost his creative powers in mathematics. He was constantly bored and distracted himself by writing a privately circulated memoir about the Bertrand Russell affair. In the early summer of 1947, he attempted suicide by barbiturate overdose. After that, he resolved to simply wait for death. He died suddenly one early morning while listening to his sister read out from a book of the history of Cambridge University cricket. == Work == Hardy is credited with reforming British mathematics by bringing rigour into it, which was previously a characteristic of French, Swiss and German mathematics. British mathematicians had remained largely in the tradition of applied mathematics, in thrall to the reputation of Isaac Newton (see Cambridge Mathematical Tripos). Hardy was more in tune with the cours d'analyse methods dominant in France, and aggressively promoted his conception of pure mathematics, in particular against the hydrodynamics that was an important part of Cambridge mathematics. Hardy preferred to work only 4 hours every day on mathematics, spending the rest of the day talking, playing cricket, and other gentlemanly activities. From 1911, he collaborated with John Edensor Littlewood, in extensive work in mathematical analysis and analytic number theory. 
This (along with much else) led to quantitative progress on Waring's problem, as part of the Hardy–Littlewood circle method, as it became known. In prime number theory, they proved unconditional results as well as some notable conditional results. This was a major factor in the development of number theory as a system of conjectures; examples are the first and second Hardy–Littlewood conjectures. Hardy's collaboration with Littlewood is among the most successful and famous collaborations in mathematical history. In a 1947 lecture, the Danish mathematician Harald Bohr reported a colleague as saying, "Nowadays, there are only three really great English mathematicians: Hardy, Littlewood, and Hardy–Littlewood.": xxvii In November 1919, Hardy wrote to Bertrand Russell about his work with Littlewood: "I wish you could find some tactful way of stirring up Littlewood to do a little writing. Heaven knows I am conscious of my huge debt to him ... but in our collaboration he will contribute ideas and ideas only ... all the tedious part has to be done by me [or] it simply won't be done ... I can get absolutely no help from him at all; not even an inquiry as to how I am getting on!" Hardy is also known for formulating the Hardy–Weinberg principle, a basic principle of population genetics, independently from Wilhelm Weinberg in 1908. He played cricket with the geneticist Reginald Punnett, who introduced the problem to him in purely mathematical terms.: 9 Hardy, who had no interest in genetics and described the mathematical argument as "very simple", may never have realised how important the result became.: 117 Hardy was elected an international honorary member of the American Academy of Arts and Sciences in 1921, an international member of the United States National Academy of Sciences in 1927, and an international member of the American Philosophical Society in 1939. Hardy's collected papers have been published in seven volumes by Oxford University Press.
=== Pure mathematics === Hardy preferred his work to be considered pure mathematics, perhaps because of his detestation of war and the military uses to which mathematics had been applied. He made several statements similar to that in his Apology: I have never done anything "useful". No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. However, aside from formulating the Hardy–Weinberg principle in population genetics, his famous work on integer partitions with his collaborator Ramanujan, known as the Hardy–Ramanujan asymptotic formula, has been widely applied in physics to find quantum partition functions of atomic nuclei (first used by Niels Bohr) and to derive thermodynamic functions of non-interacting Bose–Einstein systems. Though Hardy wanted his maths to be "pure" and devoid of any application, much of his work has found applications in other branches of science. Moreover, Hardy deliberately pointed out in his Apology that mathematicians generally do not "glory in the uselessness of their work", but rather – because science can be used for evil ends as well as good – "mathematicians may be justified in rejoicing that there is one science at any rate, and that their own, whose very remoteness from ordinary human activities should keep it gentle and clean.": 33 Hardy also rejected as a "delusion" the belief that the difference between pure and applied mathematics had anything to do with their utility. Hardy regards as "pure" the kinds of mathematics that are independent of the physical world, but also considers some "applied" mathematicians, such as the physicists Maxwell and Einstein, to be among the "real" mathematicians, whose work "has permanent aesthetic value" and "is eternal because the best of it may, like the best literature, continue to cause intense emotional satisfaction to thousands of people after thousands of years." 
Although he admitted that what he called "real" mathematics may someday become useful, he asserted that, at the time in which the Apology was written, only the "dull and elementary parts" of either pure or applied mathematics could "work for good or ill".: 39 == Personality == Hardy was extremely shy as a child and was socially awkward, cold and eccentric throughout his life. During his school years, he was top of his class in most subjects, and won many prizes and awards but hated having to receive them in front of the entire school. He was uncomfortable being introduced to new people, and could not bear to look at his own reflection in a mirror. It is said that, when staying in hotels, he would cover all the mirrors with towels. Socially, Hardy was associated with the Bloomsbury Group and the Cambridge Apostles; G. E. Moore, Bertrand Russell and J. M. Keynes were friends. Apart from close friendships, he had a few platonic relationships with young men who shared his sensibilities, and often his love of cricket. A mutual interest in cricket led him to befriend the young C. P. Snow.: 10–12 Hardy was a lifelong bachelor and in his final years he was cared for by his sister. He was an avid cricket fan. Maynard Keynes observed that if Hardy had read the stock exchange for half an hour every day with as much interest and attention as he did the day's cricket scores, he would have become a rich man. He liked to speak of the best class of mathematical research as "the Hobbs class", and later, after Bradman appeared as an even greater batsman, "the Bradman class". Around the age of 20, he decided that he did not believe in God, which proved a minor issue as attending the chapel was compulsory at Cambridge University. He wrote a letter to his parents explaining that, and from then on he refused to go into any college chapel, even for purely ritualistic duties. He was at times politically involved, if not an activist. 
He took part in the Union of Democratic Control during World War I, and For Intellectual Liberty in the late 1930s. He admired America and the Soviet Union roughly equally. He found both sides of the Second World War objectionable. Paul Hoffman writes that "His concerns were wide-ranging, as evidenced by six New Year's resolutions he set in a postcard to a friend: (1) prove the Riemann hypothesis; (2) make 211 not out in the fourth innings of the last Test Match at the Oval; (3) find an argument for the nonexistence of God which shall convince the general public; (4) be the first man at the top of Mount Everest; (5) be proclaimed the first president of the U. S. S. R. of Great Britain and Germany; and (6) murder Mussolini." == Cultural references == Hardy is a key character, played by Jeremy Irons, in the 2015 film The Man Who Knew Infinity, based on the biography of Ramanujan with the same title. Hardy is a major character in David Leavitt's historical fiction novel The Indian Clerk (2007), which depicts his Cambridge years and his relationship with John Edensor Littlewood and Ramanujan. Hardy is a secondary character in Uncle Petros and Goldbach's Conjecture (1992), a mathematics novel by Apostolos Doxiadis. Hardy is also a character in the 2014 Indian film Ramanujan, played by Kevin McGowan. == Bibliography == Hardy, G. H. (2012) [1st pub. 1940, with foreword 1967]. A Mathematician's Apology. With a foreword by C. P. Snow. Cambridge: Cambridge University Press. ISBN 978-1-107-29559-9. Full text The reprinted Mathematician's Apology with an introduction by C.P. Snow was recommended by Marcus du Sautoy in the BBC Radio program A Good Read in 2007. Hardy, G. H. (1999) [1st pub. Cambridge University Press: 1940]. Ramanujan: Twelve Lectures on Subjects Suggested by his Life and Work. Providence, RI: AMS Chelsea. ISBN 978-0-8218-2023-0. Hardy, G. H.; Wright, E. M. (2008) [1st ed. 1938]. An Introduction to the Theory of Numbers. Revised by D. R. Heath-Brown and J. H.
Silverman, with a foreword by Andrew Wiles (6th ed.). Oxford: Oxford University Press. ISBN 978-0-19-921985-8. Hardy, G. H. (2008) [1st ed. 1908]. A Course of Pure Mathematics. With a foreword by T. W. Körner (10th ed.). Cambridge University Press. ISBN 978-0-521-72055-7. Hardy, G. H. (2013) [1st ed. Clarendon Press: 1949]. Divergent Series (2nd ed.). Providence, RI: American Mathematical Society. ISBN 978-0-8218-2649-2. LCCN 49005496. MR 0030620. OCLC 808787. Full text Hardy, G. H. (1966–1979). London Mathematical Society committee (ed.). Collected papers of G. H. Hardy; including joint papers with J. E. Littlewood and others. Oxford: Clarendon Press. ISBN 0-19-853340-3. OCLC 823424. Hardy, G. H.; Littlewood, J. E.; Pólya, G. (1934). Inequalities (PDF) (1st ed.). Cambridge: Cambridge University Press. Hardy, G. H. (1970) [1st pub. 1942]. Bertrand Russell and Trinity. With a foreword by C. D. Broad. Cambridge University Press. ISBN 978-0-521-11392-2. == See also == == Notes == == References == == Further reading == Kanigel, Robert (1991). The Man Who Knew Infinity: A Life of the Genius Ramanujan. New York: Washington Square Press. ISBN 0-671-75061-5. Snow, C. P. (1967). "G. H. Hardy". Variety of Men. London: Macmillan. pp. 15–46. Reprinted as Snow, C.P (2012) [1st pub. 1967]. Foreword. A Mathematician's Apology. By Hardy, G. H. Cambridge University Press. ISBN 978-1-107-29559-9. Albers, D.J.; Alexanderson, G.L.; Dunham, W., eds. (2015). The G.H. Hardy Reader. Cambridge: Cambridge University Press. ISBN 978-1-10713-555-0. == External links == Works by G. H. Hardy at Project Gutenberg Works by or about G. H. Hardy at the Internet Archive Works by G. H. Hardy at LibriVox (public domain audiobooks) O'Connor, John J.; Robertson, Edmund F., "G. H. Hardy", MacTutor History of Mathematics Archive, University of St Andrews Quotations of G. H. Hardy Hardy's work on Number Theory Weisstein, Eric Wolfgang (ed.). "Hardy, Godfrey Harold (1877–1947)". ScienceWorld.
|
Wikipedia:G. N. Watson#0
|
George Neville Watson (31 January 1886 – 2 February 1965) was an English mathematician, who applied complex analysis to the theory of special functions. His collaboration on the 1915 second edition of E. T. Whittaker's A Course of Modern Analysis (1902) produced the classic "Whittaker and Watson" text. In 1918 he proved a significant result known as Watson's lemma, which has many applications in the theory of the asymptotic behaviour of exponential integrals. == Life == He was born in Westward Ho!, Devon, the son of George Wentworth Watson, a schoolmaster and genealogist, and his wife, Mary Justina Griffith. He was educated at St Paul's School in London, as a pupil of F. S. Macaulay. He then studied Mathematics at Trinity College, Cambridge. There he encountered E. T. Whittaker, though their overlap was only two years. From 1914 to 1918 he lectured in Mathematics at University College, London. He became Professor of Pure Mathematics at the University of Birmingham in 1918, replacing Prof R S Heath, and remained in this role until 1951. He was awarded an honorary MSc Pure Science in 1919 by Birmingham University. He was President of the London Mathematical Society from 1933 to 1935. He died at Leamington Spa on 2 February 1965. == Works == His Treatise on the theory of Bessel functions (1922) also became a classic, in particular in regard to the asymptotic expansions of Bessel functions. He subsequently spent many years on Ramanujan's formulae in the area of modular equations, mock theta functions and q-series, and for some time looked after Ramanujan's lost notebook. Sometime in the late 1920s, G. N. Watson and B. M. Wilson began the task of editing Ramanujan's notebooks. The second notebook, being a revised, enlarged edition of the first, was their primary focus. Wilson was assigned Chapters 2–14, and Watson was to examine Chapters 15–21. Wilson devoted his efforts to this task until 1935, when he died from an infection at the early age of 38.
Watson wrote over 30 papers inspired by the notebooks before his interest evidently waned in the late 1930s. Ramanujan discovered many more modular equations than all of his mathematical predecessors combined. Watson provided proofs for most of Ramanujan's modular equations. Bruce C. Berndt completed the project begun by Watson and Wilson. Much of Berndt's book Ramanujan's Notebooks, Part 3 (1998) is based upon the prior work of Watson. Watson's interests included solvable cases of the quintic equation. He introduced Watson's quintuple product identity. == Honours and awards == In 1919 Watson was elected a Fellow of the Royal Society, and in 1946, he received the Sylvester Medal from the Society. He was president of the London Mathematical Society from 1933 to 1935. He is sometimes confused with the mathematician G. L. Watson, who worked on quadratic forms, and G. Watson, a statistician. == Family == In 1925 he married Elfrida Gwenfil Lane daughter of Thomas Wright Lane. == References ==
|
Wikipedia:G. W. Peck#0
|
G. W. Peck is a pseudonymous attribution used as the author or co-author of a number of published academic papers in mathematics. Peck is sometimes humorously identified with George Wilbur Peck, a former governor of the US state of Wisconsin. Peck first appeared as the official author of a 1979 paper entitled "Maximum antichains of rectangular arrays". The name "G. W. Peck" is derived from the initials of the actual writers of this paper: Ronald Graham, Douglas West, George B. Purdy, Paul Erdős, Fan Chung, and Daniel Kleitman. The paper initially listed Peck's affiliation as Xanadu, but the editor of the journal objected, so Ron Graham gave him a job at Bell Labs. Since then, Peck's name has appeared on some sixteen publications, primarily as a pseudonym of Daniel Kleitman. In reference to "G. W. Peck", Richard P. Stanley defined a Peck poset to be a graded partially ordered set that is rank symmetric, rank unimodal, and strongly Sperner. The posets in the original paper by G. W. Peck are not quite Peck posets, as they lack the property of being rank symmetric. == See also == Nicolas Bourbaki Arthur Besse John Rainwater Blanche Descartes Monsieur LeBlanc == References == == External links == Imaginary Erdős numbers, Numberphile, Nov 26, 2014. Video interview with Ron Graham in which he tells the story of G. W. Peck.
|
Wikipedia:GJMS operator#0
|
In the mathematical field of differential geometry, the GJMS operators are a family of differential operators defined on a Riemannian manifold. In an appropriate sense, they depend only on the conformal structure of the manifold. The GJMS operators generalize the Paneitz operator and the conformal Laplacian. The initials GJMS are for its discoverers Graham, Jenne, Mason & Sparling (1992). Properly, the GJMS operator on a conformal manifold of dimension n is a conformally invariant operator between line bundles of conformal densities, for k a positive integer: \(L_k : E[k - n/2] \to E[-k - n/2].\) The operators have leading symbol given by a power of the Laplace–Beltrami operator, and have lower order correction terms that ensure conformal invariance. The original construction of the GJMS operators used the ambient construction of Charles Fefferman and Robin Graham. A conformal density defines, in a natural way, a function on the null cone in the ambient space. The GJMS operator is defined by taking a density ƒ of the appropriate weight k − n/2 and extending it arbitrarily to a function F off the null cone so that it still retains the same homogeneity. The function ΔkF, where Δ is the ambient Laplace–Beltrami operator, is then homogeneous of degree −k − n/2, and its restriction to the null cone does not depend on how the original function ƒ was extended to begin with, and so is independent of choices. The GJMS operator also represents the obstruction term to a formal asymptotic solution of the Cauchy problem for extending a weight k − n/2 function off the null cone in the ambient space to a harmonic function in the full ambient space. The most important GJMS operators are the critical GJMS operators. In even dimension n, these are the operators \(L_{n/2}\) that take a true function on the manifold and produce a multiple of the volume form. == References == Graham, C.
Robin; Jenne, Ralph; Mason, Lionel J.; Sparling, George A. J. (1992), "Conformally invariant powers of the Laplacian. I. Existence", Journal of the London Mathematical Society, Second Series, 46 (3): 557–565, doi:10.1112/jlms/s2-46.3.557, ISSN 0024-6107, MR 1190438.
|
Wikipedia:Gabriel Altmann#0
|
Gabriel Altmann (24 May 1931 – 2 March 2020) was a Slovak-German linguist and mathematician. He made significant contributions to the field of quantitative linguistics. He is best known for co-developing Menzerath's law, also known as the Menzerath-Altmann law, which describes the relationship between the size of a linguistic construct and the size of its linguistic constituents. == Biography == Altmann was born on 24 May 1931 in Poltár, Czechoslovakia. He spent much of his career as a professor at Ruhr University Bochum in Germany. Over his long career, Altmann authored numerous books and articles focused on quantitative linguistics. He served as the founding editor of the book series Quantitative Linguistics, which publishes works on all aspects of quantitative methods and models in linguistics. He was also on the editorial boards of several journals in the field, such as Journal of Quantitative Linguistics. Altmann made key contributions to establishing the fundamental principles of quantitative linguistics. In addition to his work on Menzerath's law, he helped develop a unified derivation of several linguistic laws. His research applied mathematical and statistical methods to analyze various facets of language, from word length distributions to syntactic structures. == Works == Einführung in die quantitative Lexikologie (1980) Wiederholungen in Texten (1988) Quantitative Linguistics: An International Handbook (2005) == See also == Zipf's law == References ==
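The Menzerath–Altmann law described above is commonly stated in the following mathematical form, where \(y\) is the mean size of the constituents, \(x\) is the size of the construct, and \(a\), \(b\), \(c\) are empirical parameters fitted to data (this is the standard formulation from the quantitative-linguistics literature, not a formula quoted in this article):

```latex
% Menzerath–Altmann law: the larger the construct,
% the smaller, on average, its constituents.
y = a \, x^{b} \, e^{-c x}
```

Here \(b\) is typically negative, and in the special case \(c = 0\) the law reduces to a simple power law \(y = a x^{b}\).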
|
Wikipedia:Gabriel Judah Lichtenfeld#0
|
Gabriel Judah Lichtenfeld (Hebrew: גַּבְרִיאֵל יְהוּדָה ליכטענפעלד; 1811, Lublin — 22 March 1887, Warsaw) was a Jewish-Polish maskilic mathematician, poet, and author. He wrote for Ha-Shachar, Ha-Tzefirah, Izraelita, and Polish newspapers, mostly on mathematical topics. == Biography == A descendant of Moses Isserles, Lichtenfeld showed early ability as a Talmudic scholar. He later became familiar with Latin, German, French, and Polish, and made a special study of philosophy and mathematics. In the Hebrew periodical Ha-Shachar, there appeared a series of Hebrew articles by Lichtenfeld which attracted attention. His reputation was enhanced by his series of articles, in the Polish periodical Izraelita, on Jewish mathematicians. Lichtenfeld is known also by his polemics with Hayyim Selig Slonimski on mathematical subjects. Among other works, Lichtenfeld was the author of Yedi'ot ha-Shi'urim (1865, "Science of Measurement"), Tzofnat Pa'neach (1874), a critical review of Slonimski's Yesode Ḥokmat ha-Shi'ur, Tosefot (1875), a polemic against Slonimski, Kohen Lelo Elohim (1876), a book of mathematical criticisms, and Sippurim be-Shir ve'Shirim Shonim (1877, "Stories in Verse and Selected Poems"), a collection of poems and rhymed prose by himself and by his son-in-law I. L. Peretz. Lichtenfeld's main book on mathematics, Bo'u Ḥeshbon, was published posthumously in 1895. == References == Fuenn, Keneset Yisrael, ii. 356; Zeitlin, William. Bibliotheca Hebraica Post-Mendelssohniana. p. 209. This article incorporates text from a publication now in the public domain: Rosenthal, Herman; Lipman, Jacob Goodale (1901–1906). "Lichtenfeld, Gabriel Judah". In Singer, Isidore; et al. (eds.). The Jewish Encyclopedia. New York: Funk & Wagnalls.
|
Wikipedia:Gabriel Xavier Paul Koenigs#0
|
Gabriel Xavier Paul Koenigs (17 January 1858 in Toulouse, France – 29 October 1931 in Paris, France) was a French mathematician who worked on analysis and geometry. He was elected as Secretary General of the Executive Committee of the International Mathematical Union after the First World War, and used his position to exclude countries with whom France had been at war from the mathematical congresses. He was awarded the Poncelet Prize for 1893. == Publications == Koenigs G. Recherches sur les intégrales de certaines équations fonctionnelles. Ann. École Normale, Suppl., 1884, (3)1. Leçons de l'agrégation classique de mathématiques. A. Hermann. 1892. Mémoire sur les lignes géodésiques. Paris: Imp. nationale. 1894. La géométrie réglée et ses applications. Gauthier-Villars. 1895. Leçons de cinématique. Paris: A. Hermann. 1897. Introduction à une théorie nouvelle des mécanismes. A. Hermann. 1905. == See also == Koenigs function Schröder's equation == References == O'Connor, John J.; Robertson, Edmund F., "Gabriel Xavier Paul Koenigs", MacTutor History of Mathematics Archive, University of St Andrews
|
Wikipedia:Gabriela Araujo-Pardo#0
|
Martha Gabriela Araujo-Pardo is a Mexican mathematician specializing in graph theory, including work on graph coloring, Kneser graphs, cages, and finite geometry. She is a researcher at the National Autonomous University of Mexico in the Mathematics Institute, Juriquilla Campus, and the 2024–2026 president of the Mexican Mathematical Society. == Education and career == Araujo studied mathematics at the National Autonomous University of Mexico (UNAM), where she completed her Ph.D. in 2000. Her dissertation, Daisy Structure in Desarguesian Projective Planes, was supervised by Luis Montejano Peimbert. She has worked for the UNAM Mathematics Institute since 2000, with a postdoctoral research visit to the Polytechnic University of Catalonia in Spain. She is the president of the Mexican Mathematical Society (SMM) for the term 2024–2026. == Recognition == In 2004, Araujo was awarded the Sofía-Kovalevskaia grant. In 2013, Araujo won UNAM's Sor Juana Inés de la Cruz award, and was elected to the Mexican Academy of Sciences. In 2024, Araujo was named Fellow of The World Academy of Sciences (TWAS). == Service == In 2012, Araujo served as the spokesperson of the Board of Trustees of the Mexican Mathematical Society. Together with other members, she founded the Equity and Gender Commission of the SMM in 2013, "to promote the inclusion of underrepresented groups, in particular women, in the mathematical activity of the country". She was part of the Commission from 2014 to 2018. She was a member of the Directive Commission and the Diversity and Gender Commission of the Union of Mathematical Societies in Latin America and the Caribbean (UMALCA) from 2021 to 2024. Araujo is the ambassador for Mexico in the Committee of Women of Mathematics of the International Mathematical Union. She was president of the Mexican Mathematical Society (SMM) for the term 2022–2024 and has been reelected for 2024–2026. 
== References == == External links == Home page Gabriela Araujo-Pardo publications indexed by Google Scholar
|
Wikipedia:Gabriele Manfredi#0
|
Gabriele Manfredi (25 March 1681 – 13 October 1761) was an Italian mathematician who worked in the field of calculus. == Early years == Gabriele Manfredi was born in Bologna, then in the Papal States, on 25 March 1681. He was the son of Alfonso Manfredi, a notary from Lugo, Emilia-Romagna, and Anna Maria Fiorini. His elder brother Eustachio studied law, then turned to science. Gabriele and his brother Eraclito studied medicine, while his fourth brother Emilio became a Jesuit preacher. His two sisters Maddalena and Teresa were also well educated, and later collaborated with their brothers in their work. Gabriele became uncomfortable with the study of anatomy, and turned to other subjects before he and Eustachio were introduced to the new subject of differential calculus. == Mathematician == Manfredi was one of a group of young men at the University who became interested in the techniques of Cartesian geometry and differential calculus, and who engaged in experiments and astronomical observation. Others were his brother Eustachio, Vittorio Francesco Stancari and Giuseppe Verzaglia. Of these, Gabriele Manfredi developed the most advanced understanding of mathematics. Eustachio Manfredi became more interested in astronomy, but Gabriele persisted with mathematics, studying the works of Leibniz and of Johann and Jacob Bernoulli on infinitesimal calculus. After graduating, Gabriele went to Rome at the end of 1702, where he became librarian to Cardinal Pietro Ottoboni, a historian, antiquarian and astronomer. He helped Ottoboni build a sundial at Santa Maria degli Angeli e dei Martiri and helped in the work of reforming the Gregorian calendar. He continued to study mathematics, including differential and integral calculus and logarithmic curves. In 1707 he returned to Bologna where he published his best known work on first-order differential equations. This was the first European work on differential equations. 
Despite this, he was not given a senior position in the university. He made further contributions to the theory of calculus, although his main contribution after 1715 was as a teacher. == Later career == In 1708 Manfredi began working for the Chancellery of the Senate of Bologna, where he rose to the rank of first chancellor and remained until he retired in 1752. From 1720 he also taught at the University of Bologna. In 1742 he was made superintendent of water, replacing his brother Eustachio. This job, concerned with improving river navigation while avoiding flooding, proved to be difficult and politically controversial. Manfredi married Teresa Del Sole, from the family of the painter Giovanni Gioseffo, and they had three children. He died in Bologna on 13 October 1761 at the age of 80. The asteroid 13225 Manfredi was named in honor of him and his two brothers, Eustachio and Eraclito. == Work == In his work De constructione aequationum differentialium primi gradus (1707) Manfredi set forth the results he had obtained so far in solving problems related to differential equations and the foundations of calculus. His paper Breve schediasma geometrico per la costruzione di una gran parte delle equazioni differenziali di primo grado (1714) described the procedure commonly adopted for integrating first-order homogeneous differential equations. == List of works == Manfredi, Gabriello (1707). De constructione aequationum differentialium primi gradus (in Latin). Bononiae: typis Constantini Pisarii. Retrieved 13 June 2015. Manfredi, Gabriello. Voto del sig. dottore Gabriello Manfredi pubblico professore di matematica nella Universita di Bologna sopra il parere de' due periti di Bologna e di Ravenna circa l'arginare il Po di Primaro da essi steso dopo la visita dello stesso Po fatta nel 1758. d'ordine della san. me. di Benedetto 14. [Bologna]. == References == Citations Sources
|
Wikipedia:Gabriele Vezzosi#0
|
Gabriele Vezzosi is an Italian mathematician, born in Florence, Italy. His main interest is algebraic geometry. Vezzosi earned an MS degree in Physics at the University of Florence, under the supervision of Alexandre M. Vinogradov, and a PhD in Mathematics at the Scuola Normale Superiore in Pisa, under the supervision of Angelo Vistoli. His first papers dealt with differential calculus over commutative rings, intersection theory, (equivariant) algebraic K-theory, motivic homotopy theory, and the existence of vector bundles on singular algebraic surfaces. Around 2001–2002 he started his collaboration with Bertrand Toën. Together, they created homotopical algebraic geometry (HAG), whose most prominent part is derived algebraic geometry (DAG), which is by now a powerful and widespread theory. Slightly later, this theory was reconsidered and greatly expanded by Jacob Lurie. More recently, Vezzosi together with Tony Pantev, Bertrand Toën and Michel Vaquié defined a derived version of symplectic structures and studied important properties and examples (an important instance being Kai Behrend's symmetric obstruction theories); further, together with Damien Calaque, these authors introduced and studied a derived version of Poisson and coisotropic structures, with applications to deformation quantization. Lately Toën and Vezzosi (partly in collaboration with Anthony Blanc and Marco Robalo) have moved to applications of derived and non-commutative geometry to arithmetic geometry, especially to Spencer Bloch's conductor conjecture. Vezzosi also defined a derived version of quadratic forms, and in collaboration with Benjamin Hennion and Mauro Porta, proved a very general formal gluing result along non-linear flags, with hints of application to a still conjectural Geometric Langlands program for varieties of dimension bigger than 1. Together with Benjamin Antieau, Vezzosi proved a Hochschild–Kostant–Rosenberg theorem (HKR) for varieties of dimension p in characteristic p. 
In 2015 he organised the Oberwolfach Seminar on Derived Geometry at the Mathematical Research Institute of Oberwolfach in Germany, and was an organiser of the 2019 one-semester thematic program on derived algebraic geometry at the Mathematical Sciences Research Institute in Berkeley, California. Vezzosi has spent his career so far in Pisa, Florence, Bologna and Paris, has had three PhD students (Schürg, Porta and Melani), and is a full professor at the University of Florence (Italy). == References == == External links == Personal web page Gabriele Vezzosi at the Mathematics Genealogy Project Gabriele Vezzosi Wikipedia entry in German Ncatlab entry on derived algebraic geometry Talk at Kashiwara's Conference (IHES, France) June 2017
|
Wikipedia:Gabriella Pinzari#0
|
Gabriella Pinzari is an Italian mathematician known for her research on the n-body problem. == Research == Pinzari's research on the n-body problem has been described as "the most natural way to apply" the Kolmogorov–Arnold–Moser theorem to the problem. The original work of Vladimir Arnold on this theorem attempted to use it to show the stability of the Solar System or similar systems of planetary orbits, but this worked only for the three-body problem because of a degeneracy in Arnold's mathematical framework. Pinzari showed how to eliminate this problem, and extended the solution to larger numbers of bodies, by developing "a rotation-invariant version of the KAM theory". == Education and career == Pinzari earned master's degrees in both physics and mathematics from Sapienza University of Rome, in 1990 and 1996 respectively. She completed her doctorate in 2009 at Roma Tre University under the supervision of Luigi Chierchia. She joined the faculty of the University of Naples Federico II in 2013, and later moved to the University of Padova. == Recognition == She was an Invited Speaker at the 2014 International Congress of Mathematicians, in Seoul, speaking on her work in the session on dynamical systems and ordinary differential equations. == References ==
|
Wikipedia:Gabriella Tarantello#0
|
Gabriella Tarantello (born 15 October 1958) is an Italian mathematician specializing in partial differential equations, differential geometry, and gauge theory. She is a professor in the department of mathematics at the University of Rome Tor Vergata. == Education and career == Tarantello was born in Pratola Peligna. She did her undergraduate studies at the University of L'Aquila, earning a bachelor's degree there in 1982. She then came to New York University for graduate study at the Courant Institute of Mathematical Sciences, earning a master's degree in 1984 and completing her Ph.D. there in 1986. Her dissertation, Some Results on the Minimal Period Problem for Nonlinear Vibrating Strings and Hamiltonian Systems; and on the Number of Solutions for Semilinear Elliptic Equations, was supervised by Louis Nirenberg. After postdoctoral research at the Institute for Advanced Study and a visiting assistant professorship at the University of California, Berkeley, she joined the Carnegie Mellon University faculty in 1989. She returned to Italy as an associate professor at Tor Vergata in 1993, moved to the University of Basilicata as a full professor in 1994, and returned to Tor Vergata as a full professor in 1995. == Books == Tarantello is the author of the book Selfdual Gauge Field Vortices: An Analytical Approach (Progress in Nonlinear Differential Equations and Their Applications 72, Birkhäuser, 2008). With Matthew J. Gursky, Ermanno Lanconelli, Andrea Malchiodi, and Paul C. Yang, she is a co-author of Geometric Analysis and PDEs: Lectures given at the C.I.M.E. Summer School held in Cetraro, Italy, June 11–16, 2007 (Lecture Notes in Mathematics 1977, Springer, 2009). == Recognition == In 2014, Tarantello won the Lucio & Wanda Amerio Gold Medal Prize of the Istituto Lombardo Accademia di Scienze e Lettere. She became a member of the Academia Europaea in 2020. == References == == External links == Home page Gabriella Tarantello publications indexed by Google Scholar
|
Wikipedia:Gady Kozma#0
|
Gady Kozma is an Israeli mathematician. Kozma obtained his PhD in 2001 at Tel Aviv University under Alexander Olevskii. He is a scientist at the Weizmann Institute. In 2005, he demonstrated the existence of the scaling limit (that is, the limit for increasingly finer lattices) of the loop-erased random walk in three dimensions and its invariance under rotations and dilations. A loop-erased random walk is a random walk whose loops, formed whenever it intersects itself, are removed. It was introduced to the study of the self-avoiding random walk by Gregory Lawler in 1980, but is an independent model in a different universality class. In the two-dimensional case, conformal invariance was proved by Lawler, Oded Schramm and Wendelin Werner (using Schramm–Loewner evolution) in 2004. The cases of four and more dimensions were treated by Lawler; there the scaling limit is Brownian motion. Kozma treated the two-dimensional case in 2002 with a new method. In addition to probability theory, he also works on Fourier series. In 2008 he received the Erdős Prize and in 2010 the Rollo Davidson Prize. He is an editor of the Journal d'Analyse Mathématique. == References ==
|
Wikipedia:Gaetano Scorza#0
|
Bernardino Gaetano Scorza (29 September 1876, in Morano Calabro – 6 August 1939, in Rome) was an Italian mathematician working in algebraic geometry, whose work inspired the theory of Scorza varieties. == Publications == Scorza, Gaetano (1960), Opere scelte. Vol. I. (1899–1915), Pubblicate a cura dell'Unione Matematica Italiana e col contributo del Consiglio Nazionale delle Ricerche, Rome: Edizioni cremonese, MR 0111670 Scorza, Gaetano (1961), Opere Scelte. Vol. II. (1915–1919), Pubblicate a cura dell'Unione Matematica Italiana e col contributo del Consiglio Nazionale delle Ricerche, Rome: Edizioni cremonese, MR 0124997 Scorza, Gaetano (1962), Opere scelte. Vol. III: (1920–1939), Pubblicate a cura dell'Unione Matematica Italiana e col contributo del Consiglio Nazionale delle Ricerche, Rome: Edizioni cremonese, MR 0189973 == References == Giacardi, Livia, "Gaetano Scorza, Biographical sketch", The First Century of the International Commission on Mathematical Education
|
Wikipedia:Galactic algorithm#0
|
A galactic algorithm is an algorithm with record-breaking theoretical (asymptotic) performance, but which is not used due to practical constraints. Typical reasons are that the performance gains only appear for problems that are so large they never occur, or the algorithm's complexity outweighs a relatively small gain in performance. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth. == Possible use cases == Even if they are never used in practice, galactic algorithms may still contribute to computer science: An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms. See, for example, communication channel capacity, below. Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical. See, for example, low-density parity-check codes, below. An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms (see, for example, Reingold's algorithm for connectivity in undirected graphs). As Lipton states: This alone could be important and often is a great reason for finding such algorithms. For example, if tomorrow there were a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound, that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring. Similarly, a hypothetical algorithm for the Boolean satisfiability problem with a large but polynomial time bound, such as Θ(n^(2^100)), although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems. 
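The crossover point mentioned above can be made concrete with a small numerical sketch. The cost models below are invented for illustration (a low-constant O(n^2) method against a hypothetical O(n log n) method burdened with a 10^6 constant factor); they are not taken from any real algorithm:

```python
import math

def crossover(simple_cost, fancy_cost, n_max=10**9):
    """Find the smallest tested problem size at which the asymptotically
    better (but high-constant) algorithm overtakes the simple one.
    A doubling search keeps this cheap even for large crossovers."""
    n = 1
    while n <= n_max:
        if fancy_cost(n) < simple_cost(n):
            return n
        n *= 2
    return None  # no crossover within range: "galactic" territory

# Hypothetical cost models (constants chosen for illustration only):
simple = lambda n: n**2                             # low-constant O(n^2)
fancy = lambda n: 1e6 * n * math.log2(max(n, 2))    # high-constant O(n log n)

n_star = crossover(simple, fancy)
# Below n_star the simple algorithm wins despite its worse asymptotics;
# a larger constant pushes n_star beyond any realistic input size.
```

With a constant of 10^6 the crossover lands in the tens of millions, still reachable; inflating the constant pushes it past any data set that will ever exist, which is exactly what makes an algorithm galactic.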
== Examples == === Integer multiplication === An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits." === Primality testing === The AKS primality test is galactic. It is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is prime. In particular, it is provably polynomial-time, deterministic, and unconditionally correct. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead. ECPP in practice runs much faster than AKS, but it has never been proven to be polynomial time. The Miller–Rabin test is also much faster than AKS, but produces only a probabilistic result. However, the probability of error can be driven down to arbitrarily small values (say < 10^−100), good enough for practical purposes. There is also a deterministic version of the Miller–Rabin test, which runs in polynomial time over all inputs, but its correctness depends on the generalized Riemann hypothesis (which is widely believed, but not proven). The existence of these (much) faster alternatives means AKS is not used in practice. === Matrix multiplication === The first improvement over brute-force matrix multiplication (which needs O(n^3) multiplications) was the Strassen algorithm: a recursive algorithm that needs O(n^2.807) multiplications. 
This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing O(n^2.373) multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical." === Communication channel capacity === Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel. It requires assigning a random code word to every possible n-bit message, then decoding by finding the closest code word. If n is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any n big enough to beat existing codes is also completely impractical. These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity. === Sub-graphs === The problem of deciding whether a graph G contains H as a minor is NP-complete in general, but where H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H. 
Even the case of h = 4 cannot be reasonably computed, as the constant is greater than 2 pentated by 4, or 2 tetrated by 65536; that is, 2↑↑↑4 = 2↑↑65536, a power tower of 65536 twos. === Cryptographic breaks === In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key. For many cryptographic systems, breaks are known, but are still practically infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only 2^126 operations. Despite being impractical, theoretical breaks can provide insight into vulnerability patterns, and sometimes lead to discovery of exploitable breaks. === Traveling salesman problem === For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm, which produced a path at most 50% longer than the optimum. (Many other algorithms could usually do much better, but could not provably do so.) In 2020, a newer and much more complex algorithm was discovered that can beat this by 10^−34 percent. Although no one will ever switch to this algorithm for its very slight worst-case improvement, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one". === Hutter search === A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible proofs (by length of proof), looking for a proof of correctness for each algorithm. 
Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical. For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2^999 other potential proofs first. Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference. All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic. === Optimization === Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used. Still, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems. === Minimum spanning trees === The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph in O(m + n), where m is the number of edges and n is the number of nodes of the graph. However, the constant factor that is hidden by the big O notation is huge enough to make the algorithm impractical. An implementation is publicly available and, given the experimentally estimated implementation constants, it would only be faster than Borůvka's algorithm for graphs in which m + n > 9·10^151. === Hash tables === Researchers have found an algorithm that achieves the provably best-possible asymptotic performance in terms of time-space tradeoff. 
But it remains purely theoretical: "Despite the new hash table's unprecedented efficiency, no one is likely to try building it anytime soon. It's just too complicated to construct." and "in practice, constants really matter. In the real world, a factor of 10 is a game ender." === Connectivity in undirected graphs === Connectivity in undirected graphs (also known as USTCON, for Undirected Source-Target CONnectivity) is the problem of deciding if a path exists between two nodes in an undirected graph, or in other words, if they are in the same connected component. If you are allowed to use O(N) space, polynomial time solutions such as Dijkstra's algorithm have been known and used for decades. But for many years it was unknown if this could be done deterministically in O(log N) space (class L), though it was known to be possible with randomized algorithms (class NL). In 2004, a breakthrough paper by Omer Reingold showed that USTCON is in fact in L. However, despite the asymptotically better space requirement, this algorithm is galactic. The constant hidden by the O(log N) is so big that in any practical case it uses far more memory than the well known O(N) algorithms, plus it is exceedingly slow. So despite being a landmark in theory (more than 1000 citations as of 2025), it is never used in practice. === Low-density parity-check codes === Low-density parity-check codes, also known as LDPC or Gallager codes, are an example of an algorithm that was galactic when first developed, but became practical as computation improved. They were originally conceived by Robert G. Gallager in his doctoral dissertation at the Massachusetts Institute of Technology in 1960. 
Although their performance was much better than that of other codes of the time, reaching the Gilbert–Varshamov bound for linear codes, the codes were largely ignored as their iterative decoding algorithm was prohibitively computationally expensive for the hardware available. Renewed interest in LDPC codes emerged following the invention of the closely related turbo codes (1993), whose similarly iterative decoding algorithm outperformed other codes used at that time. LDPC codes were subsequently rediscovered in 1996. They are now used in many applications. == References ==
|
Wikipedia:Galia Dafni#0
|
Galia Devora Dafni is a mathematician specializing in harmonic analysis and function spaces. Educated in the US, she works in Canada as a professor of mathematics and statistics at Concordia University. She is also affiliated with the Centre de Recherches Mathématiques, where she is deputy director for publications and communication. == Education == Dafni lived in Texas as a teenager. After beginning her undergraduate studies at the University of Texas at Austin, Dafni transferred to Pennsylvania State University, where she earned a bachelor's degree in 1988 in mathematics and computer science, "with highest distinction and with honors in mathematics". She went to Princeton University for graduate study in mathematics, earning a master's degree in 1990 and completing her Ph.D. in 1993. Her doctoral dissertation, Hardy Spaces on Strongly Pseudoconvex Domains in C^n and Domains of Finite Type in C^2, was supervised by Elias M. Stein. == Career == After another year as an instructor at Princeton, Dafni continued through three postdoctoral positions: as Charles B. Morrey Jr. Assistant Professor of Mathematics at the University of California, Berkeley from 1994 to 1996, as Ralph Boas Assistant Professor of Mathematics at Northwestern University from 1996 to 1998, and as a postdoctoral fellow and research assistant professor at Concordia University from 1998 to 2000. Her move to Montreal and Concordia was motivated in part by a two-body problem with her husband, who also worked in Montreal. Finally, in 2000, she obtained a regular-rank assistant professorship at Concordia, supported by a 5-year NSERC University Faculty Award, through a program to support women in STEM. She obtained tenure there as an associate professor in 2005, and has since become a full professor. == Personal life == Dafni is married to Henri Darmon, a mathematician at another Montreal university, McGill University. 
They met in the early 1990s at Princeton, where Darmon was a postdoctoral researcher. == References == == External links == Home page Galia Dafni publications indexed by Google Scholar
|
Wikipedia:Gan Wee Teck#0
|
Gan Wee Teck (simplified Chinese: 颜维德; traditional Chinese: 顏維德; pinyin: Yán Wéi Dé; Jyutping: Ngaan4 Wai4 Dak1; Pe̍h-ōe-jī: Gân Ûi-tek; born 11 March 1972) is a Malaysian-born Singaporean mathematician. He is a Distinguished Professor of Mathematics at the National University of Singapore (NUS). He is known for his work on automorphic forms and representation theory in the context of the Langlands program, especially the theory of theta correspondence, the Gan–Gross–Prasad conjecture and the Langlands program for Brylinski–Deligne covering groups. == Biography == Though born in Malaysia, Gan grew up in Singapore and attended Pei Hwa Presbyterian Primary School, the Chinese High School, and Hwa Chong Junior College. He did his undergraduate studies at Churchill College, Cambridge University, followed by graduate studies at Harvard University, working under Benedict Gross and obtaining his Ph.D. in 1998. He was subsequently a faculty member at Princeton University (1998–2003) and University of California, San Diego (2003–2010) before moving to the National University of Singapore in 2010. == Contributions == With his collaborators, Gan has resolved several basic problems in the theory of theta correspondence (or Howe correspondence), such as the Howe duality conjecture and the Siegel–Weil formula. He has also made contributions to the Gross–Prasad conjecture, the local Langlands correspondence and the representation theory of metaplectic groups. == Awards and honours == Senior Wrangler, University of Cambridge (1994) American Mathematical Society Centennial Fellowship (2002–2003) Sloan Research Fellowship (Math, 2003) Invited speaker at the International Congress of Mathematicians (ICM) in 2014 (Number Theory section) President's Science Award 2017, Singapore Fellow of the Singapore National Academy of Science (2018) Asian Scientist 100, Asian Scientist, 2018 == Selected works == Gan, Wee Teck; Gross, Benedict H.; Prasad, Dipendra (2012). 
"Symplectic local root numbers, central critical L-values, and restriction problems in the representation theory of classical groups". Sur les conjectures de Gross et Prasad. Paris: Astérisque (Societé mathématique de France). pp. 1–109. ISBN 978-2-85629-348-5. OCLC 827954844. Gan, Wee Teck; Li, Wen-Wei (2018). "The Shimura–Waldspurger Correspondence for Mp(2n)". Simons Symposia. Cham: Springer International Publishing. arXiv:1612.05008. doi:10.1007/978-3-319-94833-1_6. ISBN 978-3-319-94832-4. ISSN 2365-9564. S2CID 119602159. Gan, Wee Teck; Takeda, Shuichiro (13 July 2015). "A proof of the Howe duality conjecture". Journal of the American Mathematical Society. 29 (2). American Mathematical Society (AMS): 473–493. arXiv:1407.1995. doi:10.1090/jams/839. ISSN 0894-0347. S2CID 942882. Gan, Wee Teck; Ichino, Atsushi (26 March 2013). "Formal degrees and local theta correspondence". Inventiones Mathematicae. 195 (3). Springer Science and Business Media LLC: 509–672. doi:10.1007/s00222-013-0460-5. ISSN 0020-9910. S2CID 253740793. Gan, Wee Teck; Savin, Gordan (31 October 2012). "Representations of metaplectic groups I: epsilon dichotomy and local Langlands correspondence". Compositio Mathematica. 148 (6). Wiley: 1655–1694. doi:10.1112/s0010437x12000486. ISSN 0010-437X. S2CID 17621652. Gan, Wee Teck; Qiu, Yannan; Takeda, Shuichiro (29 March 2014). "The regularized Siegel–Weil formula (the second term identity) and the Rallis inner product formula". Inventiones Mathematicae. 198 (3). Springer Science and Business Media LLC: 739–831. arXiv:1207.4709. Bibcode:2014InMat.198..739G. doi:10.1007/s00222-014-0509-0. ISSN 0020-9910. S2CID 253737500. == References ==
|
Wikipedia:Ganita Kaumudi#0
|
Ganita Kaumudi (Sanskrit: गणितकौमुदी) is a treatise on mathematics written by Indian mathematician Narayana Pandita in 1356. It was an arithmetical treatise, composed alongside his algebraic treatise "Bijganita Vatamsa". == Contents == Gaṇita Kaumudī contains about 475 verses of sūtra (rules) and 395 verses of udāharaṇa (examples). It is divided into 14 chapters (vyavahāra): === 1. Prakīrṇaka-vyavahāra === Weights and measures, length, area, volume, etc. It describes addition, subtraction, multiplication, division, square, square root, cube and cube root. The problems of linear and quadratic equations described here are more complex than in earlier works. 63 rules and 82 examples. === 2. Miśraka-vyavahāra === Mathematics pertaining to daily life: “mixture of materials, interest on a principal, payment in instalments, mixing gold objects with different purities and other problems pertaining to linear indeterminate equations for many unknowns”. 42 rules and 49 examples. === 3. Śreḍhī-vyavahāra === Arithmetic and geometric progressions, sequences and series. The generalization here was crucial for finding the infinite series for sine and cosine. 28 rules and 19 examples. === 4. Kṣetra-vyavahāra === Geometry. 149 rules and 94 examples. Includes special material on cyclic quadrilaterals, such as the “third diagonal”. === 5. Khāta-vyavahāra === Excavations. 7 rules and 9 examples. === 6. Citi-vyavahāra === Stacks. 2 rules and 2 examples. === 7. Rāśi-vyavahāra === Mounds of grain. 2 rules and 3 examples. === 8. Chāyā-vyavahāra === Shadow problems. 7 rules and 6 examples. === 9. Kuṭṭaka === Linear integer equations. 69 rules and 36 examples. === 10. Vargaprakṛti === Quadratic indeterminate equations (Pell's equation). 17 rules and 10 examples. Includes a variant of the Chakravala method. Ganita Kaumudi contains many results from continued fractions.
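The vargaprakṛti chapter treats equations of the form y² = Nx² + 1. A minimal sketch (our own illustration, using the standard continued-fraction method rather than Nārāyaṇa's chakravala variant; the function name is ours):

```python
import math

def solve_pell(n):
    """Smallest (x, y) with y*y - n*x*x == 1, found from the
    continued fraction of sqrt(n). n must not be a perfect square."""
    a0 = math.isqrt(n)
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    while p * p - n * q * q != 1:
        # next partial quotient of sqrt(n)
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        # update the convergent p/q
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return q, p

x, y = solve_pell(13)
print(x, y)   # 180 649; indeed 13 * 180**2 + 1 == 649**2
```

For N = 13 this yields the classic smallest solution x = 180, y = 649.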
In the text Narayana Pandita used the knowledge of simple recurring continued fraction in the solutions of indeterminate equations of the type n x 2 + k 2 = y 2 {\displaystyle nx^{2}+k^{2}=y^{2}} . === 11. Bhāgādāna === Contains factorization method, 11 rules and 7 examples. === 12. Rūpādyaṃśāvatāra === Contains rules for writing a fraction as a sum of unit fractions. 22 rules and 14 examples. Unit fractions were known in Indian mathematics in the Vedic period: the Śulba Sūtras give an approximation of √2 equivalent to 1 + 1 3 + 1 3 ⋅ 4 − 1 3 ⋅ 4 ⋅ 34 {\displaystyle 1+{\tfrac {1}{3}}+{\tfrac {1}{3\cdot 4}}-{\tfrac {1}{3\cdot 4\cdot 34}}} . Systematic rules for expressing a fraction as the sum of unit fractions had previously been given in the Gaṇita-sāra-saṅgraha of Mahāvīra (c. 850). Nārāyaṇa's Gaṇita-kaumudi gave a few more rules: the section bhāgajāti in the twelfth chapter named aṃśāvatāra-vyavahāra contains eight rules. The first few are: Rule 1. To express 1 as a sum of n unit fractions: 1 = 1 1 ⋅ 2 + 1 2 ⋅ 3 + 1 3 ⋅ 4 + ⋯ + 1 ( n − 1 ) ⋅ n + 1 n {\displaystyle 1={\frac {1}{1\cdot 2}}+{\frac {1}{2\cdot 3}}+{\frac {1}{3\cdot 4}}+\dots +{\frac {1}{(n-1)\cdot n}}+{\frac {1}{n}}} Rule 2. To express 1 as a sum of n unit fractions: 1 = 1 2 + 1 3 + 1 3 2 + ⋯ + 1 3 n − 2 + 1 2 ⋅ 3 n − 2 {\displaystyle 1={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{3^{2}}}+\dots +{\frac {1}{3^{n-2}}}+{\frac {1}{2\cdot 3^{n-2}}}} Rule 3. To express a fraction p / q {\displaystyle p/q} as a sum of unit fractions: Pick an arbitrary number i such that ( q + i ) / p {\displaystyle (q+i)/p} is an integer r, write p q = 1 r + i q r {\displaystyle {\frac {p}{q}}={\frac {1}{r}}+{\frac {i}{qr}}} and find successive denominators in the same way by operating on the new fraction. 
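Rule 3 can be sketched in code, here always picking the smallest admissible i (our own illustration; the function name is ours, not Nārāyaṇa's notation):

```python
from fractions import Fraction

def rule3_unit_fractions(p, q):
    """Decompose p/q (0 < p < q) into unit fractions by Rule 3,
    choosing at each step the smallest i with p | (q + i)."""
    frac, denoms = Fraction(p, q), []
    while frac > 0:
        p, q = frac.numerator, frac.denominator
        i = (-q) % p                # smallest i making (q + i)/p an integer
        r = (q + i) // p
        denoms.append(r)
        frac -= Fraction(1, r)      # the remainder is i/(q*r)
    return denoms

print(rule3_unit_fractions(4, 13))   # [4, 18, 468]
```

So 4/13 = 1/4 + 1/18 + 1/468.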
If i is always chosen to be the smallest such integer, this is equivalent to the greedy algorithm for Egyptian fractions, but the Gaṇita-Kaumudī's rule does not give a unique procedure, and instead states evam iṣṭavaśād bahudhā ("Thus there are many ways, according to one's choices.") Rule 4. Given n {\displaystyle n} arbitrary numbers k 1 , k 2 , … , k n {\displaystyle k_{1},k_{2},\dots ,k_{n}} , 1 = ( k 2 − k 1 ) k 1 k 2 ⋅ k 1 + ( k 3 − k 2 ) k 1 k 3 ⋅ k 2 + ⋯ + ( k n − k n − 1 ) k 1 k n ⋅ k n − 1 + 1 ⋅ k 1 k n {\displaystyle 1={\frac {(k_{2}-k_{1})k_{1}}{k_{2}\cdot k_{1}}}+{\frac {(k_{3}-k_{2})k_{1}}{k_{3}\cdot k_{2}}}+\dots +{\frac {(k_{n}-k_{n-1})k_{1}}{k_{n}\cdot k_{n-1}}}+{\frac {1\cdot k_{1}}{k_{n}}}} Rule 5. To express 1 as the sum of fractions with given numerators a 1 , a 2 , … , a n {\displaystyle a_{1},a_{2},\dots ,a_{n}} : Calculate i 1 , i 2 , … , i n {\displaystyle i_{1},i_{2},\dots ,i_{n}} as i 1 = a 1 + 1 {\displaystyle i_{1}=a_{1}+1} , i 2 = a 2 + i 1 {\displaystyle i_{2}=a_{2}+i_{1}} , i 3 = a 3 + i 2 {\displaystyle i_{3}=a_{3}+i_{2}} , and so on, and write 1 = a 1 1 ⋅ i 1 + a 2 i 1 ⋅ i 2 + a 3 i 2 ⋅ i 3 + ⋯ + a n i n − 1 ⋅ i n + 1 i n {\displaystyle 1={\frac {a_{1}}{1\cdot i_{1}}}+{\frac {a_{2}}{i_{1}\cdot i_{2}}}+{\frac {a_{3}}{i_{2}\cdot i_{3}}}+\dots +{\frac {a_{n}}{i_{n-1}\cdot i_{n}}}+{\frac {1}{i_{n}}}} === 13. Aṅka-pāśa === Combinatorics. 97 rules and 45 examples. Generating permutations (including of a multiset), combinations, integer partitions, binomial coefficients, generalized Fibonacci numbers. Narayana Pandita noted the equivalence of the figurate numbers and the formulae for the number of combinations of different things taken so many at a time. The book contains a rule to determine the number of permutations of n objects and a classical algorithm for finding the next permutation in lexicographic ordering though computational methods have advanced well beyond that ancient algorithm. 
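The classical lexicographic next-permutation step mentioned above can be sketched as follows (our own illustration; this is the same algorithm found in modern libraries such as C++'s std::next_permutation):

```python
def next_permutation(seq):
    """Return the next arrangement of seq in lexicographic order,
    or None if seq is already the last one. Works for multisets too."""
    a = list(seq)
    # 1. find the rightmost ascent a[i] < a[i+1]
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return None                 # sequence is in descending order
    # 2. find the rightmost element exceeding a[i] and swap
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # 3. reverse the descending tail
    a[i + 1:] = reversed(a[i + 1:])
    return a

print(next_permutation([1, 2, 3]))  # [1, 3, 2]
```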
Donald Knuth describes many algorithms dedicated to efficient permutation generation and discusses their history in his book The Art of Computer Programming. === 14. Bhadragaṇita === Magic squares. 60 rules and 17 examples. == Editions == "Translation of Ganita Kaumudi with Rationale in modern mathematics and historical notes" by S L Singh, Principal, Science College, Gurukul Kangri Vishwavidyalaya, Haridwar Ganita Kaumudi, Volume 1–2, Nārāyana Pandita (Issue 57 of Princess of Wales Sarasvati Bhavana Granthamala: Abhinava nibandhamālā Padmakara Dwivedi Jyautishacharya 1936) == References == Notes Bibliography Kusuba, Takanori (2004), "Indian Rules for the Decomposition of Fractions", in Charles Burnett; Jan P. Hogendijk; Kim Plofker; et al. (eds.), Studies in the History of the Exact Sciences in Honour of David Pingree, Brill, ISBN 9004132023, ISSN 0169-8729 M. D. Srinivas, M. S. Sriram, K. Ramasubramanian, Mathematics in India - From Vedic Period to Modern Times. Lectures 25–27. == External links == Ganita Kaumudi Part 1 (1936) Ganita Kaumudi Part 2 (1942) Ganita Kaumudi and the Continued Fraction
|
Wikipedia:Ganitagannadi#0
|
Gaṇitagannaḍi (Mirror of Mathematics) is a commentary in Kannada on Viddṇācārya's Vārșikatantra, composed by Śaṅkaranārāyaṇa Joisāru in 1604. Viddṇācārya's Vārșikatantra is a karaṇa text written before 1370 CE. The book, written in Nandinagari script, is a karaṇa text, that is, a book which explains the various computations in astronomy, especially those related to the preparation of Panchangam-s (calendars). Even though manuscripts of Kannada commentaries of several Sanskrit texts on astronomy like Sūryasiddhānta have been identified, Gaṇitagannaḍi is the first such commentary ever to be translated into English, printed and published. Gaṇitagannaḍi was translated into English by B. S. Shylaja, a scientist associated with the Jawaharlal Nehru Planetarium, Bengaluru, and Seetharama Javagal, and was published in 2021. It was Seetharama Javagal who brought to light the palm-leaf manuscript of Gaṇitagannaḍi in his grandfather's collection. The most notable feature of the book from an astronomical point of view is that, "in the third chapter (Chāyāddhāya), all the computations are based on a single parameter, namely the shadow length. Other quantities are based on Dyu-nishardha-Karna, to be obtained daily. This includes vishuvat-karna and vishuvatchaya. This clearly demonstrates the importance of actual observations. These traditional astronomers always advocated drig-ganita-aikya (that is, the concordance between observation and computation)." == Outline of the book == The first chapter of the book deals with the procedure for getting kalidina, starting from the kalivarsa count, and the method for getting the mean positions for planets. The second chapter provides the method for deriving the true positions of all planets, perigees and the nodes. The third chapter describes the procedures of tripraśnādhikāra in Sūryasiddhānta. The fourth chapter is devoted to eclipses.
The fifth chapter describes a graphical method for obtaining the timings, magnitudes, and points of ingress. The next three chapters are very brief. The last chapter describes the determination of the elevation of the cusps of the crescent moon. == References ==
|
Wikipedia:Garry Tee#0
|
Garry John Tee (28 March 1932 – 18 February 2024) was a New Zealand mathematician and computer scientist. == Biography == Garry John Tee was born in Whanganui on 28 March 1932. Tee attended Seddon Memorial Technical College (now Auckland University of Technology). In 1954, he was awarded a Master of Science with First Class Honours at the Auckland University College (now the University of Auckland). After graduating, Tee worked as a computer for an oil-prospecting team in Australia. In 1958, Tee was a mathematician at the English Electric Company, contributing to the programming of the DEUCE computer. In 1964, Tee helped establish the Department of Mathematics at the University of Lancaster and spent 1965 as a visiting scholar at the Department of Computer Science, Stanford University. By 1968, Tee had returned to the Department of Mathematics at the University of Auckland, playing a key role in founding its Department of Computer Science. Starting in 1969, Tee was also an active member of the Auckland University Underwater Club. In 1971, Tee pursued further studies under Richard Bellman at the University of Southern California. However, Bellman's illness and subsequent death led Tee to return to Auckland without completing his doctorate. In 2003, Tee received an honorary doctorate from the Auckland University of Technology. Tee died in Auckland on 18 February 2024. == Publications == Tee published widely on numerical analysis, Charles Babbage, early women in mathematics and computing, and history of mathematics, computer science, and science more broadly. Publications include: Tee, Garry J. "A novel finite-difference approximation to the biharmonic operator." The Computer Journal 6, no. 2 (1963): 177–192. Tee, Garry J. "Evidence for the Chinese Origin of the Jaguar Motif in Chavin Art". Asian Perspectives 21, no. 1 (1978): 27–29. Tee, Garry J. "The Heritage of Charles Babbage in Australasia." Annals of the History of Computing 5, no. 1 (1983): 45–60.
Tee, Garry J. "A Calendar of the Correspondence of Charles Darwin, 1821–1882." Journal of the Royal Society of New Zealand 15, no. 3 (1985): 341–343. Tee, Garry J. "Mathematics in the Pacific Basin". The British Journal for the History of Science 21, no. 4 (1988): 401–417. Tee, Garry J. "A Note on Bechmann's Approximate Construction of π, Suggested by a Deleted Sketch in Villard de Honnecourt's Manuscript." The British Journal for the History of Science 22, no. 2 (1989): 241–242. Tee, Garry J. "Prime powers of zeros of monic polynomials with integer coefficients." The Fibonacci Quarterly 32, no. 3 (1994): 277–283. Tee, Garry J. "Relics of Davy and Faraday in New Zealand". Notes and Records of the Royal Society of London 52, no. 1 (1998): 93–102. Tee, Garry J. "Math Bite: Further Generalizations of a Curiosity That Feynman Remembered All His Life." Mathematics Magazine 72, no. 1 (1999): 44. Tee, Garry J. "Eigenvectors of block circulant and alternating circulant matrices." New Zealand Journal of Mathematics 36, no. 8 (2007): 195–211. == References ==
|
Wikipedia:Garside element#0
|
In mathematics, a Garside element is an element of an algebraic structure such as a monoid that has several desirable properties. Formally, if M is a monoid, then an element Δ of M is said to be a Garside element if the set of all right divisors of Δ, { r ∈ M ∣ for some x ∈ M , Δ = x r } , {\displaystyle \{r\in M\mid {\text{for some }}x\in M,\Delta =xr\},} is the same set as the set of all left divisors of Δ, { ℓ ∈ M ∣ for some x ∈ M , Δ = ℓ x } , {\displaystyle \{\ell \in M\mid {\text{for some }}x\in M,\Delta =\ell x\},} and this set generates M. A Garside element is in general not unique: any power of a Garside element is again a Garside element. == Garside monoid and Garside group == A Garside monoid is a monoid with the following properties: Finitely generated and atomic; Cancellative; The partial order relations of divisibility are lattices; There exists a Garside element. A Garside monoid satisfies the Ore condition for multiplicative sets and hence embeds in its group of fractions: such a group is a Garside group. A Garside group is biautomatic and hence has soluble word problem and conjugacy problem. Examples of such groups include braid groups and, more generally, Artin groups of finite Coxeter type. The name was coined by Patrick Dehornoy and Luis Paris to mark the work of Frank Arnold Garside (1915–1988) on the conjugacy problem for braid groups; Garside was a teacher at Magdalen College School, Oxford, who served as Lord Mayor of Oxford in 1984–1985. == References == Benson Farb, Problems on mapping class groups and related topics (Volume 74 of Proceedings of symposia in pure mathematics) AMS Bookstore, 2006, ISBN 0-8218-3838-5, p. 357 Patrick Dehornoy, Groupes de Garside, Annales Scientifiques de l'École Normale Supérieure (4) 35 (2002) 267-306. MR2003f:20067. Matthieu Picantin, "Garside monoids vs divisibility monoids", Math. Structures Comput. Sci. 15 (2005) 231-242. MR2006d:20102.
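A toy illustration (our own code, not from the literature): in the positive braid monoid on three strands, with the single relation σ₁σ₂σ₁ = σ₂σ₁σ₂, the element Δ = σ₁σ₂σ₁ is the classical Garside element. Brute force over short words, with a braid word written as a string over "12", confirms that its left and right divisors coincide and number six, one for each permutation in S₃:

```python
from itertools import product

def braid_class(w):
    """Equivalence class of a positive word in the 3-strand braid
    monoid, closing under the relation 121 <-> 212."""
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        for i in range(len(u) - 2):
            if u[i:i + 3] in ("121", "212"):
                v = u[:i] + ("212" if u[i:i + 3] == "121" else "121") + u[i + 3:]
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return frozenset(seen)

delta = braid_class("121")
# all positive words of length <= 3 (the relation preserves word length)
words = ["".join(t) for n in range(4) for t in product("12", repeat=n)]
left = {braid_class(x) for x in words
        if any(braid_class(x + r) == delta for r in words)}
right = {braid_class(x) for x in words
         if any(braid_class(l + x) == delta for l in words)}
assert left == right        # same left and right divisors of delta
assert len(left) == 6       # the six "simple" elements, one per element of S3
print(sorted(min(c) for c in left))
```

The divisors found also contain both generators, so they generate the monoid, as the definition requires.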
|
Wikipedia:Gaston Albert Gohierre de Longchamps#0
|
In geometry, the de Longchamps point of a triangle is a triangle center named after French mathematician Gaston Albert Gohierre de Longchamps. It is the reflection of the orthocenter of the triangle about the circumcenter. == Definition == Let the given triangle have vertices A {\displaystyle A} , B {\displaystyle B} , and C {\displaystyle C} , opposite the respective sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , as is the standard notation in triangle geometry. In the 1886 paper in which he introduced this point, de Longchamps initially defined it as the center of a circle Δ {\displaystyle \Delta } orthogonal to the three circles Δ a {\displaystyle \Delta _{a}} , Δ b {\displaystyle \Delta _{b}} , and Δ c {\displaystyle \Delta _{c}} , where Δ a {\displaystyle \Delta _{a}} is centered at A {\displaystyle A} with radius a {\displaystyle a} and the other two circles are defined symmetrically. De Longchamps then also showed that the same point, now known as the de Longchamps point, may be equivalently defined as the orthocenter of the anticomplementary triangle of A B C {\displaystyle ABC} , and that it is the reflection of the orthocenter of A B C {\displaystyle ABC} around the circumcenter. The Steiner circle of a triangle is concentric with the nine-point circle and has radius 3/2 the circumradius of the triangle; the de Longchamps point is the homothetic center of the Steiner circle and the circumcircle. == Additional properties == As the reflection of the orthocenter around the circumcenter, the de Longchamps point belongs to the line through both of these points, which is the Euler line of the given triangle. Thus, it is collinear with all the other triangle centers on the Euler line, which along with the orthocenter and circumcenter include the centroid and the center of the nine-point circle. The de Longchamp point is also collinear, along a different line, with the incenter and the Gergonne point of its triangle. 
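A numeric sketch (arbitrary example coordinates; the helper names are ours) confirming that the reflection 2O − H of the orthocenter H about the circumcenter O coincides with the orthocenter of the anticomplementary triangle:

```python
def circumcenter(A, B, C):
    # standard perpendicular-bisector formula in Cartesian coordinates
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def orthocenter(A, B, C):
    # vector identity on the Euler line: H = A + B + C - 2O
    ox, oy = circumcenter(A, B, C)
    return (A[0] + B[0] + C[0] - 2 * ox, A[1] + B[1] + C[1] - 2 * oy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O, H = circumcenter(A, B, C), orthocenter(A, B, C)
L = (2 * O[0] - H[0], 2 * O[1] - H[1])      # de Longchamps point

# anticomplementary triangle: each vertex reflected in the opposite midpoint
A2 = (B[0] + C[0] - A[0], B[1] + C[1] - A[1])
B2 = (C[0] + A[0] - B[0], C[1] + A[1] - B[1])
C2 = (A[0] + B[0] - C[0], A[1] + B[1] - C[1])
H2 = orthocenter(A2, B2, C2)
assert abs(L[0] - H2[0]) < 1e-9 and abs(L[1] - H2[1]) < 1e-9
print(O, H, L)   # (2.0, 1.0) (1.0, 1.0) (3.0, 1.0)
```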
The three circles centered at A {\displaystyle A} , B {\displaystyle B} , and C {\displaystyle C} , with radii s − a {\displaystyle s-a} , s − b {\displaystyle s-b} , and s − c {\displaystyle s-c} respectively (where s {\displaystyle s} is the semiperimeter) are mutually tangent, and there are two more circles tangent to all three of them, the inner and outer Soddy circles; the centers of these two circles also lie on the same line with the de Longchamp point and the incenter. The de Longchamp point is the point of concurrence of this line with the Euler line, and with three other lines defined in a similar way as the line through the incenter but using instead the three excenters of the triangle. The Darboux cubic may be defined from the de Longchamps point, as the locus of points X {\displaystyle X} such that X {\displaystyle X} , the isogonal conjugate of X {\displaystyle X} , and the de Longchamps point are collinear. It is the only cubic curve invariant of a triangle that is both isogonally self-conjugate and centrally symmetric; its center of symmetry is the circumcenter of the triangle. The de Longchamps point itself lies on this curve, as does its reflection the orthocenter. == References == == External links == Weisstein, Eric W. "de Longchamps Point". MathWorld.
|
Wikipedia:Gaston N'Guérékata#0
|
Gaston Mandata Nguérékata (born 20 May 1953) is a Central African mathematician and politician. He was the first Central African to earn a Ph.D. in mathematics. == Early life and education == Nguérékata was born in Paoua on May 20, 1953. He completed his primary education at Ecole Sous Préfecturale de Paoua. As an elementary school student, he became the accountant at his father's store, and soon discovered that he loved mathematics. Afterward, he attended high school at Lycee Moderne de Berberati and then at Lycée des Rapides in Bangui. During his year at Lycée des Rapides in Bangui, he was appointed captain of the school's basketball team. In 1972, he graduated from high school as the top-ranked student nationwide on the Baccalaureate Series C, the national secondary-school examination for the mathematics track. Upon completing high school, he pursued his higher education, through to a doctoral degree, at the University of Montreal on a Canadian government scholarship. He obtained a Ph.D. degree in 1980 with the dissertation titled Quelques Remarques sur les Equations Differentielles Abstraites. He then pursued postdoctoral studies at the University of California, Berkeley. == Career == === Academic career === From 1976 to 1980, he worked as a teaching assistant at the University of Montreal, and from 1978 to 1980 he was also an instructor at the Université du Québec à Trois-Rivières. He returned to the Central African Republic in December 1980 and served as Vice Rector of the University of Bangui from 1981 to 1983 and as acting rector from 1983 to 1984. After he resigned as Andre Kolingba's spokesperson, he again served as the University of Bangui's Vice Rector from 1994 to 1995. Upon resigning as vice-rector, he moved to the US and taught at Daemen University. ==== Morgan State University ==== In 1996, he joined Morgan State University as a lecturer.
One year later, he was promoted to associate professor. In 2003, he was promoted to professor, a rank he held until 2017, when he became a University Distinguished Professor. === Political career === Nguérékata joined the RDC and served as high commissioner, then deputy minister, of science, technology, and environment from 1987 to 1992. From 1992 to 1993, he was the spokesperson for Andre Kolingba. He also participated in the Earth Summit and served as spokesperson for the African Ministerial Group. He founded the Parti Pour la Renaissance Centrafricaine (PARC) in 2013. In October 2013, he called for Michel Djotodia's resignation, citing his incompetence in leading the country. In 2014, he launched several initiatives in the Central African Republic, such as providing free Wi-Fi to the University of Bangui and launching the project Cahier de doléances (Book of Grievances) in Mbaiki. On 27 February 2015, he signed the Central African Republic national reconciliation agreement of the Catholic Community of Sant’Egidio in Rome. He ran in the 2015–16 Central African general election as the presidential candidate of PARC; he earned 22,391 votes and did not advance to the second round. In 2022, he resigned from his post as chairman of PARC, citing personal issues. == Personal life == Nguérékata speaks French and English. == Awards == Commander, Order of Central African Merit - 1991. Chevalier, Legion of Honour - 1985. Officer, Central African Order of Academic Palms - 1984. == Bibliography == === Articles === Ezzinbi, Khalil; Fatajou, Samir; N’guérékata, Gaston Mandata (15 February 2009). "Pseudo-almost-automorphic solutions to some neutral partial functional differential equations in Banach spaces". Nonlinear Analysis: Theory, Methods & Applications. 70 (4): 1641–1647. Ezzinbi, Khalil; Fatajou, Samir; N'Guérékata, Gaston Mandata (15 March 2009). "Pseudo almost automorphic solutions for dissipative differential equations in Banach spaces".
Journal of Mathematical Analysis and Applications. 351 (2): 765–772. Ezzinbi, Khalil; N'Guérékata, Gaston Mandata (1 April 2007). "Almost automorphic solutions for some partial functional differential equations". Journal of Mathematical Analysis and Applications. 328 (1): 344–358. Baillon, Jean-Bernard; Blot, Joël; N’Guerekata, Gaston Mandata; Pennequin, Denis (2006). "On C(n)-almost periodic solutions to some nonautonomous differential equations in Banach spaces" (PDF). Commentationes Mathematicae. 46 (2): 263–273. N'Guerekata, Gaston Mandata (1986). "Notes on almost-periodicity in topological vector spaces" (PDF). International Journal of Mathematics and Mathematical Sciences. 9: 201–204. N'Guerekata, Gaston Mandata (1984). "Almost-periodicity in linear topological spaces and applications to abstract differential equations" (PDF). International Journal of Mathematics and Mathematical Sciences. 7 (3): 529–540. N'Guerekata, Gaston Mandata (1983). "Quelques remarques sur les fonctions asymptotiquement presque automorphes". Les Annales des Sciences Mathématiques du Québec. 7 (2): 185–191. N'Guerekata, Gaston Mandata (1981). "Sur les fonctions presqu'automorphes d'équations différentielles abstraites" (PDF). Ann. Sci. Math. Québec. 5 (1): 69–79. === Books === Abbas, Saïd; Benchohra, Mouffak; N'Guérékata, Gaston Mandata (2012). Topics in Fractional Differential Equations. Springer. ISBN 0-8218-2793-6. Liu, Fengshan; N’Guerekata, Gaston Mandata (2008). Discrete and Applied Mathematics. Nova Science Publishers. ISBN 978-1-60021-810-1. Liu, James H; N'guerekata, Gaston Mandata; Nguyen, Van Minh (2008). Topics On Stability And Periodicity In Abstract Differential Equations. Nova Science Publishers. ISBN 978-981-281-823-2. N'Guérékata, Gaston Mandata (2008). Focus on Evolution Equations. Nova Publishers. ISBN 978-1-60021-342-7. N'Guérékata, Gaston Mandata (2008). Leading-Edge Research on Evolution Equations. Nova Publishers. ISBN 978-1-60456-226-2.
N'Guérékata, Gaston Mandata (2008). Trends in Evolution Equation Research. Nova Publishers. ISBN 978-1-60456-270-5. Liu, Fengshan; Nashed, Zuhair; N'Guérékata, Gaston Mandata; Pokrajac, Dragoljub; Qiao, Zhijun; Shi, Xiquan; Xia, Xianggen (2006). Advances in Applied and Computational Mathematics. Nova Science Publishers. ISBN 1-60021-358-8. N'Guérékata, Gaston Mandata (2005). Topics in Almost Automorphy. Springer. ISBN 0-387-22846-2. N'Guérékata, Gaston Mandata (2002). Introductory Algebra. Kendall/Hunt Publishing Company. ISBN 0-7872-9401-2. N'Guérékata, Gaston Mandata (2002). PreCalculus. Kendall/Hunt Publishing Company. ISBN 0-7872-9404-7. N'Guérékata, Gaston Mandata (2001). Almost automorphic and almost periodic functions in abstract spaces. Springer Science & Business Media. ISBN 978-0-306-46686-1. == References ==
|
Wikipedia:Gaussian integral#0
|
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f ( x ) = e − x 2 {\displaystyle f(x)=e^{-x^{2}}} over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is ∫ − ∞ ∞ e − x 2 d x = π . {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.} Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function. Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for ∫ e − x 2 d x , {\displaystyle \int e^{-x^{2}}\,dx,} but the definite integral ∫ − ∞ ∞ e − x 2 d x {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx} can be evaluated. The definite integral of an arbitrary Gaussian function is ∫ − ∞ ∞ e − a ( x + b ) 2 d x = π a . 
{\displaystyle \int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.} == Computation == === By polar coordinates === A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that: ( ∫ − ∞ ∞ e − x 2 d x ) 2 = ∫ − ∞ ∞ e − x 2 d x ∫ − ∞ ∞ e − y 2 d y = ∫ − ∞ ∞ ∫ − ∞ ∞ e − ( x 2 + y 2 ) d x d y . {\displaystyle \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\int _{-\infty }^{\infty }e^{-y^{2}}\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dx\,dy.} Consider the function e − ( x 2 + y 2 ) = e − r 2 {\displaystyle e^{-\left(x^{2}+y^{2}\right)}=e^{-r^{2}}} on the plane R 2 {\displaystyle \mathbb {R} ^{2}} , and compute its integral two ways: on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: ( ∫ e − x 2 d x ) 2 ; {\displaystyle \left(\int e^{-x^{2}}\,dx\right)^{2};} on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be π {\displaystyle \pi } Comparing these two computations yields the integral, though one should take care about the improper integrals involved. 
∬ R 2 e − ( x 2 + y 2 ) d x d y = ∫ 0 2 π ∫ 0 ∞ e − r 2 r d r d θ = 2 π ∫ 0 ∞ r e − r 2 d r = 2 π ∫ − ∞ 0 1 2 e s d s s = − r 2 = π ∫ − ∞ 0 e s d s = lim x → − ∞ π ( e 0 − e x ) = π , {\displaystyle {\begin{aligned}\iint _{\mathbb {R} ^{2}}e^{-\left(x^{2}+y^{2}\right)}dx\,dy&=\int _{0}^{2\pi }\int _{0}^{\infty }e^{-r^{2}}r\,dr\,d\theta \\[6pt]&=2\pi \int _{0}^{\infty }re^{-r^{2}}\,dr\\[6pt]&=2\pi \int _{-\infty }^{0}{\tfrac {1}{2}}e^{s}\,ds&&s=-r^{2}\\[6pt]&=\pi \int _{-\infty }^{0}e^{s}\,ds\\[6pt]&=\lim _{x\to -\infty }\pi \left(e^{0}-e^{x}\right)\\[6pt]&=\pi ,\end{aligned}}} where the factor of r is the Jacobian determinant which appears because of the transform to polar coordinates (r dr dθ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = −r2, so ds = −2r dr. Combining these yields ( ∫ − ∞ ∞ e − x 2 d x ) 2 = π , {\displaystyle \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\pi ,} so ∫ − ∞ ∞ e − x 2 d x = π . {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.}
{\displaystyle \int _{-\infty }^{\infty }\left|e^{-x^{2}}\right|dx<\int _{-\infty }^{-1}-xe^{-x^{2}}\,dx+\int _{-1}^{1}e^{-x^{2}}\,dx+\int _{1}^{\infty }xe^{-x^{2}}\,dx<\infty .} So we can compute ∫ − ∞ ∞ e − x 2 d x {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx} by just taking the limit lim a → ∞ I ( a ) . {\displaystyle \lim _{a\to \infty }I(a).} Taking the square of I ( a ) {\displaystyle I(a)} yields I ( a ) 2 = ( ∫ − a a e − x 2 d x ) ( ∫ − a a e − y 2 d y ) = ∫ − a a ( ∫ − a a e − y 2 d y ) e − x 2 d x = ∫ − a a ∫ − a a e − ( x 2 + y 2 ) d y d x . {\displaystyle {\begin{aligned}I(a)^{2}&=\left(\int _{-a}^{a}e^{-x^{2}}\,dx\right)\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\\[6pt]&=\int _{-a}^{a}\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\,e^{-x^{2}}\,dx\\[6pt]&=\int _{-a}^{a}\int _{-a}^{a}e^{-\left(x^{2}+y^{2}\right)}\,dy\,dx.\end{aligned}}} Using Fubini's theorem, the above double integral can be seen as an area integral ∬ [ − a , a ] × [ − a , a ] e − ( x 2 + y 2 ) d ( x , y ) , {\displaystyle \iint _{[-a,a]\times [-a,a]}e^{-\left(x^{2}+y^{2}\right)}\,d(x,y),} taken over a square with vertices {(−a, a), (a, a), (a, −a), (−a, −a)} on the xy-plane. Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I ( a ) 2 {\displaystyle I(a)^{2}} , and similarly the integral taken over the square's circumcircle must be greater than I ( a ) 2 {\displaystyle I(a)^{2}} . 
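The incircle/circumcircle sandwich can be checked numerically for a concrete value such as a = 2, using the closed forms π(1 − e^{−a²}) and π(1 − e^{−2a²}) for the two disk integrals derived just below (the quadrature step is our own):

```python
import math

a, n = 2.0, 100_000
h = 2 * a / n
# midpoint rule for I(a) = integral of exp(-x^2) over [-a, a]
I = h * sum(math.exp(-(-a + (k + 0.5) * h) ** 2) for k in range(n))

lower = math.pi * (1 - math.exp(-a * a))        # incircle, radius a
upper = math.pi * (1 - math.exp(-2 * a * a))    # circumcircle, radius a*sqrt(2)
assert lower < I * I < upper
print(lower, I * I, upper)
```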
The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates: x = r cos θ , y = r sin θ {\displaystyle {\begin{aligned}x&=r\cos \theta ,&y&=r\sin \theta \end{aligned}}} J ( r , θ ) = [ ∂ x ∂ r ∂ x ∂ θ ∂ y ∂ r ∂ y ∂ θ ] = [ cos θ − r sin θ sin θ − r cos θ ] {\displaystyle \mathbf {J} (r,\theta )={\begin{bmatrix}{\dfrac {\partial x}{\partial r}}&{\dfrac {\partial x}{\partial \theta }}\\[1em]{\dfrac {\partial y}{\partial r}}&{\dfrac {\partial y}{\partial \theta }}\end{bmatrix}}={\begin{bmatrix}\cos \theta &-r\sin \theta \\\sin \theta &{\hphantom {-}}r\cos \theta \end{bmatrix}}} d ( x , y ) = | J ( r , θ ) | d ( r , θ ) = r d ( r , θ ) . {\displaystyle d(x,y)=\left|J(r,\theta )\right|d(r,\theta )=r\,d(r,\theta ).} ∫ 0 2 π ∫ 0 a r e − r 2 d r d θ < I 2 ( a ) < ∫ 0 2 π ∫ 0 a 2 r e − r 2 d r d θ . {\displaystyle \int _{0}^{2\pi }\int _{0}^{a}re^{-r^{2}}\,dr\,d\theta <I^{2}(a)<\int _{0}^{2\pi }\int _{0}^{a{\sqrt {2}}}re^{-r^{2}}\,dr\,d\theta .} (See to polar coordinates from Cartesian coordinates for help with polar transformation.) Integrating, π ( 1 − e − a 2 ) < I 2 ( a ) < π ( 1 − e − 2 a 2 ) . {\displaystyle \pi \left(1-e^{-a^{2}}\right)<I^{2}(a)<\pi \left(1-e^{-2a^{2}}\right).} By the squeeze theorem, this gives the Gaussian integral ∫ − ∞ ∞ e − x 2 d x = π . {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.} === By Cartesian coordinates === A different technique, which goes back to Laplace (1812), is the following. Let y = x s d y = x d s . {\displaystyle {\begin{aligned}y&=xs\\dy&=x\,ds.\end{aligned}}} Since the limits on s as y → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that e−x2 is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is, ∫ − ∞ ∞ e − x 2 d x = 2 ∫ 0 ∞ e − x 2 d x . 
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx=2\int _{0}^{\infty }e^{-x^{2}}\,dx.} Thus, over the range of integration, x ≥ 0, and the variables y and s have the same limits. This yields: I 2 = 4 ∫ 0 ∞ ∫ 0 ∞ e − ( x 2 + y 2 ) d y d x = 4 ∫ 0 ∞ ( ∫ 0 ∞ e − ( x 2 + y 2 ) d y ) d x = 4 ∫ 0 ∞ ( ∫ 0 ∞ e − x 2 ( 1 + s 2 ) x d s ) d x {\displaystyle {\begin{aligned}I^{2}&=4\int _{0}^{\infty }\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}dy\,dx\\[6pt]&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dy\right)\,dx\\[6pt]&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,ds\right)\,dx\\[6pt]\end{aligned}}} Then, using Fubini's theorem to switch the order of integration: I 2 = 4 ∫ 0 ∞ ( ∫ 0 ∞ e − x 2 ( 1 + s 2 ) x d x ) d s = 4 ∫ 0 ∞ [ e − x 2 ( 1 + s 2 ) − 2 ( 1 + s 2 ) ] x = 0 x = ∞ d s = 4 ( 1 2 ∫ 0 ∞ d s 1 + s 2 ) = 2 arctan ( s ) | 0 ∞ = π . {\displaystyle {\begin{aligned}I^{2}&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,dx\right)\,ds\\[6pt]&=4\int _{0}^{\infty }\left[{\frac {e^{-x^{2}\left(1+s^{2}\right)}}{-2\left(1+s^{2}\right)}}\right]_{x=0}^{x=\infty }\,ds\\[6pt]&=4\left({\frac {1}{2}}\int _{0}^{\infty }{\frac {ds}{1+s^{2}}}\right)\\[6pt]&=2\arctan(s){\Big |}_{0}^{\infty }\\[6pt]&=\pi .\end{aligned}}} Therefore, I = π {\displaystyle I={\sqrt {\pi }}} , as expected. === By Laplace's method === In Laplace approximation, we deal only with up to second-order terms in Taylor expansion, so we consider e − x 2 ≈ 1 − x 2 ≈ ( 1 + x 2 ) − 1 {\displaystyle e^{-x^{2}}\approx 1-x^{2}\approx (1+x^{2})^{-1}} . 
In fact, since ( 1 + t ) e − t ≤ 1 {\displaystyle (1+t)e^{-t}\leq 1} for all t {\displaystyle t} , we have the exact bounds: 1 − x 2 ≤ e − x 2 ≤ ( 1 + x 2 ) − 1 {\displaystyle 1-x^{2}\leq e^{-x^{2}}\leq (1+x^{2})^{-1}} Raising these bounds to the n-th power and integrating over [−1, 1] then gives: ∫ [ − 1 , 1 ] ( 1 − x 2 ) n d x ≤ ∫ [ − 1 , 1 ] e − n x 2 d x ≤ ∫ [ − 1 , 1 ] ( 1 + x 2 ) − n d x {\displaystyle \int _{[-1,1]}(1-x^{2})^{n}dx\leq \int _{[-1,1]}e^{-nx^{2}}dx\leq \int _{[-1,1]}(1+x^{2})^{-n}dx} That is, after multiplying through by √n and rescaling the middle integral, 2 n ∫ [ 0 , 1 ] ( 1 − x 2 ) n d x ≤ ∫ [ − n , n ] e − x 2 d x ≤ 2 n ∫ [ 0 , 1 ] ( 1 + x 2 ) − n d x {\displaystyle 2{\sqrt {n}}\int _{[0,1]}(1-x^{2})^{n}dx\leq \int _{[-{\sqrt {n}},{\sqrt {n}}]}e^{-x^{2}}dx\leq 2{\sqrt {n}}\int _{[0,1]}(1+x^{2})^{-n}dx} By trigonometric substitution, we exactly compute those two bounds: 2 n ( 2 n ) ! ! / ( 2 n + 1 ) ! ! {\displaystyle 2{\sqrt {n}}(2n)!!/(2n+1)!!} and 2 n ( π / 2 ) ( 2 n − 3 ) ! ! / ( 2 n − 2 ) ! ! {\displaystyle 2{\sqrt {n}}(\pi /2)(2n-3)!!/(2n-2)!!} By taking the square root of the Wallis formula, π 2 = ∏ n = 1 ∞ ( 2 n ) 2 ( 2 n − 1 ) ( 2 n + 1 ) {\displaystyle {\frac {\pi }{2}}=\prod _{n=1}^{\infty }{\frac {(2n)^{2}}{(2n-1)(2n+1)}}} we have π = 2 lim n → ∞ n ( 2 n ) ! ! ( 2 n + 1 ) ! ! {\displaystyle {\sqrt {\pi }}=2\lim _{n\to \infty }{\sqrt {n}}{\frac {(2n)!!}{(2n+1)!!}}} , the desired lower bound limit. Similarly we can get the desired upper bound limit. Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
== Relation to the gamma function == The integrand is an even function, ∫ − ∞ ∞ e − x 2 d x = 2 ∫ 0 ∞ e − x 2 d x {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }e^{-x^{2}}dx} Thus, after the change of variable x = t {\textstyle x={\sqrt {t}}} , this turns into the Euler integral 2 ∫ 0 ∞ e − x 2 d x = 2 ∫ 0 ∞ 1 2 e − t t − 1 2 d t = Γ ( 1 2 ) = π {\displaystyle 2\int _{0}^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }{\frac {1}{2}}\ e^{-t}\ t^{-{\frac {1}{2}}}dt=\Gamma {\left({\frac {1}{2}}\right)}={\sqrt {\pi }}} where Γ ( z ) = ∫ 0 ∞ t z − 1 e − t d t {\textstyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt} is the gamma function. This shows why the factorial of a half-integer is a rational multiple of π {\textstyle {\sqrt {\pi }}} . More generally, ∫ 0 ∞ x n e − a x b d x = Γ ( ( n + 1 ) / b ) b a ( n + 1 ) / b , {\displaystyle \int _{0}^{\infty }x^{n}e^{-ax^{b}}dx={\frac {\Gamma {\left((n+1)/b\right)}}{ba^{(n+1)/b}}},} which can be obtained by substituting t = a x b {\displaystyle t=ax^{b}} in the integrand of the gamma function to get Γ ( z ) = a z b ∫ 0 ∞ x b z − 1 e − a x b d x {\textstyle \Gamma (z)=a^{z}b\int _{0}^{\infty }x^{bz-1}e^{-ax^{b}}dx} . == Generalizations == === The integral of a Gaussian function === The integral of an arbitrary Gaussian function is ∫ − ∞ ∞ e − a ( x + b ) 2 d x = π a . {\displaystyle \int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.} An alternative form is ∫ − ∞ ∞ e − ( a x 2 + b x + c ) d x = π a e b 2 4 a − c . {\displaystyle \int _{-\infty }^{\infty }e^{-(ax^{2}+bx+c)}\,dx={\sqrt {\frac {\pi }{a}}}\,e^{{\frac {b^{2}}{4a}}-c}.} This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example. 
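The alternative form is easy to spot-check numerically. In this sketch the coefficients a, b, c are arbitrary test values of ours, and the left-hand side is approximated by a midpoint rule over a window wide enough that the truncated tails are negligible.

```python
import math

def gaussian_integral(a, b, c, half_width=12.0, n=200000):
    # Midpoint-rule approximation of the integral of exp(-(a x^2 + b x + c))
    # over [-half_width, half_width]; the tails beyond that are negligible here.
    h = 2 * half_width / n
    return sum(
        math.exp(-(a * x * x + b * x + c))
        for x in (-half_width + (k + 0.5) * h for k in range(n))
    ) * h

a, b, c = 1.5, -0.7, 0.3
numeric = gaussian_integral(a, b, c)
closed = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a) - c)
# numeric and closed agree to high accuracy
```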
=== Complex form === ∫ − ∞ ∞ e 1 2 i t 2 d t = e i π / 4 2 π {\displaystyle \int _{-\infty }^{\infty }e^{{\frac {1}{2}}it^{2}}dt=e^{i\pi /4}{\sqrt {2\pi }}} and more generally, ∫ R N e 1 2 i x T A x d x = det ( A ) − 1 2 ( e i π / 4 2 π ) N {\displaystyle \int _{\mathbb {R} ^{N}}e^{{\frac {1}{2}}i\mathbf {x} ^{T}A\mathbf {x} }dx=\det(A)^{-{\frac {1}{2}}}{\left(e^{i\pi /4}{\sqrt {2\pi }}\right)}^{N}} for any positive-definite symmetric matrix A {\displaystyle A} . === n-dimensional and functional generalization === Suppose A is a symmetric positive-definite (hence invertible) n × n precision matrix, which is the matrix inverse of the covariance matrix. Then, ∫ R n exp ( − 1 2 x T A x ) d n x = ∫ R n exp ( − 1 2 ∑ i , j = 1 n A i j x i x j ) d n x = ( 2 π ) n det A = 1 det ( A / 2 π ) = det ( 2 π A − 1 ) {\displaystyle {\begin{aligned}\int _{\mathbb {R} ^{n}}\exp {\left(-{\frac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} \right)}\,d^{n}\mathbf {x} &=\int _{\mathbb {R} ^{n}}\exp {\left(-{\frac {1}{2}}\sum \limits _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}\,d^{n}\mathbf {x} \\[1ex]&={\sqrt {\frac {{\left(2\pi \right)}^{n}}{\det A}}}={\sqrt {\frac {1}{\det \left(A/2\pi \right)}}}\\[1ex]&={\sqrt {\det \left(2\pi A^{-1}\right)}}\end{aligned}}} By completing the square, this generalizes to ∫ R n exp ( − 1 2 x T A x + b T x + c ) d n x = det ( 2 π A − 1 ) exp ( 1 2 b T A − 1 b + c ) {\displaystyle \int _{\mathbb {R} ^{n}}\exp {\left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} +c\right)}\,d^{n}\mathbf {x} ={\sqrt {\det \left(2\pi A^{-1}\right)}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} +c\right)} This fact is applied in the study of the multivariate normal distribution. Also, ∫ x k 1 ⋯ x k 2 N exp ( − 1 2 ∑ i , j = 1 n A i j x i x j ) d n x = ( 2 π ) n det A 1 2 N N ! 
∑ σ ∈ S 2 N ( A − 1 ) k σ ( 1 ) k σ ( 2 ) ⋯ ( A − 1 ) k σ ( 2 N − 1 ) k σ ( 2 N ) {\displaystyle \int x_{k_{1}}\cdots x_{k_{2N}}\,\exp {\left(-{\frac {1}{2}}\sum \limits _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}\,d^{n}x={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\,{\frac {1}{2^{N}N!}}\,\sum _{\sigma \in S_{2N}}(A^{-1})_{k_{\sigma (1)}k_{\sigma (2)}}\cdots (A^{-1})_{k_{\sigma (2N-1)}k_{\sigma (2N)}}} where σ is a permutation of {1, …, 2N} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2N} of N copies of A−1. Alternatively, ∫ f ( x ) exp ( − 1 2 ∑ i , j = 1 n A i j x i x j ) d n x = ( 2 π ) n det A exp ( 1 2 ∑ i , j = 1 n ( A − 1 ) i j ∂ ∂ x i ∂ ∂ x j ) f ( x ) | x = 0 {\displaystyle \int f(\mathbf {x} )\exp {\left(-{\frac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}d^{n}\mathbf {x} ={\sqrt {\frac {{\left(2\pi \right)}^{n}}{\det A}}}\,\left.\exp \left({\frac {1}{2}}\sum _{i,j=1}^{n}\left(A^{-1}\right)_{ij}{\partial \over \partial x_{i}}{\partial \over \partial x_{j}}\right)f(\mathbf {x} )\right|_{\mathbf {x} =0}} for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series. While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that ( 2 π ) ∞ {\displaystyle (2\pi )^{\infty }} is infinite and also, the functional determinant would also be infinite in general. 
This can be taken care of if we only consider ratios: ∫ f ( x 1 ) ⋯ f ( x 2 N ) exp [ − ∬ 1 2 A ( x 2 N + 1 , x 2 N + 2 ) f ( x 2 N + 1 ) f ( x 2 N + 2 ) d d x 2 N + 1 d d x 2 N + 2 ] D f ∫ exp [ − ∬ 1 2 A ( x 2 N + 1 , x 2 N + 2 ) f ( x 2 N + 1 ) f ( x 2 N + 2 ) d d x 2 N + 1 d d x 2 N + 2 ] D f = 1 2 N N ! ∑ σ ∈ S 2 N A − 1 ( x σ ( 1 ) , x σ ( 2 ) ) ⋯ A − 1 ( x σ ( 2 N − 1 ) , x σ ( 2 N ) ) . {\displaystyle {\begin{aligned}&{\frac {\displaystyle \int f(x_{1})\cdots f(x_{2N})\exp \left[{-\iint {\frac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}}\right]{\mathcal {D}}f}{\displaystyle \int \exp \left[{-\iint {\frac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}}\right]{\mathcal {D}}f}}\\[6pt]={}&{\frac {1}{2^{N}N!}}\sum _{\sigma \in S_{2N}}A^{-1}(x_{\sigma (1)},x_{\sigma (2)})\cdots A^{-1}(x_{\sigma (2N-1)},x_{\sigma (2N)}).\end{aligned}}} In the DeWitt notation, the equation looks identical to the finite-dimensional case. === n-dimensional with linear term === If A is again a symmetric positive-definite matrix, then (assuming all are column vectors) ∫ exp ( − 1 2 ∑ i , j = 1 n A i j x i x j + ∑ i = 1 n b i x i ) d n x = ∫ exp ( − 1 2 x T A x + b T x ) d n x = ( 2 π ) n det A exp ( 1 2 b T A − 1 b ) . {\displaystyle {\begin{aligned}\int \exp \left(-{\frac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}+\sum _{i=1}^{n}b_{i}x_{i}\right)d^{n}\mathbf {x} &=\int \exp \left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} \right)d^{n}\mathbf {x} \\&={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} \right).\end{aligned}}} === Integrals of similar form === ∫ 0 ∞ x 2 n e − x 2 / a 2 d x = π a 2 n + 1 ( 2 n − 1 ) ! ! 2 n + 1 {\displaystyle \int _{0}^{\infty }x^{2n}e^{-{x^{2}}/{a^{2}}}\,dx={\sqrt {\pi }}{\frac {a^{2n+1}(2n-1)!!}{2^{n+1}}}} ∫ 0 ∞ x 2 n + 1 e − x 2 / a 2 d x = n ! 
2 a 2 n + 2 {\displaystyle \int _{0}^{\infty }x^{2n+1}e^{-{x^{2}}/{a^{2}}}\,dx={\frac {n!}{2}}a^{2n+2}} ∫ 0 ∞ x 2 n e − b x 2 d x = ( 2 n − 1 ) ! ! b n 2 n + 1 π b {\displaystyle \int _{0}^{\infty }x^{2n}e^{-bx^{2}}\,dx={\frac {(2n-1)!!}{b^{n}2^{n+1}}}{\sqrt {\frac {\pi }{b}}}} ∫ 0 ∞ x 2 n + 1 e − b x 2 d x = n ! 2 b n + 1 {\displaystyle \int _{0}^{\infty }x^{2n+1}e^{-bx^{2}}\,dx={\frac {n!}{2b^{n+1}}}} ∫ 0 ∞ x n e − b x 2 d x = Γ ( n + 1 2 ) 2 b n + 1 2 {\displaystyle \int _{0}^{\infty }x^{n}e^{-bx^{2}}\,dx={\frac {\Gamma ({\frac {n+1}{2}})}{2b^{\frac {n+1}{2}}}}} where n {\displaystyle n} is a positive integer. An easy way to derive these is by differentiating under the integral sign. ∫ − ∞ ∞ x 2 n e − α x 2 d x = ( − 1 ) n ∫ − ∞ ∞ ∂ n ∂ α n e − α x 2 d x = ( − 1 ) n ∂ n ∂ α n ∫ − ∞ ∞ e − α x 2 d x = π ( − 1 ) n ∂ n ∂ α n α − 1 2 = π α ( 2 n − 1 ) ! ! ( 2 α ) n {\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }x^{2n}e^{-\alpha x^{2}}\,dx&=\left(-1\right)^{n}\int _{-\infty }^{\infty }{\frac {\partial ^{n}}{\partial \alpha ^{n}}}e^{-\alpha x^{2}}\,dx\\[1ex]&=\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\int _{-\infty }^{\infty }e^{-\alpha x^{2}}\,dx\\[1ex]&={\sqrt {\pi }}\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\alpha ^{-{\frac {1}{2}}}\\[1ex]&={\sqrt {\frac {\pi }{\alpha }}}{\frac {(2n-1)!!}{\left(2\alpha \right)^{n}}}\end{aligned}}} One could also integrate by parts and find a recurrence relation to solve this. === Higher-order polynomials === Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants. Exponentials of other even polynomials can be solved numerically using series.
These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is ∫ − ∞ ∞ e a x 4 + b x 3 + c x 2 + d x + f d x = 1 2 e f ∑ n , m , p = 0 n + p = 0 mod 2 ∞ b n n ! c m m ! d p p ! Γ ( 3 n + 2 m + p + 1 4 ) ( − a ) 3 n + 2 m + p + 1 4 . {\displaystyle \int _{-\infty }^{\infty }e^{ax^{4}+bx^{3}+cx^{2}+dx+f}\,dx={\frac {1}{2}}e^{f}\sum _{\begin{smallmatrix}n,m,p=0\\n+p=0{\bmod {2}}\end{smallmatrix}}^{\infty }{\frac {b^{n}}{n!}}{\frac {c^{m}}{m!}}{\frac {d^{p}}{p!}}{\frac {\Gamma {\left({\frac {3n+2m+p+1}{4}}\right)}}{{\left(-a\right)}^{\frac {3n+2m+p+1}{4}}}}.} The n + p = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory. == See also == List of integrals of Gaussian functions Common integrals in quantum field theory Normal distribution List of integrals of exponential functions Error function Berezin integral == References == === Citations === === Sources ===
|
Wikipedia:Gaṇeśa Daivajna#0
|
Gaṇeśa Daivajna (born c. 1507, fl. 1520-1554) was a sixteenth-century astronomer, astrologer, and mathematician from western India who wrote books on methods to predict eclipses, planetary conjunctions, and planetary positions, and to make calculations for calendars. His most important work was the Grahalaghava, which included ephemeris and calendar calculations. Ganesa was born in Nandigrama (see also Golagrama) where his father Kesava (fl. 1496-1507) was a Brahmin astronomer. His mother's name has been noted as Lakshmi and he spent his entire life at Nandigrama. The location of Nandigrama has been suggested by some as being in Gujarat, but more careful study of his work places it in Nandgaon in present-day Maharashtra. He wrote several works including Grahalaghava, Siddhantarahasya, Buddhivilāsinī, and Laghutithicintamani. His work Buddhivilāsinī (c. 1546) includes commentaries on the mathematics of Bhaskara's Lilavati. One of his demonstrations calculates the area of a circle by dissecting it into a regular polygon. He started with a polygon of 12 sides and doubled the number of sides through 24, 48, 96, and 192 to 384, arriving at the approximation 3927/1250 for pi. His book Muhūrtadipikā includes a commentary on his father Kesava's work Muhūrtatattva. Ganesa's grandfather Kamalakara was also an astrologer, as were his brothers Ananta and Rama. Works by Kesava include Grahakautuka (1496) on the calculation of eclipses, Jatakapaddhati for the production of horoscopes, and the Tajikapaddhati, which covered Islamic thoughts on astrology. == See also == Kṛṣṇa Daivajña Grahalaghava == References == == External links == Grahalaghava (Hindi) Edition by Sudhakar Dwivedi Jataka Alankara
|
Wikipedia:Geertruida Wijthoff#0
|
Geertruida "Truida" Wijthoff (30 August 1859 – 13 March 1953) was a Dutch mathematician and teacher. In 1907 she became a member of merit of the Royal Dutch Mathematical Society. == Life and work == Truida (birthname, Anna Geertruida Wijthoff) was the eldest of four children born into the wealthy family of Abraham Willem Wijthoff and Anna Catharina Frederika Kerkhoven. Truida's father was a Lutheran and son of the Amsterdam sugar refinery family Wijthoff & Son. Truida's family first lived at Lauriergracht 111, just next to the sugar refinery that burned down in 1880. (In 1911, they moved to PC Hooftstraat 28 in Amsterdam.) Truida's younger sister was the writer Henriëtte Wijthoff and her next younger sibling was Anna Catharina Frederika Wijthoff, a painter and illustrator of children's books. The youngest child was the mathematical theorist and teacher Willem Abraham Wythoff. In 1881, at the age of 22, Truida enrolled at the Athenaeum Illustre of Amsterdam to study mathematics and physics. She and Marie du Saar, who registered as a student in medicine that same autumn, were two of the first women to study there, preceded only by the physician Aletta Jacobs. After graduation, Truida became a teacher at the girls' school in Middelburg from 1884 to 1886. She then returned to Amsterdam to work for the Administration Office for the Management of American Railway Values, which was owned by the Amsterdam banker Wertheim. Like her only brother, Truida was an avid solver of the problems section in the New Archive for Mathematics (NAW). After she received an honorable mention (the highest attainable prize) for the fifth time in the annual competition, she was appointed a member of merit of the Royal Dutch Mathematical Society in 1907. Even after the family moved to Apeldoorn, she won the NAW competition many more times, and in 1923 she received her tenth honorable mention. In 1914, Truida became a member of the Apeldoorn branch of the Association against Quackery. 
At the end of the nineteenth century, she also became involved in the Masonic weekly, a magazine for the Order of Freemasons. == Personal life == On 23 September 1898, she married her cousin Julius Kerkhoven, who for 20 years had worked as a civil engineer for Tjiandjoer and Tjilatjap on West Java in Indonesia, and in Padang and Batu Taba on Sumatra for the Dutch-Indian Railway Company. Julius was the younger brother of Rudolf Kerkhoven, the inspiration for the main character in Hella Haasse's novel De Heren van de Thee. After her husband's death, Truida continued to live in Apeldoorn with her sister Henriëtte. She survived the other three Wijthoff children and died on 13 March 1953. == References ==
|
Wikipedia:Gelfand–Kirillov dimension#0
|
In algebra, the Gelfand–Kirillov dimension (or GK dimension) of a right module M over a k-algebra A is: GKdim = sup V , M 0 lim sup n → ∞ log n dim k M 0 V n {\displaystyle \operatorname {GKdim} =\sup _{V,M_{0}}\limsup _{n\to \infty }\log _{n}\dim _{k}M_{0}V^{n}} where the supremum is taken over all finite-dimensional subspaces V ⊂ A {\displaystyle V\subset A} and M 0 ⊂ M {\displaystyle M_{0}\subset M} . An algebra is said to have polynomial growth if its Gelfand–Kirillov dimension is finite. == Basic facts == The Gelfand–Kirillov dimension of a finitely generated commutative algebra A over a field is the Krull dimension of A (or equivalently the transcendence degree of the field of fractions of A over the base field). In particular, the GK dimension of the polynomial ring k [ x 1 , … , x n ] {\displaystyle k[x_{1},\dots ,x_{n}]} is n. (Warfield) For any real number r ≥ 2, there exists a finitely generated algebra whose GK dimension is r. == In the theory of D-modules == Given a right module M over the Weyl algebra A n {\displaystyle A_{n}} , the Gelfand–Kirillov dimension of M over the Weyl algebra coincides with the dimension of M, which is by definition the degree of the Hilbert polynomial of M. This makes it possible to prove additivity in short exact sequences for the Gelfand–Kirillov dimension and finally to prove Bernstein's inequality, which states that the dimension of M must be at least n. This leads to the definition of holonomic D-modules as those with the minimal dimension n, and these modules play a great role in the geometric Langlands program. == Notes == == References == Smith, S. Paul; Zhang, James J. (1998). "A remark on Gelfand–Kirillov dimension" (PDF). Proceedings of the American Mathematical Society. 126 (2): 349–352. doi:10.1090/S0002-9939-98-04074-X. Coutinho: A primer of algebraic D-modules. Cambridge, 1995 == Further reading == Artin, Michael (1999). "Noncommutative Rings" (PDF). Chapter VI.
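As a numerical illustration of the definition (ours, not from the article): for the polynomial ring A = k[x, y], take V = span{1, x, y} and M0 = k. Then M0·V^n is spanned by the monomials of degree at most n, so dim_k M0 V^n = C(n+2, 2), and log_n of this dimension tends to 2, matching the Krull dimension.

```python
import math

def dim_Vn(n, num_vars=2):
    # Number of monomials of degree <= n in num_vars variables:
    # the binomial coefficient C(n + num_vars, num_vars)
    return math.comb(n + num_vars, num_vars)

def gk_estimate(n, num_vars=2):
    # log_n dim_k M0 V^n for the polynomial ring in num_vars variables;
    # tends to num_vars as n -> infinity
    return math.log(dim_Vn(n, num_vars), n)
```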
|
Wikipedia:General Leibniz rule#0
|
In calculus, the general Leibniz rule, named after Gottfried Wilhelm Leibniz, generalizes the product rule for the derivative of the product of two functions (which is also known as "Leibniz's rule"). It states that if f {\displaystyle f} and g {\displaystyle g} are n-times differentiable functions, then the product f g {\displaystyle fg} is also n-times differentiable and its n-th derivative is given by ( f g ) ( n ) = ∑ k = 0 n ( n k ) f ( n − k ) g ( k ) , {\displaystyle (fg)^{(n)}=\sum _{k=0}^{n}{n \choose k}f^{(n-k)}g^{(k)},} where ( n k ) = n ! k ! ( n − k ) ! {\displaystyle {n \choose k}={n! \over k!(n-k)!}} is the binomial coefficient and f ( j ) {\displaystyle f^{(j)}} denotes the jth derivative of f (and in particular f ( 0 ) = f {\displaystyle f^{(0)}=f} ). The rule can be proven by using the product rule and mathematical induction. == Second derivative == If, for example, n = 2, the rule gives an expression for the second derivative of a product of two functions: ( f g ) ″ ( x ) = ∑ k = 0 2 ( 2 k ) f ( 2 − k ) ( x ) g ( k ) ( x ) = f ″ ( x ) g ( x ) + 2 f ′ ( x ) g ′ ( x ) + f ( x ) g ″ ( x ) . {\displaystyle (fg)''(x)=\sum \limits _{k=0}^{2}{{\binom {2}{k}}f^{(2-k)}(x)g^{(k)}(x)}=f''(x)g(x)+2f'(x)g'(x)+f(x)g''(x).} == More than two factors == The formula can be generalized to the product of m differentiable functions f1,...,fm. ( f 1 f 2 ⋯ f m ) ( n ) = ∑ k 1 + k 2 + ⋯ + k m = n ( n k 1 , k 2 , … , k m ) ∏ 1 ≤ t ≤ m f t ( k t ) , {\displaystyle \left(f_{1}f_{2}\cdots f_{m}\right)^{(n)}=\sum _{k_{1}+k_{2}+\cdots +k_{m}=n}{n \choose k_{1},k_{2},\ldots ,k_{m}}\prod _{1\leq t\leq m}f_{t}^{(k_{t})}\,,} where the sum extends over all m-tuples (k1,...,km) of non-negative integers with ∑ t = 1 m k t = n , {\textstyle \sum _{t=1}^{m}k_{t}=n,} and ( n k 1 , k 2 , … , k m ) = n ! k 1 ! k 2 ! ⋯ k m ! {\displaystyle {n \choose k_{1},k_{2},\ldots ,k_{m}}={\frac {n!}{k_{1}!\,k_{2}!\cdots k_{m}!}}} are the multinomial coefficients.
This is akin to the multinomial formula from algebra. == Proof == The proof of the general Leibniz rule: 68–69 proceeds by induction. Let f {\displaystyle f} and g {\displaystyle g} be n {\displaystyle n} -times differentiable functions. The base case when n = 1 {\displaystyle n=1} claims that: ( f g ) ′ = f ′ g + f g ′ , {\displaystyle (fg)'=f'g+fg',} which is the usual product rule and is known to be true. Next, assume that the statement holds for a fixed n ≥ 1 , {\displaystyle n\geq 1,} that is, that ( f g ) ( n ) = ∑ k = 0 n ( n k ) f ( n − k ) g ( k ) . {\displaystyle (fg)^{(n)}=\sum _{k=0}^{n}{\binom {n}{k}}f^{(n-k)}g^{(k)}.} Then, ( f g ) ( n + 1 ) = [ ∑ k = 0 n ( n k ) f ( n − k ) g ( k ) ] ′ = ∑ k = 0 n ( n k ) f ( n + 1 − k ) g ( k ) + ∑ k = 0 n ( n k ) f ( n − k ) g ( k + 1 ) = ∑ k = 0 n ( n k ) f ( n + 1 − k ) g ( k ) + ∑ k = 1 n + 1 ( n k − 1 ) f ( n + 1 − k ) g ( k ) = ( n 0 ) f ( n + 1 ) g ( 0 ) + ∑ k = 1 n ( n k ) f ( n + 1 − k ) g ( k ) + ∑ k = 1 n ( n k − 1 ) f ( n + 1 − k ) g ( k ) + ( n n ) f ( 0 ) g ( n + 1 ) = ( n + 1 0 ) f ( n + 1 ) g ( 0 ) + ( ∑ k = 1 n [ ( n k − 1 ) + ( n k ) ] f ( n + 1 − k ) g ( k ) ) + ( n + 1 n + 1 ) f ( 0 ) g ( n + 1 ) = ( n + 1 0 ) f ( n + 1 ) g ( 0 ) + ∑ k = 1 n ( n + 1 k ) f ( n + 1 − k ) g ( k ) + ( n + 1 n + 1 ) f ( 0 ) g ( n + 1 ) = ∑ k = 0 n + 1 ( n + 1 k ) f ( n + 1 − k ) g ( k ) . 
{\displaystyle {\begin{aligned}(fg)^{(n+1)}&=\left[\sum _{k=0}^{n}{\binom {n}{k}}f^{(n-k)}g^{(k)}\right]'\\&=\sum _{k=0}^{n}{\binom {n}{k}}f^{(n+1-k)}g^{(k)}+\sum _{k=0}^{n}{\binom {n}{k}}f^{(n-k)}g^{(k+1)}\\&=\sum _{k=0}^{n}{\binom {n}{k}}f^{(n+1-k)}g^{(k)}+\sum _{k=1}^{n+1}{\binom {n}{k-1}}f^{(n+1-k)}g^{(k)}\\&={\binom {n}{0}}f^{(n+1)}g^{(0)}+\sum _{k=1}^{n}{\binom {n}{k}}f^{(n+1-k)}g^{(k)}+\sum _{k=1}^{n}{\binom {n}{k-1}}f^{(n+1-k)}g^{(k)}+{\binom {n}{n}}f^{(0)}g^{(n+1)}\\&={\binom {n+1}{0}}f^{(n+1)}g^{(0)}+\left(\sum _{k=1}^{n}\left[{\binom {n}{k-1}}+{\binom {n}{k}}\right]f^{(n+1-k)}g^{(k)}\right)+{\binom {n+1}{n+1}}f^{(0)}g^{(n+1)}\\&={\binom {n+1}{0}}f^{(n+1)}g^{(0)}+\sum _{k=1}^{n}{\binom {n+1}{k}}f^{(n+1-k)}g^{(k)}+{\binom {n+1}{n+1}}f^{(0)}g^{(n+1)}\\&=\sum _{k=0}^{n+1}{\binom {n+1}{k}}f^{(n+1-k)}g^{(k)}.\end{aligned}}} And so the statement holds for n + 1 {\displaystyle n+1} , and the proof is complete. == Relationship to the binomial theorem == The Leibniz rule bears a strong resemblance to the binomial theorem, and in fact the binomial theorem can be proven directly from the Leibniz rule by taking f ( x ) = e a x {\displaystyle f(x)=e^{ax}} and g ( x ) = e b x , {\displaystyle g(x)=e^{bx},} which gives ( a + b ) n e ( a + b ) x = e ( a + b ) x ∑ k = 0 n ( n k ) a n − k b k , {\displaystyle (a+b)^{n}e^{(a+b)x}=e^{(a+b)x}\sum _{k=0}^{n}{\binom {n}{k}}a^{n-k}b^{k},} and then dividing both sides by e ( a + b ) x . {\displaystyle e^{(a+b)x}.} : 69 == Multivariable calculus == With the multi-index notation for partial derivatives of functions of several variables, the Leibniz rule states more generally: ∂ α ( f g ) = ∑ β : β ≤ α ( α β ) ( ∂ β f ) ( ∂ α − β g ) . {\displaystyle \partial ^{\alpha }(fg)=\sum _{\beta \,:\,\beta \leq \alpha }{\alpha \choose \beta }(\partial ^{\beta }f)(\partial ^{\alpha -\beta }g).} This formula can be used to derive a formula that computes the symbol of the composition of differential operators. 
In fact, let P and Q be differential operators (with coefficients that are differentiable sufficiently many times) and R = P ∘ Q . {\displaystyle R=P\circ Q.} Since R is also a differential operator, the symbol of R is given by: R ( x , ξ ) = e − ⟨ x , ξ ⟩ R ( e ⟨ x , ξ ⟩ ) . {\displaystyle R(x,\xi )=e^{-{\langle x,\xi \rangle }}R(e^{\langle x,\xi \rangle }).} A direct computation now gives: R ( x , ξ ) = ∑ α 1 α ! ( ∂ ∂ ξ ) α P ( x , ξ ) ( ∂ ∂ x ) α Q ( x , ξ ) . {\displaystyle R(x,\xi )=\sum _{\alpha }{1 \over \alpha !}\left({\partial \over \partial \xi }\right)^{\alpha }P(x,\xi )\left({\partial \over \partial x}\right)^{\alpha }Q(x,\xi ).} This formula is usually known as the Leibniz formula. It is used to define the composition in the space of symbols, thereby inducing the ring structure. == See also == Derivation (differential algebra) Umbral calculus == References ==
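The rule is easy to test numerically. In the sketch below (our choice of test functions, not from the article) we take f = sin and g = exp, whose derivatives are known in closed form, and compare the Leibniz sum against the classical formula dⁿ/dxⁿ [eˣ sin x] = 2^(n/2) eˣ sin(x + nπ/4).

```python
import math

def leibniz_nth_derivative(x, n):
    # General Leibniz rule for f = sin, g = exp at the point x,
    # using sin^(j)(x) = sin(x + j*pi/2) and exp^(k) = exp.
    return sum(
        math.comb(n, k) * math.sin(x + (n - k) * math.pi / 2) * math.exp(x)
        for k in range(n + 1)
    )

def closed_form(x, n):
    # Known n-th derivative of e^x sin x
    return 2 ** (n / 2) * math.exp(x) * math.sin(x + n * math.pi / 4)
```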
|
Wikipedia:General existence theorem of discontinuous maps#0
|
In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example. == A linear map from a finite-dimensional space is always continuous == Let X and Y be two normed spaces and f : X → Y {\displaystyle f:X\to Y} a linear map from X to Y. If X is finite-dimensional, choose a basis ( e 1 , e 2 , … , e n ) {\displaystyle \left(e_{1},e_{2},\ldots ,e_{n}\right)} in X which may be taken to be unit vectors. Then, f ( x ) = ∑ i = 1 n x i f ( e i ) , {\displaystyle f(x)=\sum _{i=1}^{n}x_{i}f(e_{i}),} and so by the triangle inequality, ‖ f ( x ) ‖ = ‖ ∑ i = 1 n x i f ( e i ) ‖ ≤ ∑ i = 1 n | x i | ‖ f ( e i ) ‖ . {\displaystyle \|f(x)\|=\left\|\sum _{i=1}^{n}x_{i}f(e_{i})\right\|\leq \sum _{i=1}^{n}|x_{i}|\|f(e_{i})\|.} Letting M = sup i { ‖ f ( e i ) ‖ } , {\displaystyle M=\sup _{i}\{\|f(e_{i})\|\},} and using the fact that ∑ i = 1 n | x i | ≤ C ‖ x ‖ {\displaystyle \sum _{i=1}^{n}|x_{i}|\leq C\|x\|} for some C>0 which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds ‖ f ( x ) ‖ ≤ ( ∑ i = 1 n | x i | ) M ≤ C M ‖ x ‖ . {\displaystyle \|f(x)\|\leq \left(\sum _{i=1}^{n}|x_{i}|\right)M\leq CM\|x\|.} Thus, f {\displaystyle f} is a bounded linear operator and so is continuous. 
In fact, to see this, simply note that f is linear, and therefore ‖ f ( x ) − f ( x ′ ) ‖ = ‖ f ( x − x ′ ) ‖ ≤ K ‖ x − x ′ ‖ {\displaystyle \|f(x)-f(x')\|=\|f(x-x')\|\leq K\|x-x'\|} for some universal constant K. Thus for any ϵ > 0 , {\displaystyle \epsilon >0,} we can choose δ ≤ ϵ / K {\displaystyle \delta \leq \epsilon /K} so that f ( B ( x , δ ) ) ⊆ B ( f ( x ) , ϵ ) {\displaystyle f(B(x,\delta ))\subseteq B(f(x),\epsilon )} ( B ( x , δ ) {\displaystyle B(x,\delta )} and B ( f ( x ) , ϵ ) {\displaystyle B(f(x),\epsilon )} are the normed balls around x {\displaystyle x} and f ( x ) {\displaystyle f(x)} ), which gives continuity. If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y. == A concrete example == Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence e i {\displaystyle e_{i}} of linearly independent vectors which does not have a limit, there is a linear operator T {\displaystyle T} such that the quantities ‖ T ( e i ) ‖ / ‖ e i ‖ {\displaystyle \|T(e_{i})\|/\|e_{i}\|} grow without bound. In a sense, the linear operators are not continuous because the space has "holes". For example, consider the space X {\displaystyle X} of real-valued smooth functions on the interval [0, 1] with the uniform norm, that is, ‖ f ‖ = sup x ∈ [ 0 , 1 ] | f ( x ) | . {\displaystyle \|f\|=\sup _{x\in [0,1]}|f(x)|.} The derivative-at-a-point map, given by T ( f ) = f ′ ( 0 ) {\displaystyle T(f)=f'(0)\,} defined on X {\displaystyle X} and with real values, is linear, but not continuous. Indeed, consider the sequence f n ( x ) = sin ( n 2 x ) n {\displaystyle f_{n}(x)={\frac {\sin(n^{2}x)}{n}}} for n ≥ 1 {\displaystyle n\geq 1} . 
This sequence converges uniformly to the constantly zero function, but T ( f n ) = n 2 cos ( n 2 ⋅ 0 ) n = n → ∞ {\displaystyle T(f_{n})={\frac {n^{2}\cos(n^{2}\cdot 0)}{n}}=n\to \infty } as n → ∞ {\displaystyle n\to \infty } instead of T ( f n ) → T ( 0 ) = 0 {\displaystyle T(f_{n})\to T(0)=0} , as would hold for a continuous map. Note that T {\displaystyle T} is real-valued, and so is actually a linear functional on X {\displaystyle X} (an element of the algebraic dual space X ∗ {\displaystyle X^{*}} ). The linear map X → X {\displaystyle X\to X} which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it is closed. The fact that the domain is not complete here is important: discontinuous operators on complete spaces require a little more work. == A nonconstructive example == An algebraic basis for the real numbers as a vector space over the rationals is known as a Hamel basis (note that some authors use this term in a broader sense to mean an algebraic basis of any vector space). Note that any two noncommensurable numbers, say 1 and π {\displaystyle \pi } , are linearly independent. One may find a Hamel basis containing them, and define a map f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } so that f ( π ) = 0 , {\displaystyle f(\pi )=0,} f acts as the identity on the rest of the Hamel basis, and extend to all of R {\displaystyle \mathbb {R} } by linearity. Let {rn}n be any sequence of rationals which converges to π {\displaystyle \pi } . Then limn f(rn) = π, but f ( π ) = 0. {\displaystyle f(\pi )=0.} By construction, f is linear over Q {\displaystyle \mathbb {Q} } (not over R {\displaystyle \mathbb {R} } ), but not continuous. Note that f is also not measurable; an additive real function is linear if and only if it is measurable, so for every such function there is a Vitali set. The construction of f relies on the axiom of choice. 
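A small numerical sketch (ours, not part of the article) makes the discontinuity vivid: the uniform norms of the f_n shrink to zero while the difference quotients approximating T(f_n) = f_n′(0) = n grow without bound.

```python
import math

def f(n, x):
    # f_n(x) = sin(n^2 x) / n
    return math.sin(n * n * x) / n

def sup_norm(n, samples=20001):
    # Crude approximation of the uniform norm of f_n on [0, 1]
    return max(abs(f(n, k / (samples - 1))) for k in range(samples))

def derivative_at_zero(n, h=1e-9):
    # One-sided difference quotient approximating T(f_n) = f_n'(0) = n
    return (f(n, h) - f(n, 0.0)) / h

# sup_norm(n) behaves like 1/n -> 0, while derivative_at_zero(n) ~ n -> infinity
```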
This example can be extended into a general theorem about the existence of discontinuous linear maps on any infinite-dimensional normed space (as long as the codomain is not trivial). == General existence theorem == Discontinuous linear maps can be proven to exist more generally, even if the space is complete. Let X and Y be normed spaces over the field K where K = R {\displaystyle K=\mathbb {R} } or K = C . {\displaystyle K=\mathbb {C} .} Assume that X is infinite-dimensional and Y is not the zero space. We will find a discontinuous linear map f from X to K, which will imply the existence of a discontinuous linear map g from X to Y given by the formula g ( x ) = f ( x ) y 0 {\displaystyle g(x)=f(x)y_{0}} where y 0 {\displaystyle y_{0}} is an arbitrary nonzero vector in Y. If X is infinite-dimensional, to show the existence of a linear functional which is not continuous then amounts to constructing f which is not bounded. For that, consider a sequence (en)n ( n ≥ 1 {\displaystyle n\geq 1} ) of linearly independent vectors in X, which we normalize. Then, we define T ( e n ) = n ‖ e n ‖ {\displaystyle T(e_{n})=n\|e_{n}\|\,} for each n = 1 , 2 , … {\displaystyle n=1,2,\ldots } Complete this sequence of linearly independent vectors to a vector space basis of X by defining T at the other vectors in the basis to be zero. T so defined will extend uniquely to a linear map on X, and since it is clearly not bounded, it is not continuous. Notice that by using the fact that any set of linearly independent vectors can be completed to a basis, we implicitly used the axiom of choice, which was not needed for the concrete example in the previous section. == Role of the axiom of choice == As noted above, the axiom of choice (AC) is used in the general existence theorem of discontinuous linear maps. In fact, there are no constructive examples of discontinuous linear maps with complete domain (for example, Banach spaces). 
In analysis as it is usually practiced by working mathematicians, the axiom of choice is always employed (it is an axiom of ZFC set theory); thus, to the analyst, all infinite-dimensional topological vector spaces admit discontinuous linear maps. On the other hand, in 1970 Robert M. Solovay exhibited a model of set theory in which every set of reals is measurable. This implies that there are no discontinuous linear real functions. Clearly AC does not hold in the model. Solovay's result shows that it is not necessary to assume that all infinite-dimensional vector spaces admit discontinuous linear maps, and there are schools of analysis which adopt a more constructivist viewpoint. For example, H. G. Garnir, in searching for so-called "dream spaces" (topological vector spaces on which every linear map into a normed space is continuous), was led to adopt ZF + DC + BP (dependent choice is a weakened form and the Baire property is a negation of strong AC) as his axioms to prove the Garnir–Wright closed graph theorem which states, among other things, that any linear map from an F-space to a TVS is continuous. Going to the extreme of constructivism, there is Ceitin's theorem, which states that every function is continuous (this is to be understood in the terminology of constructivism, according to which only representable functions are considered to be functions). Such stances are held by only a small minority of working mathematicians. The upshot is that the existence of discontinuous linear maps depends on AC; it is consistent with set theory without AC that there are no discontinuous linear maps on complete spaces. In particular, no concrete construction such as the derivative can succeed in defining a discontinuous linear map everywhere on a complete space. == Closed operators == Many naturally occurring linear discontinuous operators are closed, a class of operators which share some of the features of continuous operators. 
It makes sense to ask which linear operators on a given space are closed. The closed graph theorem asserts that an everywhere-defined closed operator on a complete domain is continuous, so to obtain a discontinuous closed operator, one must permit operators which are not defined everywhere. To be more concrete, let T {\displaystyle T} be a map from X {\displaystyle X} to Y {\displaystyle Y} with domain Dom ( T ) , {\displaystyle \operatorname {Dom} (T),} written T : Dom ( T ) ⊆ X → Y . {\displaystyle T:\operatorname {Dom} (T)\subseteq X\to Y.} We don't lose much if we replace X by the closure of Dom ( T ) . {\displaystyle \operatorname {Dom} (T).} That is, in studying operators that are not everywhere-defined, one may restrict one's attention to densely defined operators without loss of generality. If the graph Γ ( T ) {\displaystyle \Gamma (T)} of T {\displaystyle T} is closed in X × Y , {\displaystyle X\times Y,} we call T closed. Otherwise, consider its closure Γ ( T ) ¯ {\displaystyle {\overline {\Gamma (T)}}} in X × Y . {\displaystyle X\times Y.} If Γ ( T ) ¯ {\displaystyle {\overline {\Gamma (T)}}} is itself the graph of some operator T ¯ , {\displaystyle {\overline {T}},} T {\displaystyle T} is called closable, and T ¯ {\displaystyle {\overline {T}}} is called the closure of T . {\displaystyle T.} So the natural question to ask about linear operators that are not everywhere-defined is whether they are closable. The answer is, "not necessarily"; indeed, every infinite-dimensional normed space admits linear operators that are not closable. As in the case of discontinuous operators considered above, the proof requires the axiom of choice and so is in general nonconstructive, though again, if X is not complete, there are constructible examples. In fact, there is even an example of a linear operator whose graph has closure all of X × Y . {\displaystyle X\times Y.} Such an operator is not closable. 
Let X be the space of polynomial functions from [0,1] to R {\displaystyle \mathbb {R} } and Y the space of polynomial functions from [2,3] to R {\displaystyle \mathbb {R} } . They are subspaces of C([0,1]) and C([2,3]) respectively, and so normed spaces. Define an operator T which takes the polynomial function x ↦ p(x) on [0,1] to the same function on [2,3]. As a consequence of the Stone–Weierstrass theorem, the graph of this operator is dense in X × Y , {\displaystyle X\times Y,} so this provides a sort of maximally discontinuous linear map (confer nowhere continuous function). Note that X is not complete here, as must be the case when there is such a constructible map. == Impact for dual spaces == The dual space of a topological vector space is the collection of continuous linear maps from the space into the underlying field. Thus the failure of some linear maps to be continuous for infinite-dimensional normed spaces implies that for these spaces, one needs to distinguish the algebraic dual space from the continuous dual space which is then a proper subset. It illustrates the fact that an extra dose of caution is needed in doing analysis on infinite-dimensional spaces as compared to finite-dimensional ones. == Beyond normed spaces == The argument for the existence of discontinuous linear maps on normed spaces can be generalized to all metrizable topological vector spaces, especially to all Fréchet spaces, but there exist infinite-dimensional locally convex topological vector spaces such that every functional is continuous. On the other hand, the Hahn–Banach theorem, which applies to all locally convex spaces, guarantees the existence of many continuous linear functionals, and so a large dual space. In fact, to every convex set, the Minkowski gauge associates a continuous linear functional. The upshot is that spaces with fewer convex sets have fewer functionals, and in the worst-case scenario, a space may have no functionals at all other than the zero functional. 
This is the case for the L p ( R , d x ) {\displaystyle L^{p}(\mathbb {R} ,dx)} spaces with 0 < p < 1 , {\displaystyle 0<p<1,} from which it follows that these spaces are nonconvex. Here d x {\displaystyle dx} denotes the Lebesgue measure on the real line. There are other L p {\displaystyle L^{p}} spaces with 0 < p < 1 {\displaystyle 0<p<1} which do have nontrivial dual spaces. Another example is the space of real-valued measurable functions on the unit interval with quasinorm given by ‖ f ‖ = ∫ I | f ( x ) | 1 + | f ( x ) | d x . {\displaystyle \|f\|=\int _{I}{\frac {|f(x)|}{1+|f(x)|}}dx.} This non-locally convex space has a trivial dual space. One can consider even more general spaces. For example, the existence of a discontinuous homomorphism between complete separable metric groups can also be shown nonconstructively. == See also == Finest locally convex topology – Vector space with a topology defined by convex open sets Sublinear function – Type of function in linear algebra == References == Constantin Costara, Dumitru Popa, Exercises in Functional Analysis, Springer, 2003. ISBN 1-4020-1560-7. Schechter, Eric, Handbook of Analysis and its Foundations, Academic Press, 1997. ISBN 0-12-622760-8.
|
Wikipedia:General linear group#0
|
In mathematics, the general linear group of degree n {\displaystyle n} is the set of n × n {\displaystyle n\times n} invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position. To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R {\displaystyle \mathbb {R} } (the set of real numbers) is the group of n × n {\displaystyle n\times n} invertible matrices of real numbers, and is denoted by GL n ( R ) {\displaystyle \operatorname {GL} _{n}(\mathbb {R} )} or GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} . More generally, the general linear group of degree n {\displaystyle n} over any field F {\displaystyle F} (such as the complex numbers), or a ring R {\displaystyle R} (such as the ring of integers), is the set of n × n {\displaystyle n\times n} invertible matrices with entries from F {\displaystyle F} (or R {\displaystyle R} ), again with matrix multiplication as the group operation. Typical notation is GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} or GL n ( F ) {\displaystyle \operatorname {GL} _{n}(F)} , or simply GL ( n ) {\displaystyle \operatorname {GL} (n)} if the field is understood. More generally still, the general linear group of a vector space GL ( V ) {\displaystyle \operatorname {GL} (V)} is the automorphism group, not necessarily written as matrices. 
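The group axioms named above are easy to spot-check numerically; a minimal NumPy sketch (random real matrices are invertible with probability 1):

```python
import numpy as np

# Sanity check of the group structure on invertible matrices: products of
# invertible matrices are invertible, and inverses and the identity behave.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))       # both invertible almost surely

print(np.linalg.det(A) != 0 and np.linalg.det(B) != 0)   # True
print(np.linalg.det(A @ B) != 0)                          # True: closure under product
print(np.allclose(A @ np.linalg.inv(A), np.eye(3)))       # True: two-sided inverse
print(np.allclose(A @ np.eye(3), A))                      # True: identity element
```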
The special linear group, written SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} or SL n ( F ) {\displaystyle \operatorname {SL} _{n}(F)} , is the subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} consisting of matrices with a determinant of 1. The group GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} and its subgroups are often called linear groups or matrix groups (the automorphism group GL ( V ) {\displaystyle \operatorname {GL} (V)} is a linear group but not a matrix group). These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials. The modular group may be realised as a quotient of the special linear group SL ( 2 , Z ) {\displaystyle \operatorname {SL} (2,\mathbb {Z} )} . If n ≥ 2 {\displaystyle n\geq 2} , then the group GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} is not abelian. == General linear group of a vector space == If V {\displaystyle V} is a vector space over the field F {\displaystyle F} , the general linear group of V {\displaystyle V} , written GL ( V ) {\displaystyle \operatorname {GL} (V)} or Aut ( V ) {\displaystyle \operatorname {Aut} (V)} , is the group of all automorphisms of V {\displaystyle V} , i.e. the set of all bijective linear transformations V → V {\displaystyle V\to V} , together with functional composition as group operation. If V {\displaystyle V} has finite dimension n {\displaystyle n} , then GL ( V ) {\displaystyle \operatorname {GL} (V)} and GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} are isomorphic. The isomorphism is not canonical; it depends on a choice of basis in V {\displaystyle V} . 
Given a basis { e 1 , … , e n } {\displaystyle \{e_{1},\dots ,e_{n}\}} of V {\displaystyle V} and an automorphism T {\displaystyle T} in GL ( V ) {\displaystyle \operatorname {GL} (V)} , we have then for every basis vector ei that T ( e i ) = ∑ j = 1 n a j i e j {\displaystyle T(e_{i})=\sum _{j=1}^{n}a_{ji}e_{j}} for some constants a j i {\displaystyle a_{ji}} in F {\displaystyle F} ; the matrix corresponding to T {\displaystyle T} is then just the matrix with entries given by the a j i {\displaystyle a_{ji}} . In a similar way, for a commutative ring R {\displaystyle R} the group GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be interpreted as the group of automorphisms of a free R {\displaystyle R} -module M {\displaystyle M} of rank n {\displaystyle n} . One can also define GL(M) for any R {\displaystyle R} -module, but in general this is not isomorphic to GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} (for any n {\displaystyle n} ). == In terms of determinants == Over a field F {\displaystyle F} , a matrix is invertible if and only if its determinant is nonzero. Therefore, an alternative definition of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} is as the group of matrices with nonzero determinant. Over a commutative ring R {\displaystyle R} , more care is needed: a matrix over R {\displaystyle R} is invertible if and only if its determinant is a unit in R {\displaystyle R} , that is, if its determinant is invertible in R {\displaystyle R} . Therefore, GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be defined as the group of matrices whose determinants are units. Over a non-commutative ring R {\displaystyle R} , determinants are not at all well behaved. In this case, GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be defined as the unit group of the matrix ring M ( n , R ) {\displaystyle M(n,R)} . 
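A small numerical illustration of the determinant criterion over the ring of integers: a unit determinant (±1) yields an inverse with integer entries, while a non-unit determinant does not (the matrices below are our own examples).

```python
import numpy as np

# Over Z, a matrix is invertible iff det is a unit, i.e. det = +-1.
A = np.array([[2, 1], [1, 1]])     # det = 1: invertible over Z
B = np.array([[2, 0], [0, 1]])     # det = 2: invertible over Q, but not over Z
print(np.linalg.inv(A))            # [[ 1. -1.] [-1.  2.]] -- integer entries
print(np.linalg.inv(B))            # [[0.5 0. ] [0.  1. ]] -- 1/2 is not an integer
```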
== As a Lie group == === Real case === The general linear group GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} over the field of real numbers is a real Lie group of dimension n 2 {\displaystyle n^{2}} . To see this, note that the set of all n × n {\displaystyle n\times n} real matrices, M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} , forms a real vector space of dimension n 2 {\displaystyle n^{2}} . The subset GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} consists of those matrices whose determinant is non-zero. The determinant is a polynomial map, and hence GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is an open affine subvariety of M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} (a non-empty open subset of M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} in the Zariski topology), and therefore a smooth manifold of the same dimension. The Lie algebra of GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} , denoted g l n , {\displaystyle {\mathfrak {gl}}_{n},} consists of all n × n {\displaystyle n\times n} real matrices with the commutator serving as the Lie bracket. As a manifold, GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is not connected but rather has two connected components: the matrices with positive determinant and the ones with negative determinant. The identity component, denoted by GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} , consists of the real n × n {\displaystyle n\times n} matrices with positive determinant. This is also a Lie group of dimension n 2 {\displaystyle n^{2}} ; it has the same Lie algebra as GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} . 
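The bracket on g l n {\displaystyle {\mathfrak {gl}}_{n}} can be spot-checked directly; the sketch below (illustrative, NumPy assumed) verifies antisymmetry and the Jacobi identity for random 3 × 3 real matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def bracket(X, Y):
    # the Lie bracket on gl_n is the matrix commutator
    return X @ Y - Y @ X

jacobi = (bracket(A, bracket(B, C))
          + bracket(B, bracket(C, A))
          + bracket(C, bracket(A, B)))
print(np.max(np.abs(bracket(A, B) + bracket(B, A))))   # 0 (antisymmetry)
print(np.max(np.abs(jacobi)))                           # ~0 (Jacobi identity)
```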
The polar decomposition, which is unique for invertible matrices, shows that there is a homeomorphism between GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} and the Cartesian product of O ( n ) {\displaystyle \operatorname {O} (n)} with the set of positive-definite symmetric matrices. Similarly, it shows that there is a homeomorphism between GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} and the Cartesian product of SO ( n ) {\displaystyle \operatorname {SO} (n)} with the set of positive-definite symmetric matrices. Because the latter is contractible, the fundamental group of GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is isomorphic to that of SO ( n ) {\displaystyle \operatorname {SO} (n)} . The homeomorphism also shows that the group GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is noncompact. "The" maximal compact subgroup of GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is the orthogonal group O ( n ) {\displaystyle \operatorname {O} (n)} , while "the" maximal compact subgroup of GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is the special orthogonal group SO ( n ) {\displaystyle \operatorname {SO} (n)} . As for SO ( n ) {\displaystyle \operatorname {SO} (n)} , the group GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is not simply connected (except when n = 1 {\displaystyle n=1} ), but rather has a fundamental group isomorphic to Z {\displaystyle \mathbb {Z} } for n = 2 {\displaystyle n=2} or Z 2 {\displaystyle \mathbb {Z} _{2}} for n > 2 {\displaystyle n>2} . === Complex case === The general linear group over the field of complex numbers, GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} , is a complex Lie group of complex dimension n 2 {\displaystyle n^{2}} . As a real Lie group (through realification) it has dimension 2 n 2 {\displaystyle 2n^{2}} . 
The set of all real matrices forms a real Lie subgroup. These correspond to the inclusions GL ( n , R ) < GL ( n , C ) < GL ( 2 n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )<\operatorname {GL} (n,\mathbb {C} )<\operatorname {GL} (2n,\mathbb {R} )} , which have real dimensions n 2 {\displaystyle n^{2}} , 2 n 2 {\displaystyle 2n^{2}} , and ( 2 n ) 2 = 4 n 2 {\displaystyle (2n)^{2}=4n^{2}} . Complex n {\displaystyle n} -dimensional matrices can be characterized as real 2 n {\displaystyle 2n} -dimensional matrices that preserve a linear complex structure; that is, matrices that commute with a matrix J {\displaystyle J} such that J 2 = − I {\displaystyle J^{2}=-I} , where J {\displaystyle J} corresponds to multiplying by the imaginary unit i {\displaystyle i} . The Lie algebra corresponding to GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} consists of all n × n {\displaystyle n\times n} complex matrices with the commutator serving as the Lie bracket. Unlike the real case, GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is connected. This follows, in part, since the multiplicative group of complex numbers C × {\displaystyle \mathbb {C} ^{\times }} is connected. The group manifold GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is not compact; rather its maximal compact subgroup is the unitary group U ( n ) {\displaystyle \operatorname {U} (n)} . As for U ( n ) {\displaystyle \operatorname {U} (n)} , the group manifold GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is not simply connected but has a fundamental group isomorphic to Z {\displaystyle \mathbb {Z} } . == Over finite fields == If F {\displaystyle F} is a finite field with q {\displaystyle q} elements, then we sometimes write GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} instead of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . 
When p is prime, GL ( n , p ) {\displaystyle \operatorname {GL} (n,p)} is the outer automorphism group of the group Z p n {\displaystyle \mathbb {Z} _{p}^{n}} , and also the automorphism group, because Z p n {\displaystyle \mathbb {Z} _{p}^{n}} is abelian, so the inner automorphism group is trivial. The order of GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} is: ∏ k = 0 n − 1 ( q n − q k ) = ( q n − 1 ) ( q n − q ) ( q n − q 2 ) ⋯ ( q n − q n − 1 ) . {\displaystyle \prod _{k=0}^{n-1}(q^{n}-q^{k})=(q^{n}-1)(q^{n}-q)(q^{n}-q^{2})\ \cdots \ (q^{n}-q^{n-1}).} This can be shown by counting the possible columns of the matrix: the first column can be anything but the zero vector; the second column can be anything but the multiples of the first column; and in general, the k {\displaystyle k} th column can be any vector not in the linear span of the first k − 1 {\displaystyle k-1} columns. In q-analog notation, this is [ n ] q ! ( q − 1 ) n q ( n 2 ) {\displaystyle [n]_{q}!(q-1)^{n}q^{n \choose 2}} . For example, GL(3, 2) has order (8 − 1)(8 − 2)(8 − 4) = 168. It is the automorphism group of the Fano plane and of the group Z 2 3 {\displaystyle \mathbb {Z} _{2}^{3}} . This group is also isomorphic to PSL(2, 7). More generally, one can count points of Grassmannian over F {\displaystyle F} : in other words the number of subspaces of a given dimension k {\displaystyle k} . This requires only finding the order of the stabilizer subgroup of one such subspace and dividing into the formula just given, by the orbit-stabilizer theorem. These formulas are connected to the Schubert decomposition of the Grassmannian, and are q-analogs of the Betti numbers of complex Grassmannians. This was one of the clues leading to the Weil conjectures. Note that in the limit q → 1 {\displaystyle q\to 1} the order of GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} goes to 0! 
– but under the correct procedure (dividing by ( q − 1 ) n {\displaystyle (q-1)^{n}} ) we see that it is the order of the symmetric group (see Lorscheid's article). In the philosophy of the field with one element, one thus interprets the symmetric group as the general linear group over the field with one element: S n ≅ GL ( n , 1 ) {\displaystyle S_{n}\cong \operatorname {GL} (n,1)} . === History === The general linear group over a prime field, GL ( ν , p ) {\displaystyle \operatorname {GL} (\nu ,p)} , was constructed and its order computed by Évariste Galois in 1832, in his last letter (to Chevalier) and second (of three) attached manuscripts, which he used in the context of studying the Galois group of the general equation of order p ν {\displaystyle p^{\nu }} . == Special linear group == The special linear group, SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} , is the group of all matrices with determinant 1. These matrices are special in that they lie on a subvariety: they satisfy a polynomial equation (as the determinant is a polynomial in the entries). Matrices of this type form a group as the determinant of the product of two matrices is the product of the determinants of each matrix. If we write F × {\displaystyle F^{\times }} for the multiplicative group of F {\displaystyle F} (that is, F {\displaystyle F} excluding 0), then the determinant is a group homomorphism det : GL ( n , F ) → F × {\displaystyle \det :\operatorname {GL} (n,F)\to F^{\times }} that is surjective and its kernel is the special linear group. Thus, SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is a normal subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} , and by the first isomorphism theorem, GL ( n , F ) / SL ( n , F ) {\displaystyle \operatorname {GL} (n,F)/\operatorname {SL} (n,F)} is isomorphic to F × {\displaystyle F^{\times }} . 
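The kernel–quotient picture can be verified by brute force over a small field; the sketch below (illustrative) tallies all 2 × 2 matrices over F 3 {\displaystyle \mathbb {F} _{3}} by determinant.

```python
from itertools import product

# det: GL(2,3) -> F_3^x = {1, 2} is a surjective homomorphism with kernel
# SL(2,3), so |GL(2,3)| = |SL(2,3)| * |F_3^x|; also |GL(2,3)| = (9-1)(9-3) = 48.
dets = {}
for m in product(range(3), repeat=4):            # all 2x2 matrices over F_3
    d = (m[0] * m[3] - m[1] * m[2]) % 3
    dets[d] = dets.get(d, 0) + 1

gl = dets[1] + dets[2]                            # invertible: det is a unit
print(gl, dets[1], gl // dets[1])                 # 48 24 2  (|GL|, |SL|, index)
```

The fibers of the determinant over 1 and 2 have equal size, as they must for the cosets of a normal subgroup.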
In fact, GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} can be written as a semidirect product: GL ( n , F ) = SL ( n , F ) ⋊ F × {\displaystyle \operatorname {GL} (n,F)=\operatorname {SL} (n,F)\rtimes F^{\times }} . The special linear group is also the derived group (also known as commutator subgroup) of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} (for a field or a division ring F {\displaystyle F} ), provided that n ≠ 2 {\displaystyle n\neq 2} or F {\displaystyle F} is not the field with two elements. When F {\displaystyle F} is R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } , SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is a Lie subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} of dimension n 2 − 1 {\displaystyle n^{2}-1} . The Lie algebra of SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} consists of all n × n {\displaystyle n\times n} matrices over F {\displaystyle F} with vanishing trace. The Lie bracket is given by the commutator. The special linear group SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} can be characterized as the group of volume and orientation-preserving linear transformations of R n {\displaystyle \mathbb {R} ^{n}} . The group SL ( n , C ) {\displaystyle \operatorname {SL} (n,\mathbb {C} )} is simply connected, while SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} is not. SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} has the same fundamental group as GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} , that is, Z {\displaystyle \mathbb {Z} } for n = 2 {\displaystyle n=2} and Z 2 {\displaystyle \mathbb {Z} _{2}} for n > 2 {\displaystyle n>2} . == Other subgroups == === Diagonal subgroups === The set of all invertible diagonal matrices forms a subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} isomorphic to ( F × ) n {\displaystyle (F^{\times })^{n}} . 
In fields like R {\displaystyle \mathbb {R} } and C {\displaystyle \mathbb {C} } , these correspond to rescaling the space; the so-called dilations and contractions. A scalar matrix is a diagonal matrix which is a constant times the identity matrix. The set of all nonzero scalar matrices forms a subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} isomorphic to F × {\displaystyle F^{\times }} . This group is the center of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . In particular, it is a normal, abelian subgroup. The center of SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is simply the set of all scalar matrices with unit determinant, and is isomorphic to the group of n {\displaystyle n} th roots of unity in the field F {\displaystyle F} . === Classical groups === The so-called classical groups are subgroups of GL ( V ) {\displaystyle \operatorname {GL} (V)} which preserve some sort of bilinear form on a vector space V {\displaystyle V} . These include the orthogonal group, O ( V ) {\displaystyle \operatorname {O} (V)} , which preserves a non-degenerate quadratic form on V {\displaystyle V} , symplectic group, Sp ( V ) {\displaystyle \operatorname {Sp} (V)} , which preserves a symplectic form on V {\displaystyle V} (a non-degenerate alternating form), unitary group, U ( V ) {\displaystyle \operatorname {U} (V)} , which, when F = C {\displaystyle F=\mathbb {C} } , preserves a non-degenerate hermitian form on V {\displaystyle V} . These groups provide important examples of Lie groups. 
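As a small illustration of the form-preserving condition defining these groups, the sketch below (NumPy assumed, names illustrative) checks that a rotation in O ( 3 ) {\displaystyle \operatorname {O} (3)} preserves the standard bilinear form.

```python
import numpy as np

# A rotation about the z-axis lies in O(3) < GL(3, R) and preserves the
# standard bilinear form <u, v> = u . v.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rng = np.random.default_rng(3)
u, v = rng.standard_normal(3), rng.standard_normal(3)
print(np.allclose(Q.T @ Q, np.eye(3)))          # True: Q is orthogonal
print(np.isclose((Q @ u) @ (Q @ v), u @ v))     # True: <Qu, Qv> = <u, v>
```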
== Related groups and monoids == === Projective linear group === The projective linear group PGL ( n , F ) {\displaystyle \operatorname {PGL} (n,F)} and the projective special linear group PSL ( n , F ) {\displaystyle \operatorname {PSL} (n,F)} are the quotients of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} and SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} by their centers (which consist of the multiples of the identity matrix therein); they are the induced action on the associated projective space. === Affine group === The affine group Aff ( n , F ) {\displaystyle \operatorname {Aff} (n,F)} is an extension of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} by the group of translations in F n {\displaystyle F^{n}} . It can be written as a semidirect product: Aff ( n , F ) = GL ( n , F ) ⋉ F n {\displaystyle \operatorname {Aff} (n,F)=\operatorname {GL} (n,F)\ltimes F^{n}} where GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} acts on F n {\displaystyle F^{n}} in the natural manner. The affine group can be viewed as the group of all affine transformations of the affine space underlying the vector space F n {\displaystyle F^{n}} . One has analogous constructions for other subgroups of the general linear group: for instance, the special affine group is the subgroup defined by the semidirect product, SL ( n , F ) ⋉ F n {\displaystyle \operatorname {SL} (n,F)\ltimes F^{n}} , and the Poincaré group is the affine group associated to the Lorentz group, O ( 1 , 3 , F ) ⋉ F n {\displaystyle \operatorname {O} (1,3,F)\ltimes F^{n}} . === General semilinear group === The general semilinear group Γ L ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)} is the group of all invertible semilinear transformations, and contains GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . A semilinear transformation is a transformation which is linear “up to a twist”, meaning “up to a field automorphism under scalar multiplication”. 
It can be written as a semidirect product: Γ L ( n , F ) = Gal ( F ) ⋉ GL ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)=\operatorname {Gal} (F)\ltimes \operatorname {GL} (n,F)} where Gal ( F ) {\displaystyle \operatorname {Gal} (F)} is the Galois group of F {\displaystyle F} (over its prime field), which acts on GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} by the Galois action on the entries. The main interest of Γ L ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)} is that the associated projective semilinear group P Γ L ( n , F ) {\displaystyle \operatorname {P\Gamma L} (n,F)} , which contains PGL ( n , F ) {\displaystyle \operatorname {PGL} (n,F)} , is the collineation group of projective space, for n > 2 {\displaystyle n>2} , and thus semilinear maps are of interest in projective geometry. === Full linear monoid === If one removes the restriction of the determinant being non-zero, the resulting algebraic structure is a monoid, usually called the full linear monoid, but occasionally also full linear semigroup, general linear monoid etc. It is actually a regular semigroup. == Infinite general linear group == The infinite general linear group or stable general linear group is the direct limit of the inclusions GL ( n , F ) → GL ( n + 1 , F ) {\displaystyle \operatorname {GL} (n,F)\to \operatorname {GL} (n+1,F)} as the upper left block matrix. It is denoted by either GL ( F ) {\displaystyle \operatorname {GL} (F)} or GL ( ∞ , F ) {\displaystyle \operatorname {GL} (\infty ,F)} , and can also be interpreted as invertible infinite matrices which differ from the identity matrix in only finitely many places. 
It is used in algebraic K-theory to define K1, and over the reals has a well-understood topology, thanks to Bott periodicity. It should not be confused with the space of (bounded) invertible operators on a Hilbert space, which is a larger group, and topologically much simpler, namely contractible – see Kuiper's theorem. == See also == List of finite simple groups SL2(R) Representation theory of SL2(R) Representations of classical Lie groups == Notes == == References == == External links == "General linear group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "GL(2, p) and GL(3, 3) Acting on Points" by Ed Pegg, Jr., Wolfram Demonstrations Project, 2007.
|
Wikipedia:Generality of algebra#0
|
In the history of mathematics, the generality of algebra was a phrase used by Augustin-Louis Cauchy to describe a method of argument that was used in the 18th century by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange, particularly in manipulating infinite series. According to Koetsier, the generality of algebra principle assumed, roughly, that the algebraic rules that hold for a certain class of expressions can be extended to hold more generally on a larger class of objects, even if the rules are no longer obviously valid. As a consequence, 18th century mathematicians believed that they could derive meaningful results by applying the usual rules of algebra and calculus that hold for finite expansions even when manipulating infinite expansions. In works such as Cours d'Analyse, Cauchy rejected the use of "generality of algebra" methods and sought a more rigorous foundation for mathematical analysis. == Example == An example is Euler's derivation of the series π − x 2 = ∑ k = 1 ∞ sin ⁡ ( k x ) k ( 1 ) {\displaystyle {\frac {\pi -x}{2}}=\sum _{k=1}^{\infty }{\frac {\sin(kx)}{k}}\qquad (1)} for 0 < x < π {\displaystyle 0<x<\pi } . He first evaluated the identity r cos ⁡ x − r 2 1 − 2 r cos ⁡ x + r 2 = ∑ k = 1 ∞ r k cos ⁡ ( k x ) ( 2 ) {\displaystyle {\frac {r\cos x-r^{2}}{1-2r\cos x+r^{2}}}=\sum _{k=1}^{\infty }r^{k}\cos(kx)\qquad (2)} at r = 1 {\displaystyle r=1} to obtain − 1 2 = ∑ k = 1 ∞ cos ⁡ ( k x ) . ( 3 ) {\displaystyle -{\frac {1}{2}}=\sum _{k=1}^{\infty }\cos(kx)\qquad (3)} The infinite series on the right hand side of (3) diverges for all real x {\displaystyle x} . But nevertheless integrating this term-by-term gives (1), an identity which is known to be true by Fourier analysis. == See also == Principle of permanence Transfer principle == References ==
|
Wikipedia:Generalizations of Pauli matrices#0
|
In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized. == Multi-qubit Pauli matrices (Hermitian) == This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system (qubit) to multiple such systems. In particular, the generalized Pauli matrices for a group of N {\displaystyle N} qubits is just the set of matrices generated by all possible products of Pauli matrices on any of the qubits. The vector space of a single qubit is V 1 = C 2 {\displaystyle V_{1}=\mathbb {C} ^{2}} and the vector space of N {\displaystyle N} qubits is V N = ( C 2 ) ⊗ N ≅ C 2 N {\displaystyle V_{N}=\left(\mathbb {C} ^{2}\right)^{\otimes N}\cong \mathbb {C} ^{2^{N}}} . We use the tensor product notation σ a ( n ) = I ( 1 ) ⊗ ⋯ ⊗ I ( n − 1 ) ⊗ σ a ⊗ I ( n + 1 ) ⊗ ⋯ ⊗ I ( N ) , a = 1 , 2 , 3 {\displaystyle \sigma _{a}^{(n)}=I^{(1)}\otimes \dotsm \otimes I^{(n-1)}\otimes \sigma _{a}\otimes I^{(n+1)}\otimes \dotsm \otimes I^{(N)},\qquad a=1,2,3} to refer to the operator on V N {\displaystyle V_{N}} that acts as a Pauli matrix on the n {\displaystyle n} th qubit and the identity on all other qubits. We can also use a = 0 {\displaystyle a=0} for the identity, i.e., for any n {\displaystyle n} we use σ 0 ( n ) = ⨂ m = 1 N I ( m ) {\textstyle \sigma _{0}^{(n)}=\bigotimes _{m=1}^{N}I^{(m)}} . Then the multi-qubit Pauli matrices are all matrices of the form σ a → := ∏ n = 1 N σ a n ( n ) = σ a 1 ⊗ ⋯ ⊗ σ a N , a → = ( a 1 , … , a N ) ∈ { 0 , 1 , 2 , 3 } × N {\displaystyle \sigma _{\,{\vec {a}}}:=\prod _{n=1}^{N}\sigma _{a_{n}}^{(n)}=\sigma _{a_{1}}\otimes \dotsm \otimes \sigma _{a_{N}},\qquad {\vec {a}}=(a_{1},\ldots ,a_{N})\in \{0,1,2,3\}^{\times N}} , i.e., for a → {\displaystyle {\vec {a}}} a vector of integers between 0 and 3. 
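The tensor-product construction above is easy to sketch numerically. The following NumPy snippet (function and variable names are our own) builds each σ a→ as a Kronecker product of single-qubit Pauli matrices and checks the basic properties:

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices sigma_0, ..., sigma_3 (identity, X, Y, Z).
PAULI = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def multi_qubit_pauli(a):
    """Tensor product sigma_{a_1} x ... x sigma_{a_N} for a in {0,1,2,3}^N."""
    out = np.array([[1.0 + 0j]])
    for an in a:
        out = np.kron(out, PAULI[an])
    return out

N = 3
basis = [multi_qubit_pauli(a) for a in itertools.product(range(4), repeat=N)]
print(len(basis))  # 4^N = 64 matrices, each of size 2^N x 2^N

# Each is Hermitian, and distinct ones are trace-orthogonal: Tr(P Q) = 0,
# while Tr(P P) = 2^N.
P, Q = basis[1], basis[2]
assert np.allclose(P, P.conj().T)
assert abs(np.trace(P @ Q)) < 1e-12
assert np.isclose(np.trace(P @ P).real, 2**N)
```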
Thus there are 4 N {\displaystyle 4^{N}} such generalized Pauli matrices if we include the identity I = ⨂ m = 1 N I ( m ) {\textstyle I=\bigotimes _{m=1}^{N}I^{(m)}} and 4 N − 1 {\displaystyle 4^{N}-1} if we do not. === Notations === In quantum computation, it is conventional to denote the Pauli matrices with single upper case letters I ≡ σ 0 , X ≡ σ 1 , Y ≡ σ 2 , Z ≡ σ 3 . {\displaystyle I\equiv \sigma _{0},\qquad X\equiv \sigma _{1},\qquad Y\equiv \sigma _{2},\qquad Z\equiv \sigma _{3}.} This allows subscripts on Pauli matrices to indicate the qubit index. For example, in a system with 3 qubits, X 1 ≡ X ⊗ I ⊗ I , Z 2 ≡ I ⊗ Z ⊗ I . {\displaystyle X_{1}\equiv X\otimes I\otimes I,\qquad Z_{2}\equiv I\otimes Z\otimes I.} Multi-qubit Pauli matrices can be written as products of single-qubit Paulis on disjoint qubits. Alternatively, when it is clear from context, the tensor product symbol ⊗ {\displaystyle \otimes } can be omitted, i.e. unsubscripted Pauli matrices written consecutively represent a tensor product rather than a matrix product. For example: X Z I ≡ X 1 Z 2 = X ⊗ Z ⊗ I . {\displaystyle XZI\equiv X_{1}Z_{2}=X\otimes Z\otimes I.} == Higher spin matrices (Hermitian) == The traditional Pauli matrices are the matrix representation of the s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} Lie algebra generators J x {\displaystyle J_{x}} , J y {\displaystyle J_{y}} , and J z {\displaystyle J_{z}} in the 2-dimensional irreducible representation of SU(2), corresponding to a spin-1/2 particle. These generate the Lie group SU(2). For a general particle of spin s = 0 , 1 / 2 , 1 , 3 / 2 , 2 , … {\displaystyle s=0,1/2,1,3/2,2,\ldots } , one instead utilizes the 2 s + 1 {\displaystyle 2s+1} -dimensional irreducible representation. 
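The ( 2 s + 1 ) {\displaystyle (2s+1)} -dimensional representation can be written down explicitly via the standard ladder-operator formulas (a textbook construction not spelled out in this article; the sketch below and its names are our own):

```python
import numpy as np

def spin_matrices(s):
    """Spin-s angular momentum matrices Jx, Jy, Jz (in units hbar = 1) in the
    basis |s, m>, m = s, s-1, ..., -s, built from the standard ladder
    operators: J+ |s,m> = sqrt(s(s+1) - m(m+1)) |s,m+1>."""
    dim = int(round(2 * s)) + 1
    m = s - np.arange(dim)                     # magnetic quantum numbers
    jz = np.diag(m).astype(complex)
    jplus = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):                    # J+ raises m[k] to m[k-1]
        jplus[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    jminus = jplus.conj().T
    jx = (jplus + jminus) / 2
    jy = (jplus - jminus) / (2j)
    return jx, jy, jz

# For s = 1/2 these are the Pauli matrices halved: J_a = sigma_a / 2.
jx, jy, jz = spin_matrices(0.5)
assert np.allclose(2 * jx, [[0, 1], [1, 0]])

# For any s they satisfy the su(2) commutation relation [Jx, Jy] = i Jz.
jx, jy, jz = spin_matrices(1.5)
assert np.allclose(jx @ jy - jy @ jx, 1j * jz)
```

The same function yields Hermitian generators for every spin, so it is a quick way to inspect the higher-dimensional analogues of the Pauli matrices.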
== Generalized Gell-Mann matrices (Hermitian) == This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits) to 3-level systems (Gell-Mann matrices acting on qutrits) and generic d {\displaystyle d} -level systems (generalized Gell-Mann matrices acting on qudits). === Construction === Let E j k {\displaystyle E_{jk}} be the matrix with 1 in the jk-th entry and 0 elsewhere. Consider the space of d × d {\displaystyle d\times d} complex matrices, C d × d {\displaystyle \mathbb {C} ^{d\times d}} , for a fixed d {\displaystyle d} . Define the following matrices, f k , j d = { E k j + E j k for k < j , − i ( E j k − E k j ) for k > j . {\displaystyle f_{k,j}^{\,\,\,\,\,d}={\begin{cases}E_{kj}+E_{jk}&{\text{for }}k<j,\\-i(E_{jk}-E_{kj})&{\text{for }}k>j.\end{cases}}} and h k d = { I d for k = 1 , h k d − 1 ⊕ 0 for 1 < k < d , 2 d ( d − 1 ) ( h 1 d − 1 ⊕ ( 1 − d ) ) = 2 d ( d − 1 ) ( I d − 1 ⊕ ( 1 − d ) ) for k = d {\displaystyle h_{k}^{\,\,\,d}={\begin{cases}I_{d}&{\text{for }}k=1,\\h_{k}^{\,\,\,d-1}\oplus 0&{\text{for }}1<k<d,\\{\sqrt {\tfrac {2}{d(d-1)}}}\left(h_{1}^{d-1}\oplus (1-d)\right)={\sqrt {\tfrac {2}{d(d-1)}}}\left(I_{d-1}\oplus (1-d)\right)&{\text{for }}k=d\end{cases}}} The collection of matrices defined above without the identity matrix are called the generalized Gell-Mann matrices, in dimension d {\displaystyle d} . The symbol ⊕ (utilized in the Cartan subalgebra above) means matrix direct sum. The generalized Gell-Mann matrices are Hermitian and traceless by construction, just like the Pauli matrices. One can also check that they are orthogonal in the Hilbert–Schmidt inner product on C d × d {\displaystyle \mathbb {C} ^{d\times d}} . By dimension count, one sees that they span the vector space of d × d {\displaystyle d\times d} complex matrices, g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} . 
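The construction above can be checked numerically. The sketch below (our own code) uses an equivalent closed form for the diagonal h {\displaystyle h} -matrices rather than the recursion in the text, and verifies Hermiticity, tracelessness, and Hilbert–Schmidt orthogonality; for d = 3 {\displaystyle d=3} it recovers the eight Gell-Mann matrices up to ordering:

```python
import itertools
import numpy as np

def gell_mann(d):
    """Generalized Gell-Mann matrices in dimension d: symmetric and
    antisymmetric off-diagonal matrices plus d-1 diagonal (Cartan) matrices,
    all with Hilbert-Schmidt norm squared equal to 2."""
    def E(j, k):
        M = np.zeros((d, d), dtype=complex)
        M[j, k] = 1
        return M

    mats = []
    for k in range(d):
        for j in range(k + 1, d):
            mats.append(E(k, j) + E(j, k))            # symmetric, k < j
            mats.append(-1j * (E(j, k) - E(k, j)))    # antisymmetric, k > j
    for k in range(1, d):                             # diagonal matrices:
        diag = np.zeros(d)                            # diag(1,...,1,-k,0,...)
        diag[:k] = 1
        diag[k] = -k
        mats.append(np.sqrt(2 / (k * (k + 1))) * np.diag(diag).astype(complex))
    return mats

G = gell_mann(3)                 # the eight Gell-Mann matrices for d = 3
assert len(G) == 3**2 - 1
for A in G:
    assert np.allclose(A, A.conj().T)          # Hermitian
    assert abs(np.trace(A)) < 1e-12            # traceless
for A, B in itertools.combinations(G, 2):      # Hilbert-Schmidt orthogonal
    assert abs(np.trace(A.conj().T @ B)) < 1e-12
```

Since there are d 2 − 1 {\displaystyle d^{2}-1} of them and they are orthogonal, together with the identity they span the d 2 {\displaystyle d^{2}} -dimensional space of d × d {\displaystyle d\times d} matrices, as stated above.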
They then provide a Lie-algebra-generator basis acting on the fundamental representation of s u ( d ) {\displaystyle {\mathfrak {su}}(d)} . In dimensions d {\displaystyle d} = 2 and 3, the above construction recovers the Pauli and Gell-Mann matrices, respectively. == Sylvester's generalized Pauli matrices (non-Hermitian) == A particularly notable generalization of the Pauli matrices was constructed by James Joseph Sylvester in 1882. These are known as "Weyl–Heisenberg matrices" as well as "generalized Pauli matrices". === Framing === The Pauli matrices σ 1 {\displaystyle \sigma _{1}} and σ 3 {\displaystyle \sigma _{3}} satisfy the following: σ 1 2 = σ 3 2 = I , σ 1 σ 3 = − σ 3 σ 1 = e π i σ 3 σ 1 . {\displaystyle \sigma _{1}^{2}=\sigma _{3}^{2}=I,\quad \sigma _{1}\sigma _{3}=-\sigma _{3}\sigma _{1}=e^{\pi i}\sigma _{3}\sigma _{1}.} The so-called Walsh–Hadamard conjugation matrix is W = 1 2 [ 1 1 1 − 1 ] . {\displaystyle W={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}.} Like the Pauli matrices, W {\displaystyle W} is both Hermitian and unitary. σ 1 , σ 3 {\displaystyle \sigma _{1},\;\sigma _{3}} and W {\displaystyle W} satisfy the relation σ 1 = W σ 3 W ∗ . {\displaystyle \;\sigma _{1}=W\sigma _{3}W^{*}.} The goal now is to extend the above to higher dimensions, d {\displaystyle d} . === Construction: The clock and shift matrices === Fix the dimension d {\displaystyle d} as before. Let ω = exp ( 2 π i / d ) {\displaystyle \omega =\exp(2\pi i/d)} , a root of unity. Since ω d = 1 {\displaystyle \omega ^{d}=1} and ω ≠ 1 {\displaystyle \omega \neq 1} , the sum of all roots annuls: 1 + ω + ⋯ + ω d − 1 = 0. {\displaystyle 1+\omega +\cdots +\omega ^{d-1}=0.} Integer indices may then be cyclically identified mod d. 
Now define, with Sylvester, the shift matrix Σ 1 = [ 0 0 0 ⋯ 0 1 1 0 0 ⋯ 0 0 0 1 0 ⋯ 0 0 0 0 1 ⋯ 0 0 ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ 0 0 0 ⋯ 1 0 ] {\displaystyle \Sigma _{1}={\begin{bmatrix}0&0&0&\cdots &0&1\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\0&0&1&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&0\\\end{bmatrix}}} and the clock matrix, Σ 3 = [ 1 0 0 ⋯ 0 0 ω 0 ⋯ 0 0 0 ω 2 ⋯ 0 ⋮ ⋮ ⋮ ⋱ ⋮ 0 0 0 ⋯ ω d − 1 ] . {\displaystyle \Sigma _{3}={\begin{bmatrix}1&0&0&\cdots &0\\0&\omega &0&\cdots &0\\0&0&\omega ^{2}&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &\omega ^{d-1}\end{bmatrix}}.} These matrices generalize σ 1 {\displaystyle \sigma _{1}} and σ 3 {\displaystyle \sigma _{3}} , respectively. Note that the unitarity and tracelessness of the two Pauli matrices is preserved, but not Hermiticity in dimensions higher than two. Since Pauli matrices describe quaternions, Sylvester dubbed the higher-dimensional analogs "nonions", "sedenions", etc. These two matrices are also the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces as formulated by Hermann Weyl, and they find routine applications in numerous areas of mathematical physics. The clock matrix amounts to the exponential of position in a "clock" of d {\displaystyle d} hours, and the shift matrix is just the translation operator in that cyclic vector space, so the exponential of the momentum. They are (finite-dimensional) representations of the corresponding elements of the Weyl-Heisenberg group on a d {\displaystyle d} -dimensional Hilbert space. The following relations echo and generalize those of the Pauli matrices: Σ 1 d = Σ 3 d = I {\displaystyle \Sigma _{1}^{d}=\Sigma _{3}^{d}=I} and the braiding relation, Σ 3 Σ 1 = ω Σ 1 Σ 3 = e 2 π i / d Σ 1 Σ 3 , {\displaystyle \Sigma _{3}\Sigma _{1}=\omega \Sigma _{1}\Sigma _{3}=e^{2\pi i/d}\Sigma _{1}\Sigma _{3},} the Weyl formulation of the CCR, and can be rewritten as Σ 3 Σ 1 Σ 3 d − 1 Σ 1 d − 1 = ω . 
{\displaystyle \Sigma _{3}\Sigma _{1}\Sigma _{3}^{d-1}\Sigma _{1}^{d-1}=\omega ~.} On the other hand, to generalize the Walsh–Hadamard matrix W {\displaystyle W} , note W = 1 2 [ 1 1 1 ω 2 − 1 ] = 1 2 [ 1 1 1 ω d − 1 ] . {\displaystyle W={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\\1&\omega ^{2-1}\end{bmatrix}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\\1&\omega ^{d-1}\end{bmatrix}}.} Define, again with Sylvester, the following analog matrix, still denoted by W {\displaystyle W} in a slight abuse of notation, W = 1 d [ 1 1 1 ⋯ 1 1 ω d − 1 ω 2 ( d − 1 ) ⋯ ω ( d − 1 ) 2 1 ω d − 2 ω 2 ( d − 2 ) ⋯ ω ( d − 1 ) ( d − 2 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 ω ω 2 ⋯ ω d − 1 ] . {\displaystyle W={\frac {1}{\sqrt {d}}}{\begin{bmatrix}1&1&1&\cdots &1\\1&\omega ^{d-1}&\omega ^{2(d-1)}&\cdots &\omega ^{(d-1)^{2}}\\1&\omega ^{d-2}&\omega ^{2(d-2)}&\cdots &\omega ^{(d-1)(d-2)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\omega &\omega ^{2}&\cdots &\omega ^{d-1}\end{bmatrix}}~.} It is evident that W {\displaystyle W} is no longer Hermitian, but is still unitary. Direct calculation yields Σ 1 = W Σ 3 W ∗ , {\displaystyle \Sigma _{1}=W\Sigma _{3}W^{*}~,} which is the desired analog result. Thus, W {\displaystyle W} , a Vandermonde matrix, arrays the eigenvectors of Σ 1 {\displaystyle \Sigma _{1}} , which has the same eigenvalues as Σ 3 {\displaystyle \Sigma _{3}} . When d = 2 k {\displaystyle d=2^{k}} , W ∗ {\displaystyle W^{*}} is precisely the discrete Fourier transform matrix, converting position coordinates to momentum coordinates and vice versa. 
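The defining relations of the clock and shift matrices, and the conjugation Σ 1 = W Σ 3 W ∗ {\displaystyle \Sigma _{1}=W\Sigma _{3}W^{*}} , can be verified directly; a minimal NumPy sketch (our own code, with W [ j , k ] = ω − j k / d {\displaystyle W[j,k]=\omega ^{-jk}/{\sqrt {d}}} matching the Vandermonde matrix displayed above):

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Shift matrix Sigma_1 (cyclic permutation of basis vectors) and clock
# matrix Sigma_3 (diagonal powers of omega).
S1 = np.roll(np.eye(d), 1, axis=0).astype(complex)   # S1[(i+1) % d, i] = 1
S3 = np.diag(omega ** np.arange(d))

# Sigma_1^d = Sigma_3^d = I, and the braiding relation S3 S1 = omega S1 S3.
assert np.allclose(np.linalg.matrix_power(S1, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(S3, d), np.eye(d))
assert np.allclose(S3 @ S1, omega * S1 @ S3)

# The Vandermonde matrix W arrays the eigenvectors of Sigma_1 and
# conjugates Sigma_3 into Sigma_1.
j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
W = omega ** (-j * k) / np.sqrt(d)
assert np.allclose(W @ W.conj().T, np.eye(d))        # W is unitary
assert np.allclose(S1, W @ S3 @ W.conj().T)          # Sigma_1 = W Sigma_3 W*
```

Note that Σ 1 {\displaystyle \Sigma _{1}} and Σ 3 {\displaystyle \Sigma _{3}} are unitary but, for d > 2 {\displaystyle d>2} , not Hermitian, as the text observes.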
=== Definition === The complete family of d 2 {\displaystyle d^{2}} unitary (but non-Hermitian) independent matrices { σ k , j } k , j = 1 d {\displaystyle \{\sigma _{k,j}\}_{k,j=1}^{d}} is defined as follows: This provides Sylvester's well-known trace-orthogonal basis for g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} , known as "nonions" g l ( 3 , C ) {\displaystyle {\mathfrak {gl}}(3,\mathbb {C} )} , "sedenions" g l ( 4 , C ) {\displaystyle {\mathfrak {gl}}(4,\mathbb {C} )} , etc... This basis can be systematically connected to the above Hermitian basis. (For instance, the powers of Σ 3 {\displaystyle \Sigma _{3}} , the Cartan subalgebra, map to linear combinations of the h k d {\displaystyle h_{k}^{\,\,\,d}} matrices.) It can further be used to identify g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} , as d → ∞ {\displaystyle d\to \infty } , with the algebra of Poisson brackets. === Properties === With respect to the Hilbert–Schmidt inner product on operators, ⟨ A , B ⟩ HS = Tr ( A ∗ B ) {\displaystyle \langle A,B\rangle _{\text{HS}}=\operatorname {Tr} (A^{*}B)} , Sylvester's generalized Pauli operators are orthogonal and normalized to d {\displaystyle {\sqrt {d}}} : ⟨ σ k , j , σ k ′ , j ′ ⟩ HS = δ k k ′ δ j j ′ ‖ σ k , j ‖ HS 2 = d δ k k ′ δ j j ′ {\displaystyle \langle \sigma _{k,j},\sigma _{k',j'}\rangle _{\text{HS}}=\delta _{kk'}\delta _{jj'}\|\sigma _{k,j}\|_{\text{HS}}^{2}=d\delta _{kk'}\delta _{jj'}} . This can be checked directly from the above definition of σ k , j {\displaystyle \sigma _{k,j}} . == See also == Heisenberg group § Heisenberg group modulo an odd prime p Hermitian matrix Bloch sphere Discrete Fourier transform Generalized Clifford algebra Weyl–Brauer matrices Circulant matrix Shift operator Quantum Fourier transform 3D rotation group § A note on Lie algebras == Notes ==
|
Wikipedia:Generalized Cohen–Macaulay ring#0
|
In mathematics, a Cohen–Macaulay ring is a commutative ring with some of the algebro-geometric properties of a smooth variety, such as local equidimensionality. Under mild assumptions, a local ring is Cohen–Macaulay exactly when it is a finitely generated free module over a regular local subring. Cohen–Macaulay rings play a central role in commutative algebra: they form a very broad class, and yet they are well understood in many ways. They are named for Francis Sowerby Macaulay (1916), who proved the unmixedness theorem for polynomial rings, and for Irvin Cohen (1946), who proved the unmixedness theorem for formal power series rings. All Cohen–Macaulay rings have the unmixedness property. For Noetherian local rings, there is the following chain of inclusions. Universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings == Definition == For a commutative Noetherian local ring R, a finite (i.e. finitely generated) R-module M ≠ 0 {\displaystyle M\neq 0} is a Cohen-Macaulay module if d e p t h ( M ) = d i m ( M ) {\displaystyle \mathrm {depth} (M)=\mathrm {dim} (M)} (in general we have: d e p t h ( M ) ≤ d i m ( M ) {\displaystyle \mathrm {depth} (M)\leq \mathrm {dim} (M)} , see Auslander–Buchsbaum formula for the relation between depth and dim of a certain kind of modules). On the other hand, R {\displaystyle R} is a module on itself, so we call R {\displaystyle R} a Cohen-Macaulay ring if it is a Cohen-Macaulay module as an R {\displaystyle R} -module. A maximal Cohen-Macaulay module is a Cohen-Macaulay module M such that d i m ( M ) = d i m ( R ) {\displaystyle \mathrm {dim} (M)=\mathrm {dim} (R)} . The above definition was for Noetherian local rings. 
But we can expand the definition for a more general Noetherian ring: If R {\displaystyle R} is a commutative Noetherian ring, then an R-module M is called Cohen–Macaulay module if M m {\displaystyle M_{\mathrm {m} }} is a Cohen-Macaulay module for all maximal ideals m ∈ S u p p ( M ) {\displaystyle \mathrm {m} \in \mathrm {Supp} (M)} . (This is a kind of circular definition unless we define zero modules as Cohen-Macaulay. So we define zero modules as Cohen-Macaulay modules in this definition.) Now, to define maximal Cohen-Macaulay modules for these rings, we require M m {\displaystyle M_{\mathrm {m} }} to be such an R m {\displaystyle R_{\mathrm {m} }} -module for each maximal ideal m {\displaystyle \mathrm {m} } of R. As in the local case, R is a Cohen-Macaulay ring if it is a Cohen-Macaulay module (as an R {\displaystyle R} -module on itself). == Examples == Noetherian rings of the following types are Cohen–Macaulay. Any regular local ring. This leads to various examples of Cohen–Macaulay rings, such as the integers Z {\displaystyle \mathbb {Z} } , or a polynomial ring K [ x 1 , … , x n ] {\displaystyle K[x_{1},\ldots ,x_{n}]} over a field K, or a power series ring K [ [ x 1 , … , x n ] ] {\displaystyle K[[x_{1},\ldots ,x_{n}]]} . In geometric terms, every regular scheme, for example a smooth variety over a field, is Cohen–Macaulay. Any 0-dimensional ring (or equivalently, any Artinian ring). Any 1-dimensional reduced ring, for example any 1-dimensional domain. Any 2-dimensional normal ring. Any Gorenstein ring. In particular, any complete intersection ring. The ring of invariants R G {\displaystyle R^{G}} when R is a Cohen–Macaulay algebra over a field of characteristic zero and G is a finite group (or more generally, a linear algebraic group whose identity component is reductive). This is the Hochster–Roberts theorem. Any determinantal ring. 
That is, let R be the quotient of a regular local ring S by the ideal I generated by the r × r minors of some p × q matrix of elements of S. If the codimension (or height) of I is equal to the "expected" codimension (p−r+1)(q−r+1), R is called a determinantal ring. In that case, R is Cohen–Macaulay. Similarly, coordinate rings of determinantal varieties are Cohen–Macaulay. Some more examples: The ring K[x]/(x²) has dimension 0 and hence is Cohen–Macaulay, but it is not reduced and therefore not regular. The subring K[t2, t3] of the polynomial ring K[t], or its localization or completion at t=0, is a 1-dimensional domain which is Gorenstein, and hence Cohen–Macaulay, but not regular. This ring can also be described as the coordinate ring of the cuspidal cubic curve y2 = x3 over K. The subring K[t3, t4, t5] of the polynomial ring K[t], or its localization or completion at t=0, is a 1-dimensional domain which is Cohen–Macaulay but not Gorenstein. Rational singularities over a field of characteristic zero are Cohen–Macaulay. Toric varieties over any field are Cohen–Macaulay. The minimal model program makes prominent use of varieties with klt (Kawamata log terminal) singularities; in characteristic zero, these are rational singularities and hence are Cohen–Macaulay. One successful analog of rational singularities in positive characteristic is the notion of F-rational singularities; again, such singularities are Cohen–Macaulay. Let X be a projective variety of dimension n ≥ 1 over a field, and let L be an ample line bundle on X. Then the section ring of L R = ⨁ j ≥ 0 H 0 ( X , L j ) {\displaystyle R=\bigoplus _{j\geq 0}H^{0}(X,L^{j})} is Cohen–Macaulay if and only if the cohomology group Hi(X, Lj) is zero for all 1 ≤ i ≤ n−1 and all integers j. It follows, for example, that the affine cone Spec R over an abelian variety X is Cohen–Macaulay when X has dimension 1, but not when X has dimension at least 2 (because H1(X, O) is not zero). 
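The claim that K[t3, t4, t5] is not Gorenstein while K[t2, t3] is can be tested combinatorially: by a theorem of Kunz (a standard fact not stated in this article), a numerical semigroup ring K[S] is Gorenstein exactly when the semigroup S is symmetric about its Frobenius number. A small sketch (our own code, with a crude but sufficient search bound for these examples):

```python
from itertools import product

def semigroup(gens, limit):
    """Elements of the numerical semigroup generated by gens, up to limit."""
    S = {0}
    changed = True
    while changed:
        changed = False
        for s, g in product(sorted(S), gens):
            if s + g <= limit and s + g not in S:
                S.add(s + g)
                changed = True
    return S

def is_symmetric(gens):
    """A numerical semigroup S with Frobenius number F is symmetric when
    x in S <=> F - x not in S; by Kunz's theorem this holds exactly when
    the semigroup ring K[S] is Gorenstein."""
    bound = max(gens) ** 2            # crude bound past the Frobenius number
    S = semigroup(gens, 2 * bound)
    F = max(x for x in range(bound) if x not in S)   # Frobenius number
    return all((x in S) != (F - x in S) for x in range(F + 1))

print(is_symmetric([2, 3]))    # K[t^2, t^3] is Gorenstein    -> True
print(is_symmetric([3, 4, 5]))  # K[t^3, t^4, t^5] is not     -> False
```

For <3, 4, 5> the gaps are {1, 2} and F = 2, but both 1 and F − 1 = 1 are gaps, so the semigroup is not symmetric, matching the non-Gorenstein claim above.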
See also Generalized Cohen–Macaulay ring. == Cohen–Macaulay schemes == We say that a locally Noetherian scheme X {\displaystyle X} is Cohen–Macaulay if at each point x ∈ X {\displaystyle x\in X} the local ring O X , x {\displaystyle {\mathcal {O}}_{X,x}} is Cohen–Macaulay. === Cohen–Macaulay curves === Cohen–Macaulay curves are a special case of Cohen–Macaulay schemes, but are useful for compactifying moduli spaces of curves where the boundary of the smooth locus M g {\displaystyle {\mathcal {M}}_{g}} is of Cohen–Macaulay curves. There is a useful criterion for deciding whether or not curves are Cohen–Macaulay. Schemes of dimension ≤ 1 {\displaystyle \leq 1} are Cohen–Macaulay if and only if they have no embedded primes. The singularities present in Cohen–Macaulay curves can be classified completely by looking at the plane curve case. ==== Non-examples ==== Using the criterion, there are easy examples of non-Cohen–Macaulay curves from constructing curves with embedded points. For example, the scheme X = Spec ( C [ x , y ] ( x 2 , x y ) ) {\displaystyle X={\text{Spec}}\left({\frac {\mathbb {C} [x,y]}{(x^{2},xy)}}\right)} has the decomposition into prime ideals ( x ) ⋅ ( x , y ) {\displaystyle (x)\cdot (x,y)} . Geometrically it is the y {\displaystyle y} -axis with an embedded point at the origin, which can be thought of as a fat point. Given a smooth projective plane curve C ⊂ P 2 {\displaystyle C\subset \mathbb {P} ^{2}} , a curve with an embedded point can be constructed using the same technique: find the ideal I x {\displaystyle I_{x}} of a point in x ∈ C {\displaystyle x\in C} and multiply it with the ideal I C {\displaystyle I_{C}} of C {\displaystyle C} . Then X = Proj ( C [ x , y , z ] I C ⋅ I x ) {\displaystyle X={\text{Proj}}\left({\frac {\mathbb {C} [x,y,z]}{I_{C}\cdot I_{x}}}\right)} is a curve with an embedded point at x {\displaystyle x} . === Intersection theory === Cohen–Macaulay schemes have a special relation with intersection theory. 
Precisely, let X be a smooth variety and V, W closed subschemes of pure dimension. Let Z be a proper component of the scheme-theoretic intersection V × X W {\displaystyle V\times _{X}W} , that is, an irreducible component of expected dimension. If the local ring A of V × X W {\displaystyle V\times _{X}W} at the generic point of Z is Cohen-Macaulay, then the intersection multiplicity of V and W along Z is given as the length of A: i ( Z , V ⋅ W , X ) = length ( A ) {\displaystyle i(Z,V\cdot W,X)=\operatorname {length} (A)} . In general, the fact that the multiplicity is given as a length essentially characterizes Cohen–Macaulay rings; see #Properties. The multiplicity-one criterion, on the other hand, roughly characterizes a regular local ring as a local ring of multiplicity one. === Example === For a simple example, if we take the intersection of a parabola with a line tangent to it, the local ring at the intersection point is isomorphic to C [ x , y ] ( y − x 2 ) ⊗ C [ x , y ] C [ x , y ] ( y ) ≅ C [ x ] ( x 2 ) {\displaystyle {\frac {\mathbb {C} [x,y]}{(y-x^{2})}}\otimes _{\mathbb {C} [x,y]}{\frac {\mathbb {C} [x,y]}{(y)}}\cong {\frac {\mathbb {C} [x]}{(x^{2})}}} which is Cohen–Macaulay of length two, hence the intersection multiplicity is two, as expected. == Miracle flatness or Hironaka's criterion == There is a remarkable characterization of Cohen–Macaulay rings, sometimes called miracle flatness or Hironaka's criterion. Let R be a local ring which is finitely generated as a module over some regular local ring A contained in R. Such a subring exists for any localization R at a prime ideal of a finitely generated algebra over a field, by the Noether normalization lemma; it also exists when R is complete and contains a field, or when R is a complete domain. Then R is Cohen–Macaulay if and only if it is flat as an A-module; it is also equivalent to say that R is free as an A-module. A geometric reformulation is as follows. 
Let X be a connected affine scheme of finite type over a field K (for example, an affine variety). Let n be the dimension of X. By Noether normalization, there is a finite morphism f from X to affine space An over K. Then X is Cohen–Macaulay if and only if all fibers of f have the same degree. It is striking that this property is independent of the choice of f. Finally, there is a version of Miracle Flatness for graded rings. Let R be a finitely generated commutative graded algebra over a field K, R = K ⊕ R 1 ⊕ R 2 ⊕ ⋯ . {\displaystyle R=K\oplus R_{1}\oplus R_{2}\oplus \cdots .} There is always a graded polynomial subring A ⊂ R (with generators in various degrees) such that R is finitely generated as an A-module. Then R is Cohen–Macaulay if and only if R is free as a graded A-module. Again, it follows that this freeness is independent of the choice of the polynomial subring A. == Properties == A Noetherian local ring is Cohen–Macaulay if and only if its completion is Cohen–Macaulay. If R is a Cohen–Macaulay ring, then the polynomial ring R[x] and the power series ring R[[x]] are Cohen–Macaulay. For a non-zero-divisor u in the maximal ideal of a Noetherian local ring R, R is Cohen–Macaulay if and only if R/(u) is Cohen–Macaulay. The quotient of a Cohen–Macaulay ring by any ideal is universally catenary. If R is a quotient of a Cohen–Macaulay ring, then the locus { p ∈ Spec R | Rp is Cohen–Macaulay } is an open subset of Spec R. Let (R, m, k) be a Noetherian local ring of embedding codimension c, meaning that c = dimk(m/m2) − dim(R). In geometric terms, this holds for a local ring of a subscheme of codimension c in a regular scheme. For c=1, R is Cohen–Macaulay if and only if it is a hypersurface ring. There is also a structure theorem for Cohen–Macaulay rings of codimension 2, the Hilbert–Burch theorem: they are all determinantal rings, defined by the r × r minors of an (r+1) × r matrix for some r. 
For a Noetherian local ring (R, m), the following are equivalent: R is Cohen–Macaulay. For every parameter ideal Q (an ideal generated by a system of parameters), length ( R / Q ) = e ( Q ) {\displaystyle \operatorname {length} (R/Q)=e(Q)} := the Hilbert–Samuel multiplicity of Q. For some parameter ideal Q, length ( R / Q ) = e ( Q ) {\displaystyle \operatorname {length} (R/Q)=e(Q)} . (See Generalized Cohen–Macaulay ring as well as Buchsbaum ring for rings that generalize this characterization.) == The unmixedness theorem == An ideal I of a Noetherian ring A is called unmixed in height if the height of I is equal to the height of every associated prime P of A/I. (This is stronger than saying that A/I is equidimensional; see below.) The unmixedness theorem is said to hold for the ring A if every ideal I generated by a number of elements equal to its height is unmixed. A Noetherian ring is Cohen–Macaulay if and only if the unmixedness theorem holds for it. The unmixedness theorem applies in particular to the zero ideal (an ideal generated by zero elements) and thus it says a Cohen–Macaulay ring is an equidimensional ring; in fact, in the strong sense: there is no embedded component and each component has the same codimension. See also: quasi-unmixed ring (a ring in which the unmixedness theorem holds for the integral closure of an ideal). == Counterexamples == If K is a field, then the ring R = K[x,y]/(x2,xy) (the coordinate ring of a line with an embedded point) is not Cohen–Macaulay. This follows, for example, by Miracle Flatness: R is finite over the polynomial ring A = K[y], with degree 1 over points of the affine line Spec A with y ≠ 0, but with degree 2 over the point y = 0 (because the K-vector space K[x]/(x2) has dimension 2). If K is a field, then the ring K[x,y,z]/(xy,xz) (the coordinate ring of the union of a line and a plane) is reduced, but not equidimensional, and hence not Cohen–Macaulay. Taking the quotient by the non-zero-divisor x−z gives the previous example. 
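The fiber-degree computation in the first counterexample can be reproduced symbolically. The sketch below (our own code) specializes the defining ideal of R = K[x,y]/(x², xy) at y = a and, since K[x] is a principal ideal domain, reads the fiber degree off the gcd of the resulting generators:

```python
import sympy as sp

x, y = sp.symbols("x y")
R_ideal = [x**2, x * y]   # R = K[x,y]/(x^2, x*y): a line with an embedded point

def fiber_degree(a):
    """Degree of the fiber of Spec R -> Spec K[y] over the point y = a,
    computed as dim_K K[x] / (generators with y set to a)."""
    gens = [g.subs(y, a) for g in R_ideal]               # ideal in K[x]
    gcd = sp.gcd_list([g for g in gens if g != 0], x)    # K[x] is a PID
    return sp.degree(gcd, x)

print(fiber_degree(1))   # generic fiber: (x^2, x) = (x), degree 1
print(fiber_degree(2))   # degree 1 again
print(fiber_degree(0))   # special fiber: (x^2, 0) = (x^2), degree 2
```

Since the degree jumps from 1 to 2 over y = 0, R is not free (hence not flat) over A = K[y], so by Miracle Flatness it is not Cohen–Macaulay, exactly as argued above.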
If K is a field, then the ring R = K[w,x,y,z]/(wy,wz,xy,xz) (the coordinate ring of the union of two planes meeting in a point) is reduced and equidimensional, but not Cohen–Macaulay. To prove that, one can use Hartshorne's connectedness theorem: if R is a Cohen–Macaulay local ring of dimension at least 2, then Spec R minus its closed point is connected. The Segre product of two Cohen-Macaulay rings need not be Cohen-Macaulay. == Grothendieck duality == One meaning of the Cohen–Macaulay condition can be seen in coherent duality theory. A variety or scheme X is Cohen–Macaulay if the "dualizing complex", which a priori lies in the derived category of sheaves on X, is represented by a single sheaf. The stronger property of being Gorenstein means that this sheaf is a line bundle. In particular, every regular scheme is Gorenstein. Thus the statements of duality theorems such as Serre duality or Grothendieck local duality for Gorenstein or Cohen–Macaulay schemes retain some of the simplicity of what happens for regular schemes or smooth varieties. == Notes == == References == Bruns, Winfried; Herzog, Jürgen (1993), Cohen–Macaulay Rings, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge University Press, ISBN 978-0-521-41068-7, MR 1251956 Cohen, I. S. (1946), "On the structure and ideal theory of complete local rings", Transactions of the American Mathematical Society, 59 (1): 54–106, doi:10.2307/1990313, ISSN 0002-9947, JSTOR 1990313, MR 0016094 Cohen's paper was written when "local ring" meant what is now called a "Noetherian local ring". V.I. Danilov (2001) [1994], "Cohen–Macaulay ring", Encyclopedia of Mathematics, EMS Press Eisenbud, David (1995), Commutative Algebra with a View toward Algebraic Geometry, Graduate Texts in Mathematics, vol. 
150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94268-1, MR 1322960 Fulton, William (1993), Introduction to Toric Varieties, Princeton University Press, doi:10.1515/9781400882526, ISBN 978-0-691-00049-7, MR 1234037 Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 2 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-62046-4, MR 1644323 Kollár, János; Mori, Shigefumi (1998), Birational Geometry of Algebraic Varieties, Cambridge University Press, doi:10.1017/CBO9780511662560, ISBN 0-521-63277-3, MR 1658959 Kollár, János (2013), Singularities of the Minimal Model Program, Cambridge University Press, doi:10.1017/CBO9781139547895, ISBN 978-1-107-03534-8, MR 3057950 Macaulay, F.S. (1916), The Algebraic Theory of Modular Systems, Cambridge University Press, doi:10.3792/chmm/1263317740, ISBN 1-4297-0441-1, MR 1281612 Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics (2nd ed.), Cambridge University Press, ISBN 978-0-521-36764-6, MR 0879273 Schwede, Karl; Tucker, Kevin (2012), "A survey of test ideals", Progress in Commutative Algebra 2, Berlin: Walter de Gruyter, pp. 39–99, arXiv:1104.2000, Bibcode:2011arXiv1104.2000S, MR 2932591 == External links == Examples of Cohen-Macaulay integral domains Examples of Cohen-Macaulay rings == See also == Ring theory Local rings Gorenstein local rings Wiles's proof of Fermat's Last Theorem
|
Wikipedia:Generalized Ozaki cost function#0
|
In economics, the generalized-Ozaki (GO) cost function is a general description of the cost of production proposed by Shinichiro Nakamura. The GO cost function is notable for explicitly considering nonhomothetic technology, where the proportions of inputs can vary as the output changes. This stands in contrast to the standard production model, which assumes homothetic technology. == The GO function == For a given output y {\displaystyle y} , at time t {\displaystyle t} and a vector of m {\displaystyle m} input prices p i {\displaystyle p_{i}} , the generalized-Ozaki (GO) cost function C ( ) {\displaystyle C()} is expressed as Here, b i j = b j i {\displaystyle b_{ij}=b_{ji}} and ∑ i b i j = 1 {\displaystyle \sum _{i}b_{ij}=1} , i , j = 1 , . . , m {\displaystyle i,j=1,..,m} . By applying Shephard's lemma, we derive the demand function for input i {\displaystyle i} , x i {\displaystyle x_{i}} : The GO cost function is flexible in the price space, and treats scale effects and technical change in a highly general manner. The concavity condition, which ensures that the cost function is consistent with cost minimization for a specific set of p {\displaystyle p} , necessitates that its Hessian (the matrix of second partial derivatives with respect to p i {\displaystyle p_{i}} and p j {\displaystyle p_{j}} ) be negative semidefinite. Several notable special cases can be identified: Homotheticity (HT): b y i = b y {\displaystyle b_{yi}=b_{y}} for all i {\displaystyle i} . All input levels ( x i {\displaystyle x_{i}} ) scale proportionally with the overall output level ( y {\displaystyle y} ). Homogeneity (of degree one) in output (HG): b y = 0 {\displaystyle b_{y}=0} in addition to HT. Factor limitationality (FL): b y i = 0 {\displaystyle b_{yi}=0} for all i {\displaystyle i} . None of the input levels ( x i {\displaystyle x_{i}} ) depend on p {\displaystyle p} . Neutral technical change (NT): b t i = b t {\displaystyle b_{ti}=b_{t}} for all i {\displaystyle i} . 
When (HT) holds, the GO function reduces to the Generalized Leontief function of Diewert, a well-known flexible functional form for cost and production functions. When (FL) holds, it reduces to a non-linear version of Leontief's model, which explains the cross-sectional variation of x i {\displaystyle x_{i}} when variation in input prices is negligible: == Background == === Cost and production functions === In economics, production technology is typically represented by the production function f {\displaystyle f} , which, in the case of a single output y {\displaystyle y} and m {\displaystyle m} inputs, is written as y = f ( x ) {\displaystyle y=f(x)} . When considering cost minimization for a given set of prices p {\displaystyle p} and y {\displaystyle y} , the corresponding cost function C ( p , y ) {\displaystyle C(p,y)} can be expressed as: The duality theorems of cost and production functions state that once a well-behaved cost function is established, one can derive the corresponding production function, and vice versa. For a given cost function C ( p , y ) {\displaystyle C(p,y)} , the corresponding production function f {\displaystyle f} can be obtained as (a more rigorous derivation involves using a distance function instead of a production function): In essence, under general conditions, a specific technology can be equally effectively represented by both cost and production functions. One advantage of using a cost function rather than a production function is that the demand functions for inputs can be easily derived from the former using Shephard's lemma, whereas this process can become cumbersome with the production function. === Homothetic and Nonhomothetic Technology === Commonly used forms of production functions, such as the Cobb-Douglas and Constant Elasticity of Substitution (CES) functions, exhibit homotheticity. 
This property means that the production function f {\displaystyle f} can be represented as a positive monotone transformation of a linear-homogeneous function h {\displaystyle h} : y = f ( x ) = ϕ ( h ( x ) ) {\displaystyle y=f(x)=\phi (h(x))} where h ( λ x ) = λ h ( x ) {\displaystyle h(\lambda x)=\lambda h(x)} for any λ > 0 {\displaystyle \lambda >0} . The Cobb-Douglas function is a special case of the CES function for which the elasticity of substitution between the inputs, σ {\displaystyle \sigma } , is one. For a homothetic technology, the cost function can be represented as C ( p , y ) = c ( p ) d ( y ) {\displaystyle C(p,y)=c(p)d(y)} where d {\displaystyle d} is a monotone increasing function, and c {\displaystyle c} is termed a unit cost function. From Shephard's lemma, we obtain the following expression for the ratio of inputs i {\displaystyle i} and j {\displaystyle j} : x i x j = ∂ c ( p ) / ∂ p i ∂ c ( p ) / ∂ p j {\displaystyle {\frac {x_{i}}{x_{j}}}={\frac {\partial c(p)/\partial p_{i}}{\partial c(p)/\partial p_{j}}}} , which implies that for a homothetic technology, the ratio of inputs depends solely on prices and not on the scale of output. However, empirical studies on the cross-section of establishments show that the FL model (3) effectively explains the data, particularly for heavy industries such as steel mills, paper mills, basic chemical sectors, and power stations, indicating that homotheticity may not be applicable. Furthermore, in the area of trade, homothetic models do not accurately predict observed trade patterns. One example is the gravity equation for trade, which relates how much two countries trade with each other to their GDPs and the distance between them. This led researchers to explore non-homothetic models of production that better fit cross-sectional analyses of producer behavior, for example, capturing when producers minimize costs by switching inputs or by investing in increased production. 
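The price-only dependence of input ratios under homotheticity is easy to verify numerically. The sketch below is a hypothetical illustration (the coefficient values and the helper names `gl_unit_cost` and `input_demand` are invented for this example): it uses Diewert's Generalized Leontief form c(p) = Σi Σj bij √(pi pj) as the unit cost function with d(y) = y, derives input demands via Shephard's lemma, and checks that the input ratio is unchanged as output varies.

```python
import numpy as np

def gl_unit_cost(b, p):
    """Generalized Leontief unit cost: c(p) = sum_ij b_ij * sqrt(p_i * p_j)."""
    s = np.sqrt(p)
    return s @ b @ s

def input_demand(b, p, y):
    """Shephard's lemma for C(p, y) = c(p) * y: x_i = y * dc/dp_i."""
    s = np.sqrt(p)
    return y * (b @ s) / s          # dc/dp_i = sum_j b_ij * sqrt(p_j / p_i)

b = np.array([[0.4, 0.1],           # symmetric coefficients, b_ij = b_ji
              [0.1, 0.5]])
p = np.array([4.0, 9.0])

x_small = input_demand(b, p, y=1.0)
x_large = input_demand(b, p, y=10.0)

# Homotheticity: the input ratio depends on prices only, not on the scale y.
assert np.isclose(x_small[0] / x_small[1], x_large[0] / x_large[1])

# Consistency check: with y = 1, total expenditure p.x equals the unit cost c(p),
# since c is homogeneous of degree one in prices (Euler's theorem).
assert np.isclose(p @ x_small, gl_unit_cost(b, p))
```

A nonhomothetic form such as the GO function would break the first assertion: the ratio would shift with y.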
=== Flexible Functional Forms === CES functions (note that Cobb-Douglas is a special case of CES) typically involve only two inputs, such as capital and labor. While they can be extended to include more than two inputs, assuming the same degree of substitutability for all inputs may seem overly restrictive (refer to CES for further details on this topic, including the potential for accommodating diverse elasticities of substitution among inputs, although this capability is somewhat constrained). To address this limitation, flexible functional forms (FFFs) have been developed: general functional forms that do not impose any a priori restrictions on the degree of substitutability among inputs. FFFs can provide a second-order approximation to any twice-differentiable function that meets the necessary regularity conditions, including basic technological conditions and those consistent with cost minimization. Widely used examples of FFFs are the transcendental logarithmic (translog) function and the Generalized Leontief (GL) function. The translog function extends the Cobb-Douglas function to the second order, while the GL function performs a similar extension to the Leontief production function. === Limitations === A drawback of the GL function is its inability to be globally concave without sacrificing flexibility in the price space. This limitation also applies to the GO function, as it is a non-homothetic extension of the GL. In a subsequent study, Nakamura attempted to address this issue by employing the Generalized McFadden function. For further advancements in this area, refer to Ryan and Wales. Moreover, both the GO function and the underlying GL function presume immediate adjustment of inputs in response to changes in p {\displaystyle p} and y {\displaystyle y} . 
This oversimplifies reality: technological change entails significant investment in plant and equipment and therefore takes time, often years, rather than occurring instantaneously. One way to address this issue is to use a variable cost function that explicitly takes into account differences in the speed of adjustment among inputs. == Notes == == References == == See also == Production function List of production functions Constant elasticity of substitution Shephard's lemma Returns to scale
|
Wikipedia:Generalized arithmetic progression#0
|
In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17 , 20 , 22 , 23 , 25 , 26 , 27 , 28 , 29 , … {\displaystyle 17,20,22,23,25,26,27,28,29,\dots } is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it. A semilinear set generalizes this idea to multiple dimensions – it is a set of vectors of integers, rather than a set of integers. == Finite generalized arithmetic progression == A finite generalized arithmetic progression, or sometimes just generalized arithmetic progression (GAP), of dimension d is defined to be a set of the form { x 0 + ℓ 1 x 1 + ⋯ + ℓ d x d : 0 ≤ ℓ 1 < L 1 , … , 0 ≤ ℓ d < L d } {\displaystyle \{x_{0}+\ell _{1}x_{1}+\cdots +\ell _{d}x_{d}:0\leq \ell _{1}<L_{1},\ldots ,0\leq \ell _{d}<L_{d}\}} where x 0 , x 1 , … , x d , L 1 , … , L d ∈ Z {\displaystyle x_{0},x_{1},\dots ,x_{d},L_{1},\dots ,L_{d}\in \mathbb {Z} } . The product L 1 L 2 ⋯ L d {\displaystyle L_{1}L_{2}\cdots L_{d}} is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper. Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into Z {\displaystyle \mathbb {Z} } . This projection is injective if and only if the generalized arithmetic progression is proper. 
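The definition above can be made concrete with a short enumeration. A minimal sketch (the helper names `gap` and `size` are illustrative, not standard terminology):

```python
from itertools import product

def gap(x0, xs, Ls):
    """All sums x0 + l1*x1 + ... + ld*xd with 0 <= li < Li, with multiplicity."""
    return [x0 + sum(l * x for l, x in zip(ls, xs))
            for ls in product(*(range(L) for L in Ls))]

def size(Ls):
    """Size L1*L2*...*Ld of the progression (may exceed its cardinality)."""
    result = 1
    for L in Ls:
        result *= L
    return result

# A dimension-2 progression from 17 with differences 3 and 5 (cf. the opening
# example), with small bounds chosen for illustration:
vals = gap(17, [3, 5], [2, 2])
print(sorted(set(vals)))              # [17, 20, 22, 25] -- proper: 4 values, size 4

# A non-proper progression: 0 + l1*1 + l2*2 with bounds (3, 2); here 2 = 0 + 2*1
# has two representations, so the cardinality falls short of the size.
vals2 = gap(0, [1, 2], [3, 2])
print(len(set(vals2)), size([3, 2]))  # 5 6
```

Comparing `len(set(...))` against `size(...)` is exactly the properness test: the progression is proper when the two numbers agree.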
== Semilinear sets == Formally, an arithmetic progression of N d {\displaystyle \mathbb {N} ^{d}} is an infinite sequence of the form v , v + v ′ , v + 2 v ′ , v + 3 v ′ , … {\displaystyle \mathbf {v} ,\mathbf {v} +\mathbf {v} ',\mathbf {v} +2\mathbf {v} ',\mathbf {v} +3\mathbf {v} ',\ldots } , where v {\displaystyle \mathbf {v} } and v ′ {\displaystyle \mathbf {v} '} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} , called the initial vector and common difference respectively. A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be linear if it is of the form { v + ∑ i = 1 m k i v i : k 1 , … , k m ∈ N } , {\displaystyle \left\{\mathbf {v} +\sum _{i=1}^{m}k_{i}\mathbf {v} _{i}\,\colon \,k_{1},\dots ,k_{m}\in \mathbb {N} \right\},} where m {\displaystyle m} is some integer and v , v 1 , … , v m {\displaystyle \mathbf {v} ,\mathbf {v} _{1},\dots ,\mathbf {v} _{m}} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} . A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be semilinear if it is a finite union of linear sets. The semilinear sets are exactly the sets definable in Presburger arithmetic. == See also == Freiman's theorem == References == Nathanson, Melvyn B. (1996). Additive Number Theory: Inverse Problems and Geometry of Sumsets. Graduate Texts in Mathematics. Vol. 165. Springer. ISBN 0-387-94655-1. Zbl 0859.11003.
|
Wikipedia:Generalized eigenvector#0
|
In linear algebra, a generalized eigenvector of an n × n {\displaystyle n\times n} matrix A {\displaystyle A} is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector. Let V {\displaystyle V} be an n {\displaystyle n} -dimensional vector space and let A {\displaystyle A} be the matrix representation of a linear map from V {\displaystyle V} to V {\displaystyle V} with respect to some ordered basis. There may not always exist a full set of n {\displaystyle n} linearly independent eigenvectors of A {\displaystyle A} that form a complete basis for V {\displaystyle V} . That is, the matrix A {\displaystyle A} may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue λ i {\displaystyle \lambda _{i}} is greater than its geometric multiplicity (the nullity of the matrix ( A − λ i I ) {\displaystyle (A-\lambda _{i}I)} , or the dimension of its nullspace). In this case, λ i {\displaystyle \lambda _{i}} is called a defective eigenvalue and A {\displaystyle A} is called a defective matrix. A generalized eigenvector x i {\displaystyle x_{i}} corresponding to λ i {\displaystyle \lambda _{i}} , together with the matrix ( A − λ i I ) {\displaystyle (A-\lambda _{i}I)} generate a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V {\displaystyle V} . Using generalized eigenvectors, a set of linearly independent eigenvectors of A {\displaystyle A} can be extended, if necessary, to a complete basis for V {\displaystyle V} . This basis can be used to determine an "almost diagonal matrix" J {\displaystyle J} in Jordan normal form, similar to A {\displaystyle A} , which is useful in computing certain matrix functions of A {\displaystyle A} . 
The matrix J {\displaystyle J} is also useful in solving the system of linear differential equations x ′ = A x , {\displaystyle \mathbf {x} '=A\mathbf {x} ,} where A {\displaystyle A} need not be diagonalizable. The dimension of the generalized eigenspace corresponding to a given eigenvalue λ {\displaystyle \lambda } is the algebraic multiplicity of λ {\displaystyle \lambda } . == Overview and definition == There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector u {\displaystyle \mathbf {u} } associated with an eigenvalue λ {\displaystyle \lambda } of an n {\displaystyle n} × n {\displaystyle n} matrix A {\displaystyle A} is a nonzero vector for which ( A − λ I ) u = 0 {\displaystyle (A-\lambda I)\mathbf {u} =\mathbf {0} } , where I {\displaystyle I} is the n {\displaystyle n} × n {\displaystyle n} identity matrix and 0 {\displaystyle \mathbf {0} } is the zero vector of length n {\displaystyle n} . That is, u {\displaystyle \mathbf {u} } is in the kernel of the transformation ( A − λ I ) {\displaystyle (A-\lambda I)} . If A {\displaystyle A} has n {\displaystyle n} linearly independent eigenvectors, then A {\displaystyle A} is similar to a diagonal matrix D {\displaystyle D} . That is, there exists an invertible matrix M {\displaystyle M} such that A {\displaystyle A} is diagonalizable through the similarity transformation D = M − 1 A M {\displaystyle D=M^{-1}AM} . The matrix D {\displaystyle D} is called a spectral matrix for A {\displaystyle A} . The matrix M {\displaystyle M} is called a modal matrix for A {\displaystyle A} . Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily. On the other hand, if A {\displaystyle A} does not have n {\displaystyle n} linearly independent eigenvectors associated with it, then A {\displaystyle A} is not diagonalizable. 
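The diagonalizable case described above can be checked numerically; a minimal NumPy sketch (the 2×2 matrix is an arbitrary example chosen to be diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric, hence diagonalizable

eigvals, M = np.linalg.eig(A)            # columns of M form a modal matrix
D = np.linalg.inv(M) @ A @ M             # spectral matrix D = M^{-1} A M

assert np.allclose(D, np.diag(eigvals))  # D is diagonal, with the eigenvalues

# Matrix functions are then easy to compute, e.g. A^5 = M D^5 M^{-1}:
A5 = M @ np.diag(eigvals ** 5) @ np.linalg.inv(M)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```

When A lacks a full set of independent eigenvectors, `M` returned by `eig` is singular (or nearly so) and this construction fails; that is precisely the defective case that generalized eigenvectors address.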
Definition: A vector x m {\displaystyle \mathbf {x} _{m}} is a generalized eigenvector of rank m of the matrix A {\displaystyle A} and corresponding to the eigenvalue λ {\displaystyle \lambda } if ( A − λ I ) m x m = 0 {\displaystyle (A-\lambda I)^{m}\mathbf {x} _{m}=\mathbf {0} } but ( A − λ I ) m − 1 x m ≠ 0 . {\displaystyle (A-\lambda I)^{m-1}\mathbf {x} _{m}\neq \mathbf {0} .} Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every n {\displaystyle n} × n {\displaystyle n} matrix A {\displaystyle A} has n {\displaystyle n} linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix J {\displaystyle J} in Jordan normal form. That is, there exists an invertible matrix M {\displaystyle M} such that J = M − 1 A M {\displaystyle J=M^{-1}AM} . The matrix M {\displaystyle M} in this case is called a generalized modal matrix for A {\displaystyle A} . If λ {\displaystyle \lambda } is an eigenvalue of algebraic multiplicity μ {\displaystyle \mu } , then A {\displaystyle A} will have μ {\displaystyle \mu } linearly independent generalized eigenvectors corresponding to λ {\displaystyle \lambda } . These results, in turn, provide a straightforward method for computing certain matrix functions of A {\displaystyle A} . Note: For an n × n {\displaystyle n\times n} matrix A {\displaystyle A} over a field F {\displaystyle F} to be expressed in Jordan normal form, all eigenvalues of A {\displaystyle A} must be in F {\displaystyle F} . That is, the characteristic polynomial f ( x ) {\displaystyle f(x)} must factor completely into linear factors over F {\displaystyle F} ; this is guaranteed, in particular, whenever F {\displaystyle F} is an algebraically closed field. For example, if A {\displaystyle A} has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values. 
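The rank condition in this definition can be verified directly; a quick NumPy sketch, using the 2×2 Jordan block that also appears in Example 1 below:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
lam = 1
N = A - lam * np.eye(2)          # (A - lambda*I)

x2 = np.array([0, 1])            # candidate generalized eigenvector of rank 2

# (A - lam I)^2 x2 = 0 but (A - lam I) x2 != 0, so x2 has rank exactly 2.
assert np.allclose(np.linalg.matrix_power(N, 2) @ x2, 0)
assert not np.allclose(N @ x2, 0)

x1 = N @ x2                      # descending the chain yields an ordinary eigenvector
assert np.allclose(A @ x1, lam * x1)
```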
The set spanned by all generalized eigenvectors for a given λ {\displaystyle \lambda } forms the generalized eigenspace for λ {\displaystyle \lambda } . == Examples == Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later. === Example 1 === This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. Suppose A = ( 1 1 0 1 ) . {\displaystyle A={\begin{pmatrix}1&1\\0&1\end{pmatrix}}.} Then there is only one eigenvalue, λ = 1 {\displaystyle \lambda =1} , and its algebraic multiplicity is m = 2 {\displaystyle m=2} . Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space V {\displaystyle V} is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of A − λ I {\displaystyle A-\lambda I} to be p = 1 {\displaystyle p=1} , and thus there are m − p = 1 {\displaystyle m-p=1} generalized eigenvectors of rank greater than 1. The ordinary eigenvector v 1 = ( 1 0 ) {\displaystyle \mathbf {v} _{1}={\begin{pmatrix}1\\0\end{pmatrix}}} is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector v 2 {\displaystyle \mathbf {v} _{2}} by solving ( A − λ I ) v 2 = v 1 . {\displaystyle (A-\lambda I)\mathbf {v} _{2}=\mathbf {v} _{1}.} Writing out the values: ( ( 1 1 0 1 ) − 1 ( 1 0 0 1 ) ) ( v 21 v 22 ) = ( 0 1 0 0 ) ( v 21 v 22 ) = ( 1 0 ) . 
{\displaystyle \left({\begin{pmatrix}1&1\\0&1\end{pmatrix}}-1{\begin{pmatrix}1&0\\0&1\end{pmatrix}}\right){\begin{pmatrix}v_{21}\\v_{22}\end{pmatrix}}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}v_{21}\\v_{22}\end{pmatrix}}={\begin{pmatrix}1\\0\end{pmatrix}}.} This simplifies to v 22 = 1. {\displaystyle v_{22}=1.} The element v 21 {\displaystyle v_{21}} has no restrictions. The generalized eigenvector of rank 2 is then v 2 = ( a 1 ) {\displaystyle \mathbf {v} _{2}={\begin{pmatrix}a\\1\end{pmatrix}}} , where a can have any scalar value. The choice of a = 0 is usually the simplest. Note that ( A − λ I ) v 2 = ( 0 1 0 0 ) ( a 1 ) = ( 1 0 ) = v 1 , {\displaystyle (A-\lambda I)\mathbf {v} _{2}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}a\\1\end{pmatrix}}={\begin{pmatrix}1\\0\end{pmatrix}}=\mathbf {v} _{1},} so that v 2 {\displaystyle \mathbf {v} _{2}} is a generalized eigenvector, because ( A − λ I ) 2 v 2 = ( A − λ I ) [ ( A − λ I ) v 2 ] = ( A − λ I ) v 1 = ( 0 1 0 0 ) ( 1 0 ) = ( 0 0 ) = 0 , {\displaystyle (A-\lambda I)^{2}\mathbf {v} _{2}=(A-\lambda I)[(A-\lambda I)\mathbf {v} _{2}]=(A-\lambda I)\mathbf {v} _{1}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}1\\0\end{pmatrix}}={\begin{pmatrix}0\\0\end{pmatrix}}=\mathbf {0} ,} so that v 1 {\displaystyle \mathbf {v} _{1}} is an ordinary eigenvector, and that v 1 {\displaystyle \mathbf {v} _{1}} and v 2 {\displaystyle \mathbf {v} _{2}} are linearly independent and hence constitute a basis for the vector space V {\displaystyle V} . === Example 2 === This example is more complex than Example 1. Unfortunately, it is a little difficult to construct an interesting example of low order. 
The matrix A = ( 1 0 0 0 0 3 1 0 0 0 6 3 2 0 0 10 6 3 2 0 15 10 6 3 2 ) {\displaystyle A={\begin{pmatrix}1&0&0&0&0\\3&1&0&0&0\\6&3&2&0&0\\10&6&3&2&0\\15&10&6&3&2\end{pmatrix}}} has eigenvalues λ 1 = 1 {\displaystyle \lambda _{1}=1} and λ 2 = 2 {\displaystyle \lambda _{2}=2} with algebraic multiplicities μ 1 = 2 {\displaystyle \mu _{1}=2} and μ 2 = 3 {\displaystyle \mu _{2}=3} , but geometric multiplicities γ 1 = 1 {\displaystyle \gamma _{1}=1} and γ 2 = 1 {\displaystyle \gamma _{2}=1} . The generalized eigenspaces of A {\displaystyle A} are calculated below. x 1 {\displaystyle \mathbf {x} _{1}} is the ordinary eigenvector associated with λ 1 {\displaystyle \lambda _{1}} . x 2 {\displaystyle \mathbf {x} _{2}} is a generalized eigenvector associated with λ 1 {\displaystyle \lambda _{1}} . y 1 {\displaystyle \mathbf {y} _{1}} is the ordinary eigenvector associated with λ 2 {\displaystyle \lambda _{2}} . y 2 {\displaystyle \mathbf {y} _{2}} and y 3 {\displaystyle \mathbf {y} _{3}} are generalized eigenvectors associated with λ 2 {\displaystyle \lambda _{2}} . 
( A − 1 I ) x 1 = ( 0 0 0 0 0 3 0 0 0 0 6 3 1 0 0 10 6 3 1 0 15 10 6 3 1 ) ( 0 3 − 9 9 − 3 ) = ( 0 0 0 0 0 ) = 0 , {\displaystyle (A-1I)\mathbf {x} _{1}={\begin{pmatrix}0&0&0&0&0\\3&0&0&0&0\\6&3&1&0&0\\10&6&3&1&0\\15&10&6&3&1\end{pmatrix}}{\begin{pmatrix}0\\3\\-9\\9\\-3\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\\0\end{pmatrix}}=\mathbf {0} ,} ( A − 1 I ) x 2 = ( 0 0 0 0 0 3 0 0 0 0 6 3 1 0 0 10 6 3 1 0 15 10 6 3 1 ) ( 1 − 15 30 − 1 − 45 ) = ( 0 3 − 9 9 − 3 ) = x 1 , {\displaystyle (A-1I)\mathbf {x} _{2}={\begin{pmatrix}0&0&0&0&0\\3&0&0&0&0\\6&3&1&0&0\\10&6&3&1&0\\15&10&6&3&1\end{pmatrix}}{\begin{pmatrix}1\\-15\\30\\-1\\-45\end{pmatrix}}={\begin{pmatrix}0\\3\\-9\\9\\-3\end{pmatrix}}=\mathbf {x} _{1},} ( A − 2 I ) y 1 = ( − 1 0 0 0 0 3 − 1 0 0 0 6 3 0 0 0 10 6 3 0 0 15 10 6 3 0 ) ( 0 0 0 0 9 ) = ( 0 0 0 0 0 ) = 0 , {\displaystyle (A-2I)\mathbf {y} _{1}={\begin{pmatrix}-1&0&0&0&0\\3&-1&0&0&0\\6&3&0&0&0\\10&6&3&0&0\\15&10&6&3&0\end{pmatrix}}{\begin{pmatrix}0\\0\\0\\0\\9\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\\0\end{pmatrix}}=\mathbf {0} ,} ( A − 2 I ) y 2 = ( − 1 0 0 0 0 3 − 1 0 0 0 6 3 0 0 0 10 6 3 0 0 15 10 6 3 0 ) ( 0 0 0 3 0 ) = ( 0 0 0 0 9 ) = y 1 , {\displaystyle (A-2I)\mathbf {y} _{2}={\begin{pmatrix}-1&0&0&0&0\\3&-1&0&0&0\\6&3&0&0&0\\10&6&3&0&0\\15&10&6&3&0\end{pmatrix}}{\begin{pmatrix}0\\0\\0\\3\\0\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\\9\end{pmatrix}}=\mathbf {y} _{1},} ( A − 2 I ) y 3 = ( − 1 0 0 0 0 3 − 1 0 0 0 6 3 0 0 0 10 6 3 0 0 15 10 6 3 0 ) ( 0 0 1 − 2 0 ) = ( 0 0 0 3 0 ) = y 2 . {\displaystyle (A-2I)\mathbf {y} _{3}={\begin{pmatrix}-1&0&0&0&0\\3&-1&0&0&0\\6&3&0&0&0\\10&6&3&0&0\\15&10&6&3&0\end{pmatrix}}{\begin{pmatrix}0\\0\\1\\-2\\0\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\3\\0\end{pmatrix}}=\mathbf {y} _{2}.} This results in a basis for each of the generalized eigenspaces of A {\displaystyle A} . Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors. 
{ x 1 , x 2 } = { ( 0 3 − 9 9 − 3 ) , ( 1 − 15 30 − 1 − 45 ) } , { y 1 , y 2 , y 3 } = { ( 0 0 0 0 9 ) , ( 0 0 0 3 0 ) , ( 0 0 1 − 2 0 ) } . {\displaystyle \left\{\mathbf {x} _{1},\mathbf {x} _{2}\right\}=\left\{{\begin{pmatrix}0\\3\\-9\\9\\-3\end{pmatrix}},{\begin{pmatrix}1\\-15\\30\\-1\\-45\end{pmatrix}}\right\},\left\{\mathbf {y} _{1},\mathbf {y} _{2},\mathbf {y} _{3}\right\}=\left\{{\begin{pmatrix}0\\0\\0\\0\\9\end{pmatrix}},{\begin{pmatrix}0\\0\\0\\3\\0\end{pmatrix}},{\begin{pmatrix}0\\0\\1\\-2\\0\end{pmatrix}}\right\}.} An "almost diagonal" matrix J {\displaystyle J} in Jordan normal form, similar to A {\displaystyle A} is obtained as follows: M = ( x 1 x 2 y 1 y 2 y 3 ) = ( 0 1 0 0 0 3 − 15 0 0 0 − 9 30 0 0 1 9 − 1 0 3 − 2 − 3 − 45 9 0 0 ) , {\displaystyle M={\begin{pmatrix}\mathbf {x} _{1}&\mathbf {x} _{2}&\mathbf {y} _{1}&\mathbf {y} _{2}&\mathbf {y} _{3}\end{pmatrix}}={\begin{pmatrix}0&1&0&0&0\\3&-15&0&0&0\\-9&30&0&0&1\\9&-1&0&3&-2\\-3&-45&9&0&0\end{pmatrix}},} J = ( 1 1 0 0 0 0 1 0 0 0 0 0 2 1 0 0 0 0 2 1 0 0 0 0 2 ) , {\displaystyle J={\begin{pmatrix}1&1&0&0&0\\0&1&0&0&0\\0&0&2&1&0\\0&0&0&2&1\\0&0&0&0&2\end{pmatrix}},} where M {\displaystyle M} is a generalized modal matrix for A {\displaystyle A} , the columns of M {\displaystyle M} are a canonical basis for A {\displaystyle A} , and A M = M J {\displaystyle AM=MJ} . == Jordan chains == Definition: Let x m {\displaystyle \mathbf {x} _{m}} be a generalized eigenvector of rank m corresponding to the matrix A {\displaystyle A} and the eigenvalue λ {\displaystyle \lambda } . The chain generated by x m {\displaystyle \mathbf {x} _{m}} is a set of vectors { x m , x m − 1 , … , x 1 } {\displaystyle \left\{\mathbf {x} _{m},\mathbf {x} _{m-1},\dots ,\mathbf {x} _{1}\right\}} given by where x 1 {\displaystyle \mathbf {x} _{1}} is always an ordinary eigenvector with a given eigenvalue λ {\displaystyle \lambda } . 
Thus, in general, The vector x j {\displaystyle \mathbf {x} _{j}} , given by (2), is a generalized eigenvector of rank j corresponding to the eigenvalue λ {\displaystyle \lambda } . A chain is a linearly independent set of vectors. == Canonical basis == Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains. Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors x m − 1 , x m − 2 , … , x 1 {\displaystyle \mathbf {x} _{m-1},\mathbf {x} _{m-2},\ldots ,\mathbf {x} _{1}} that are in the Jordan chain generated by x m {\displaystyle \mathbf {x} _{m}} are also in the canonical basis. Let λ i {\displaystyle \lambda _{i}} be an eigenvalue of A {\displaystyle A} of algebraic multiplicity μ i {\displaystyle \mu _{i}} . First, find the ranks (matrix ranks) of the matrices ( A − λ i I ) , ( A − λ i I ) 2 , … , ( A − λ i I ) m i {\displaystyle (A-\lambda _{i}I),(A-\lambda _{i}I)^{2},\ldots ,(A-\lambda _{i}I)^{m_{i}}} . The integer m i {\displaystyle m_{i}} is determined to be the first integer for which ( A − λ i I ) m i {\displaystyle (A-\lambda _{i}I)^{m_{i}}} has rank n − μ i {\displaystyle n-\mu _{i}} (n being the number of rows or columns of A {\displaystyle A} , that is, A {\displaystyle A} is n × n). Now define ρ k = rank ( A − λ i I ) k − 1 − rank ( A − λ i I ) k ( k = 1 , 2 , … , m i ) . {\displaystyle \rho _{k}=\operatorname {rank} (A-\lambda _{i}I)^{k-1}-\operatorname {rank} (A-\lambda _{i}I)^{k}\qquad (k=1,2,\ldots ,m_{i}).} The variable ρ k {\displaystyle \rho _{k}} designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue λ i {\displaystyle \lambda _{i}} that will appear in a canonical basis for A {\displaystyle A} . Note that rank ( A − λ i I ) 0 = rank ( I ) = n {\displaystyle \operatorname {rank} (A-\lambda _{i}I)^{0}=\operatorname {rank} (I)=n} . 
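The rank computations just described translate directly into code. A sketch (the helper name `rho` is illustrative, and the function assumes `lam` really is an eigenvalue of algebraic multiplicity `mu`):

```python
import numpy as np

def rho(A, lam, mu):
    """Numbers rho_k of rank-k generalized eigenvectors in a canonical basis,
    for eigenvalue lam of algebraic multiplicity mu, via rank differences."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    ranks = [n]                      # rank (A - lam I)^0 = rank I = n
    P = np.eye(n)
    while ranks[-1] > n - mu:        # stop at the first power m_i with rank n - mu
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P))
    return [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(rho(A, lam=1, mu=2))           # [1, 1]: one chain containing ranks 1 and 2
```

Applied to the matrix of Example 3 below, `rho` reproduces the values worked out there by hand.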
== Computation of generalized eigenvectors == In the preceding sections we have seen techniques for obtaining the n {\displaystyle n} linearly independent generalized eigenvectors of a canonical basis for the vector space V {\displaystyle V} associated with an n × n {\displaystyle n\times n} matrix A {\displaystyle A} . These techniques can be combined into a procedure: Solve the characteristic equation of A {\displaystyle A} for eigenvalues λ i {\displaystyle \lambda _{i}} and their algebraic multiplicities μ i {\displaystyle \mu _{i}} ; For each λ i : {\displaystyle \lambda _{i}:} Determine n − μ i {\displaystyle n-\mu _{i}} ; Determine m i {\displaystyle m_{i}} ; Determine ρ k {\displaystyle \rho _{k}} for ( k = 1 , … , m i ) {\displaystyle (k=1,\ldots ,m_{i})} ; Determine each Jordan chain for λ i {\displaystyle \lambda _{i}} ; === Example 3 === The matrix A = ( 5 1 − 2 4 0 5 2 2 0 0 5 3 0 0 0 4 ) {\displaystyle A={\begin{pmatrix}5&1&-2&4\\0&5&2&2\\0&0&5&3\\0&0&0&4\end{pmatrix}}} has an eigenvalue λ 1 = 5 {\displaystyle \lambda _{1}=5} of algebraic multiplicity μ 1 = 3 {\displaystyle \mu _{1}=3} and an eigenvalue λ 2 = 4 {\displaystyle \lambda _{2}=4} of algebraic multiplicity μ 2 = 1 {\displaystyle \mu _{2}=1} . We also have n = 4 {\displaystyle n=4} . For λ 1 {\displaystyle \lambda _{1}} we have n − μ 1 = 4 − 3 = 1 {\displaystyle n-\mu _{1}=4-3=1} . ( A − 5 I ) = ( 0 1 − 2 4 0 0 2 2 0 0 0 3 0 0 0 − 1 ) , rank ( A − 5 I ) = 3. {\displaystyle (A-5I)={\begin{pmatrix}0&1&-2&4\\0&0&2&2\\0&0&0&3\\0&0&0&-1\end{pmatrix}},\qquad \operatorname {rank} (A-5I)=3.} ( A − 5 I ) 2 = ( 0 0 2 − 8 0 0 0 4 0 0 0 − 3 0 0 0 1 ) , rank ( A − 5 I ) 2 = 2. {\displaystyle (A-5I)^{2}={\begin{pmatrix}0&0&2&-8\\0&0&0&4\\0&0&0&-3\\0&0&0&1\end{pmatrix}},\qquad \operatorname {rank} (A-5I)^{2}=2.} ( A − 5 I ) 3 = ( 0 0 0 14 0 0 0 − 4 0 0 0 3 0 0 0 − 1 ) , rank ( A − 5 I ) 3 = 1. 
{\displaystyle (A-5I)^{3}={\begin{pmatrix}0&0&0&14\\0&0&0&-4\\0&0&0&3\\0&0&0&-1\end{pmatrix}},\qquad \operatorname {rank} (A-5I)^{3}=1.} The first integer m 1 {\displaystyle m_{1}} for which ( A − 5 I ) m 1 {\displaystyle (A-5I)^{m_{1}}} has rank n − μ 1 = 1 {\displaystyle n-\mu _{1}=1} is m 1 = 3 {\displaystyle m_{1}=3} . We now define ρ 3 = rank ( A − 5 I ) 2 − rank ( A − 5 I ) 3 = 2 − 1 = 1 , {\displaystyle \rho _{3}=\operatorname {rank} (A-5I)^{2}-\operatorname {rank} (A-5I)^{3}=2-1=1,} ρ 2 = rank ( A − 5 I ) 1 − rank ( A − 5 I ) 2 = 3 − 2 = 1 , {\displaystyle \rho _{2}=\operatorname {rank} (A-5I)^{1}-\operatorname {rank} (A-5I)^{2}=3-2=1,} ρ 1 = rank ( A − 5 I ) 0 − rank ( A − 5 I ) 1 = 4 − 3 = 1. {\displaystyle \rho _{1}=\operatorname {rank} (A-5I)^{0}-\operatorname {rank} (A-5I)^{1}=4-3=1.} Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. Since λ 1 {\displaystyle \lambda _{1}} corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector x 3 {\displaystyle \mathbf {x} _{3}} of rank 3 corresponding to λ 1 {\displaystyle \lambda _{1}} such that but Equations (3) and (4) represent linear systems that can be solved for x 3 {\displaystyle \mathbf {x} _{3}} . Let x 3 = ( x 31 x 32 x 33 x 34 ) . 
{\displaystyle \mathbf {x} _{3}={\begin{pmatrix}x_{31}\\x_{32}\\x_{33}\\x_{34}\end{pmatrix}}.} Then ( A − 5 I ) 3 x 3 = ( 0 0 0 14 0 0 0 − 4 0 0 0 3 0 0 0 − 1 ) ( x 31 x 32 x 33 x 34 ) = ( 14 x 34 − 4 x 34 3 x 34 − x 34 ) = ( 0 0 0 0 ) {\displaystyle (A-5I)^{3}\mathbf {x} _{3}={\begin{pmatrix}0&0&0&14\\0&0&0&-4\\0&0&0&3\\0&0&0&-1\end{pmatrix}}{\begin{pmatrix}x_{31}\\x_{32}\\x_{33}\\x_{34}\end{pmatrix}}={\begin{pmatrix}14x_{34}\\-4x_{34}\\3x_{34}\\-x_{34}\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\end{pmatrix}}} and ( A − 5 I ) 2 x 3 = ( 0 0 2 − 8 0 0 0 4 0 0 0 − 3 0 0 0 1 ) ( x 31 x 32 x 33 x 34 ) = ( 2 x 33 − 8 x 34 4 x 34 − 3 x 34 x 34 ) ≠ ( 0 0 0 0 ) . {\displaystyle (A-5I)^{2}\mathbf {x} _{3}={\begin{pmatrix}0&0&2&-8\\0&0&0&4\\0&0&0&-3\\0&0&0&1\end{pmatrix}}{\begin{pmatrix}x_{31}\\x_{32}\\x_{33}\\x_{34}\end{pmatrix}}={\begin{pmatrix}2x_{33}-8x_{34}\\4x_{34}\\-3x_{34}\\x_{34}\end{pmatrix}}\neq {\begin{pmatrix}0\\0\\0\\0\end{pmatrix}}.} Thus, in order to satisfy the conditions (3) and (4), we must have x 34 = 0 {\displaystyle x_{34}=0} and x 33 ≠ 0 {\displaystyle x_{33}\neq 0} . No restrictions are placed on x 31 {\displaystyle x_{31}} and x 32 {\displaystyle x_{32}} . By choosing x 31 = x 32 = x 34 = 0 , x 33 = 1 {\displaystyle x_{31}=x_{32}=x_{34}=0,x_{33}=1} , we obtain x 3 = ( 0 0 1 0 ) {\displaystyle \mathbf {x} _{3}={\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}} as a generalized eigenvector of rank 3 corresponding to λ 1 = 5 {\displaystyle \lambda _{1}=5} . Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of x 31 {\displaystyle x_{31}} , x 32 {\displaystyle x_{32}} and x 33 {\displaystyle x_{33}} , with x 33 ≠ 0 {\displaystyle x_{33}\neq 0} . Our first choice, however, is the simplest. 
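Conditions (3) and (4) for this choice of x3 can be confirmed numerically; a short NumPy check:

```python
import numpy as np

A = np.array([[5, 1, -2, 4],
              [0, 5,  2, 2],
              [0, 0,  5, 3],
              [0, 0,  0, 4]])
N = A - 5 * np.eye(4)                 # (A - 5I)
x3 = np.array([0, 0, 1, 0])           # the chosen rank-3 generalized eigenvector

assert np.allclose(np.linalg.matrix_power(N, 3) @ x3, 0)      # condition (3)
assert not np.allclose(np.linalg.matrix_power(N, 2) @ x3, 0)  # condition (4)
```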
Now using equations (1), we obtain x 2 {\displaystyle \mathbf {x} _{2}} and x 1 {\displaystyle \mathbf {x} _{1}} as generalized eigenvectors of rank 2 and 1, respectively, where x 2 = ( A − 5 I ) x 3 = ( − 2 2 0 0 ) , {\displaystyle \mathbf {x} _{2}=(A-5I)\mathbf {x} _{3}={\begin{pmatrix}-2\\2\\0\\0\end{pmatrix}},} and x 1 = ( A − 5 I ) x 2 = ( 2 0 0 0 ) . {\displaystyle \mathbf {x} _{1}=(A-5I)\mathbf {x} _{2}={\begin{pmatrix}2\\0\\0\\0\end{pmatrix}}.} The simple eigenvalue λ 2 = 4 {\displaystyle \lambda _{2}=4} can be dealt with using standard techniques and has an ordinary eigenvector y 1 = ( − 14 4 − 3 1 ) . {\displaystyle \mathbf {y} _{1}={\begin{pmatrix}-14\\4\\-3\\1\end{pmatrix}}.} A canonical basis for A {\displaystyle A} is { x 3 , x 2 , x 1 , y 1 } = { ( 0 0 1 0 ) ( − 2 2 0 0 ) ( 2 0 0 0 ) ( − 14 4 − 3 1 ) } . {\displaystyle \left\{\mathbf {x} _{3},\mathbf {x} _{2},\mathbf {x} _{1},\mathbf {y} _{1}\right\}=\left\{{\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}{\begin{pmatrix}-2\\2\\0\\0\end{pmatrix}}{\begin{pmatrix}2\\0\\0\\0\end{pmatrix}}{\begin{pmatrix}-14\\4\\-3\\1\end{pmatrix}}\right\}.} x 1 , x 2 {\displaystyle \mathbf {x} _{1},\mathbf {x} _{2}} and x 3 {\displaystyle \mathbf {x} _{3}} are generalized eigenvectors associated with λ 1 {\displaystyle \lambda _{1}} , while y 1 {\displaystyle \mathbf {y} _{1}} is the ordinary eigenvector associated with λ 2 {\displaystyle \lambda _{2}} . This is a fairly simple example. In general, the numbers ρ k {\displaystyle \rho _{k}} of linearly independent generalized eigenvectors of rank k {\displaystyle k} will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue. == Generalized modal matrix == Let A {\displaystyle A} be an n × n matrix. 
A generalized modal matrix M {\displaystyle M} for A {\displaystyle A} is an n × n matrix whose columns, considered as vectors, form a canonical basis for A {\displaystyle A} and appear in M {\displaystyle M} according to the following rules: All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of M {\displaystyle M} . All vectors of one chain appear together in adjacent columns of M {\displaystyle M} . Each chain appears in M {\displaystyle M} in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.). == Jordan normal form == Let V {\displaystyle V} be an n-dimensional vector space; let ϕ {\displaystyle \phi } be a linear map in L(V), the set of all linear maps from V {\displaystyle V} into itself; and let A {\displaystyle A} be the matrix representation of ϕ {\displaystyle \phi } with respect to some ordered basis. 
It can be shown that if the characteristic polynomial f ( λ ) {\displaystyle f(\lambda )} of A {\displaystyle A} factors into linear factors, so that f ( λ ) {\displaystyle f(\lambda )} has the form f ( λ ) = ± ( λ − λ 1 ) μ 1 ( λ − λ 2 ) μ 2 ⋯ ( λ − λ r ) μ r , {\displaystyle f(\lambda )=\pm (\lambda -\lambda _{1})^{\mu _{1}}(\lambda -\lambda _{2})^{\mu _{2}}\cdots (\lambda -\lambda _{r})^{\mu _{r}},} where λ 1 , λ 2 , … , λ r {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{r}} are the distinct eigenvalues of A {\displaystyle A} , then each μ i {\displaystyle \mu _{i}} is the algebraic multiplicity of its corresponding eigenvalue λ i {\displaystyle \lambda _{i}} and A {\displaystyle A} is similar to a matrix J {\displaystyle J} in Jordan normal form, where each λ i {\displaystyle \lambda _{i}} appears μ i {\displaystyle \mu _{i}} consecutive times on the diagonal, and the entry directly above each λ i {\displaystyle \lambda _{i}} (that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each λ i {\displaystyle \lambda _{i}} is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0. (But no ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix J {\displaystyle J} is as close as one can come to a diagonalization of A {\displaystyle A} . If A {\displaystyle A} is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks have the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal. 
Every n × n matrix A {\displaystyle A} is similar to a matrix J {\displaystyle J} in Jordan normal form, obtained through the similarity transformation J = M − 1 A M {\displaystyle J=M^{-1}AM} , where M {\displaystyle M} is a generalized modal matrix for A {\displaystyle A} . (See Note above.) === Example 4 === Find a matrix in Jordan normal form that is similar to A = ( 0 4 2 − 3 8 3 4 − 8 − 2 ) . {\displaystyle A={\begin{pmatrix}0&4&2\\-3&8&3\\4&-8&-2\end{pmatrix}}.} Solution: The characteristic equation of A {\displaystyle A} is ( λ − 2 ) 3 = 0 {\displaystyle (\lambda -2)^{3}=0} , hence, λ = 2 {\displaystyle \lambda =2} is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that rank ( A − 2 I ) = 1 {\displaystyle \operatorname {rank} (A-2I)=1} and rank ( A − 2 I ) 2 = 0 = n − μ . {\displaystyle \operatorname {rank} (A-2I)^{2}=0=n-\mu .} Thus, ρ 2 = 1 {\displaystyle \rho _{2}=1} and ρ 1 = 2 {\displaystyle \rho _{1}=2} , which implies that a canonical basis for A {\displaystyle A} will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors { x 2 , x 1 } {\displaystyle \left\{\mathbf {x} _{2},\mathbf {x} _{1}\right\}} and one chain of one vector { y 1 } {\displaystyle \left\{\mathbf {y} _{1}\right\}} . Designating M = ( y 1 x 1 x 2 ) {\displaystyle M={\begin{pmatrix}\mathbf {y} _{1}&\mathbf {x} _{1}&\mathbf {x} _{2}\end{pmatrix}}} , we find that M = ( 2 2 0 1 3 0 0 − 4 1 ) , {\displaystyle M={\begin{pmatrix}2&2&0\\1&3&0\\0&-4&1\end{pmatrix}},} and J = ( 2 0 0 0 2 1 0 0 2 ) , {\displaystyle J={\begin{pmatrix}2&0&0\\0&2&1\\0&0&2\end{pmatrix}},} where M {\displaystyle M} is a generalized modal matrix for A {\displaystyle A} , the columns of M {\displaystyle M} are a canonical basis for A {\displaystyle A} , and A M = M J {\displaystyle AM=MJ} . 
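The result of Example 4 can be verified mechanically. A small sympy sketch, using exactly the matrices found above:

```python
import sympy as sp

A = sp.Matrix([[0, 4, 2],
               [-3, 8, 3],
               [4, -8, -2]])

# Generalized modal matrix and Jordan form from Example 4.
M = sp.Matrix([[2, 2, 0],
               [1, 3, 0],
               [0, -4, 1]])
J = sp.Matrix([[2, 0, 0],
               [0, 2, 1],
               [0, 0, 2]])

print(A * M == M * J)        # True
print(M.inv() * A * M == J)  # True

# sympy can also compute an (equally valid) Jordan form directly;
# the ordering of the blocks may differ from the one chosen above.
P, J2 = A.jordan_form()
print(J2)
```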
Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both M {\displaystyle M} and J {\displaystyle J} may be interchanged, it follows that both M {\displaystyle M} and J {\displaystyle J} are not unique. === Example 5 === In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix A {\displaystyle A} . A generalized modal matrix for A {\displaystyle A} is M = ( y 1 x 1 x 2 x 3 ) = ( − 14 2 − 2 0 4 0 2 0 − 3 0 0 1 1 0 0 0 ) . {\displaystyle M={\begin{pmatrix}\mathbf {y} _{1}&\mathbf {x} _{1}&\mathbf {x} _{2}&\mathbf {x} _{3}\end{pmatrix}}={\begin{pmatrix}-14&2&-2&0\\4&0&2&0\\-3&0&0&1\\1&0&0&0\end{pmatrix}}.} A matrix in Jordan normal form, similar to A {\displaystyle A} is J = ( 4 0 0 0 0 5 1 0 0 0 5 1 0 0 0 5 ) , {\displaystyle J={\begin{pmatrix}4&0&0&0\\0&5&1&0\\0&0&5&1\\0&0&0&5\end{pmatrix}},} so that A M = M J {\displaystyle AM=MJ} . == Applications == === Matrix functions === Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an n × n matrix A {\displaystyle A} . If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. 
If A {\displaystyle A} is diagonalizable, that is D = M − 1 A M , {\displaystyle D=M^{-1}AM,} with D = ( λ 1 0 ⋯ 0 0 λ 2 ⋯ 0 ⋮ ⋮ ⋱ ⋮ 0 0 ⋯ λ n ) , {\displaystyle D={\begin{pmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\lambda _{n}\end{pmatrix}},} then D k = ( λ 1 k 0 ⋯ 0 0 λ 2 k ⋯ 0 ⋮ ⋮ ⋱ ⋮ 0 0 ⋯ λ n k ) {\displaystyle D^{k}={\begin{pmatrix}\lambda _{1}^{k}&0&\cdots &0\\0&\lambda _{2}^{k}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\lambda _{n}^{k}\end{pmatrix}}} and the evaluation of the Maclaurin series for functions of A {\displaystyle A} is greatly simplified. For example, to obtain any power k of A {\displaystyle A} , we need only compute D k {\displaystyle D^{k}} , premultiply D k {\displaystyle D^{k}} by M {\displaystyle M} , and postmultiply the result by M − 1 {\displaystyle M^{-1}} . Using generalized eigenvectors, we can obtain the Jordan normal form for A {\displaystyle A} and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.) === Differential equations === Consider the problem of solving the system of linear ordinary differential equations x ′ = A x , ( 5 ) {\displaystyle \mathbf {x} '=A\mathbf {x} ,\qquad (5)} where x = ( x 1 ( t ) x 2 ( t ) ⋮ x n ( t ) ) , x ′ = ( x 1 ′ ( t ) x 2 ′ ( t ) ⋮ x n ′ ( t ) ) , {\displaystyle \mathbf {x} ={\begin{pmatrix}x_{1}(t)\\x_{2}(t)\\\vdots \\x_{n}(t)\end{pmatrix}},\quad \mathbf {x} '={\begin{pmatrix}x_{1}'(t)\\x_{2}'(t)\\\vdots \\x_{n}'(t)\end{pmatrix}},} and A = ( a i j ) . {\displaystyle A=(a_{ij}).}
If the matrix A {\displaystyle A} is a diagonal matrix so that a i j = 0 {\displaystyle a_{ij}=0} for i ≠ j {\displaystyle i\neq j} , then the system (5) reduces to a system of n equations which take the form x i ′ = a i i x i , i = 1 , … , n . ( 6 ) {\displaystyle x_{i}'=a_{ii}x_{i},\quad i=1,\ldots ,n.\qquad (6)} In this case, the general solution is given by x 1 = k 1 e a 11 t {\displaystyle x_{1}=k_{1}e^{a_{11}t}} x 2 = k 2 e a 22 t {\displaystyle x_{2}=k_{2}e^{a_{22}t}} ⋮ {\displaystyle \vdots } x n = k n e a n n t . {\displaystyle x_{n}=k_{n}e^{a_{nn}t}.} In the general case, we try to diagonalize A {\displaystyle A} and reduce the system (5) to a system like (6) as follows. If A {\displaystyle A} is diagonalizable, we have D = M − 1 A M {\displaystyle D=M^{-1}AM} , where M {\displaystyle M} is a modal matrix for A {\displaystyle A} . Substituting A = M D M − 1 {\displaystyle A=MDM^{-1}} , equation (5) takes the form M − 1 x ′ = D ( M − 1 x ) {\displaystyle M^{-1}\mathbf {x} '=D(M^{-1}\mathbf {x} )} , or y ′ = D y , ( 7 ) {\displaystyle \mathbf {y} '=D\mathbf {y} ,\qquad (7)} where y = M − 1 x . ( 8 ) {\displaystyle \mathbf {y} =M^{-1}\mathbf {x} .\qquad (8)} The solution of (7) is y 1 = k 1 e λ 1 t {\displaystyle y_{1}=k_{1}e^{\lambda _{1}t}} y 2 = k 2 e λ 2 t {\displaystyle y_{2}=k_{2}e^{\lambda _{2}t}} ⋮ {\displaystyle \vdots } y n = k n e λ n t . {\displaystyle y_{n}=k_{n}e^{\lambda _{n}t}.} The solution x {\displaystyle \mathbf {x} } of (5) is then obtained using the relation (8). On the other hand, if A {\displaystyle A} is not diagonalizable, we choose M {\displaystyle M} to be a generalized modal matrix for A {\displaystyle A} , such that J = M − 1 A M {\displaystyle J=M^{-1}AM} is the Jordan normal form of A {\displaystyle A} . The system y ′ = J y {\displaystyle \mathbf {y} '=J\mathbf {y} } has the form y 1 ′ = λ 1 y 1 + ϵ 1 y 2 ⋮ y n − 1 ′ = λ n − 1 y n − 1 + ϵ n − 1 y n y n ′ = λ n y n , ( 9 ) {\displaystyle {\begin{aligned}y_{1}'&=\lambda _{1}y_{1}+\epsilon _{1}y_{2}\\&\;\;\vdots \\y_{n-1}'&=\lambda _{n-1}y_{n-1}+\epsilon _{n-1}y_{n}\\y_{n}'&=\lambda _{n}y_{n},\end{aligned}}\qquad (9)} where the λ i {\displaystyle \lambda _{i}} are the eigenvalues from the main diagonal of J {\displaystyle J} and the ϵ i {\displaystyle \epsilon _{i}} are the ones and zeros from the superdiagonal of J {\displaystyle J} . The system (9) is often more easily solved than (5).
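As a sketch of this procedure, the matrices of Example 4 give the solution x(t) = M e^{Jt} M^{-1} x(0), where e^{Jt} is exactly what back-substitution through the Jordan block produces; here it is compared against scipy's general matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Example 4 data (nondiagonalizable A with the single eigenvalue 2).
A = np.array([[0., 4, 2], [-3, 8, 3], [4, -8, -2]])
M = np.array([[2., 2, 0], [1, 3, 0], [0, -4, 1]])
J = np.array([[2., 0, 0], [0, 2, 1], [0, 0, 2]])

t = 0.7
# exp(Jt) for this block structure: e^{2t} on the diagonal, t e^{2t} above
# the 2x2 Jordan block -- the result of solving y' = Jy from the bottom up.
eJt = np.exp(2 * t) * np.array([[1, 0, 0], [0, 1, t], [0, 0, 1]])

x0 = np.array([1.0, -2.0, 0.5])
x = M @ eJt @ np.linalg.inv(M) @ x0   # x(t) = M e^{Jt} M^{-1} x(0)

print(np.allclose(x, expm(A * t) @ x0))  # True
```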
We may solve the last equation in (9) for y n {\displaystyle y_{n}} , obtaining y n = k n e λ n t {\displaystyle y_{n}=k_{n}e^{\lambda _{n}t}} . We then substitute this solution for y n {\displaystyle y_{n}} into the next to last equation in (9) and solve for y n − 1 {\displaystyle y_{n-1}} . Continuing this procedure, we work through (9) from the last equation to the first, solving the entire system for y {\displaystyle \mathbf {y} } . The solution x {\displaystyle \mathbf {x} } is then obtained using the relation (8). Lemma: Given the following chain of generalized eigenvectors of length r , {\displaystyle r,} X 1 = v 1 e λ t {\displaystyle X_{1}=v_{1}e^{\lambda t}} X 2 = ( t v 1 + v 2 ) e λ t {\displaystyle X_{2}=(tv_{1}+v_{2})e^{\lambda t}} X 3 = ( t 2 2 v 1 + t v 2 + v 3 ) e λ t {\displaystyle X_{3}=\left({\frac {t^{2}}{2}}v_{1}+tv_{2}+v_{3}\right)e^{\lambda t}} ⋮ {\displaystyle \vdots } X r = ( t r − 1 ( r − 1 ) ! v 1 + . . . + t 2 2 v r − 2 + t v r − 1 + v r ) e λ t {\displaystyle X_{r}=\left({\frac {t^{r-1}}{(r-1)!}}v_{1}+...+{\frac {t^{2}}{2}}v_{r-2}+tv_{r-1}+v_{r}\right)e^{\lambda t}} , these functions solve the system of equations, X ′ = A X . {\displaystyle X'=AX.} Proof: Define v 0 = 0 {\displaystyle v_{0}=0} X j ( t ) = e λ t ∑ i = 1 j t j − i ( j − i ) ! v i . {\displaystyle X_{j}(t)=e^{\lambda t}\sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}v_{i}.} Then, as t 0 = 1 {\displaystyle {t^{0}}=1} and 1 ′ = 0 {\displaystyle 1'=0} , X j ′ ( t ) = e λ t ∑ i = 1 j − 1 t j − i − 1 ( j − i − 1 ) ! v i + e λ t λ ∑ i = 1 j t j − i ( j − i ) ! v i {\displaystyle X'_{j}(t)=e^{\lambda t}\sum _{i=1}^{j-1}{\frac {t^{j-i-1}}{(j-i-1)!}}v_{i}+e^{\lambda t}\lambda \sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}v_{i}} . On the other hand we have, v 0 = 0 {\displaystyle v_{0}=0} and so A X j ( t ) = e λ t ∑ i = 1 j t j − i ( j − i ) ! A v i {\displaystyle AX_{j}(t)=e^{\lambda t}\sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}Av_{i}} = e λ t ∑ i = 1 j t j − i ( j − i ) ! 
( v i − 1 + λ v i ) {\displaystyle =e^{\lambda t}\sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}(v_{i-1}+\lambda v_{i})} = e λ t ∑ i = 2 j t j − i ( j − i ) ! v i − 1 + e λ t λ ∑ i = 1 j t j − i ( j − i ) ! v i {\displaystyle =e^{\lambda t}\sum _{i=2}^{j}{\frac {t^{j-i}}{(j-i)!}}v_{i-1}+e^{\lambda t}\lambda \sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}v_{i}} = e λ t ∑ i = 1 j − 1 t j − i − 1 ( j − i − 1 ) ! v i + e λ t λ ∑ i = 1 j t j − i ( j − i ) ! v i {\displaystyle =e^{\lambda t}\sum _{i=1}^{j-1}{\frac {t^{j-i-1}}{(j-i-1)!}}v_{i}+e^{\lambda t}\lambda \sum _{i=1}^{j}{\frac {t^{j-i}}{(j-i)!}}v_{i}} = X j ′ ( t ) {\displaystyle =X'_{j}(t)} as required. == Notes == == References == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Axler, Sheldon (1997), Linear Algebra Done Right (2nd ed.), Springer, ISBN 978-0-387-98258-8 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490 Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1966), Matrices and Linear Transformations, Reading: Addison-Wesley, LCCN 66021267 Franklin, Joel N. (1968), Matrix Theory, Englewood Cliffs: Prentice-Hall, LCCN 68016345 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016 Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8 Nering, Evar D.
(1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646
Wikipedia:Generalized singular value decomposition#0
In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD. == First version: two-matrix decomposition == The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders, which is the version described here. In contrast to the SVD, the GSVD decomposes simultaneously a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let F = R {\displaystyle \mathbb {F} =\mathbb {R} } , or F = C {\displaystyle \mathbb {F} =\mathbb {C} } . 
=== Definition === The generalized singular value decomposition of matrices A 1 ∈ F m 1 × n {\displaystyle A_{1}\in \mathbb {F} ^{m_{1}\times n}} and A 2 ∈ F m 2 × n {\displaystyle A_{2}\in \mathbb {F} ^{m_{2}\times n}} is A 1 = U 1 Σ 1 [ W ∗ D , 0 D ] Q ∗ , A 2 = U 2 Σ 2 [ W ∗ D , 0 D ] Q ∗ , {\displaystyle {\begin{aligned}A_{1}&=U_{1}\Sigma _{1}[W^{*}D,0_{D}]Q^{*},\\A_{2}&=U_{2}\Sigma _{2}[W^{*}D,0_{D}]Q^{*},\end{aligned}}} where U 1 ∈ F m 1 × m 1 {\displaystyle U_{1}\in \mathbb {F} ^{m_{1}\times m_{1}}} is unitary, U 2 ∈ F m 2 × m 2 {\displaystyle U_{2}\in \mathbb {F} ^{m_{2}\times m_{2}}} is unitary, Q ∈ F n × n {\displaystyle Q\in \mathbb {F} ^{n\times n}} is unitary, W ∈ F k × k {\displaystyle W\in \mathbb {F} ^{k\times k}} is unitary, D ∈ R k × k {\displaystyle D\in \mathbb {R} ^{k\times k}} is real diagonal with positive diagonal, and contains the non-zero singular values of C = [ A 1 A 2 ] {\displaystyle C={\begin{bmatrix}A_{1}\\A_{2}\end{bmatrix}}} in decreasing order, 0 D = 0 ∈ R k × ( n − k ) {\displaystyle 0_{D}=0\in \mathbb {R} ^{k\times (n-k)}} , Σ 1 = ⌈ I A , S 1 , 0 A ⌋ ∈ R m 1 × k {\displaystyle \Sigma _{1}=\lceil I_{A},S_{1},0_{A}\rfloor \in \mathbb {R} ^{m_{1}\times k}} is real non-negative block-diagonal, where S 1 = ⌈ α r + 1 , … , α r + s ⌋ {\displaystyle S_{1}=\lceil \alpha _{r+1},\dots ,\alpha _{r+s}\rfloor } with 1 > α r + 1 ≥ ⋯ ≥ α r + s > 0 {\displaystyle 1>\alpha _{r+1}\geq \cdots \geq \alpha _{r+s}>0} , I A = I r {\displaystyle I_{A}=I_{r}} , and 0 A = 0 ∈ R ( m 1 − r − s ) × ( k − r − s ) {\displaystyle 0_{A}=0\in \mathbb {R} ^{(m_{1}-r-s)\times (k-r-s)}} , Σ 2 = ⌈ 0 B , S 2 , I B ⌋ ∈ R m 2 × k {\displaystyle \Sigma _{2}=\lceil 0_{B},S_{2},I_{B}\rfloor \in \mathbb {R} ^{m_{2}\times k}} is real non-negative block-diagonal, where S 2 = ⌈ β r + 1 , … , β r + s ⌋ {\displaystyle S_{2}=\lceil \beta _{r+1},\dots ,\beta _{r+s}\rfloor } with 0 < β r + 1 ≤ ⋯ ≤ β r + s < 1 {\displaystyle 0<\beta _{r+1}\leq \cdots \leq \beta _{r+s}<1} , I B = I k 
− r − s {\displaystyle I_{B}=I_{k-r-s}} , and 0 B = 0 ∈ R ( m 2 − k + r ) × r {\displaystyle 0_{B}=0\in \mathbb {R} ^{(m_{2}-k+r)\times r}} , Σ 1 ∗ Σ 1 = ⌈ α 1 2 , … , α k 2 ⌋ {\displaystyle \Sigma _{1}^{*}\Sigma _{1}=\lceil \alpha _{1}^{2},\dots ,\alpha _{k}^{2}\rfloor } , Σ 2 ∗ Σ 2 = ⌈ β 1 2 , … , β k 2 ⌋ {\displaystyle \Sigma _{2}^{*}\Sigma _{2}=\lceil \beta _{1}^{2},\dots ,\beta _{k}^{2}\rfloor } , Σ 1 ∗ Σ 1 + Σ 2 ∗ Σ 2 = I k {\displaystyle \Sigma _{1}^{*}\Sigma _{1}+\Sigma _{2}^{*}\Sigma _{2}=I_{k}} , k = rank ( C ) {\displaystyle k={\textrm {rank}}(C)} . We denote α 1 = ⋯ = α r = 1 {\displaystyle \alpha _{1}=\cdots =\alpha _{r}=1} , α r + s + 1 = ⋯ = α k = 0 {\displaystyle \alpha _{r+s+1}=\cdots =\alpha _{k}=0} , β 1 = ⋯ = β r = 0 {\displaystyle \beta _{1}=\cdots =\beta _{r}=0} , and β r + s + 1 = ⋯ = β k = 1 {\displaystyle \beta _{r+s+1}=\cdots =\beta _{k}=1} . While Σ 1 {\displaystyle \Sigma _{1}} is diagonal, Σ 2 {\displaystyle \Sigma _{2}} is not always diagonal, because of the leading rectangular zero matrix; instead Σ 2 {\displaystyle \Sigma _{2}} is "bottom-right-diagonal". === Variations === There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply Q ∗ {\displaystyle Q^{*}} from the left by E E ∗ = I {\displaystyle EE^{*}=I} where E ∈ F n × n {\displaystyle E\in \mathbb {F} ^{n\times n}} is an arbitrary unitary matrix. We denote X = ( [ W ∗ D , 0 D ] Q ∗ ) ∗ {\displaystyle X=([W^{*}D,0_{D}]Q^{*})^{*}} X ∗ = [ 0 , R ] Q ^ ∗ {\displaystyle X^{*}=[0,R]{\hat {Q}}^{*}} , where R ∈ F k × k {\displaystyle R\in \mathbb {F} ^{k\times k}} is upper-triangular and invertible, and Q ^ ∈ F n × n {\displaystyle {\hat {Q}}\in \mathbb {F} ^{n\times n}} is unitary. Such matrices exist by RQ-decomposition. Y = W ∗ D {\displaystyle Y=W^{*}D} . Then Y {\displaystyle Y} is invertible. Here are some variations of the GSVD: MATLAB (gsvd): A 1 = U 1 Σ 1 X ∗ , A 2 = U 2 Σ 2 X ∗ . 
{\displaystyle {\begin{aligned}A_{1}&=U_{1}\Sigma _{1}X^{*},\\A_{2}&=U_{2}\Sigma _{2}X^{*}.\end{aligned}}} LAPACK (LA_GGSVD): A 1 = U 1 Σ 1 [ 0 , R ] Q ^ ∗ , A 2 = U 2 Σ 2 [ 0 , R ] Q ^ ∗ . {\displaystyle {\begin{aligned}A_{1}&=U_{1}\Sigma _{1}[0,R]{\hat {Q}}^{*},\\A_{2}&=U_{2}\Sigma _{2}[0,R]{\hat {Q}}^{*}.\end{aligned}}} Simplified: A 1 = U 1 Σ 1 [ Y , 0 D ] Q ∗ , A 2 = U 2 Σ 2 [ Y , 0 D ] Q ∗ . {\displaystyle {\begin{aligned}A_{1}&=U_{1}\Sigma _{1}[Y,0_{D}]Q^{*},\\A_{2}&=U_{2}\Sigma _{2}[Y,0_{D}]Q^{*}.\end{aligned}}} === Generalized singular values === A generalized singular value of A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} is a pair ( a , b ) ∈ R 2 {\displaystyle (a,b)\in \mathbb {R} ^{2}} such that lim δ → 0 det ( b 2 A 1 ∗ A 1 − a 2 A 2 ∗ A 2 + δ I n ) / det ( δ I n − k ) = 0 , a 2 + b 2 = 1 , a , b ≥ 0. {\displaystyle {\begin{aligned}\lim _{\delta \to 0}\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta I_{n})/\det(\delta I_{n-k})&=0,\\a^{2}+b^{2}&=1,\\a,b&\geq 0.\end{aligned}}} We have A i A j ∗ = U i Σ i Y Y ∗ Σ j ∗ U j ∗ {\displaystyle A_{i}A_{j}^{*}=U_{i}\Sigma _{i}YY^{*}\Sigma _{j}^{*}U_{j}^{*}} A i ∗ A j = Q [ Y ∗ Σ i ∗ Σ j Y 0 0 0 ] Q ∗ = Q 1 Y ∗ Σ i ∗ Σ j Y Q 1 ∗ {\displaystyle A_{i}^{*}A_{j}=Q{\begin{bmatrix}Y^{*}\Sigma _{i}^{*}\Sigma _{j}Y&0\\0&0\end{bmatrix}}Q^{*}=Q_{1}Y^{*}\Sigma _{i}^{*}\Sigma _{j}YQ_{1}^{*}} By these properties we can show that the generalized singular values are exactly the pairs ( α i , β i ) {\displaystyle (\alpha _{i},\beta _{i})} . We have det ( b 2 A 1 ∗ A 1 − a 2 A 2 ∗ A 2 + δ I n ) = det ( b 2 A 1 ∗ A 1 − a 2 A 2 ∗ A 2 + δ Q Q ∗ ) = det ( Q [ Y ∗ ( b 2 Σ 1 ∗ Σ 1 − a 2 Σ 2 ∗ Σ 2 ) Y + δ I k 0 0 δ I n − k ] Q ∗ ) = det ( δ I n − k ) det ( Y ∗ ( b 2 Σ 1 ∗ Σ 1 − a 2 Σ 2 ∗ Σ 2 ) Y + δ I k ) . 
{\displaystyle {\begin{aligned}&\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta I_{n})\\=&\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta QQ^{*})\\=&\det \left(Q{\begin{bmatrix}Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y+\delta I_{k}&0\\0&\delta I_{n-k}\end{bmatrix}}Q^{*}\right)\\=&\det(\delta I_{n-k})\det(Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y+\delta I_{k}).\end{aligned}}} Therefore lim δ → 0 det ( b 2 A 1 ∗ A 1 − a 2 A 2 ∗ A 2 + δ I n ) / det ( δ I n − k ) = lim δ → 0 det ( Y ∗ ( b 2 Σ 1 ∗ Σ 1 − a 2 Σ 2 ∗ Σ 2 ) Y + δ I k ) = det ( Y ∗ ( b 2 Σ 1 ∗ Σ 1 − a 2 Σ 2 ∗ Σ 2 ) Y ) = | det ( Y ) | 2 ∏ i = 1 k ( b 2 α i 2 − a 2 β i 2 ) . {\displaystyle {\begin{aligned}{}&\lim _{\delta \to 0}\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta I_{n})/\det(\delta I_{n-k})\\=&\lim _{\delta \to 0}\det(Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y+\delta I_{k})\\=&\det(Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y)\\=&|\det(Y)|^{2}\prod _{i=1}^{k}(b^{2}\alpha _{i}^{2}-a^{2}\beta _{i}^{2}).\end{aligned}}} This expression is zero exactly when a = α i {\displaystyle a=\alpha _{i}} and b = β i {\displaystyle b=\beta _{i}} for some i {\displaystyle i} . In, the generalized singular values are claimed to be those which solve det ( b 2 A 1 ∗ A 1 − a 2 A 2 ∗ A 2 ) = 0 {\displaystyle \det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2})=0} . However, this claim only holds when k = n {\displaystyle k=n} , since otherwise the determinant is zero for every pair ( a , b ) ∈ R 2 {\displaystyle (a,b)\in \mathbb {R} ^{2}} ; this can be seen by substituting δ = 0 {\displaystyle \delta =0} above. 
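When A2 is square and invertible (so k = n), the generalized singular values can be computed from the symmetric-definite pencil (A1*A1, A2*A2): μ = a²/b² is a generalized eigenvalue, and the ratios a/b coincide with the singular values of A1 A2⁻¹. A numpy/scipy sketch with randomly chosen matrices (the example data is mine):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))   # generically invertible, so k = n

# det(b^2 A1^T A1 - a^2 A2^T A2) = 0  <=>  mu = a^2/b^2 is a generalized
# eigenvalue of the pencil (A1^T A1, A2^T A2), with A2^T A2 positive definite.
mu = eigh(A1.T @ A1, A2.T @ A2, eigvals_only=True)
ratios = np.sqrt(np.sort(mu)[::-1])   # generalized singular ratios a/b, decreasing

print(np.allclose(ratios,
                  np.linalg.svd(A1 @ np.linalg.inv(A2), compute_uv=False)))  # True
```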
=== Generalized inverse === Define E + = E − 1 {\displaystyle E^{+}=E^{-1}} for any invertible matrix E ∈ F n × n {\displaystyle E\in \mathbb {F} ^{n\times n}} , 0 + = 0 ∗ {\displaystyle 0^{+}=0^{*}} for any zero matrix 0 ∈ F m × n {\displaystyle 0\in \mathbb {F} ^{m\times n}} , and ⌈ E 1 , E 2 ⌋ + = ⌈ E 1 + , E 2 + ⌋ {\displaystyle \left\lceil E_{1},E_{2}\right\rfloor ^{+}=\left\lceil E_{1}^{+},E_{2}^{+}\right\rfloor } for any block-diagonal matrix. Then define A i + = Q [ Y − 1 0 ] Σ i + U i ∗ {\displaystyle A_{i}^{+}=Q{\begin{bmatrix}Y^{-1}\\0\end{bmatrix}}\Sigma _{i}^{+}U_{i}^{*}} It can be shown that A i + {\displaystyle A_{i}^{+}} as defined here is a generalized inverse of A i {\displaystyle A_{i}} ; in particular a { 1 , 2 , 3 } {\displaystyle \{1,2,3\}} -inverse of A i {\displaystyle A_{i}} . Since it does not in general satisfy ( A i + A i ) ∗ = A i + A i {\displaystyle (A_{i}^{+}A_{i})^{*}=A_{i}^{+}A_{i}} , this is not the Moore–Penrose inverse; otherwise we could derive ( A B ) + = B + A + {\displaystyle (AB)^{+}=B^{+}A^{+}} for any choice of matrices, which only holds for certain class of matrices. Suppose Q = [ Q 1 Q 2 ] {\displaystyle Q={\begin{bmatrix}Q_{1}&Q_{2}\end{bmatrix}}} , where Q 1 ∈ F n × k {\displaystyle Q_{1}\in \mathbb {F} ^{n\times k}} and Q 2 ∈ F n × ( n − k ) {\displaystyle Q_{2}\in \mathbb {F} ^{n\times (n-k)}} . 
This generalized inverse has the following properties: Σ 1 + = ⌈ I A , S 1 − 1 , 0 A T ⌋ {\displaystyle \Sigma _{1}^{+}=\lceil I_{A},S_{1}^{-1},0_{A}^{T}\rfloor } Σ 2 + = ⌈ 0 B T , S 2 − 1 , I B ⌋ {\displaystyle \Sigma _{2}^{+}=\lceil 0_{B}^{T},S_{2}^{-1},I_{B}\rfloor } Σ 1 Σ 1 + = ⌈ I , I , 0 ⌋ {\displaystyle \Sigma _{1}\Sigma _{1}^{+}=\lceil I,I,0\rfloor } Σ 2 Σ 2 + = ⌈ 0 , I , I ⌋ {\displaystyle \Sigma _{2}\Sigma _{2}^{+}=\lceil 0,I,I\rfloor } Σ 1 Σ 2 + = ⌈ 0 , S 1 S 2 − 1 , 0 ⌋ {\displaystyle \Sigma _{1}\Sigma _{2}^{+}=\lceil 0,S_{1}S_{2}^{-1},0\rfloor } Σ 1 + Σ 2 = ⌈ 0 , S 1 − 1 S 2 , 0 ⌋ {\displaystyle \Sigma _{1}^{+}\Sigma _{2}=\lceil 0,S_{1}^{-1}S_{2},0\rfloor } A i A j + = U i Σ i Σ j + U j ∗ {\displaystyle A_{i}A_{j}^{+}=U_{i}\Sigma _{i}\Sigma _{j}^{+}U_{j}^{*}} A i + A j = Q [ Y − 1 Σ i + Σ j Y 0 0 0 ] Q ∗ = Q 1 Y − 1 Σ i + Σ j Y Q 1 ∗ {\displaystyle A_{i}^{+}A_{j}=Q{\begin{bmatrix}Y^{-1}\Sigma _{i}^{+}\Sigma _{j}Y&0\\0&0\end{bmatrix}}Q^{*}=Q_{1}Y^{-1}\Sigma _{i}^{+}\Sigma _{j}YQ_{1}^{*}} === Quotient SVD === A generalized singular ratio of A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} is σ i = α i β i + {\displaystyle \sigma _{i}=\alpha _{i}\beta _{i}^{+}} . By the above properties, A 1 A 2 + = U 1 Σ 1 Σ 2 + U 2 ∗ {\displaystyle A_{1}A_{2}^{+}=U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}} . Note that Σ 1 Σ 2 + = ⌈ 0 , S 1 S 2 − 1 , 0 ⌋ {\displaystyle \Sigma _{1}\Sigma _{2}^{+}=\lceil 0,S_{1}S_{2}^{-1},0\rfloor } is diagonal, and that, ignoring the leading zeros, contains the singular ratios in decreasing order. If A 2 {\displaystyle A_{2}} is invertible, then Σ 1 Σ 2 + {\displaystyle \Sigma _{1}\Sigma _{2}^{+}} has no leading zeros, and the generalized singular ratios are the singular values, and U 1 {\displaystyle U_{1}} and U 2 {\displaystyle U_{2}} are the matrices of singular vectors, of the matrix A 1 A 2 + = A 1 A 2 − 1 {\displaystyle A_{1}A_{2}^{+}=A_{1}A_{2}^{-1}} . 
In fact, computing the SVD of A 1 A 2 − 1 {\displaystyle A_{1}A_{2}^{-1}} is one of the motivations for the GSVD, as "forming A B − 1 {\displaystyle AB^{-1}} and finding its SVD can lead to unnecessary and large numerical errors when B {\displaystyle B} is ill-conditioned for solution of equations". Hence the sometimes used name "quotient SVD", although this is not the only reason for using GSVD. If A 2 {\displaystyle A_{2}} is not invertible, then U 1 Σ 1 Σ 2 + U 2 ∗ {\displaystyle U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}} is still the SVD of A 1 A 2 + {\displaystyle A_{1}A_{2}^{+}} if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing order SVD can be found by moving the leading zeros to the back: U 1 Σ 1 Σ 2 + U 2 ∗ = ( U 1 P 1 ) P 1 ∗ Σ 1 Σ 2 + P 2 ( P 2 ∗ U 2 ∗ ) {\displaystyle U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}=(U_{1}P_{1})P_{1}^{*}\Sigma _{1}\Sigma _{2}^{+}P_{2}(P_{2}^{*}U_{2}^{*})} , where P 1 {\displaystyle P_{1}} and P 2 {\displaystyle P_{2}} are appropriate permutation matrices. Since rank equals the number of non-zero singular values, r a n k ( A 1 A 2 + ) = s {\displaystyle \mathrm {rank} (A_{1}A_{2}^{+})=s} . 
=== Construction === Let C = P ⌈ D , 0 ⌋ Q ∗ {\displaystyle C=P\lceil D,0\rfloor Q^{*}} be the SVD of C = [ A 1 A 2 ] {\displaystyle C={\begin{bmatrix}A_{1}\\A_{2}\end{bmatrix}}} , where P ∈ F ( m 1 + m 2 ) × ( m 1 + m 2 ) {\displaystyle P\in \mathbb {F} ^{(m_{1}+m_{2})\times (m_{1}+m_{2})}} is unitary, and Q {\displaystyle Q} and D {\displaystyle D} are as described, P = [ P 1 , P 2 ] {\displaystyle P=[P_{1},P_{2}]} , where P 1 ∈ F ( m 1 + m 2 ) × k {\displaystyle P_{1}\in \mathbb {F} ^{(m_{1}+m_{2})\times k}} and P 2 ∈ F ( m 1 + m 2 ) × ( m 1 + m 2 − k ) {\displaystyle P_{2}\in \mathbb {F} ^{(m_{1}+m_{2})\times (m_{1}+m_{2}-k)}} , P 1 = [ P 11 P 21 ] {\displaystyle P_{1}={\begin{bmatrix}P_{11}\\P_{21}\end{bmatrix}}} , where P 11 ∈ F m 1 × k {\displaystyle P_{11}\in \mathbb {F} ^{m_{1}\times k}} and P 21 ∈ F m 2 × k {\displaystyle P_{21}\in \mathbb {F} ^{m_{2}\times k}} , P 11 = U 1 Σ 1 W ∗ {\displaystyle P_{11}=U_{1}\Sigma _{1}W^{*}} by the SVD of P 11 {\displaystyle P_{11}} , where U 1 {\displaystyle U_{1}} , Σ 1 {\displaystyle \Sigma _{1}} and W {\displaystyle W} are as described, P 21 W = U 2 Σ 2 {\displaystyle P_{21}W=U_{2}\Sigma _{2}} by a decomposition similar to a QR-decomposition, where U 2 {\displaystyle U_{2}} and Σ 2 {\displaystyle \Sigma _{2}} are as described. Then C = P ⌈ D , 0 ⌋ Q ∗ = [ P 1 D , 0 ] Q ∗ = [ U 1 Σ 1 W ∗ D 0 U 2 Σ 2 W ∗ D 0 ] Q ∗ = [ U 1 Σ 1 [ W ∗ D , 0 ] Q ∗ U 2 Σ 2 [ W ∗ D , 0 ] Q ∗ ] . {\displaystyle {\begin{aligned}C&=P\lceil D,0\rfloor Q^{*}\\{}&=[P_{1}D,0]Q^{*}\\{}&={\begin{bmatrix}U_{1}\Sigma _{1}W^{*}D&0\\U_{2}\Sigma _{2}W^{*}D&0\end{bmatrix}}Q^{*}\\{}&={\begin{bmatrix}U_{1}\Sigma _{1}[W^{*}D,0]Q^{*}\\U_{2}\Sigma _{2}[W^{*}D,0]Q^{*}\end{bmatrix}}.\end{aligned}}} We also have [ U 1 ∗ 0 0 U 2 ∗ ] P 1 W = [ Σ 1 Σ 2 ] .
{\displaystyle {\begin{bmatrix}U_{1}^{*}&0\\0&U_{2}^{*}\end{bmatrix}}P_{1}W={\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}.} Therefore Σ 1 ∗ Σ 1 + Σ 2 ∗ Σ 2 = [ Σ 1 Σ 2 ] ∗ [ Σ 1 Σ 2 ] = W ∗ P 1 ∗ [ U 1 0 0 U 2 ] [ U 1 ∗ 0 0 U 2 ∗ ] P 1 W = I . {\displaystyle \Sigma _{1}^{*}\Sigma _{1}+\Sigma _{2}^{*}\Sigma _{2}={\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}^{*}{\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}=W^{*}P_{1}^{*}{\begin{bmatrix}U_{1}&0\\0&U_{2}\end{bmatrix}}{\begin{bmatrix}U_{1}^{*}&0\\0&U_{2}^{*}\end{bmatrix}}P_{1}W=I.} Since P 1 {\displaystyle P_{1}} has orthonormal columns, | | P 1 | | 2 ≤ 1 {\displaystyle ||P_{1}||_{2}\leq 1} . Therefore | | Σ 1 | | 2 = | | U 1 ∗ P 1 W | | 2 = | | P 1 | | 2 ≤ 1. {\displaystyle ||\Sigma _{1}||_{2}=||U_{1}^{*}P_{1}W||_{2}=||P_{1}||_{2}\leq 1.} We also have for each x ∈ R k {\displaystyle x\in \mathbb {R} ^{k}} such that | | x | | 2 = 1 {\displaystyle ||x||_{2}=1} that | | P 21 x | | 2 2 ≤ | | P 11 x | | 2 2 + | | P 21 x | | 2 2 = | | P 1 x | | 2 2 ≤ 1. {\displaystyle ||P_{21}x||_{2}^{2}\leq ||P_{11}x||_{2}^{2}+||P_{21}x||_{2}^{2}=||P_{1}x||_{2}^{2}\leq 1.} Therefore | | P 21 | | 2 ≤ 1 {\displaystyle ||P_{21}||_{2}\leq 1} , and | | Σ 2 | | 2 = | | U 2 ∗ P 21 W | | 2 = | | P 21 | | 2 ≤ 1. {\displaystyle ||\Sigma _{2}||_{2}=||U_{2}^{*}P_{21}W||_{2}=||P_{21}||_{2}\leq 1.} == Applications == The GSVD, formulated as a comparative spectral decomposition, has been successfully applied to signal processing and data science, e.g., in genomic signal processing. These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD) and the tensor GSVD. It has equally found applications to estimate the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space. 
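The construction above can be sketched numerically. The sketch below assumes the generic case k = n with r = 0 (all α strictly between 0 and 1), uses the fact that the columns of P21 W are orthogonal with norms βᵢ = √(1 − αᵢ²) in place of the QR-like step, and uses variable names mirroring the text:

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, n = 5, 6, 3
A1 = rng.standard_normal((m1, n))
A2 = rng.standard_normal((m2, n))

# Step 1: SVD of the stacked matrix C (generically k = rank(C) = n).
C = np.vstack([A1, A2])
P, d, Qh = np.linalg.svd(C, full_matrices=False)
D = np.diag(d)
P11, P21 = P[:m1], P[m1:]

# Step 2: SVD of the top block gives U1, the alphas, and W.
U1, alpha, Wh = np.linalg.svd(P11)
W = Wh.conj().T

# Step 3: the columns of P21 W are orthogonal with norms beta_i = sqrt(1 - alpha_i^2);
# normalizing them yields U2 and Sigma2 = diag(beta).
T = P21 @ W
beta = np.linalg.norm(T, axis=0)
U2 = T / beta

print(np.allclose(alpha**2 + beta**2, 1.0))                       # Sigma1*Sigma1 + Sigma2*Sigma2 = I
print(np.allclose(A1, U1[:, :n] @ np.diag(alpha) @ Wh @ D @ Qh))  # A1 = U1 Sigma1 W* D Q*
print(np.allclose(A2, U2 @ np.diag(beta) @ Wh @ D @ Qh))          # A2 = U2 Sigma2 W* D Q*
```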
== Second version: weighted single-matrix decomposition == The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition. This form of the GSVD is an extension of the SVD itself. Given the SVD of an m×n real or complex matrix M, M = U Σ V ∗ {\displaystyle M=U\Sigma V^{*}\,} where U ∗ W u U = V ∗ W v V = I . {\displaystyle U^{*}W_{u}U=V^{*}W_{v}V=I.} Here I is the identity matrix, and U {\displaystyle U} and V {\displaystyle V} are orthonormal with respect to their constraints ( W u {\displaystyle W_{u}} and W v {\displaystyle W_{v}} ). Additionally, W u {\displaystyle W_{u}} and W v {\displaystyle W_{v}} are positive definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and correspondence analysis. The weighted form of the GSVD is so called because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis). == References == == Further reading ==
Wikipedia:Generating set of a module#0
In mathematics, a generating set Γ of a module M over a ring R is a subset of M such that the smallest submodule of M containing Γ is M itself (the smallest submodule containing a subset is the intersection of all submodules containing the set). The set Γ is then said to generate M. For example, the ring R is generated by the identity element 1 as a left R-module over itself. If there is a finite generating set, then a module is said to be finitely generated. This applies to ideals, which are the submodules of the ring itself. In particular, a principal ideal is an ideal that has a generating set consisting of a single element. Explicitly, if Γ is a generating set of a module M, then every element of M is a (finite) R-linear combination of some elements of Γ; i.e., for each x in M, there are r1, ..., rm in R and g1, ..., gm in Γ such that x = r 1 g 1 + ⋯ + r m g m . {\displaystyle x=r_{1}g_{1}+\cdots +r_{m}g_{m}.} Put in another way, there is a surjection ⨁ g ∈ Γ R → M , r g ↦ r g g , {\displaystyle \bigoplus _{g\in \Gamma }R\to M,\,r_{g}\mapsto r_{g}g,} where we wrote rg for an element in the g-th component of the direct sum. (Coincidentally, since a generating set always exists, e.g. M itself, this shows that a module is a quotient of a free module, a useful fact.) A generating set of a module is said to be minimal if no proper subset of the set generates the module. If R is a field, then a minimal generating set is the same thing as a basis. Unless the module is finitely generated, there may exist no minimal generating set. The cardinality of a minimal generating set need not be an invariant of the module; Z is generated as a principal ideal by 1, but it is also generated by, say, a minimal generating set {2, 3}. What is uniquely determined by a module is the infimum of the numbers of the generators of the module. Let R be a local ring with maximal ideal m and residue field k and M finitely generated module. 
Then Nakayama's lemma says that M has a minimal generating set whose cardinality is dim k M / m M = dim k M ⊗ R k {\displaystyle \dim _{k}M/mM=\dim _{k}M\otimes _{R}k} . If M is flat, then this minimal generating set is linearly independent (so M is free). See also: Minimal resolution. More refined information is obtained if one considers the relations between the generators; see Free presentation of a module. == See also == Countably generated module Flat module Invariant basis number == References == Dummit, David; Foote, Richard. Abstract Algebra.
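The article's example that {2, 3} is a minimal generating set of Z can be checked directly. This small Python sketch (the helper name is illustrative, not from the article) exhibits, for each integer x, coefficients r1, r2 with x = r1·2 + r2·3, using the relation 1 = (-1)·2 + 1·3.

```python
# Since 1 = (-1)*2 + 1*3, every integer x equals (-x)*2 + x*3,
# so {2, 3} generates Z as a Z-module.
def coefficients(x):
    r1, r2 = -x, x
    assert r1 * 2 + r2 * 3 == x
    return r1, r2

for x in range(-50, 51):
    coefficients(x)

# Neither {2} nor {3} alone generates Z (each generates only its own
# multiples), so {2, 3} is minimal, even though {1} also generates Z --
# illustrating that minimal generating sets can have different cardinalities.
```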
|
Wikipedia:Generator (mathematics)#0
|
In mathematics and physics, the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set. The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set. == List of generators == A list of examples of generating sets follows. Generating set or spanning set of a vector space: a set that spans the vector space Generating set of a group: A subset of a group that is not contained in any subgroup of the group other than the entire group Generating set of a ring: A subset S of a ring A generates A if the only subring of A containing S is A Generating set of an ideal in a ring Generating set of a module A generator, in category theory, is an object that can be used to distinguish morphisms In topology, a collection of sets that generate the topology is called a subbase Generating set of a topological algebra: S is a generating set of a topological algebra A if the smallest closed subalgebra of A containing S is A Generating a σ-algebra by a collection of subsets == Differential equations == In the study of differential equations, and commonly those occurring in physics, one has the idea of a set of infinitesimal displacements that can be extended to obtain a manifold, or at least, a local part of it, by means of integration. 
The general concept is of using the exponential map to take the vectors in the tangent space and extend them, as geodesics, to an open set surrounding the tangent point. In this case, it is not unusual to call the elements of the tangent space the generators of the manifold. When the manifold possesses some sort of symmetry, there is also the related notion of a charge or current, which is sometimes also called the generator, although, strictly speaking, charges are not elements of the tangent space. Elements of the Lie algebra to a Lie group are sometimes referred to as "generators of the group," especially by physicists. The Lie algebra can be thought of as the infinitesimal vectors generating the group, at least locally, by means of the exponential map, but the Lie algebra does not form a generating set in the strict sense. In stochastic analysis, an Itō diffusion or more general Itō process has an infinitesimal generator. Noether's theorem associates a generator with any continuous symmetry, the generators of a Lie group being a special case. In this case, a generator is sometimes called a charge or Noether charge; examples include: angular momentum as the generator of rotations, linear momentum as the generator of translations, electric charge as the generator of the U(1) symmetry group of electromagnetism, and the color charges of quarks as the generators of the SU(3) color symmetry in quantum chromodynamics. More precisely, "charge" should apply only to the root system of a Lie group. == See also == Free object Generating function Lie theory Symmetry (physics) Supersymmetry Gauge theory Field (physics) == References == == External links == Generating Sets, K. Conrad
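The statement that Lie algebra elements generate the group locally via the exponential map can be illustrated numerically. The following Python sketch is illustrative only (the series-based expm helper and all names are assumptions, not from the article): the matrix J spans the Lie algebra so(2), and exponentiating it produces rotations in the Lie group SO(2).

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by truncated power series (adequate for small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# J is the generator of plane rotations, an element of so(2).
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
theta = 0.7
R = expm(theta * J)   # the exponential map sends theta*J into SO(2)

expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)         # exp(theta*J) is rotation by theta
assert np.allclose(R.T @ R, np.eye(2))  # group elements are orthogonal
```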
|
Wikipedia:Geneviève Gauthier#0
|
Geneviève Gauthier is a Canadian mathematician, statistician, and decision scientist, known for her work in mathematical finance including the valuation of options and financial risk management. She is a professor of statistics in the Department of Decision Sciences at HEC Montréal. == Education and career == Gauthier is originally from Montreal, and earned bachelor's and master's degrees in mathematics from the Université du Québec à Montréal. One of her faculty mentors there was Alain Latour. She completed her Ph.D. in mathematics from Carleton University, under the supervision of Donald A. Dawson; her dissertation was Multilevel Bilinear System of Stochastic Differential Equations. She became a faculty member at HEC Montréal on completing her doctorate in 1996, and was promoted to full professor there in 2008. She chaired the Department of Decision Sciences from 2013 to 2016. == Recognition == In 2018 the Statistical Society of Canada selected Gauthier as the winner of the SSC Award for Impact of Applied and Collaborative Work, "for her outstanding contributions to the promotion of innovative statistical methodologies in financial engineering, and in the training of highly qualified personnel". == References == == External links == Home page Geneviève Gauthier publications indexed by Google Scholar
|
Wikipedia:Gennadii Rubinstein#0
|
Gennadii Shlemovich Rubinstein (Russian: Геннадий Шлемович Рубинштейн) was a Russian mathematician. His research focused on mathematical programming and operations research. His name is associated with the Kantorovich–Rubinstein metric, also commonly known as the Wasserstein distance, used in optimal transport. Alternate form of the first name: Gennady. Alternate forms of the last name: Rubinšteĭn, Rubinshtein. == Doctorate == Gennadii Rubinstein received his doctorate from St. Petersburg State University in 1956, under the supervision of Leonid V. Kantorovich. == Selected publications == Rubinstein, G. Sh. (1995). "On multiple-point centers of normalized measures on locally compact metric spaces". Siberian Mathematical Journal. 36 (1): 143–146. Bibcode:1995SibMJ..36..143R. doi:10.1007/BF02113928. S2CID 121093125. Rubinstein, G. Sh. (1970). "Duality in mathematical programming and some problems of convex analysis". Russian Mathematical Surveys. 25 (5): 171–200. Bibcode:1970RuMaS..25..171R. doi:10.1070/RM1970v025n05ABEH003800. S2CID 250903124. Akilov, G. P.; Kantorovich, L. V.; Rubinstein, G. Sh. (1967). "Extremal States and Extremal Controls". SIAM Journal on Control. 5 (4): 600–608. doi:10.1137/0305039. Rubinstein, G. Sh. (1963). "Dual extremal problems". Dokl. Akad. Nauk SSSR. 152 (2): 288–291. Rubinstein, G. Sh.; Urbanik, K. (1957). "Solution of an Extremal Problem". Theory of Probability & Its Applications. 2 (3): 364–366. doi:10.1137/1102025. Kantorovich, L. V.; Rubinstein, G. Sh. (1957). "On a functional space and certain extremum problems". Dokl. Akad. Nauk SSSR. 11 (6): 1058–1061. Rubinstein, G. Sh. (1954). "The general solution of a finite system of linear inequalities". Uspekhi Mat. Nauk. 9 (2(60)): 171–177. == See also == List of Russian mathematicians == References == == External links == Gennadii Rubinstein at the Mathematics Genealogy Project A web page about Gennadii Rubinstein's publications Gennadii Shlemovich Rubinstein (obituary)
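For readers unfamiliar with the Kantorovich–Rubinstein (Wasserstein) metric mentioned above, here is a minimal illustrative sketch in Python (the function name is an assumption, not from the article): for two one-dimensional empirical distributions with the same number of samples, the Wasserstein-1 distance reduces to the mean absolute difference between sorted samples.

```python
import numpy as np

def wasserstein_1d(u, v):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples:
    optimal transport in 1-D pairs off the order statistics."""
    u, v = np.sort(u), np.sort(v)
    return np.mean(np.abs(u - v))

a = np.array([0.0, 1.0, 3.0])
b = a + 5.0                      # the same distribution shifted by 5
assert wasserstein_1d(a, b) == 5.0   # shifting a distribution by c costs exactly c
assert wasserstein_1d(a, a) == 0.0
```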
|
Wikipedia:Gennadiy Feldman#0
|
Gennadiy Mykhailovych Feldman (Ukrainian: Геннадій Михайлович Фельдман; born October 15, 1947) is a Soviet and Ukrainian mathematician, corresponding member of the National Academy of Sciences of Ukraine, Doctor of Science in Physics and Mathematics, Professor, Head of the Mathematical Division of B. Verkin Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine. == Biography == In 1970, Feldman graduated from the Faculty of Mechanics and Mathematics of V.N. Karazin Kharkiv National University. From 1970 to 1973, he was a PhD student at B. Verkin Institute for Low Temperature Physics and Engineering. In 1973, he defended his thesis Harmonic Analysis of Non-unitary Representations of Locally Compact Abelian Groups (supervisor Prof. Yu. I. Lyubich) and obtained a degree of Candidate of Sciences (PhD). He has worked at B. Verkin Institute for Low Temperature Physics and Engineering ever since, starting as a junior researcher, and has headed its Mathematical Division since 2012. In 1985, he defended his doctoral thesis, Arithmetic of Probability Measures on Locally Compact Abelian Groups, at Vilnius University and obtained a degree of Doctor of Sciences (Dr. Hab.). In 2018, he was elected as a corresponding member of the National Academy of Sciences of Ukraine. For more than 20 years, Feldman worked part-time at the Faculty of Mechanics and Mathematics of V.N. Karazin Kharkiv National University. He is the author of 5 monographs and more than 100 scientific articles. Feldman specializes in the field of abstract harmonic analysis and algebraic probability theory. He constructed a theory of decompositions of random variables and proved analogs of the classical characterization theorems of mathematical statistics in the case when random variables take values in various classes of locally compact Abelian groups (discrete, compact, and others). 
== Awards == Ostrogradsky prize of the National Academy of Sciences of Ukraine (2009) for the series of works “Probabilistic Problems on Groups and in Spectral Theory” (together with L. Pastur and M. Shcherbina). State Prize of Ukraine in Science and Technology (2018) for the work «Qualitative methods of research models of mathematical physics» (together with A. Kochubey, M. Shcherbina, O. Rebenko, I. Mykytyuk, V. Samojlenko, A. Prykarpatskyj). Mitropolskiy prize of the National Academy of Sciences of Ukraine (2021) for the series of works “New analytical methods in the theory of nonlinear oscillations, the theory of random matrices and in characterization problems” (together with V. Slyusarchuk and M. Shcherbina). == Monographs == Г. М. Фельдман. Арифметика вероятностных распределений и характеризационные задачи на абелевых группах, Киев: Наукова думка, 1990, 168 с. G.M. Fel'dman. Arithmetic of probability distributions, and characterization problems on Abelian groups, Transl. Math. Monographs. Vol. 116, Providence, RI: American Mathematical Society, 1993, 223 p. [1] Gennadiy Feldman. Functional equations and characterization problems on locally compact Abelian groups, EMS Tracts in Mathematics 5, Zurich: European Mathematical Society, 2008, 268 p. [2] Г. М. Фельдман. Характеризационные задачи математической статистики на локально компактных абелевых группах, Киев: Наукова думка, 2010, 432 с. Gennadiy Feldman. Characterization of Probability Distributions on Locally Compact Abelian Groups. Mathematical Surveys and Monographs. Vol. 273, Providence, RI: American Mathematical Society, 2023, 240 pp. [3] == References == == External links == Web page on B. Verkin Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine Web page on Math-Net.Ru Article on the occasion of the sixtieth anniversary 75th anniversary of Corresponding Member of the NAS of Ukraine G.M. Feldman
|
Wikipedia:Geodesic grid#0
|
A geodesic grid is a spatial grid based on a geodesic polyhedron or Goldberg polyhedron. == History == The earliest use of the (icosahedral) geodesic grid in geophysical modeling dates back to 1968 and the work by Sadourny, Arakawa, and Mintz and Williamson. Later work expanded on this base. == Construction == A geodesic grid is a global Earth spatial reference that uses polygon tiles based on the subdivision of a polyhedron (usually the icosahedron, and usually a Class I subdivision) to subdivide the surface of the Earth. Such a grid does not have a straightforward relationship to latitude and longitude, but conforms to many of the main criteria for a statistically valid discrete global grid. Primarily, the cells' area and shape are generally similar, especially near the poles where many other spatial grids have singularities or heavy distortion. The popular Quaternary Triangular Mesh (QTM) falls into this category. Geodesic grids may use the dual polyhedron of the geodesic polyhedron, which is the Goldberg polyhedron. Goldberg polyhedra are made up of hexagons and (if based on the icosahedron) 12 pentagons. One implementation that uses an icosahedron as the base polyhedron, hexagonal cells, and the Snyder equal-area projection is known as the Icosahedron Snyder Equal Area (ISEA) grid. == Applications == In biodiversity science, geodesic grids are a global extension of local discrete grids that are staked out in field studies to ensure appropriate statistical sampling and larger multi-use grids deployed at regional and national levels to develop an aggregated understanding of biodiversity. These grids translate environmental and ecological monitoring data from multiple spatial and temporal scales into assessments of current ecological condition and forecasts of risks to our natural resources. A geodesic grid allows local to global assimilation of ecologically significant information at its own level of granularity. 
When modeling the weather, ocean circulation, or the climate, partial differential equations are used to describe the evolution of these systems over time. Because computer programs are used to build and work with these complex models, approximations need to be formulated into easily computable forms. Some of these numerical analysis techniques (such as finite differences) require the area of interest to be subdivided into a grid — in this case, over the shape of the Earth. Geodesic grids can be used in video game development to model fictional worlds instead of the Earth. They are a natural analog of the hex map to a spherical surface. === Pros and cons === Pros: Largely isotropic. Resolution can be easily increased by binary division. Does not suffer from oversampling near the poles like more traditional rectangular longitude–latitude square grids. Does not result in dense linear systems like spectral methods do (see also Gaussian grid). No single points of contact between neighboring grid cells. Square grids and isometric grids suffer from the ambiguous problem of how to handle neighbors that only touch at a single point. Cells can be both minimally distorted and near-equal-area. In contrast, square grids are not equal area, while equal-area rectangular grids vary in shape from equator to poles. Cons: More complicated to implement than rectangular longitude–latitude grids in computers. == See also == Geodesics on an ellipsoid Geographic coordinate system Grid reference Discrete Global Grid Spherical design, generalization to more than three dimensions Quadrilateralized spherical cube, a grid over the earth based on the cube and made of quadrilaterals instead of triangles Polyhedral map projection HEALPix Hierarchical triangular mesh == Notes == == References == == External links == BUGS climate model page on geodesic grids Discrete Global Grids page at the Computer Science department at Southern Oregon University "How PYXIS Works". Pyxis public wiki. 
25 January 2011. Archived from the original on Mar 1, 2021. Carfora, Maria Francesca (2007-12-31). "Interpolation on spherical geodesic grids: A comparative study". Journal of Computational and Applied Mathematics. Proceedings of the Numerical Analysis Conference 2005. 210 (1): 99–105. doi:10.1016/j.cam.2006.10.068. ISSN 0377-0427.
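The face-subdivision construction described in the Construction section can be sketched in a few lines of Python. This is illustrative only (the names are assumptions, and a tetrahedron stands in for the icosahedron to keep the vertex list short): each step splits every triangular face into four at the edge midpoints and projects the new vertices onto the unit sphere.

```python
import numpy as np

def subdivide(verts, faces):
    """One subdivision step: split each triangle into four, pushing new
    vertices onto the unit sphere. Faces are index triples into verts."""
    verts = [np.asarray(v, float) for v in verts]
    new_faces, cache = [], {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:              # share midpoints between faces
            m = (verts[i] + verts[j]) / 2.0
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

# A regular tetrahedron inscribed in the unit sphere (an actual geodesic
# grid would start from an icosahedron); each step quadruples the face count.
t = 1 / np.sqrt(3.0)
verts = [(t, t, t), (-t, -t, t), (-t, t, -t), (t, -t, -t)]
faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
verts, faces = subdivide(verts, faces)
assert len(faces) == 16                  # 4 faces -> 16
assert all(np.isclose(np.linalg.norm(v), 1.0) for v in verts)
```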
|
Wikipedia:Geoff Bascand#0
|
Geoff Bascand was the Deputy Governor and Head of Operations at the Reserve Bank of New Zealand. He was Government Statistician and the Chief Executive of Statistics New Zealand until May 2013. Bascand is a graduate of the University of Otago and the Australian National University with a BA (Honours) degree in Geography and a master's degree in Economics. == Career == Bascand has worked for the New Zealand Treasury, the International Monetary Fund in Washington, and the New Zealand Department of Labour. He was appointed one of three Deputy Government Statisticians for Statistics New Zealand in July 2004 and was responsible for Macro-Economic, Environment, Regional and Geography Statistics. He was appointed Government Statistician and Chief Executive of Statistics New Zealand on 22 May 2007. He started his career in 1981 at the Treasury as an economic analyst and later became Director of Forecasting. From 1998 until 2004, Bascand was the General Manager of the Labour Market Policy Group at the Department of Labour. As well as holding senior policy and management positions at the Treasury and the Department of Labour, Bascand has been a Research Fellow at the Centre of Policy Studies at Monash University in Australia, and from 1996 until 1997 he was a staff economist at the International Monetary Fund in Washington DC. In February 2005, he was a recipient of a Leadership Development Centre Fellowship award. On 12 February 2013 Bascand announced his resignation at Statistics New Zealand, finishing there on 24 May 2013. He has accepted a position at the Reserve Bank as Deputy Governor and Head of Operations. == References ==
|
Wikipedia:Geometriae Dedicata#0
|
Geometriae Dedicata is a mathematical journal, founded in 1972, concentrating on geometry and its relationship to topology, group theory and the theory of dynamical systems. It was created on the initiative of Hans Freudenthal in Utrecht, the Netherlands. It is published by Springer Netherlands. The Editor-in-Chief is Richard Alan Wentworth. == References == == External links == Springer site
|
Wikipedia:Geometric Exercises in Paper Folding#0
|
Geometric Exercises in Paper Folding is a book on the mathematics of paper folding. It was written by Indian mathematician T. Sundara Row, first published in India in 1893, and later republished in many other editions. Its topics include paper constructions for regular polygons, symmetry, and algebraic curves. According to the historian of mathematics Michael Friedman, it became "one of the main engines of the popularization of folding as a mathematical activity". == Publication history == Geometric Exercises in Paper Folding was first published by Addison & Co. in Madras in 1893. The book became known in Europe through a remark of Felix Klein in his book Vorträge über ausgewählte Fragen der Elementargeometrie (1895) and its translation Famous Problems Of Elementary Geometry (1897). Based on the success of Geometric Exercises in Paper Folding in Germany, the Open Court Press of Chicago published it in the US, with updates by Wooster Woodruff Beman and David Eugene Smith. Although Open Court listed four editions of the book, published in 1901, 1905, 1917, and 1941, the content did not change between these editions. The fourth edition was also published in London by La Salle, and both presses reprinted the fourth edition in 1958. The contributions of Beman and Smith to the Open Court editions have been described as "translation and adaptation", despite the fact that the original 1893 edition was already in English. Beman and Smith also replaced many footnotes with references to their own work, replaced some of the diagrams by photographs, and removed some remarks specific to India. In 1966, Dover Publications of New York published a reprint of the 1905 edition, and other publishers of out-of-copyright works have also printed editions of the book. == Topics == Geometric Exercises in Paper Folding shows how to construct various geometric figures using paper-folding in place of the classical Greek Straightedge and compass constructions. 
The book begins by constructing regular polygons beyond the classical constructible polygons of 3, 4, or 5 sides, or of any power of two times these numbers, and the construction by Carl Friedrich Gauss of the heptadecagon; it also provides a paper-folding construction of the regular nonagon, not possible with compass and straightedge. The nonagon construction involves angle trisection, but Rao is vague about how this can be performed using folding; an exact and rigorous method for folding-based trisection would have to wait until the work in the 1930s of Margherita Piazzola Beloch. The construction of the square also includes a discussion of the Pythagorean theorem. The book uses high-order regular polygons to provide a geometric calculation of pi. A discussion of the symmetries of the plane includes congruence, similarity, and collineations of the projective plane; this part of the book also covers some of the major theorems of projective geometry including Desargues's theorem, Pascal's theorem, and Poncelet's closure theorem. Later chapters of the book show how to construct algebraic curves including the conic sections, the conchoid, the cubical parabola, the witch of Agnesi, the cissoid of Diocles, and the Cassini ovals. The book also provides a gnomon-based proof of Nicomachus's theorem that the sum of the first n {\displaystyle n} cubes is the square of the sum of the first n {\displaystyle n} integers, and material on other arithmetic series, geometric series, and harmonic series. There are 285 exercises, and many illustrations, both in the form of diagrams and (in the updated editions) photographs. == Influences == Tandalam Sundara Row was born in 1853, the son of a college principal, and earned a bachelor's degree at the Kumbakonam College in 1874, with second-place honours in mathematics. He became a tax collector in Tiruchirappalli, retiring in 1913, and pursued mathematics as an amateur. 
As well as Geometric Exercises in Paper Folding, he also wrote a second book, Elementary Solid Geometry, published in three parts from 1906 to 1909. One of the sources of inspiration for Geometric Exercises in Paper Folding was Kindergarten Gift No. VIII: Paper-folding. This was one of the Froebel gifts, a set of kindergarten activities designed in the early 19th century by Friedrich Fröbel. The book was also influenced by an earlier Indian geometry textbook, First Lessons in Geometry, by Bhimanakunte Hanumantha Rao (1855–1922). First Lessons drew inspiration from Fröbel's gifts in setting exercises based on paper-folding, and from the book Elementary Geometry: Congruent Figures by Olaus Henrici in using a definition of geometric congruence based on matching shapes to each other and well-suited for folding-based geometry. In turn, Geometric Exercises in Paper Folding inspired other works of mathematics. A chapter in Mathematische Unterhaltungen und Spiele [Mathematical Recreations and Games] by Wilhelm Ahrens (1901) concerns folding and is based on Rao's book, inspiring the inclusion of this material in several other books on recreational mathematics. Other mathematical publications have studied the curves that can be generated by the folding processes used in Geometric Exercises in Paper Folding. In 1934, Margherita Piazzola Beloch began her research on axiomatizing the mathematics of paper-folding, a line of work that would eventually lead to the Huzita–Hatori axioms in the late 20th century. Beloch was explicitly inspired by Rao's book, titling her first work in this area "Alcune applicazioni del metodo del ripiegamento della carta di Sundara Row" ["Several applications of the method of folding a paper of Sundara Row"]. == Audience and reception == The original intent of Geometric Exercises in Paper Folding was twofold: as an aid in geometry instruction, and as a work of recreational mathematics to inspire interest in geometry in a general audience. 
Edward Mann Langley, reviewing the 1901 edition, suggested that its content went well beyond what should be covered in a standard geometry course. And in their own textbook on geometry using paper-folding exercises, The First Book of Geometry (1905), Grace Chisholm Young and William Henry Young heavily criticized Geometric Exercises in Paper Folding, writing that it is "too difficult for a child, and too infantile for a grown person". However, reviewing the 1966 Dover edition, mathematics educator Pamela Liebeck called it "remarkably relevant" to the discovery learning techniques for geometry instruction of the time, and in 2016 computational origami expert Tetsuo Ida, introducing an attempt to formalize the mathematics of the book, wrote "After 123 years, the significance of the book remains." == References == == External links == Madras edition and Open Court edition of Geometric Exercises in Paper Folding on the Internet Archive
|
Wikipedia:Geometric and Functional Analysis#0
|
Geometric and Functional Analysis (GAFA) is a mathematical journal published by Birkhäuser, an independent division of Springer-Verlag. The journal is published bi-monthly. The journal publishes major results on a broad range of mathematical topics related to geometry and analysis. GAFA is both an acronym and a part of the official full name of the journal. == History == GAFA was founded in 1991 by Mikhail Gromov and Vitali Milman. The idea for the journal was inspired by the long-running Israeli seminar series "Geometric Aspects of Functional Analysis" of which Vitali Milman had been one of the main organizers in the previous years. The journal retained the same acronym as the series to stress the connection between the two. == Journal information == The journal is reviewed cover-to-cover in Mathematical Reviews and zbMATH Open and is indexed cover-to-cover in the Web of Science. According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.2. The journal has seven editors: Vitali Milman (editor-in-chief), Simon Donaldson, Mikhail Gromov, Larry Guth, Boáz Klartag, Leonid Polterovich, and Peter Sarnak. == See also == Geometric analysis == References == == External links == Geometric and Functional Analysis (GAFA), official journal website, Springer-Verlag
|
Wikipedia:Geometric mean theorem#0
|
In Euclidean geometry, the right triangle altitude theorem or geometric mean theorem is a relation between the altitude on the hypotenuse in a right triangle and the two line segments it creates on the hypotenuse. It states that the geometric mean of those two segments equals the altitude. == Theorem and its converse == If h denotes the altitude in a right triangle and p and q the segments on the hypotenuse, then the theorem can be stated as: h = p q {\displaystyle h={\sqrt {pq}}} or in terms of areas: h 2 = p q . {\displaystyle h^{2}=pq.} The converse statement is true as well. Any triangle in which the altitude equals the geometric mean of the two line segments created by it is a right triangle. The theorem can also be thought of as a special case of the intersecting chords theorem for a circle, since the converse of Thales' theorem ensures that the hypotenuse of the right-angled triangle is the diameter of its circumcircle. == Applications == The formulation in terms of areas yields a method to square a rectangle with ruler and compass, that is, to construct a square of equal area to a given rectangle. For such a rectangle with sides p and q we denote its top left vertex with D (see the Proof > Based on similarity section for a graphic of the construction). Now we extend the segment q to its left by p (using arc AE centered on D) and draw a half circle with endpoints A and B with the new segment p + q as its diameter. Then we erect a perpendicular line to the diameter in D that intersects the half circle in C. Due to Thales' theorem C and the diameter form a right triangle with the line segment DC as its altitude, hence DC is the side of a square with the area of the rectangle. The method also allows for the construction of square roots (see constructible number), since starting with a rectangle that has a width of 1 the constructed square will have a side length that equals the square root of the rectangle's length. 
Another application of this theorem provides a geometrical proof of the AM–GM inequality in the case of two numbers. For the numbers p and q one constructs a half circle with diameter p + q. Now the altitude represents the geometric mean and the radius the arithmetic mean of the two numbers. Since the altitude is always smaller than or equal to the radius, this yields the inequality. == History == The theorem is usually attributed to Euclid (ca. 360–280 BC), who stated it as a corollary to proposition 8 in book VI of his Elements. In proposition 14 of book II Euclid gives a method for squaring a rectangle, which essentially matches the method given here. Euclid, however, provides a different, slightly more complicated proof for the correctness of the construction rather than relying on the geometric mean theorem. == Proof == === Based on similarity === Proof of theorem: The triangles △ADC, △BCD are similar, since: consider triangles △ABC, △ACD; here we have ∠ A C B = ∠ A D C = 90 ∘ , ∠ B A C = ∠ C A D ; {\displaystyle \angle ACB=\angle ADC=90^{\circ },\quad \angle BAC=\angle CAD;} therefore by the AA postulate △ A B C ∼ △ A C D . {\displaystyle \triangle ABC\sim \triangle ACD.} further, consider triangles △ABC, △BCD; here we have ∠ A C B = ∠ B D C = 90 ∘ , ∠ A B C = ∠ C B D ; {\displaystyle \angle ACB=\angle BDC=90^{\circ },\quad \angle ABC=\angle CBD;} therefore by the AA postulate △ A B C ∼ △ B C D . {\displaystyle \triangle ABC\sim \triangle BCD.} Therefore, both triangles △ACD, △BCD are similar to △ABC and themselves, i.e. △ A C D ∼ △ A B C ∼ △ B C D . 
{\displaystyle \triangle ACD\sim \triangle ABC\sim \triangle BCD.} Because of the similarity we get the following equality of ratios and its algebraic rearrangement yields the theorem: h p = q h ⇔ h 2 = p q ⇔ h = p q ( h , p , q > 0 ) {\displaystyle {\frac {h}{p}}={\frac {q}{h}}\,\Leftrightarrow \,h^{2}=pq\,\Leftrightarrow \,h={\sqrt {pq}}\qquad (h,p,q>0)} Proof of converse: For the converse we have a triangle △ABC in which h 2 = p q {\displaystyle h^{2}=pq} holds and need to show that the angle at C is a right angle. Now because of h 2 = p q {\displaystyle h^{2}=pq} we also have h p = q h . {\displaystyle {\tfrac {h}{p}}={\tfrac {q}{h}}.} Together with ∠ A D C = ∠ C D B {\displaystyle \angle ADC=\angle CDB} the triangles △ADC, △BDC have an angle of equal size and have corresponding pairs of legs with the same ratio. This means the triangles are similar, which yields: ∠ A C B = ∠ A C D + ∠ D C B = ∠ A C D + ( 90 ∘ − ∠ D B C ) = ∠ A C D + ( 90 ∘ − ∠ A C D ) = 90 ∘ {\displaystyle {\begin{aligned}\angle ACB&=\angle ACD+\angle DCB\\&=\angle ACD+(90^{\circ }-\angle DBC)\\&=\angle ACD+(90^{\circ }-\angle ACD)\\&=90^{\circ }\end{aligned}}} === Based on the Pythagorean theorem === In the setting of the geometric mean theorem there are three right triangles △ABC, △ADC and △DBC in which the Pythagorean theorem yields: h 2 = a 2 − q 2 h 2 = b 2 − p 2 c 2 = a 2 + b 2 {\displaystyle {\begin{aligned}h^{2}&=a^{2}-q^{2}\\h^{2}&=b^{2}-p^{2}\\c^{2}&=a^{2}+b^{2}\end{aligned}}} Adding the first two equations and then using the third leads to: 2 h 2 = a 2 + b 2 − p 2 − q 2 = c 2 − p 2 − q 2 = ( p + q ) 2 − p 2 − q 2 = 2 p q ∴ h 2 = p q . {\displaystyle {\begin{aligned}2h^{2}&=a^{2}+b^{2}-p^{2}-q^{2}\\&=c^{2}-p^{2}-q^{2}\\&=(p+q)^{2}-p^{2}-q^{2}\\&=2pq\\\therefore \ h^{2}&=pq.\end{aligned}}} which finally yields the formula of the geometric mean theorem. 
=== Based on dissection and rearrangement === Dissecting the right triangle along its altitude h yields two similar triangles, which can be augmented and arranged in two alternative ways into a larger right triangle with perpendicular sides of lengths p + h and q + h. One such arrangement requires a square of area h2 to complete it, the other a rectangle of area pq. Since both arrangements yield the same triangle, the areas of the square and the rectangle must be identical. === Based on shear mappings === A square constructed on the altitude can be transformed into a rectangle of equal area with sides p and q with the help of three shear mappings (shear mappings preserve the area): == References == == External links == Geometric Mean at Cut-the-Knot
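The theorem's central identity h = √(pq) is easy to confirm numerically. A minimal Python sketch (variable names follow the article's p, q, h; the formulas h = ab/c, p = a²/c, q = b²/c follow from the similar-triangle relations used in the proof):

```python
import math

a, b = 3.0, 4.0         # legs of a right triangle
c = math.hypot(a, b)    # hypotenuse (5.0 for the 3-4-5 triangle)
h = a * b / c           # altitude on the hypotenuse
p = a * a / c           # hypotenuse segment adjacent to leg a (a^2 = p*c)
q = b * b / c           # hypotenuse segment adjacent to leg b (b^2 = q*c)

assert math.isclose(p + q, c)              # the segments partition the hypotenuse
assert math.isclose(h, math.sqrt(p * q))   # geometric mean theorem: h = sqrt(pq)
```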
|
Wikipedia:Geometric transformation#0
|
In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning, such as preserving distances, angles, or ratios (scale). More specifically, it is a function whose domain and range are sets of points – most often a real coordinate space, R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 {\displaystyle \mathbb {R} ^{3}} – such that the function is bijective so that its inverse exists. The study of geometry may be approached by the study of these transformations, such as in transformation geometry. == Classifications == Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve: Displacements preserve distances and oriented angles (e.g., translations); Isometries preserve angles and distances (e.g., Euclidean transformations); Similarities preserve angles and ratios between distances (e.g., resizing); Affine transformations preserve parallelism (e.g., scaling, shear); Projective transformations preserve collinearity. Each of these classes contains the previous one. Möbius transformations using complex coordinates on the plane (as well as circle inversion) preserve the set of all lines and circles, but may interchange lines and circles. Conformal transformations preserve angles, and are, in the first order, similarities. Equiareal transformations preserve areas in the planar case (or volumes in the three-dimensional case) and are, in the first order, affine transformations of determinant 1. Homeomorphisms (bicontinuous transformations) preserve the neighborhoods of points. Diffeomorphisms (bidifferentiable transformations) are the transformations that are affine in the first order; they contain the preceding ones as special cases, and can be further refined. 
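Two of the invariants in the classification above can be illustrated with a short Python sketch (the matrices are illustrative, not from the article): a shear has determinant 1, so it is equiareal, and a uniform scaling multiplies every distance by the same factor, so it preserves ratios between distances.

```python
import math

def apply(M, v):
    """Apply a 2x2 matrix M to a point v = (x, y)."""
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def det(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

shear = [[1, 2], [0, 1]]   # a shear: affine, equiareal (determinant 1)
scale = [[3, 0], [0, 3]]   # a similarity: uniform scaling by factor 3

# Equiareal: |det| = 1, so the image of any region has the same area.
assert det(shear) == 1

# Similarity: every distance is multiplied by the same factor (3 here),
# so ratios between distances are preserved.
p, q = (1.0, 0.0), (0.0, 2.0)
d_before = math.dist(p, q)
d_after = math.dist(apply(scale, p), apply(scale, q))
assert math.isclose(d_after / d_before, 3.0)
```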
Transformations of the same type form groups that may be sub-groups of other transformation groups. == Opposite group actions == Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group. The linear transformation A is non-singular. For a row vector v, the matrix product vA gives another row vector w = vA. The transpose of a row vector v is a column vector vT, and the transpose of the above equality is w T = ( v A ) T = A T v T . {\displaystyle w^{T}=(vA)^{T}=A^{T}v^{T}.} Here AT provides a left action on column vectors. In transformation geometry there are compositions AB. Starting with a row vector v, the right action of the composed transformation is w = vAB. After transposition, w T = ( v A B ) T = ( A B ) T v T = B T A T v T . {\displaystyle w^{T}=(vAB)^{T}=(AB)^{T}v^{T}=B^{T}A^{T}v^{T}.} Thus for AB the associated left group action is B T A T . {\displaystyle B^{T}A^{T}.} In the study of opposite groups, the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal. == Active and passive transformations == == See also == Coordinate transformation Erlangen program Symmetry (geometry) Motion Reflection Rigid transformation Rotation Topology Transformation matrix == References == == Further reading == Adler, Irving (2012) [1966], A New Look at Geometry, Dover, ISBN 978-0-486-49851-5 Dienes, Z. P.; Golding, E. W. (1967). Geometry Through Transformations (3 vols.): Geometry of Distortion, Geometry of Congruence, and Groups and Coordinates. New York: Herder and Herder. David Gans – Transformations and geometries. Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9. John McCleary (2013) Geometry from a Differentiable Viewpoint, Cambridge University Press ISBN 978-0-521-11607-7 Modenov, P. 
S.; Parkhomenko, A. S. (1965). Geometric Transformations (2 vols.): Euclidean and Affine Transformations, and Projective Transformations. New York: Academic Press. A. N. Pressley – Elementary Differential Geometry. Yaglom, I. M. (1962, 1968, 1973, 2009). Geometric Transformations (4 vols.). Random House (I, II & III), MAA (I, II, III & IV).
|
Wikipedia:Geometry Center#0
|
The Geometry Center was a mathematics research and education center at the University of Minnesota. It was established by the National Science Foundation in the late 1980s and closed in 1998. The focus of the center's work was the use of computer graphics and visualization for research and education in pure mathematics and geometry. The center's founding director was Al Marden. Richard McGehee directed the center during its final years. The center's governing board was chaired by David P. Dobkin. == Geomview == Much of the work done at the center was for the development of Geomview, a three-dimensional interactive geometry program. This focused on mathematical visualization with options to allow hyperbolic space to be visualised. It was originally written for Silicon Graphics workstations, and has been ported to run on Linux systems; it is available for installation in most Linux distributions through the package management system. Geomview can run under Windows using Cygwin and under Mac OS X. Geomview has a web site at www.geomview.org. Geomview is built on the Object Oriented Graphics Library (OOGL). The displayed scene and the attributes of the objects in it may be manipulated by the graphical command language (GCL) of Geomview. Geomview may be set as a default 3-D viewer for Mathematica. == Videos == Geomview was used in the construction of several mathematical movies including: Not Knot, exploring hyperbolic space rendering of knot complements. [1] Outside In, a movie about sphere eversion. [2] The Shape of Space, exploring possible three-dimensional spaces. [3] == Other software == Other programs developed at the Center included: WebEQ, a web browser plugin allowing mathematical equations to be viewed and edited. [4] Kali, to explore plane symmetry groups. [5] The Orrery, a Solar System visualizer. [6] SaVi, a satellite visualisation tool for examining the orbits and coverage of satellite constellations. [7] Crafter, for structural design of spacecraft. 
[8] Surface Evolver, to explore minimal surfaces. [9] [10] SnapPea, a hyperbolic 3-manifold analyzer. [11] qhull, to explore convex hulls. [12] KaleidoTile, to explore tessellations of the sphere, Euclidean plane, and hyperbolic plane. [13] == Website == Richard McGehee, the center's director, has stated that the website was one of the first one hundred websites ever published. Despite the Center being closed, its website is still online at [14] as an archive of a wide range of geometric topics, including: Geometry and the Imagination handouts for a two-week course by John Horton Conway, William Thurston and others. [15] Science U, a collection of interactive exhibits. [16] The Geometry Forum, an electronic community focused on geometry and math education. [17] Preprints, 99 preprints from the center. [18] Archived 2006-02-12 at the Wayback Machine The Topological Zoo, a collection of curves and surfaces. [19] Geomview is supported through the dedicated Geomview website. == Research == During its time of operation, a large number of mathematical workshops were held at the center. Many well-known mathematicians visited the center, including Eugenio Calabi, John Horton Conway, Donald E. Knuth, David Mumford, William Thurston, and Jeff Weeks. There were over thirty postdocs, apprentices and graduate students. == References ==
|
Wikipedia:Geordie Williamson#0
|
Geordie Williamson (born 1981 in Bowral, Australia) is an Australian mathematician at the University of Sydney. He became the youngest living Fellow of the Royal Society when he was elected in 2018 at the age of 36. == Education == Educated at Chevalier College, Williamson graduated in 1999 with a UAI of 99.45. He studied at the University of Sydney, graduating with a Bachelor's degree in 2003, and then at the Albert-Ludwigs University of Freiburg, where he received his doctorate in 2008 under the supervision of Wolfgang Soergel. Williamson is the brother of the late James Williamson, a World Solo 24-hour mountain bike champion who died while competing in South Africa in 2010. == Research and career == After his PhD, Williamson was a post-doctoral researcher at the University of Oxford, based at St. Peter's College, Oxford, and from 2011 until 2016 he was at the Max Planck Institute for Mathematics. Williamson works in geometric representation theory. With Ben Elias, he gave a new proof and a simplification of the theory of the Kazhdan–Lusztig conjectures (previously proved in 1981 by both Beilinson–Bernstein and Brylinski–Kashiwara). For this purpose, they built on work by Wolfgang Soergel and developed a purely algebraic Hodge theory of Soergel bimodules over polynomial rings. In this context, they also succeeded in proving the long-standing conjecture that the coefficients of the Kazhdan–Lusztig polynomials of Coxeter groups are positive. For Weyl groups (special Coxeter groups, which are connected to Lie groups), David Kazhdan and George Lusztig had succeeded in doing so by identifying the polynomials with certain invariants (local intersection cohomology) of Schubert varieties. Elias and Williamson were able to follow this path of proof also for more general reflection groups (Coxeter groups), although there is no geometrical interpretation available, in contrast to the case of the Weyl groups. He is also known for several counterexamples. 
In 1980, Lusztig suggested a character formula for simple modules of reductive groups over fields of finite characteristic p. The conjecture was proved in 1994-95 by a combination of three papers, one by Henning Haahr Andersen, Jens Carsten Jantzen, and Wolfgang Soergel, one by David Kazhdan and George Lusztig and one by Masaki Kashiwara and Toshiyuki Tanisaki for sufficiently large group-specific characteristics (without explicit bound) and later by Peter Fiebig for a very high explicitly stated bound. Williamson found several infinite families of counterexamples to the generally suspected validity limits of Lusztig's conjecture. He also found counterexamples to a 1990 conjecture of Gordon James on symmetric groups. His work also provided new perspectives on the respective conjectures. In 2023 he was awarded an Australian Laureate Fellowship to further his research into fundamental symmetries. === Publications === Ben Elias; Geordie Williamson (2014), "The Hodge Theory of Soergel bimodules", Annals of Mathematics, 180 (3): 1089–1136, arXiv:1212.0791 Williamson, Geordie (2017), "Schubert calculus and torsion explosion (With Appendix by A. Kontorovich, P. McNamara, G. Williamson)", Journal of the American Mathematical Society, 30: 1023–1046, arXiv:1309.5055, doi:10.1090/jams/868 Williamson, Geordie (2012), "Modular intersection cohomology complexes on flag varieties (With Appendix by Tom Braden)", Mathematische Zeitschrift, 272: 697–727, arXiv:0709.0207, doi:10.1007/s00209-011-0955-y Williamson, Geordie (2014), "On an analogue of the James conjecture", Representation Theory, 18 (2): 15–27, arXiv:1212.0794, doi:10.1090/S1088-4165-2014-00447-3 Ben Elias; Geordie Williamson (2016), "Kazhdan-Lusztig conjectures and shadows of Hodge theory", in Werner Ballmann; Christian Blohmann; Gerd Faltings; Peter Teichner; Don Zagier (eds.), Arbeitstagung Bonn 2013: In Memory of Friedrich Hirzebruch, Progress in Mathematics, vol. 319, Birkhäuser, pp. 
105–126, arXiv:1403.1650, doi:10.1007/978-3-319-43648-7_5, ISBN 978-3-319-43646-3 Daniel Juteau; Carl Mautner; Geordie Williamson (2014), "Parity sheaves", Journal of the American Mathematical Society, 27 (4): 1169–1212, arXiv:0906.2994, doi:10.1090/S0894-0347-2014-00804-3 === Awards and honours === In 2016, he received the Chevalley Prize of the American Mathematical Society and the Clay Research Award. He was an invited speaker at the European Congress of Mathematics in Berlin in 2016 (Shadows of Hodge theory in representation theory). In 2016 he was awarded the EMS Prize, and in 2017 the New Horizons in Mathematics Prize. In 2018, he was a plenary speaker at the International Congress of Mathematicians in Rio de Janeiro and was elected a Fellow of the Royal Society (FRS) and a Fellow of the Australian Academy of Science. Williamson was awarded the 2018 Australian Mathematical Society Medal, the NSW Premier's Prizes for Science & Engineering: Excellence in Mathematics, Earth Sciences, Chemistry or Physics in 2022, and the Max Planck-Humboldt Research Award in 2024. == References ==
|