source | text |
|---|---|
Wikipedia:Peter Cameron (mathematician)#0 | Peter Jephson Cameron FRSE (born 23 January 1947) is an Australian mathematician who works in group theory, combinatorics, coding theory, and model theory. He is currently Emeritus Professor at the University of St Andrews and Queen Mary University of London. == Education == Cameron received a B.Sc. from the University of Queensland and a D.Phil. in 1971 from the University of Oxford as a Rhodes Scholar, with Peter M. Neumann as his supervisor. Subsequently, he was a Junior Research Fellow and later a Tutorial Fellow at Merton College, Oxford, and also lecturer at Bedford College, London. == Work == Cameron specialises in algebra and combinatorics; he has written books about combinatorics, algebra, permutation groups, and logic, and has produced over 350 academic papers. In 1988, he posed the Cameron–Erdős conjecture with Paul Erdős. == Honours and awards == He was awarded the London Mathematical Society's Whitehead Prize in 1979 and Senior Whitehead Prize in 2017, and is joint winner of the 2003 Euler Medal. In 2008, he was selected as the Forder Lecturer of the LMS and New Zealand Mathematical Society. In 2018 he was elected a Fellow of the Royal Society of Edinburgh. == Books == Cameron, Peter J.; Lint, Jacobus Hendricus van (1975). Graph theory, coding theory, and block designs. Cambridge, Eng.: Cambridge University Press. ISBN 978-1-107-08708-8. OCLC 846492893. Cameron, Peter J. (10 June 1976). Parallelisms of Complete Designs. Cambridge University Press. doi:10.1017/cbo9780511662102. ISBN 978-0-521-21160-4. Cameron, Peter J. (29 June 1990). Oligomorphic Permutation Groups. Cambridge University Press. doi:10.1017/cbo9780511549809. ISBN 978-0-521-38836-8. Cameron, P. J.; Lint, J. H. van (19 September 1991). Designs, Graphs, Codes and their Links. Cambridge University Press. doi:10.1017/cbo9780511623714. ISBN 978-0-521-41325-1. Cameron, Peter J. (1994). Combinatorics : topics, techniques, algorithms. 
Cambridge: Cambridge University Press. ISBN 0-521-45133-7. OCLC 29910262. Cameron, Peter J. (1998). Sets, Logic and Categories. London: Springer London. ISBN 978-1-4471-0589-3. OCLC 958523400. Cameron, Peter J. (4 February 1999). Permutation Groups. Cambridge University Press. doi:10.1017/cbo9780511623677. ISBN 978-0-521-65302-2. Cameron, Peter J. (2008). Introduction to algebra. Oxford: Oxford University Press. ISBN 978-0-19-156622-6. OCLC 213466141. == Notes == == References == Short biography == See also == Cameron–Fon-Der-Flaass IBIS theorem == External links == Home page at Queen Mary University of London Home page at University of St Andrews Peter Cameron's 60th birthday conference Theorems by Peter Cameron at Theorem of the Day Peter Cameron's blog Peter Cameron at the Mathematics Genealogy Project |
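The Cameron–Erdős conjecture mentioned in the article has a short formal statement; the following is the standard formulation, not drawn from the article text:

```latex
% Cameron–Erdős conjecture (posed 1988): the number of sum-free
% subsets of {1, ..., n} is O(2^{n/2}).
A set $S \subseteq \{1, \dots, n\}$ is \emph{sum-free} if there are no
$a, b, c \in S$ with $a + b = c$. The conjecture asserts that the number
of sum-free subsets of $\{1, \dots, n\}$ is $O\!\left(2^{n/2}\right)$;
it was later proved independently by Ben Green and Alexander Sapozhenko.
```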
Wikipedia:Peter Landrock#0 | Peter Landrock (born August 20, 1948 in Horsens) is a Danish cryptographer and mathematician. He is known for his contributions to data encryption methods and codes. Landrock has been active since the 1970s as a research scientist and faculty member at Cambridge University, the University of Aarhus and others, and has worked with Microsoft and Cryptomathic. He has been a visiting professor at Oxford University, Leuven University and Princeton University. == Background and career == Landrock obtained a diploma in mathematics and physics in 1972 from the University of Aarhus. He received his Ph.D. in mathematics from the University of Chicago in 1974 for his research on elementary abelian and dihedral defect groups, under George Isaac Glauberman and Richard Dagobert Brauer. In 1975, Landrock became an associate professor in the Department of Mathematics at Aarhus University, and later a full professor. From 1982 until 1983, Landrock was a visiting professor at the Institute for Advanced Study in Princeton, New Jersey. In 1986 he founded the company Cryptomathic together with Ivan Damgård. It was his research work on cryptography and coding theory at the Isaac Newton Institute that inspired him to shift the focus of his work to corporate research at Cryptomathic, where he joined forces with researchers such as Vincent Rijmen and Whitfield Diffie. By 1996 he had joined the Isaac Newton Institute, Cambridge University, as Research Program Organizer, and since 1997, Landrock has been a senior member of Wolfson College, Cambridge University. Landrock was a member of the Danish IT Security Council, as an adviser to the Danish Government, from 1999 to 2007. From 1997 until 2010, Landrock was a member of Microsoft's Technical Advisory Board in Cambridge, and he has also served as a member of the board of the Villum Foundation in Copenhagen since 2008. 
In 2014 Landrock became a member of the Technical Advisory Board of the Turing Gateway to Mathematics at Cambridge University. In 2021, he was elected a By-Fellow of Churchill College, Cambridge. == Cryptography == Landrock was President of the International Association for Cryptologic Research from 1992 to 1995 and General Chair of the Eurocrypt conference for cryptography research in 1990. In 1996 he was one of the organizers of a research programme in cryptography at the Newton Institute at the University of Cambridge. The term "What You See Is What You Sign" (WYSIWYS) was coined in 1998 by Landrock and Torben P. Pedersen of Cryptomathic during their work on delivering secure and legally binding digital signatures for pan-European projects. Landrock contributed more than twenty entries to the Encyclopedia of Cryptography and Security, including articles on PKCS, SSH, public key infrastructure and certificate authorities. His research focus since the late 1980s has included subject areas such as key management systems, EMV and card payment solutions, and authentication. He has lectured on cryptography at more than 150 universities. The European Patent Office recognized that Landrock's “inventions have helped secure electronic voting systems and electronic passport solutions”. == Awards and recognition == In 1991 Landrock was awarded the Danish Data Security Prize, and in 2004 he received the BIT Prize for engineering entrepreneurship from the Danish Engineers. His achievements with Cryptomathic were recognised by the World Economic Forum in 2003, and he received the VISA Smart Start Award for his work on Chip and PIN. In 2010, Landrock was named a finalist for European Inventor 2010 in the "Lifetime Achievement" category by the European Patent Office, which stated that many of today’s established data encryption methods and codes “bear the mark of ... 
Peter Landrock”. In July 2019, Landrock was awarded the degree of Doctor of Science honoris causa for his lifetime achievement in cryptographic technology. == References == == External links == Homepage of Professor Peter Landrock at Wolfson College Cambridge List of Publications by Peter Landrock Peter Landrock at the Mathematics Genealogy Project |
Wikipedia:Peter Lorimer (mathematician)#0 | Peter James Lorimer (16 April 1939 – 7 February 2010) was a New Zealand mathematician. His research concerned group theory, combinatorics, and Ramsey theory. == Academic career == Born in Christchurch, Lorimer did a BSc / MSc in mathematics at the University of Auckland and won a Commonwealth Scholarship to do a PhD at McGill University in Montreal, which he completed in 1963 under the supervision of Hans Schwerdtfeger. He returned to New Zealand to lecture, first at University of Canterbury and then at University of Auckland. == References == == External links == institutional homepage |
Wikipedia:Peter Ludvig Sylow#0 | Peter Ludvig Meidell Sylow (Norwegian pronunciation: [ˈsyːlɔv]) (12 December 1832 – 7 September 1918) was a Norwegian mathematician who proved foundational results in group theory. Sylow took up and further developed the innovative work of the mathematicians Niels Henrik Abel and Évariste Galois in algebra. The Sylow theorems and Sylow p-subgroups, named after him, are fundamental in the theory of finite groups. By profession, Sylow was a teacher at the Fredrikshald Latin School (Norwegian: Fredrikshalds lærde og realskole) for 40 years, from 1858 to 1898, and then a professor at the University of Oslo for 20 years, from 1898 to 1918. Despite his isolation in Fredrikshald, Sylow was an active member of the mathematical world. He wrote a total of approximately 25 mathematical and biographical works, corresponded with many of the leading mathematicians of the time, and was an able co-editor of Acta Mathematica from the journal's start in 1882. He was elected to the Norwegian Academy of Science and Letters in 1868 and was a corresponding member of the Academy of Sciences in Göttingen; the University of Copenhagen awarded him an honorary doctorate in 1894. == Early life == Ludvig Sylow was born in Kristiania (now Oslo) on 12 December 1832 to Thomas Edvard von Westen Sylow (1792–1875), later a minister and customs treasurer, and Magdalene Cecilie Cathrine Mejdell (1806–98). His father had been an officer and a captain in the cavalry, and later served as the head of the Ministry of the Army between 1848 and 1854. His father recognised his son's talent for mathematics early and encouraged him to work independently. From home, Sylow learned a sense of duty and hard work, but he was also taught to be modest. Although this was done with the best of intentions, it became an obstacle for him later in life, since it meant that he was content to spend many years in a more lowly position than he deserved. 
== Career as a mathematician == === Education and first steps in mathematics === Sylow attended Christiania Cathedral School, graduating in 1850 after taking the examen artium. He then became a student at the University of Oslo, where he began his studies in the natural sciences. In 1853, the University of Oslo awarded him the Crown Prince's gold medal (Kronprinsens gullmedalje) for work on gnomonics. In 1856 he took the high school mathematics teacher's examination (Realkandidat) with excellent grades. Since no university post was available after his graduation in 1856, he taught for two years at Hartvig Nissen School, an independent girls' school in the Uranienborg district of Christiania, which had been founded by Hartvig Nissen and Ole Jacob Broch. His years there came during Broch's most energetic university period, and it was Broch who introduced Sylow to Carl Gustav Jacob Jacobi's fundamental work on elliptic functions, among other things. In 1858, Sylow moved to the town of Fredrikshald (now called Halden) in Østfold county, where he taught at Fredrikshald Latin School (Norwegian: Fredrikshalds lærde og realskole) as the head teacher in mathematics and science, a modest position that he held for a full 40 years, from 1858 to 1898. Although Sylow would have made an outstanding university lecturer, he did not make a particularly good school teacher: he was interested in the advanced areas of mathematics and thus had little enthusiasm for teaching at lower levels. Moreover, he found it difficult to keep discipline in his classroom, so the fact that his career was largely in schools rather than universities was a poor use of his talents on two scores: universities were the poorer for not having Sylow as a lecturer, while schools were the poorer for having him as a teacher. 
=== Abel and the theory of equations === During his studies, Sylow had become interested in the work of Niels Henrik Abel, and especially in an unfinished work on equation theory that had been left behind. However, it was only at Hartvig Nissen School (1856–58) that he began to research that work more deeply, in part thanks to Ole Jacob Broch, who was the school's pure mathematics teacher at the time. It was Broch who gave the young teacher Sylow much encouragement to continue his advanced mathematical researches. Although at first Sylow found reading Abel's papers a difficult task, he managed to struggle through them and soon found that Abel had achieved a far deeper understanding of the theory of equations than what he had managed to write in his published papers. Some of Sylow's first attempts to publish some of Abel's unpublished results that he had found in his papers proved to be unsuccessful. For instance, he sent one of these papers to Crelle's Journal in Berlin, but the editor there, Leopold Kronecker, had already published these results having discovered them himself, and had no wish to have a paper in print which showed that Abel had proved them long before he had. Kronecker did not accept that Abel had preceded him, and therefore, he rejected Sylow's paper, but even though the article was rejected, posterity has proved Sylow right. Sylow showcased his discoveries at a Scandinavian meeting of naturalists in 1860 in Copenhagen, where he presented a solid interpretation of a strange equation-theoretic treatise by Abel, edited only in fragments. === Failure to join a university === In 1861 Sylow obtained a scholarship for studies in Paris and Berlin. In Paris he attended lectures by Michel Chasles on the theory of conics, by Joseph Liouville on rational mechanics and by Jean-Marie Duhamel on the theory of limits. He also used this scholarship to make himself acquainted with newer works, particularly in the theory of equations. 
In Berlin, Sylow had useful discussions with Kronecker, but he was unable to attend courses by Karl Weierstrass, who was ill at the time. Since there were no other courses being given in Berlin that interested him, Sylow instead decided to work in the library, studying number theory and the theory of equations. In the following year, 1862, Sylow lectured at the University of Christiania as a substitute for Professor Ole Jacob Broch, who had been elected to serve in the Storting, the Norwegian parliament. In his lectures Sylow explained Abel's and Galois's work on algebraic equations, and in doing so he became one of the first in Europe to lecture on Évariste Galois's works. Among his listeners was the young Sophus Lie, who would later create a remarkable new field on the basis of these ideas: the theory of continuous symmetry. Lie once commented that Sylow deserved a university position because of his "broad knowledge, his sharp powers of criticism, and his outstanding mathematical work". For a time, it seemed that the university would finally take him on: he had received a scholarship for travel to Berlin and Paris in 1861, and then spent a year giving the mathematics lectures at the Christiania University during Broch's absence, during which he also began to study and lecture on Galois's group theory. But instead, his career simply stopped. When Broch again became an MP in the Storting from 1865 to 1868, he was keen to have Sylow take over his university teaching during this time, but the school in Fredrikshald where Sylow was a teacher refused to give him leave to teach at the Christiania university, and it received support from the ministry in this. Broch left his chair as professor of pure mathematics in 1869, leaving a vacancy that Sylow was well qualified to fill, and in fact everyone expected Sylow to take over his professorship in pure mathematics. 
However, the University of Christiania did not rate pure mathematics very highly at that time, preferring more practical, useful, down-to-earth mathematics with more applicable topics; Sylow was too theoretical in his approach, so he was not appointed. The professor of applied mathematics, Carl Anton Bjerknes, was instead pressured to move into Broch's position, so that Cato Guldberg could take over the applied mathematics. === Sylow's theorems === Since few contemporary mathematicians were as deeply familiar with Abel's work as Sylow was, Professor Carl Anton Bjerknes advised him to study Évariste Galois's works on group theory, to which Abel had also contributed substantially. However, it was only when Sylow began to lecture on Abel's and Galois's work on algebraic equations in 1862 that he began to further develop their innovative work, especially on group theory; in fact, by the end of that year, Sylow had already proved the foundational results now known as the Sylow theorems, involving what are now called Sylow p-subgroups, both basic notions in group theory. He was thus one of the first mathematicians to penetrate Galois's group theory. However, it was not until 1872, ten years later, that Sylow published his most important discoveries in group theory in Alfred Clebsch's journal (Math. Ann.), in a small treatise of ten pages called Théorèmes sur les groupes de substitutions, in which Sylow generalizes his discoveries and proves what is perhaps the most profound result in the theory of finite groups. Almost all work on finite groups uses Sylow's theorems. When the famous French mathematician Camille Jordan published the standard work Théorie des Substitutions in 1870, Sylow was familiar with most of what was written there, and more. When Jordan visited Christiania in 1872, Sophus Lie took him on an excursion to Frognerseteren with Sylow, who described to him what is now called "Sylow's theorem", which he had known since 1862. 
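For orientation, the result Sylow described to Jordan is usually stated today as three theorems; this is the standard modern formulation, with notation not taken from the article:

```latex
% Standard modern statement of the Sylow theorems.
Let $G$ be a finite group with $|G| = p^{n} m$, where $p$ is prime and
$p \nmid m$. Then:
\begin{enumerate}
  \item $G$ has a subgroup of order $p^{n}$ (a \emph{Sylow $p$-subgroup});
  \item any two Sylow $p$-subgroups of $G$ are conjugate in $G$;
  \item the number $n_p$ of Sylow $p$-subgroups satisfies
        $n_p \equiv 1 \pmod{p}$ and $n_p \mid m$.
\end{enumerate}
```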
Jordan was astonished and somewhat skeptical, but shortly afterwards he wrote enthusiastically from Sweden, and he helped Sylow get the ten-page thesis published that same year, 1872. That thesis made Sylow a well-known European mathematician. === Written works === In 1868 he was elected to the Norwegian Academy of Science and Letters (Det Norske Videnskaps-Akademi). From 1870 to 1871, Sylow exchanged nine letters with Julius Petersen who, at this time, was working on his doctoral dissertation. Petersen sought Sylow's advice about the main theorem of his dissertation, and all of these letters deal with this subject. The two mathematicians exchanged another sixteen letters a few years later, in 1876 and 1877. However, Sylow's best-known written work remains his ten-page thesis published in 1872, Théorèmes sur les groupes de substitutions (Theorems on substitution groups), which appeared in Mathematische Annalen Volume 5 (pages 584 to 594). This paper contains the three Sylow theorems, foundational results in group theory. Sylow had already proved them in 1862, but only published them in 1872; by then, Augustin-Louis Cauchy had already proved that a group whose order is divisible by a prime p contains an element of order p. Winfried Scharlau described how Sylow was led to his discovery by his study of Galois's work, in particular of Galois's criterion for the solvability of equations of prime degree. The paper explains how Sylow used methods from Galois theory in his proofs. Besides the thesis of 1872, Sylow's main work was the new edition of Abel's collected writings, which he prepared in association with his former student Sophus Lie with public funding: in 1873, Sylow and Lie were commissioned to provide a new edition of Niels Henrik Abel's collected works, paid for by the state. 
The preparations for the publication of this work took eight years, from 1873 to 1881, during which he had only partial leave from his teaching work, being on leave from school for four years. Sylow and Lie prepared an edition of Abel's complete works published under the title Œuvres complètes de Niels Henrik Abel (French for Complete Works of Niels Henrik Abel). The motivation for this had come from the Norwegian Academy of Science, which applied to the Norwegian Parliament for funding for the project; the funding was quickly granted, and it allowed Sylow to take leave from his school in Fredrikshald for four years in order to devote himself to the project. Sylow wanted as much as possible of Abel's early work to come out, not just the great treatises with their exemplary stringency, and he used this opportunity to dig up more of that early work; in fact, considerably more Abel material was published in the Sylow/Lie edition, which appeared on 9 December 1881, than Bjerknes had used in his 1880 Abel biography. In 1902, Sylow, in collaboration with Elling Holst, published Abel's correspondence. Further Abel documents had been discovered after the Sylow/Lie book came out in 1881, and at the Third Scandinavian Congress of Mathematicians, held in Kristiania in 1913, Sylow discussed this new material. In addition to the Sylow theorems and the Abel material, Sylow also published a few papers on elliptic functions, particularly on complex multiplication, as well as papers on group theory. === Later career === In 1883 Sylow became an editor of Acta Mathematica; he was elected a member of the Academy of Sciences of Göttingen and, in 1894, the University of Copenhagen awarded him an honorary doctorate. 
A couple of times in his youth, Sylow briefly had the prospect of becoming a lecturer at a university, where he clearly belonged from the first, but the disfavor of the times left him unnoticed in his native land, despite his name already being widely known outside Norway. As a result, Sylow spent a full 40 years, from 1858 to 1898, holding the modest position of head teacher in mathematics and science at the Fredrikshald Latin School, a long tenure that came to an end when Sylow was finally appointed professor of mathematics at the University of Oslo in 1898. Despite already being 65 when he obtained a university post, he held the position for 20 years, until his death in 1918 at the age of 85. His rare talent in mathematics revealed itself immediately upon his arrival at the university, to which he brought knowledge far beyond elementary mathematics. At first, he was paid a headmaster's salary, approximately half the salary of a university professor, but he later received salary increases. At the centenary of Abel's birth in 1902, Sylow gave the welcoming address at the conference held to mark the occasion, giving a characterization of his great predecessor, who was hailed by the famous mathematicians of the many countries who had gathered there. == Personal life == Sylow never married, but he was a warm person with a nice sense of humour. He was an avid lover of the outdoors and often spent summer vacations in the mountains, usually at Kongsvoll, where he studied plants. Kongsvoll is a mountain station providing food and shelter on the route between Oslo and Trondheim, erected when the route was used by pilgrims visiting the shrine of St Olav in Trondheim. == Death == Sylow died on 7 September 1918, at the age of 85, in Christiania, Norway. 
== Honors == The Crown Prince's Gold Medal (1853) The Norwegian Academy of Science and Letters, elected in 1868 Corresponding member of the Academy of Sciences in Göttingen (1893) Editor of Acta Mathematica (from 1883) Honorary doctorate at Copenhagen University (1894) == References == == External links == Mathews, G. B. (1919). "Ludvig Sylow". Nature. 103 (2577): 49. Bibcode:1919Natur.103...49G. doi:10.1038/103049a0. Sylow, M.L. (1872). "Théorèmes sur les groupes de substitutions". Mathematische Annalen. 5 (4): 584–594. doi:10.1007/BF01442913. S2CID 121928336. O'Connor, John J.; Robertson, Edmund F., "Peter Ludvig Sylow", MacTutor History of Mathematics Archive, University of St Andrews Peter Ludvig Sylow at the Mathematics Genealogy Project |
Wikipedia:Peter M. Gruber#0 | Peter Manfred Gruber (28 August 1941, Klagenfurt – 7 March 2017, Vienna) was an Austrian mathematician working in geometric number theory as well as in convex and discrete geometry. == Biography == Gruber obtained his PhD at the University of Vienna in 1966, under the supervision of Nikolaus Hofreiter. From 1971, he was Professor at the University of Linz, and from 1976, at the TU Wien. He was a member of the Austrian Academy of Sciences, a foreign member of the Russian Academy of Sciences, and a corresponding member of the Bavarian Academy of Sciences and Humanities. His past doctoral students include Monika Ludwig. == Selected publications == Gruber, P.M. (2007). Convex and Discrete Geometry. Berlin: Springer-Verlag. Gruber, P.M.; Lekkerkerker, C.G. (1987). Geometry of Numbers. Amsterdam: North-Holland. == Decorations and awards == 1967: Prize of the Austrian Mathematical Society 1978, 1980 and 1982: Chairman of the Austrian Mathematical Society 1991: Full member of the Austrian Academy of Sciences (Corresponding member since 1988) 1996: Medal of the Union of Czech Mathematicians and Physicists 2001: Austrian Cross of Honour for Science and Art, 1st class 2001: Medal of the mathematical and physical faculty of Charles University in Prague 2003: Foreign member of the Russian Academy of Sciences 2008: Grand Silver Medal for Services to the Republic of Austria 2013: Fellow of the American Mathematical Society, for "contributions to the geometry of numbers and to convex and discrete geometry". Honorary doctorates from the Universities of Siegen, Turin and Salzburg Member of the Academies of Sciences in Messina and Modena Corresponding member of the Bavarian Academy of Sciences == Notes == |
Wikipedia:Peter Redford Scott Lang#0 | Sir Peter Redford Scott Lang VD FRSE (1850–1926) was a Scottish mathematician and Regius Professor at the University of St Andrews. In the 1880s he instituted “Common Dinners” to bring the students together for joint meals (often referred to as “commies”). This had a major impact upon student social life and was thereafter adopted by several Scottish universities. In memory of this the University of St Andrews holds an annual Scott Lang Dinner. == Life == He was born in Edinburgh on 8 October 1850, the youngest of six children of Barbara Turnbull (née Cochrane) and Robert Laidlaw Lang (b. 1808), an advocate’s clerk. They lived at 125 Fountainbridge in the south-west of the city. He was educated at the Edinburgh Institution (now known as Stewart's Melville College) and then studied mathematics and natural philosophy (physics) at the University of Edinburgh. His university studies were interspersed with training as a life assurance clerk. He graduated MA BSc in 1872 and began assisting in lectures in natural philosophy at the University of Edinburgh. In 1878 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Sir Robert Christison, Peter Guthrie Tait, David Stevenson, and John Hutton Balfour. In 1879 he moved to the University of St Andrews as Professor of Mathematics. During his time at St Andrews he purchased a house on South Street. He rose to also be Dean of the Faculty of Arts within the University. He was a Lieutenant Colonel in the 1st Fifeshire Royal Garrison Artillery, a volunteer battalion based at the No. 7 battery at Anstruther. He had served as a volunteer for at least 20 years, gaining the Volunteer Officers' Decoration (VD) in 1900. He was granted the honorary rank of Colonel on 25 October 1902. He was knighted in 1921 by King George V on the point of his retiral. In 1922 the University of St Andrews awarded him an honorary doctorate (LLD). He died at home in St Andrews on 5 July 1926. 
He is buried with his wife and daughter in St Andrews Cathedral Churchyard. The grave lies on a wall to the south of the central tower. == Family == He was married to Alice Mary Dickson (1858-1932) from Colinton. They had one daughter, Edith Mary Valentine Lang (1880-1936). == References == |
Wikipedia:Peter Richtarik#0 | Peter Richtarik is a Slovak mathematician and computer scientist working in the area of big data optimization and machine learning, known for his work on randomized coordinate descent algorithms, stochastic gradient descent and federated learning. He is currently a Professor of Computer Science at the King Abdullah University of Science and Technology. == Education == Richtarik earned a master's degree in mathematics from Comenius University, Slovakia, in 2001, graduating summa cum laude. In 2007, he obtained a PhD in operations research from Cornell University, advised by Michael Jeremy Todd. == Career == Between 2007 and 2009, he was a postdoctoral scholar in the Center for Operations Research and Econometrics and the Department of Mathematical Engineering at the Université catholique de Louvain, Belgium, working with Yurii Nesterov. Between 2009 and 2019, Richtarik was a Lecturer and later Reader in the School of Mathematics at the University of Edinburgh. He is a Turing Fellow. Richtarik founded and organizes a conference series entitled "Optimization and Big Data". === Academic work === Richtarik's early research concerned gradient-type methods, optimization in relative scale, sparse principal component analysis and algorithms for optimal design. Since his appointment at Edinburgh, he has worked extensively on building the algorithmic foundations of randomized methods in convex optimization, especially randomized coordinate descent algorithms and stochastic gradient descent methods. These methods are well suited to optimization problems described by big data and have applications in fields such as machine learning, signal processing and data science. Richtarik is the co-inventor of an algorithm generalizing the randomized Kaczmarz method for solving a system of linear equations, contributed to the invention of federated learning, and co-developed a stochastic variant of Newton's method. 
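The randomized Kaczmarz method that this work generalizes can be sketched in a few lines. The following is a minimal pure-Python illustration of the classical method with squared-row-norm sampling, not of Richtarik's generalization itself; the example system is hypothetical:

```python
import random

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent linear system Ax = b by repeatedly projecting
    the iterate onto the hyperplane of one randomly chosen row, where
    row i is sampled with probability proportional to ||a_i||^2."""
    rng = random.Random(seed)
    x = [0.0] * len(A[0])
    norms = [sum(a * a for a in row) for row in A]  # squared row norms
    total = sum(norms)
    for _ in range(iters):
        # sample row index i with probability norms[i] / total
        r = rng.uniform(0.0, total)
        i, acc = 0, norms[0]
        while acc < r and i < len(A) - 1:
            i += 1
            acc += norms[i]
        # project x onto the hyperplane a_i . x = b_i
        residual = b[i] - sum(a * xj for a, xj in zip(A[i], x))
        step = residual / norms[i]
        x = [xj + step * a for xj, a in zip(x, A[i])]
    return x

# Hypothetical 3x2 consistent system with exact solution x = (1, 2)
A = [[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]]
b = [2.0, 3.0, 6.0]
x = randomized_kaczmarz(A, b)
```

Each step touches only one row of the system, which is what makes this family of methods attractive for very large (big data) problems.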
== Awards and distinctions == 2020, Ranked among the top 0.05% of computer scientists by Hirsch index (h-index of 40 or more) 2016, SIGEST Award (jointly with Olivier Fercoq) of the Society for Industrial and Applied Mathematics 2016, EPSRC Early Career Fellowship in Mathematical Sciences 2015, EUSA Best Research or Dissertation Supervisor Award (2nd place) 2014, Plenary Talk at 46th Conference of Slovak Mathematicians == Bibliography == Peter Richtarik & Martin Takac (2012). "Efficient serial and parallel coordinate descent methods for huge-scale truss topology design". Operations Research Proceedings 2011. Operations Research Proceedings. Springer-Verlag. pp. 27–32. doi:10.1007/978-3-642-29210-1_5. ISBN 978-3-642-29209-5. Peter Richtarik & Martin Takac (2014). "Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function". Mathematical Programming. 144 (1). Springer: 1–38. arXiv:1107.2848. doi:10.1007/s10107-012-0614-z. S2CID 254137101. Olivier Fercoq & Peter Richtarik (2015). "Accelerated, parallel and proximal coordinate descent". SIAM Journal on Optimization. 25 (4): 1997–2023. arXiv:1312.5799. doi:10.1137/130949993. S2CID 8068556. Dominik Csiba; Zheng Qu; Peter Richtarik (2015). "Stochastic Dual Coordinate Ascent with Adaptive Probabilities" (pdf). Proceedings of the 32nd International Conference on Machine Learning. pp. 674–683. Robert M Gower & Peter Richtarik (2015). "Randomized Iterative Methods for Linear Systems". SIAM Journal on Matrix Analysis and Applications. 36 (4): 1660–1690. doi:10.1137/15M1025487. hdl:20.500.11820/5c673b9e-8cf3-482c-8602-da8abcb903dd. S2CID 8215294. Peter Richtarik & Martin Takac (2016). "Parallel coordinate descent methods for big data optimization". Mathematical Programming. 156 (1): 433–484. doi:10.1007/s10107-015-0901-6. hdl:20.500.11820/a5649cad-b6b8-4ccc-9ca2-b368131dcbe5. S2CID 254133277. Zheng Qu & Peter Richtarik (2016). 
"Coordinate descent with arbitrary sampling I: algorithms and complexity". Optimization Methods and Software. 31 (5): 829–857. arXiv:1412.8060. doi:10.1080/10556788.2016.1190360. S2CID 2636844. Zheng Qu & Peter Richtarik (2016). "Coordinate descent with arbitrary sampling II: expected separable overapproximation". Optimization Methods and Software. 31 (5): 858–884. arXiv:1412.8063. doi:10.1080/10556788.2016.1190361. S2CID 11048560. Zheng Qu; Peter Richtarik; Martin Takac; Olivier Fercoq (2016). "SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization" (pdf). Proceedings of the 33rd International Conference on Machine Learning. pp. 1823–1832. Zeyuan Allen-Zhu; Zheng Qu; Peter Richtarik; Yang Yuan (2016). "Even faster accelerated coordinate descent using non-uniform sampling" (pdf). Proceedings of the 33rd International Conference on Machine Learning. pp. 1110–1119. Dominik Csiba & Peter Richtarik (2016). "Importance sampling for minibatches". arXiv:1602.02283 [cs.LG]. Dominik Csiba & Peter Richtarik (2016). "Coordinate descent face-off: primal or dual?". arXiv:1605.08982 [math.OC]. == References == == External links == Richtarik's professional web page Richtarik's Google Scholar profile |
Wikipedia:Peter Rosenthal#0 | Peter Michael Rosenthal (June 1, 1941 – May 25, 2024) was an American-Canadian mathematician, lawyer, and activist who was Professor of Mathematics at the University of Toronto, and an adjunct professor of Law at the University of Toronto Law School. == Early life and family == Rosenthal grew up in a Jewish family in Flushing, Queens, New York with his parents, Harold (1913–1983) and Esther (1914–1985), and two younger brothers, Erik and Walter. Rosenthal described himself as a "red diaper baby". His father was a high school math teacher and his mother was a left-wing activist who had been a member of the Communist Party in her youth. His maternal grandmother, Sonia, had immigrated to New York from Russia after the failed 1905 Russian Revolution and was a supporter of the Bolsheviks. Rosenthal himself was also a committed activist and in 1960 participated in protests at the Woolworth's in Flushing in solidarity with the sit-ins at Woolworth's in Greensboro, North Carolina protesting racial segregation. Rosenthal had poor grades in high school and barely graduated, but after nearly failing in college due to the time he spent attending civil rights and anti-nuclear protests, he began to focus on his studies at Queens College, excelling in math. Erik Rosenthal is an emeritus professor of mathematics at the University of New Haven. Their youngest brother, Walter (Wally) Rosenthal, is a community activist and trade unionist in New York City who taught at York College after retiring from the United States Postal Service. Both Erik and Wally were civil rights and anti-war activists in the 1960s. == Mathematics career == Rosenthal graduated from Queens College, City University of New York with a B.S. in Mathematics in 1962. In 1963 he obtained an MA in Mathematics and in 1967 a Ph.D. in Mathematics from the University of Michigan; his Ph.D. thesis advisor was Paul Halmos. 
His thesis, "On lattices of invariant subspaces" concerns operators on Hilbert space, and most of his subsequent research was in operator theory and related fields. Much of his work was related to the invariant subspace problem, the still-unsolved problem of the existence of invariant subspaces for bounded linear operators on Hilbert space. He made substantial contributions to the development of reflexive and reductive operator algebras and to the study of lattices of invariant subspaces, composition operators on the Hardy-Hilbert space and linear operator equations. His publications include many with his long-time collaborator Heydar Radjavi, including the book Invariant Subspaces (Springer-Verlag, 1973; second edition 2003). In 1967, Rosenthal moved to Canada to accept an assistant professorship at the University of Toronto where he remained for the rest of his career, eventually becoming a full professor and retiring as a professor emeritus. Rosenthal supervised the Ph.D. theses of fifteen students and the research work of a number of post-doctoral fellows. == Legal career == In parallel with his career in mathematics, Rosenthal pursued a career in law. While teaching at the University of Toronto in 1969, Rosenthal was arrested while giving a speech at an anti-Vietnam War demonstration outside of the US consulate in Toronto. Representing himself in court, he was acquitted of obstructing police but convicted of causing a disturbance, but was able to have his conviction overturned on appeal. With his newfound interest in the law, Rosenthal began volunteering as a paralegal representing friends and activists who had been arrested and charged with minor criminal offences at protests or for civil disobedience or other activist-related offences, particularly related to civil rights or anti-racist activity. Rosenthal was threatened by the Law Society of Upper Canada for practicing law without a license and he hired Charles Roach to represent him before the law society. 
The law society abandoned its action after Roach brought a motion to move the disciplinary proceeding to court. In the 1980s, Rosenthal worked with Roach representing 21 peace activists who had been charged in relation to protests against Litton Industries and their work on manufacturing components for cruise missiles, with Rosenthal arguing that Litton executives were endangering the safety of Canadians through the company's products. Rosenthal was also involved in a campaign to protest an invitation to South African ambassador Glenn Babb to speak at the University of Toronto in defence of South Africa's apartheid regime. Rosenthal was one of four University of Toronto professors who sought an injunction to stop Babb along with a declaration by the court that apartheid was a crime against humanity. While this effort was unsuccessful, it helped lead to a later decision by the university to divest from South Africa. Roach encouraged Rosenthal to go to law school so that he could represent clients in more serious cases, and he was admitted to the University of Toronto Law School in 1987 at the age of 46. He went on to obtain an LL.B. in 1990 and was called to the Ontario bar in 1992. Rosenthal joined Roach's firm as a partner. He was a major figure in the Toronto legal community, and was profiled by Toronto Life, The Globe and Mail, and the Toronto Star. In 2006, Now Magazine named Rosenthal Toronto's "Best activist lawyer". In May 2016, he was awarded a Law Society Medal by the Law Society of Upper Canada. Rosenthal provided legal services for various leftist causes and marginalized clients for free. He was also active in civil law, suing police and public officials, and participated in inquests into the police shootings of several Black men, representing the families of the deceased. Rosenthal represented Miguel Figueroa, the leader of the Communist Party of Canada, in the case Figueroa v. Canada before the Supreme Court of Canada. 
The court ruled in Figueroa's favor, striking down a law that prohibited small political parties from obtaining the same tax benefits as large parties. Rosenthal represented many activists who faced charges as a result of political protests, including Shawn Brant, John Clarke and the Ontario Coalition Against Poverty, Vicki Monague of Stop Dump Site 41, Dudley Laws and the Black Action Defence Committee, and Jaggi Singh and others arrested at the 2010 G20 Toronto summit protests, and wrote articles about some of those cases. In 2006, Rosenthal represented Indigenous activists at the Ipperwash Crisis and cross-examined former Premier of Ontario Mike Harris over allegedly saying "I want the fucking Indians out of the park." == Personal life and death == Rosenthal married his first wife, Helen Black (1942–2017), in 1960 when he was 19 and she was 18. Both of them were social activists and would become mathematicians at the University of Toronto. They divorced in 1979, but remained friends. Rosenthal married his second wife, Carol Kitai, a medical doctor, in 1985. Rosenthal was a lifelong Marxist and political activist. He was a red diaper baby; his mother was active in the civil rights and anti-war movements. Rosenthal told the Globe and Mail: "I regard myself as a Marxist, but not one affiliated with any particular parties... I have a very strong hatred of racism and the grotesque economic inequalities such as exist in the world. It is very deeply embedded in my bones." Rosenthal died in Toronto on May 25, 2024, at the age of 82. He had suffered from heart disease and Parkinson's disease, and died due to complications from COVID-19. The song "A Little Rain (A Song for Pete)" (2016), by the alternative rock band the Arkells, was inspired by Rosenthal. It was written by Arkells' lead singer Max Kerman, a friend of Rosenthal and his family. 
== Works == Radjavi, Heydar; Rosenthal, Peter (1973), Invariant Subspaces, Springer, MR 0367682, 2nd edition MR2003221 Radjavi, Heydar; Rosenthal, Peter (2000), Simultaneous Triangularization, Springer, ISBN 978-0-387-98466-7 Martinez-Avendano, Ruben; Rosenthal, Peter (2006), An Introduction to Operators on the Hardy-Hilbert Space, Springer, ISBN 978-0-387-35418-7 (with Sheldon Axler and Donald Sarason) editors. A Glimpse at Hilbert Space Operators, Birkhäuser, 2010. Rosenthal, Daniel; Rosenthal, David; Rosenthal, Peter (2014), A Readable Introduction to Real Mathematics, Springer, ISBN 978-3-319-05654-8, MR 3235953 == References == |
Wikipedia:Peter Waweru#0 | Peter Waweru Kamaku (born 27 May 1982) is a Kenyan football referee, academic administrator and researcher. He has been a referee in the Kenyan Premier League since 2013 and a FIFA listed referee since 2017. He is also a professor of pure mathematics at Jomo Kenyatta University of Agriculture and Technology in Kenya. == Early life and education == Waweru was born in Nairobi, Kenya. He attended Gatheri Primary School and Alliance High School. He graduated with a Bachelor of Science in mathematics and computer science (2006) and a Master of Science in pure mathematics (2008), both from Jomo Kenyatta University of Agriculture and Technology. He attained a Ph.D. in pure mathematics in 2013 from Jomo Kenyatta University of Agriculture and Technology. In 2015, Waweru earned a postgraduate diploma in education technology from the University of Cape Town. == Career == === Referee works === Waweru started to officiate football in the lower leagues of Kenya in 2011. In 2013, he joined the Kenyan Premier League, and in 2017 he was listed as a FIFA referee. He has officiated in various FIFA tournaments, such as the AFCON U20 in 2019, AFCON 2019 in Egypt, the 2019 U17 World Cup in Brazil, and the 2021 CHAN, where he officiated the final. Since 2017, Waweru has officiated several CAF Champions League games, Confederation Cup matches and FIFA World Cup qualifying matches. Waweru was chosen as one of the referees for the 2021 Africa Cup of Nations held in Cameroon from 9 January to 6 February 2022, and the 2023 Africa Cup of Nations held in Côte d'Ivoire from 13 January to 11 February 2024. === Lecturer === Waweru has served as an academic teaching assistant (2007–2009), a tutorial fellow (2009–2013) and a lecturer (2013–2019); since 2019 he has been a senior lecturer at Jomo Kenyatta University of Agriculture and Technology. Waweru lectures on number theory, coding theory and algebra-related courses. 
== Other considerations == He is one of the pioneer group of 20 referees selected for the CAF/FIFA Professional Referee project in 2020. == Research reviews == He has published his research findings on abstract algebra, coding theory and number theory in mathematical books and peer-reviewed journals; his work has been cited 321 times across more than 120 peer-reviewed publications in mathematics and science, giving him an h-index of 7. == References == == External links == Kenya - K. Waweru - Profile with news, career statistics and history - Soccerway Dr. Waweru Kamaku Peter Kamaku Waweru | Latest Football Betting Odds | Soccer Base Chiefs make two forced changes against Wydad |
Wikipedia:Peter Whittle (mathematician)#0 | Peter Whittle (27 February 1927 – 10 August 2021) was a mathematician and statistician from New Zealand, working in the fields of stochastic nets, optimal control, time series analysis, stochastic optimisation and stochastic dynamics. From 1967 to 1994, he was the Churchill Professor of Mathematics for Operational Research at the University of Cambridge.[1] == Career == Whittle was born in Wellington. He graduated from the University of New Zealand in 1947 with a BSc in mathematics and physics and in 1948 with an MSc in mathematics. He then moved to Uppsala, Sweden in 1950 to study for his PhD with Herman Wold (at Uppsala University). His thesis, Hypothesis Testing in Time Series, generalised Wold's autoregressive representation theorem for univariate stationary processes to multivariate processes. Whittle's thesis was published in 1951[2]. A synopsis of Whittle's thesis also appeared as an appendix to the second edition of Wold's book on time-series analysis. Whittle remained in Uppsala at the Statistics Institute as a docent until 1953, when he returned to New Zealand. In New Zealand, Whittle worked at the Department of Scientific and Industrial Research (DSIR) in the Applied Mathematics Laboratory (later named the Applied Mathematics Division). In 1959 Whittle was appointed to a lectureship in Cambridge University. Whittle was appointed Professor of Mathematical statistics at the University of Manchester in 1961. After six years in Manchester, Whittle returned to Cambridge as the Churchill Professor of Mathematics for Operational Research, a post he held until his retirement in 1994. From 1973, he was also Director of the Statistical Laboratory, University of Cambridge. He was a fellow of Churchill College, Cambridge. He died in Cambridge, England. == Recognition == Whittle was elected a Fellow of the Royal Society in 1978, and an Honorary Fellow of the Royal Society of New Zealand in 1981. 
The Royal Society awarded him their Sylvester Medal in 1994 in recognition of his "major distinctive contributions to time series analysis, to optimisation theory, and to a wide range of topics in applied probability theory and the mathematics of operational research". In 1986, the Institute for Operations Research and the Management Sciences awarded Whittle the Lanchester Prize for his book Systems in Stochastic Equilibrium (ISBN 0-471-90887-8) and the John von Neumann Theory Prize in 1997 for his "outstanding contributions to the theory of operations research and management science". He was elected to the 2002 class of Fellows of the Institute for Operations Research and the Management Sciences. == Personal life == In 1951, Whittle married a Finnish woman, Käthe Blomquist, whom he had met in Sweden. The Whittle family has six children. == Bibliography == === Books === Whittle, P. (1951). Hypothesis testing in time series analysis. Uppsala: Almqvist & Wiksells Boktryckeri AB. Whittle, P. (1963). Prediction and Regulation. English Universities Press. ISBN 0-8166-1147-5. Republished as: Whittle, P. (1983). Prediction and Regulation by Linear Least-Square Methods. University of Minnesota Press. ISBN 0-8166-1148-3. Whittle, P. (1970). Probability (Library of university mathematics). Penguin. ISBN 0-14-080085-9. Republished as: Whittle, P. (30 April 1976). Probability. John Wiley and Sons Ltd. ISBN 0-471-01657-8. Whittle, P. (28 July 1971). Optimization Under Constraints. John Wiley and Sons Ltd. ISBN 0-471-94130-1. Whittle, P. (4 August 1982). Optimization Over Time. John Wiley and Sons Ltd. ISBN 0-471-10120-6. Whittle, P. (April 1983). Optimization Over Time: Dynamic Programming and Stochastic Control. John Wiley and Sons Ltd. ISBN 0-471-10496-5. Whittle, P. (4 June 1986). Systems in Stochastic Equilibrium. John Wiley and Sons Ltd. ISBN 0-471-90887-8. Whittle, P. (April 1990). Risk-Sensitive Optimal Control. 
John Wiley and Sons Ltd. ISBN 0-471-92622-1. Whittle, P. (14 May 1992). Probability Via Expectation (3rd ed.). Springer Verlag. ISBN 0-387-97758-9. Republished as: Whittle, P. (20 April 2000). Probability Via Expectation (4th ed.). Springer. ISBN 0-387-98955-2. Whittle, P. (18 July 1996). Optimal Control: Basics and Beyond. John Wiley and Sons Ltd. ISBN 0-471-95679-1. Whittle, P. (8 December 1998). Neural Nets and Chaotic Carriers. John Wiley and Sons Ltd. ISBN 0-471-98541-4. Whittle, P. (31 May 2007). Networks: Optimisation and Evolution. Cambridge University Press. ISBN 9780521871006. === Selected articles === Whittle, P. (1953). "The analysis of multiple stationary time series". Journal of the Royal Statistical Society, Series B. 15 (1): 125–139. doi:10.1111/j.2517-6161.1953.tb00131.x. JSTOR 2983728. Reprinted with an introduction by Matthew Calder and Richard A. Davis as Whittle, P. (1997). "The analysis of multiple stationary time series". In Samuel Kotz and Norman L. Johnson (ed.). Breakthroughs in statistics, Volume III. Springer Series in Statistics: Perspectives in Statistics. New York: Springer-Verlag. pp. 141–169. ISBN 0-387-94988-7. Whittle, Peter (1954). "On stationary processes in the plane". Biometrika. 41 (3–4): 434–449. doi:10.1093/biomet/41.3-4.434. Reprinted as Whittle, Peter (2001). "On stationary processes in the plane". In D. M. Titterington and D. R. Cox (ed.). Biometrika: One Hundred Years. Oxford University Press. pp. 293–308. ISBN 0-19-850993-6. Whittle, P. (May 1954). "Optimum preventative sampling". Journal of the Operations Research Society of America. 2 (2): 197–203. doi:10.1287/opre.2.2.197. JSTOR 166605. Whittle, P. (1973). "Some general points in the theory of optimal experimental design". Journal of the Royal Statistical Society, Series B. 35: 123–130. doi:10.1111/j.2517-6161.1973.tb00944.x. Whittle, Peter (1980). "Multi-armed bandits and the Gittins index". Journal of the Royal Statistical Society Ser. B (Methodology). 
42 (2): 143–149. Whittle, Peter (1981). "Arm-acquiring bandits". Annals of Probability. 9: 284–292. doi:10.1214/aop/1176994469. (Available online) Whittle, Peter (1988). "Restless bandits: Activity allocation in a changing world". Journal of Applied Probability. 25A (Special volume: A celebration of applied probability (A festschrift for Joe Gani)): 287–298. doi:10.1017/s0021900200040420. MR 0974588. Whittle, P. (1991). "Likelihood and cost as path integrals (With discussion and a reply by the author)". Journal of the Royal Statistical Society, Series B. 53 (3): 505–538. doi:10.1111/j.2517-6161.1991.tb01842.x. Whittle, Peter (2002). "Applied probability in Great Britain (50th anniversary issue of Operations Research)". Oper. Res. 50 (1): 227–239. doi:10.1287/opre.50.1.227.17792. === Biographical works === Kelly, F. P. (1994). Probability, statistics and optimisation: A Tribute to Peter Whittle. Chichester: John Wiley & Sons. ISBN 0-471-94829-2. Peter Whittle. 1994. "Almost Home". pages 1–28. Anonymous. "Publications of Peter Whittle". pages xxi–xxvi. (A list of 129 publications.) Anonymous. Biographical sketch (untitled). page xxvii. == References == == External links == Webpage of the Cambridge Statistical Laboratory Mathematics Genealogy Project. "Peter Whittle". Retrieved 3 January 2005. Mathematical Reviews. "Peter Whittle". Retrieved 14 May 2010. INFORMS: Biography of Peter Whittle from the Institute for Operations Research and the Management Sciences |
Wikipedia:Petr Mandl#0 | Professor Petr Mandl DSc (5 November 1933 – 24 February 2012) was a Czech mathematician known for his contributions to the fields of stochastic processes and actuarial science. He published several books and more than a hundred articles. Petr Mandl was a founding member, former chairman and honorary chairman of the Czech Society of Actuaries. == Biography == Mandl was born in Plzeň, Czechoslovakia on 5 November 1933. His father was Vladimír Mandl. In 1957, he graduated from the Faculty of Mathematics and Physics, Charles University, Prague. After twenty years spent at the Czechoslovak Academy of Sciences he returned to the university as a lecturer. In 1992, he revived studies of actuarial science in Czechoslovakia by introducing a course of Financial and Insurance Mathematics within the Department of Probability and Mathematical Statistics. For many years he tirelessly organised a Seminar in Actuarial Science (Czech: Seminář z aktuárských věd) at the university's premises. Petr Mandl was a founding member of the Czech Society of Actuaries. In December 1995, he was elected chairman of the Society and re-elected twice, in 1998 and 2001. During Mandl's tenure, the Society gained international recognition when it became a full member of the International Actuarial Association and an observer member of the Groupe Consultatif. Petr Mandl also initiated a change of rules for certification of Society members. In 2003, Mandl stepped down as chairman and was replaced by Jiří Fialka. Soon afterwards he was elected honorary chairman of the Society. In 2009, he was awarded a Medal of Merit by the president of the Czech Republic for his services to the state in the field of science. Petr Mandl died on 24 February 2012. == Controversy == While Mandl was much respected for his expertise and breadth of knowledge, he was also disliked for his poor lecturing and autocratic behaviour. 
== Selected publications == Mandl, Petr: Analytical treatment of one-dimensional Markov processes, Academia, Springer, 1968. Mandl, Petr: Pravděpodobnostní dynamické modely, Academia, 1985. (in Czech) == References == |
Wikipedia:Petr Vopěnka#0 | Petr Vopěnka (16 May 1935 – 20 March 2015) was a Czech mathematician. In the early seventies, he developed alternative set theory (i.e. an alternative to classical Cantorian set theory), which he subsequently developed in a series of articles and monographs. Vopěnka's name is associated with many mathematical achievements, including Vopěnka's principle. From the mid-eighties he concerned himself with philosophical questions of mathematics (particularly vis-à-vis Husserlian phenomenology). Vopěnka served as the Minister of Education of the Czech Republic (then part of Czechoslovakia) from 1990 to 1992 within the government of Prime Minister Petr Pithart. == Biography == Petr Vopěnka grew up in the small town of Dolní Kralovice. After finishing gymnasium in Ledeč nad Sázavou in 1953 he went to study mathematics at the Mathematics and Physics Faculty of Charles University in Prague, graduating in 1958. In 1962 he was made Candidate of Sciences (CSc) and in 1967 Doctor of Science (DrSc). His advisors were Eduard Čech and Ladislav Rieger. Starting in 1958 Vopěnka taught at the Mathematics and Physics Faculty, from 1964 as a lecturer and from 1965 as a senior lecturer. In 1968 he was made professor but, for political reasons, was prevented from taking up the title until 1990. Between 1966 and 1969 Vopěnka served as Vice Dean of the faculty. In 1967 Vopěnka became head of the newly established Department of Mathematical Logic. The department was abolished in 1970 and Vopěnka, though allowed to stay at the university, fell into disfavour with the regime, which limited his contacts with foreign mathematicians. During the 1970s and 1980s he concentrated on the philosophy and history of mathematics and on the phenomenology of infinity. After the Velvet Revolution, in January 1990, Vopěnka became Deputy Rector of the Charles University. During the period June 1990 – July 1992 he served as Minister of Education of the Czech Republic (then part of Czechoslovakia). 
In this position he attempted, without much success and facing protests from teachers, to institute school reforms. In 1992 the Department of Mathematical Logic was reopened and Vopěnka became its head. In 2000 he retired from the Charles University and the department was closed. Until 2009 Vopěnka worked as a professor at the Jan Evangelista Purkyně University in Ústí nad Labem, in the Department of Mathematics of the Faculty of Science. Petr Vopěnka also participated in translation and publishing of early mathematical texts (such as works of Euclid and al-Khwarizmi) into the Czech language, and then he worked at the Department of Philosophy and Department of Interdisciplinary Activities, University of West Bohemia in Plzeň. == Bibliography == Petr Vopěnka (2004). Horizonty nekonečna. Prague: Moraviapress. ISBN 80-86181-66-9. Petr Vopěnka (1999). Úhelný kámen evropské vzdělanosti a moci. Prague: Práh. ISBN 80-7252-022-9. Petr Vopěnka (1989). Introduction to mathematics in the alternative set theory. Bratislava: Alfa. ISBN 80-05-00438-9. Petr Vopěnka (1979). Mathematics in the Alternative Set Theory. Leipzig: Teubner. ASIN B0006E3AXY. Petr Vopěnka, Petr Hájek (1972). The Theory of Semisets. Amsterdam, Prague: North-Holland. ISBN 0-7204-2267-1. == See also == Semiset Vopěnka's principle == Notes == == References == Vopěnka, P. (2001) [1994], "Alternative set theory", Encyclopedia of Mathematics, EMS Press Antonín Sochor (November 1993). "Interpretations of the alternative set theory". Archive for Mathematical Logic. 32 (6). Berlin / Heidelberg: Springer: 391–398. doi:10.1007/BF01270464. S2CID 9198859. various (1979–1990). "Papers of different authors published". Comm. Math. Univ. Carolinae. ISSN 0010-2628. Petr Vopěnka (1989). Proceedings of the 1st Symposium Mathematics in the alternative set theory. Bratislava: Union of Slovak Mathematicians and Physicists. Azriel Levy; Vopěnka, Petr (1984). 
"Mathematics in the Alternative set Theory by Petr Vopenka". The Journal of Symbolic Logic. 49 (4): 1423–1424. doi:10.2307/2274302. JSTOR 2274302. S2CID 122518682. == Further reading == VIZE 97. "Petr Vopěnka Biography" (PDF). Archived from the original (PDF) on 2007-09-29. Retrieved 2007-07-27. Akihiro Kanamori (2007). "Set Theory from Cantor to Cohen" (PDF). The Mathematical Development of Set Theory from Cantor to Cohen (significant revision). Jiří Fiala (5 October 2004). "Laudatio - VIZE 97 Award to Petr Vopenka" (PDF) (in Czech). VIZE 97. Archived from the original (PDF) on 26 July 2011. Antonín Sochor (2001). "Petr Vopěnka (born 16. 5. 1935)". Ann. Pure Appl. Logic. 109 (1–2): 1–13. doi:10.1016/S0168-0072(01)00037-9. Antonín Sochor (2000). "Petr Vopěnka (born 16. 5. 1935)". Pokroky Mat. Fyz. Astron. (in Czech). 45 (2): 125–134. ISSN 0032-2423. Archived from the original on 2007-09-27. Retrieved 2007-07-27. Akihiro Kanamori (1996). "The Mathematical Development of Set Theory from Cantor to Cohen". The Bulletin of Symbolic Logic. 2 (1): 1–71. CiteSeerX 10.1.1.28.1664. doi:10.2307/421046. JSTOR 421046. S2CID 14147715. == External links == Short biography in English Documentary about Vopěnka (in Czech with English subtitles, freely downloadable) |
Wikipedia:Petru Mocanu#0 | Petru T. Mocanu (1 June 1931 – 28 March 2016) was a Romanian mathematician who was elected in 2009 as a titular member of the Romanian Academy. Mocanu was born in Brăila. He studied at the Nicolae Bălcescu High School in Brăila, graduating in 1950. He then went to study mathematics at Babeș-Bolyai University in Cluj-Napoca, completing his B.Sc in 1953, and his Ph.D. in 1959. His dissertation, written under the supervision of Gheorghe Călugăreanu, was titled Variational methods in the theory of univalent functions. He continued as faculty at Babeș-Bolyai University, rising to the rank of Professor in 1970. Mocanu was an invited professor at the University of Conakry in 1966–1967, and the Ohio State University in 1992. == Publications == Miller, Sanford S.; Mocanu, Petru T. (2000), Differential subordinations. Theory and applications, Monographs and Textbooks in Pure and Applied Mathematics, vol. 225, New York: Marcel Dekker, Inc., ISBN 0-8247-0029-5, MR 1760285 Miller, Sanford S.; Mocanu, Petru T. (1981), "Differential subordinations and univalent functions", Michigan Mathematical Journal, 28 (2): 157–172, doi:10.1307/mmj/1029002507, MR 0616267 Mocanu, Petru T. (2011), "Injectivity conditions in the complex plane", Complex Analysis and Operator Theory, 5 (3): 759–766, doi:10.1007/s11785-010-0052-y, MR 2836321 == Notes == == External links == math.ubbcluj.ro/~pmocanu/ Petru Mocanu at the Mathematics Genealogy Project |
Wikipedia:Petrus Ramus#0 | Petrus Ramus (French: Pierre de La Ramée; Anglicized as Peter Ramus; 1515 – 26 August 1572) was a French humanist, logician, and educational reformer. A Protestant convert, he was a victim of the St. Bartholomew's Day massacre. == Early life == He was born at the village of Cuts, Picardy; his father was a farmer. He gained admission at age twelve (thus about 1527) to the Collège de Navarre, working as a servant. A reaction against scholasticism was in full tide, at a transitional time for Aristotelianism. On the occasion of receiving his M.A. degree in 1536, Ramus allegedly took as his thesis Quaecumque ab Aristotele dicta essent, commentitia esse (Everything that Aristotle has said is false), which Walter J. Ong paraphrases as follows: All the things that Aristotle has said are inconsistent because they are poorly systematized and can be called to mind only by the use of arbitrary mnemonic devices. According to Ong, this kind of spectacular thesis was in fact routine at the time. Even so, Ong raises questions as to whether Ramus actually ever delivered this thesis. == Early academic career == Ramus, as a graduate of the university, started courses of lectures. At this period he was engaged in numerous separate controversies. One opponent in 1543 was the Benedictine Joachim Périon. He was accused, by Jacques Charpentier, professor of medicine, of undermining the foundations of philosophy and religion. Arnaud d'Ossat, a pupil and friend of Ramus, defended him against Charpentier. Ramus was made to debate Goveanus (Antonio de Gouveia) over two days. The matter was brought before the parlement of Paris, and finally before Francis I, who referred it to a commission of five; they found Ramus guilty of having "acted rashly, arrogantly and impudently," and interdicted his lectures (1544). 
== Royal support == He withdrew from Paris, but soon afterwards returned, the decree against him being canceled by Henry II, who came to the throne in 1547, through the influence of Charles, Cardinal of Lorraine. He obtained a position at the Collège de Navarre. In 1551 Henry II appointed him a regius professor at the Collège de France, but at his request he was given the unique and at the time controversial title of Professor of Philosophy and eloquence. For a considerable time he lectured before audiences numbering as many as 2,000. Pierre Galland, another professor there, published Contra novam academiam Petri Rami oratio (1551), and called him a "parricide" for his attitude to Aristotle. The more serious charge was that he was a nouveau academicien, in other words a sceptic. Audomarus Talaeus (Omer Talon c.1510–1581), a close ally of Ramus, had indeed published a work in 1548 derived from Cicero's description of Academic scepticism, the school of Arcesilaus and Carneades. == After conversion == In 1561 he faced significant enmity following his adoption of Protestantism. He had to flee from Paris; and, though he found asylum in the palace of Fontainebleau, his house was pillaged and his library burned in his absence. He resumed his chair after this for a time, but he was summoned on 30 June 1568 before the King's Attorney General to be heard with Simon Baudichon and other professors: the position of affairs was again so threatening that he found it advisable to ask permission to travel. He spent around two years in Germany and Switzerland. The La Rochelle Confession of Faith earned his disapproval, in 1571, rupturing his relationship with Theodore Beza and leading Ramus to write angrily to Heinrich Bullinger. Returning to France, he fell a victim in the St. Bartholomew's Day Massacre (1572). Hiding for a while in a bookshop off the Rue St Jacques, he returned to his lodgings, on 26 August, the third day of the violence. There he was stabbed while at prayer. 
Suspicions against Charpentier have been voiced ever since. His death was compared by one of his first biographers, his friend and colleague Nicolas de Nancel, to the murder of Cicero. == Pedagogue == A central issue is that Ramus's anti-Aristotelianism arose out of a concern for pedagogy. Aristotelian philosophy, in its Early Modern form as scholasticism showing its age, was in a confused and disordered state. Ramus sought to infuse order and simplicity into philosophical and scholastic education by reinvigorating a sense of dialectic as the overriding logical and methodological basis for the various disciplines. He published in 1543 the Aristotelicae Animadversiones and Dialecticae Partitiones, the former a criticism on the old logic and the latter a new textbook of the science. What are substantially fresh editions of the Partitiones appeared in 1547 as Institutiones Dialecticae, and in 1548 as Scholae Dialecticae; his Dialectique (1555), a French version of his system, is the earliest work on the subject in the French language. In the Dialecticae partitiones Ramus recommends the use of summaries, headings, citations and examples. Ong calls Ramus's use of outlines, "a reorganization of the whole of knowledge and indeed of the whole human lifeworld." After studying Ramus's work, Ong concluded that the results of his "methodizing" of the arts "are the amateurish works of a desperate man who is not a thinker but merely an erudite pedagogue". On the other hand, his work had an immediate impact on the issue of disciplinary boundaries, educators largely having accepted his arguments by the end of the 17th century. == Logician == The logic of Ramus enjoyed a great celebrity for a time, and there existed a school of Ramists boasting numerous adherents in France, Germany, Switzerland, and the Netherlands. 
It cannot be said, however, that Ramus's innovations mark any epoch in the history of logic, and there is little ground for his claim to supersede Aristotle by an independent system of logic. The distinction between natural and artificial logic, i.e., between the implicit logic of daily speech and the same logic made explicit in a system, passed over into the logical handbooks. He amends the syllogism: he admits only the first three figures, as in the original Aristotelian scheme, and in his later works he also attacks the validity of the third figure, following in this the precedent of Laurentius Valla. Ramus also set the modern fashion of deducing the figures from the position of the middle term in the premises, instead of basing them, as Aristotle does, upon the different relation of the middle to the major term and minor term. == Rhetorician == As James Jasinski explains, "the range of rhetoric began to be narrowed during the 16th century, thanks in part to the works of Peter Ramus." In using the word "narrowed," Jasinski is referring to Ramus's argument for divorcing rhetoric from dialectic (logic), a move that had far-reaching implications for rhetorical studies and for popular conceptions of public persuasion. Contemporary rhetoricians have tended to reject Ramus's view in favor of a more wide-ranging (and in many respects, Aristotelian) understanding of the rhetorical arts as encompassing "a [broad] range of ordinary language practices." Rhetoric, traditionally, had had five parts, of which inventio (invention) was the first. Ramus insisted that rhetoric be studied alongside dialectic through two main manuals: invention and judgement in the dialectic manual, and style and delivery in the rhetoric manual. Memory, one of the five skills of traditional rhetoric, was regarded by Ramus as being part of psychology, as opposed to being part of rhetoric, and was thus excluded from his scheme of rhetoric and dialectic.
Brian Vickers said that the Ramist influence here did add to rhetoric: it concentrated more on the remaining aspect of elocutio or effective use of language, and emphasised the role of vernacular European languages (rather than Latin). Ramist reforms strengthened the rhetoricians' tendency to focus on style. The effect was that rhetoric came to be applied chiefly to literature. Invention involves fourteen topics, including definition, cause, effect, subject, adjunct, difference, contrary, comparison, similarity, and testimony. Style encompasses four tropes: metaphor, synecdoche, metonymy, and irony. It also includes rules for poetic meter and rhythmical prose, figures corresponding to attitudes a speaker may take, and figures of repetition. Delivery covers the use of voice and gestures. His rhetorical leaning is seen in the definition of logic as the ars disserendi; he maintains that the rules of logic may be better learned from observation of the way in which Cicero persuaded his hearers than from a study of Aristotle's works on logic (the Organon). Logic falls, according to Ramus, into two parts: invention (treating of the notion and definition) and judgment (comprising the judgment proper, syllogism and method). Here he was influenced by Rodolphus Agricola. This division gave rise to the jocular designation of judgment or mother-wit as the "secunda Petri". But what Ramus does here in fact redefines rhetoric. There is a new configuration, with logic and rhetoric each having two parts: rhetoric was to cover elocutio and pronuntiatio. In general, Ramism liked to deal with binary trees as a method for organising knowledge. == Mathematician == He was also known as a mathematician, a student of Johannes Sturm. It has been suggested that Sturm was an influence in another way, by his lectures given in 1529 on Hermogenes of Tarsus: the Ramist method of dichotomy is to be found in Hermogenes. He had students of his own.
He corresponded with John Dee on mathematics, and at one point recommended to Elizabeth I that she appoint him to a university chair. The views of Ramus on mathematics implied a limitation to the practical: he considered Euclid's theory on irrational numbers to be useless. The emphasis on technological applications and engineering mathematics was coupled to an appeal to nationalism (France was well behind Italy, and needed to catch up with Germany). == Ramism == The teachings of Ramus had a broadly based reception well into the seventeenth century. Later movements, such as Baconianism, pansophism, and Cartesianism, in different ways built on Ramism, and took advantage of the space cleared by some of the simplifications (and oversimplifications) it had effected. The longest-lasting strand of Ramism was in systematic Calvinist theology, where textbook treatments with a Ramist framework were still used into the eighteenth century, particularly in New England. The first writings on Ramism, after the death of Ramus, included biographies, and were by disciples of sorts: Freigius (1574 or 1575), Banosius (1576), Nancelius (1599), of whom only Nancelius was closely acquainted with the man. Followers of Ramus in different fields included Johannes Althusius, Caspar Olevianus, John Milton, Johannes Piscator, Rudolph Snellius and Hieronymus Treutler. == Works == He published fifty works in his lifetime and nine appeared after his death. Ong undertook the complex bibliographical task of tracing his books through their editions. 
Aristotelicae Animadversiones (1543) Brutinae questiones (1547) Rhetoricae distinctiones in Quintilianum (1549) Dialectique (1555) Arithmétique (1555) De moribus veterum Gallorum (Paris, 1559; second edition, Basel, 1572) Liber de Cæsaris Militia (Paris, 1584) Advertissement sur la réformation de l'université de Paris, au Roy (Paris, 1562) Three grammars: Grammatica latina (1548), Grammatica Graeca (1560), Grammaire Française (1562) Scolae physicae, metaphysicae, mathematicae (1565, 1566, 1578) Prooemium mathematicum (Paris, 1567) Scholarum mathematicarum libri unus et triginta (Basel, 1569) (his most famous work) Commentariorum de religione christiana (Frankfurt, 1576) == See also == Mnemonics Ramism == Notes == == References == This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Ramus, Petrus". Encyclopædia Britannica. Vol. 22 (11th ed.). Cambridge University Press. p. 881. == Further reading == Nelly Bruyère, Méthode et dialectique dans l'oeuvre de La Ramée: Renaissance et Age classique, Paris, Vrin 1984. Desmaze, Charles. Petrus Ramus, professeur au Collège de France, sa vie, ses ecrits, sa mort (Paris, 1864). Feingold, Mordechai; Freedman, Joseph S.; Rother, Wolfgang (eds.). The Influence of Petrus Ramus. Studies in Sixteenth and Seventeenth Century Philosophy and Sciences. Schwabe, Basel 2001, ISBN 978-3-7965-1560-6. Freedman, Joseph S. Philosophy and the Arts in Central Europe, 1500-1700: Teaching and Texts at Schools and Universities (Ashgate, 1999). Graves, Frank Pierrepont. Peter Ramus and the Educational Reformation of the Sixteenth Century (Macmillan, 1912). Høffding, Harald. History of Modern Philosophy (English translation, 1900), vol. i, p. 185. Howard Hotson, Commonplace Learning: Ramism and Its German Ramifications, 1543–1630 (Oxford: Oxford University Press, 2007). Lobstein, Paul. Petrus Ramus als Theolog (Strassburg, 1878). Miller, Perry. The New England Mind (Harvard University Press, 1939).
Milton, John. A Fuller Course in the Art of Logic Conformed to the Method of Peter Ramus (London, 1672). Ed. and trans. Walter J. Ong and Charles J. Ermatinger. Complete Prose Works of John Milton: Volume 8. Ed. Maurice Kelley. New Haven: Yale UP, 1982. pp. 206-407. Ong, Walter J. (1982). Orality and Literacy: The Technologizing of the Word. New York: Methuen. (p. viii). ---. Ramus, Method, and the Decay of Dialogue: From the Art of Discourse to the Art of Reason (Harvard University Press, 1958; reissued with a new foreword by Adrian Johns, University of Chicago Press, 2004. ISBN 0-226-62976-7). ---. Ramus and Talon Inventory (Harvard University Press, 1958). Owen, John. The Skeptics of the French Renaissance (London, 1893). Prantl, C. "Über P. Ramus" in Münchener Sitzungsberichte (1878). Saisset, Émile. Les précurseurs de Descartes (Paris, 1862). Sharratt, Peter. "The Present State of Studies on Ramus," Studi francesi 47-48 (1972) 201-13. —. "Recent Work on Peter Ramus (1970–1986)," Rhetorica: A Journal of the History of Rhetoric 5 (1987): 7-58. —. "Ramus 2000," Rhetorica: A Journal of the History of Rhetoric 18 (2000): 399-455. Voigt. Über den Ramismus der Universität Leipzig (Leipzig, 1888). Waddington, Charles. De Petri Rami vita, scriptis, philosophia (Paris, 1848). == External links == Works by Petrus Ramus at Project Gutenberg Works by or about Petrus Ramus at the Internet Archive 'Ramism' entry in The Dictionary of the History of Ideas Sellberg, Erland. "Petrus Ramus". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Petrus Ramus at the Mathematics Genealogy Project Catholic Encyclopedia entry Charles Waddington, Ramus (Pierre de la Ramée) sa vie, ses écrits et ses opinions (1855) |
Wikipedia:Pfaffian function#0 | In mathematics, Pfaffian functions are a certain class of functions whose derivative can be written in terms of the original function. They were originally introduced by Askold Khovanskii in the 1970s, but are named after German mathematician Johann Pfaff. == Basic definition == Some functions, when differentiated, give a result which can be written in terms of the original function. Perhaps the simplest example is the exponential function, f(x) = ex. If we differentiate this function we get ex again, that is f ′ ( x ) = f ( x ) . {\displaystyle f^{\prime }(x)=f(x).} Another example of a function like this is the reciprocal function, g(x) = 1/x. If we differentiate this function we will see that g ′ ( x ) = − g ( x ) 2 . {\displaystyle g^{\prime }(x)=-g(x)^{2}.} Other functions may not have the above property, but their derivative may be written in terms of functions like those above. For example, if we take the function h(x) = ex log x then we see h ′ ( x ) = e x log x + x − 1 e x = h ( x ) + f ( x ) g ( x ) . {\displaystyle h^{\prime }(x)=e^{x}\log x+x^{-1}e^{x}=h(x)+f(x)g(x).} Functions like these form the links in a so-called Pfaffian chain. Such a chain is a sequence of functions, say f1, f2, f3, etc., with the property that if we differentiate any of the functions in this chain then the result can be written in terms of the function itself and all the functions preceding it in the chain (specifically as a polynomial in those functions and the variables involved). So with the functions above we have that f, g, h is a Pfaffian chain. A Pfaffian function is then just a polynomial in the functions appearing in a Pfaffian chain and the function argument. So with the Pfaffian chain just mentioned, functions such as F(x) = x3f(x)2 − 2g(x)h(x) are Pfaffian. == Rigorous definition == Let U be an open domain in Rn. 
A Pfaffian chain of order r ≥ 0 and degree α ≥ 1 in U is a sequence of real analytic functions f1,..., fr in U satisfying differential equations ∂ f i ∂ x j = P i , j ( x , f 1 ( x ) , … , f i ( x ) ) {\displaystyle {\frac {\partial f_{i}}{\partial x_{j}}}=P_{i,j}({\boldsymbol {x}},f_{1}({\boldsymbol {x}}),\ldots ,f_{i}({\boldsymbol {x}}))} for i = 1, ..., r where Pi, j ∈ R[x1, ..., xn, y1, ..., yi] are polynomials of degree ≤ α. A function f on U is called a Pfaffian function of order r and degree (α, β) if f ( x ) = P ( x , f 1 ( x ) , … , f r ( x ) ) , {\displaystyle f({\boldsymbol {x}})=P({\boldsymbol {x}},f_{1}({\boldsymbol {x}}),\ldots ,f_{r}({\boldsymbol {x}})),\,} where P ∈ R[x1, ..., xn, y1, ..., yr] is a polynomial of degree at most β ≥ 1. The numbers r, α, and β are collectively known as the format of the Pfaffian function, and give a useful measure of its complexity. == Examples == The most trivial examples of Pfaffian functions are the polynomial functions. Such a function will be a polynomial in a Pfaffian chain of order r = 0, that is the chain with no functions. Such a function will have α = 0 and β equal to the degree of the polynomial. Perhaps the simplest nontrivial Pfaffian function is f(x) = ex. This is Pfaffian with order r = 1 and α = β = 1 due to the differential equation f′ = f. Recursively, one may define f1(x) = exp(x) and fm+1(x) = exp(fm(x)) for 1 ≤ m < r. Then fm′ = f1f2···fm. So this is a Pfaffian chain of order r and degree α = r. All of the algebraic functions are Pfaffian on suitable domains, as are the hyperbolic functions. The trigonometric functions on bounded intervals are Pfaffian, but they must be formed indirectly. For example, the function cos(x) is a polynomial in the Pfaffian chain tan(x/2), cos2(x/2) on the interval (−π, π). In fact all the elementary functions and Liouvillian functions are Pfaffian. == In model theory == Consider the structure R = (R, +, −, ·, <, 0, 1), the ordered field of real numbers. 
In the 1960s Andrei Gabrielov proved that the structure obtained by starting with R and adding a function symbol for every analytic function restricted to the unit box [0, 1]m is model complete. That is, any set definable in this structure Ran was just the projection of some higher-dimensional set defined by identities and inequalities involving these restricted analytic functions. In the 1990s, Alex Wilkie showed that one has the same result if instead of adding every restricted analytic function, one just adds the unrestricted exponential function to R to get the ordered real field with exponentiation, Rexp, a result known as Wilkie's theorem. Wilkie also tackled the question of which finite sets of analytic functions could be added to R to get a model-completeness result. It turned out that adding any Pfaffian chain restricted to the box [0, 1]m would give the same result. In particular one may add all Pfaffian functions to R to get the structure RPfaff as a variant of Gabrielov's result. The result on exponentiation is not a special case of this result (even though exp is a Pfaffian chain by itself), as it applies to the unrestricted exponential function. This result of Wilkie's proved that the structure RPfaff is an o-minimal structure. == Noetherian functions == The equations above that define a Pfaffian chain are said to satisfy a triangular condition, since the derivative of each successive function in the chain is a polynomial in one extra variable. Thus if they are written out in turn a triangular shape appears: f 1 ′ = P 1 ( x , f 1 ) f 2 ′ = P 2 ( x , f 1 , f 2 ) f 3 ′ = P 3 ( x , f 1 , f 2 , f 3 ) , {\displaystyle {\begin{aligned}f_{1}^{\prime }&=P_{1}(x,f_{1})\\f_{2}^{\prime }&=P_{2}(x,f_{1},f_{2})\\f_{3}^{\prime }&=P_{3}(x,f_{1},f_{2},f_{3}),\end{aligned}}} and so on. 
If this triangularity condition is relaxed so that the derivative of each function in the chain is a polynomial in all the other functions in the chain, then the chain of functions is known as a Noetherian chain, and a function constructed as a polynomial in this chain is called a Noetherian function. So, for example, a Noetherian chain of order three is composed of three functions f1, f2, f3, satisfying the equations f 1 ′ = P 1 ( x , f 1 , f 2 , f 3 ) f 2 ′ = P 2 ( x , f 1 , f 2 , f 3 ) f 3 ′ = P 3 ( x , f 1 , f 2 , f 3 ) . {\displaystyle {\begin{aligned}f_{1}^{\prime }&=P_{1}(x,f_{1},f_{2},f_{3})\\f_{2}^{\prime }&=P_{2}(x,f_{1},f_{2},f_{3})\\f_{3}^{\prime }&=P_{3}(x,f_{1},f_{2},f_{3}).\end{aligned}}} The name stems from the fact that the ring generated by the functions in such a chain is Noetherian. Any Pfaffian chain is also a Noetherian chain (the extra variables in each polynomial are simply redundant in this case), but not every Noetherian chain is Pfaffian; for example, if we take f1(x) = sin x and f2(x) = cos x then we have the equations f 1 ′ ( x ) = f 2 ( x ) f 2 ′ ( x ) = − f 1 ( x ) , {\displaystyle {\begin{aligned}f_{1}^{\prime }(x)&=f_{2}(x)\\f_{2}^{\prime }(x)&=-f_{1}(x),\end{aligned}}} and these hold for all real numbers x, so f1, f2 is a Noetherian chain on all of R. But there is no polynomial P(x, y) such that the derivative of sin x can be written as P(x, sin x), and so this chain is not Pfaffian. == Notes == == References == Khovanskii, A.G. (1991). Fewnomials. Translations of Mathematical Monographs. Vol. 88. Translated from the Russian by Smilka Zdravkovska. Providence, RI: American Mathematical Society. ISBN 0-8218-4547-0. Zbl 0728.12002. |
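The chain relations above are easy to spot-check numerically. The sketch below (an illustrative check, not part of the source) verifies f′ = f, g′ = −g², and h′ = h + fg for the Pfaffian chain of the basic definition, and the non-triangular relations of the sin/cos Noetherian chain, using a central finite difference:

```python
import math

def f(x): return math.exp(x)                 # f' = f
def g(x): return 1.0 / x                     # g' = -g^2
def h(x): return math.exp(x) * math.log(x)   # h' = h + f*g

def deriv(fn, x, eps=1e-6):
    """Central finite-difference approximation to fn'(x)."""
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

# Check the Pfaffian chain relations at a few sample points:
for x in (0.5, 1.0, 2.0):
    assert abs(deriv(f, x) - f(x)) < 1e-4
    assert abs(deriv(g, x) + g(x) ** 2) < 1e-4
    assert abs(deriv(h, x) - (h(x) + f(x) * g(x))) < 1e-4
    # The sin/cos pair is Noetherian but not triangular:
    # sin' = cos and cos' = -sin each involve the *other* function.
    assert abs(deriv(math.sin, x) - math.cos(x)) < 1e-4
    assert abs(deriv(math.cos, x) + math.sin(x)) < 1e-4
```

Each derivative matches a polynomial in the chain functions to within the finite-difference tolerance; for sin and cos the polynomial necessarily involves the other function, which is exactly why that chain fails the triangular condition.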
Wikipedia:Philbert Maurice d'Ocagne#0 | Philbert Maurice d'Ocagne (25 March 1862 – 23 September 1938) was a French engineer and mathematician. He founded the field of nomography, the graphic computation of algebraic equations, on charts that he called nomograms. == Biography == Philbert Maurice d'Ocagne was born in Paris on 25 March 1862. He attended high school at the Lycée Fontanges school in Paris and studied at Chaptal college. In 1877, he published his first mathematical work. In 1880, he entered the École Polytechnique. He published many articles on math. Starting in 1885, he served for six years as an engineer, supporting waterworks projects in Rochefort and Cherbourg and then worked at Seine-et-Oise at the residence of Pontoise. From 1882, he continued to publish articles on mathematics in the French Academy of Sciences and major journals, including Journal of the École Polytechnique, Bulletin de la Société Mathématique de France, Acta Mathematica, Archiv der Mathematik und Physik, and American Journal of Mathematics. He became a tutor (répétiteur) at the École Polytechnique in 1893 and then in 1894 became a professor at the École Nationale des Ponts et Chaussées. In 1891, he began publishing papers on nomography. In 1901, he was appointed deputy director of general survey of France. Ten years later, he became chief of maps and plans and precision instruments for the Ministry of Public Works. He was appointed chief engineer in 1908. In 1912 he was appointed professor of geometry at the École Polytechnique, and became Inspector General of roads and bridges in 1920. In 1893, he joined the faculty of the Polytechnic School, first as instructor of astronomy and geodesy. During WWI, his techniques for finding approximate solutions to the transcendental equations of plastic deformation allowed French gunmakers to implement autofrettage on an industrial scale and boost the output of artillery pieces. In early 1912, he became chair of geometry. 
In 1901, he became president of the Mathematical Society of France. In 1922, he was admitted to the Academy of Sciences. == Family == Originally from the province of Alençon in Normandy, his family can be traced to the 8th century. D'Ocagne's lineage came from the du Plessis family, and he used the pseudonym Philbert du Plessis in some of his scientific publications. Mortimer d'Ocagne, Maurice's father, published widely on economic and financial topics and wrote a book on French higher education: Les Grandes Ecoles de France. He also served as the drama critic for the Revue Britannique, going to the theater every night and never missing a premiere. He served as the dean of the subscribers of the Opera. He died in 1919 at the age of 98. == Awards == Leconte Prize in 1892 for his work Nomographie Dalmont Prize of the Academy of Sciences (Paris) in 1894, for his mathematical work == Selected works == "Sur l'évaluation graphique des moments et des moments d'inertie des aires planes" (1884) Calcul graphique et nomographie, Paris, Doin (1908) == References == == Sources == P. Humbert, "Maurice d’Ocagne (1862–1938)", Ciel et Terre, vol. 55, 1939, p. 108 http://adsabs.harvard.edu/full/1939C%26T....55..108H [archive] |
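D'Ocagne's parallel-scale nomograms rest on a collinearity principle: for an equation such as w = u + v, the u-, w-, and v-scales can be drawn on three parallel lines so that a straightedge laid through the u and v readings crosses the middle scale exactly at w. A minimal sketch of that principle (illustrative only, not taken from d'Ocagne's works):

```python
def nomogram_points(u, v):
    """Points on the u-, w-, and v-scales of a chart for w = u + v.
    Outer scales sit at x = 0 and x = 2; the middle (w) scale at x = 1
    is graduated at half the unit of the outer scales."""
    return [(0.0, u), (1.0, (u + v) / 2.0), (2.0, v)]

def collinear(p, q, r, tol=1e-12):
    """True if three points lie on one straight line (zero cross product)."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) < tol

pts = nomogram_points(3.0, 5.0)
assert collinear(*pts)        # the straightedge alignment holds
w = 2.0 * pts[1][1]           # half-scale graduation: height 4 reads as 8
assert w == 3.0 + 5.0
```

Because the middle scale is graduated at half the outer unit, the geometric midpoint height (u + v)/2 is read directly as u + v, so the chart "computes" the equation with a straightedge alone.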
Wikipedia:Philibert Nang#0 | Philibert Nang (born 1967) is a Gabonese mathematician known for his work in algebra (D-modules, Riemann–Hilbert correspondence). Nang won the 2011 ICTP Ramanujan Prize for his research in mathematics, and because he conducted it in Gabon the ICTP declared: "It is hoped that his example will inspire other young African mathematicians working at the highest levels while based in Africa." He was awarded the African Mathematics Millennium Science Initiative-Phillip Griffiths Prize in 2017. He obtained his Ph.D. from the Pierre and Marie Curie University in 1996 under the supervision of Louis Boutet de Monvel. Nang currently serves as president of the Gabon Mathematical Society. He has been a visiting member at the Max Planck Institute for Mathematics and at the Tata Institute of Fundamental Research. Currently he is employed as associate professor at University of Pretoria in South Africa. == Selected publications == "On the classification of regular holonomic D-modules on skew-symmetric matrices", Journal of Algebra, Volume 356, Issue 1, 2012, pp. 115–132. "D-modules associated to the determinantal singularities", Proc. Japan Acad. Ser. A Math. Sci., Volume 80, Number 5, 2004, pp. 74–78. "D-modules associated to the group of similitudes", Publ. Res. I. Math. Sci., Volume 35, Number 2, 1999, pp. 223–247. == References == |
Wikipedia:Philip Hall#0 | Philip Hall FRS (11 April 1904 – 30 December 1982), was an English mathematician. His major work was on group theory, notably on finite groups and solvable groups. == Biography == He was educated first at Christ's Hospital, where he won the Thompson Gold Medal for mathematics, and later at King's College, Cambridge. He was elected a Fellow of the Royal Society in 1951 and awarded its Sylvester Medal in 1961. He was President of the London Mathematical Society from 1955–1957, and was awarded its Berwick Prize in 1958 and De Morgan Medal in 1965. == Publications == Hall, P. (1934). "A Contribution to the Theory of Groups of Prime-Power Order". Proceedings of the London Mathematical Society. s2-36: 29–07. doi:10.1112/plms/s2-36.1.29. Hall, P.; Higman, G. (1956). "On the p-Length of p-Soluble Groups and Reduction Theorems for Burnside's Problem". Proceedings of the London Mathematical Society. s3-6: 1–42. doi:10.1112/plms/s3-6.1.1. Hall, Philip (1988), The collected works of Philip Hall, Oxford Science Publications, The Clarendon Press Oxford University Press, ISBN 978-0-19-853254-5, MR 0986732 == See also == Abstract clone Commutator collecting process Isoclinism of groups Regular p-group Three subgroups lemma Hall algebra, and Hall polynomials Hall subgroup Hall–Higman theorem Hall–Littlewood polynomial Hall's universal group Hall's marriage theorem Hall word Hall–Witt identity Irwin–Hall distribution Zappa–Szép product == References == |
Wikipedia:Philipp Furtwängler#0 | Friederich Pius Philipp Furtwängler (April 21, 1869 – May 19, 1940) was a German number theorist. == Biography == Furtwängler wrote an 1896 doctoral dissertation at the University of Göttingen on cubic forms (Zur Theorie der in Linearfaktoren zerlegbaren ganzzahligen ternären kubischen Formen), under Felix Klein. Most of his academic life, from 1912 to 1938, was spent at the University of Vienna, where he taught for example Kurt Gödel, who later said that Furtwängler's lectures on number theory were the best mathematical lectures that he ever heard; Gödel had originally intended to become a physicist but turned to mathematics partly as a result of Furtwängler's lectures. From 1916, Furtwängler became increasingly paralysed and, without notes, lectured from a wheelchair while his assistant wrote equations on the blackboard. Some of Furtwängler's doctoral students were Wolfgang Gröbner, Nikolaus Hofreiter, Henry Mann, Otto Schreier, and Olga Taussky-Todd. Through these and others, he has over 3000 academic descendants. He is now best known for his contribution to the principal ideal theorem in the form of his Beweis des Hauptidealsatzes für Klassenkörper algebraischer Zahlkörper (1929). Philipp Furtwängler was a grandson of the organ builder Philipp Furtwängler (1800-1867) and a second cousin of the conductor Wilhelm Furtwängler. == Selected publications == with Helmut Hasse and W. Jehne: Allgemeine Theorie der algebraischen Zahlen. Vol. 8. Teubner, 1953. == See also == Eisenstein reciprocity Hilbert class field Keller's conjecture Kummer–Vandiver conjecture Principalization (algebra) == References == == Sources == "Philipp Furtwängler". In: Österreichisches Biographisches Lexikon 1815–1950 (ÖBL). Vol. 1, Austrian Academy of Sciences, Vienna 1957, p. 383. Nikolaus Hofreiter (1961). "Furtwängler, Friedrich Pius Philipp". Neue Deutsche Biographie (in German). Vol. 5. Berlin: Duncker & Humblot. pp. 740–740. 
== External links == Literature by and about Philipp Furtwängler in the German National Library catalogue http://bibliothek.bbaw.de/kataloge/literaturnachweise/furtwaen/literatur.pdf (PDF file; 35 kB) Friedrich Pius Philipp Furtwängler at the MacTutor History of Mathematics archive |
Wikipedia:Philippe Di Francesco#0 | Philippe Di Francesco is a French-American mathematician, focusing in mathematical physics, physical combinatorics and integrable systems. He is senior researcher (Directeur de Recherche) at the Institute of Theoretical Physics, Saclay in France, and is currently the Morris and Gertrude Fine Distinguished Professor of Mathematics at University of Illinois. He is also author of the book 'Conformal Field Theory'. He received his PhD in 1989, under Jean-Claude Le Guillou and Jean-Bernard Zuber, at the Pierre and Marie Curie University. == References == |
Wikipedia:Philippe Le Corbeiller#0 | Philippe Emmanuel Le Corbeiller (January 11, 1891 – July 24, 1980) was a French-American electrical engineer, mathematician, physicist, and educator. After a career in France as an expert on the electronics of telecommunications, he became a professor of applied physics and general education at Harvard University. His most important scientific contributions were in the theory and applications of nonlinear systems, including self-oscillators. == Career in France == Son of author and politician Jean-Maurice Le Corbeiller and his wife Marguerite Dreux, Philippe entered the École Polytechnique in 1910, training there in engineering and the mathematical sciences. During World War I he served in the French Signal Corps, earning the croix de guerre and joining the staff of Marshal Ferdinand Foch. After the war, Le Corbeiller worked on telegraphy and radio systems. In 1926 he completed a doctorate in mathematics from the Sorbonne. His dissertation was on the arithmetic theory of Hermitian forms. Written under the supervision of Charles Émile Picard, Le Corbeiller's dissertation built upon the work of the then recently deceased Georges Humbert. From 1929 to 1939, Le Corbeiller served in the French ministry of communications (Ministère des Postes, Télégraphes et Téléphones) as a research engineer and taught at the École Supérieure d’Électricité (Supélec). From 1939 to 1941 he was technical and programming director of the French national broadcasting network (Radiodiffusion nationale). He also obtained a licence in philosophy from the Sorbonne in 1938. == Move to Harvard == Le Corbeiller and his family moved to the United States in 1941, fleeing the German occupation of France. Le Corbeiller spent the rest of World War II at Harvard University, teaching electronics to US Army and Navy personnel. 
After the war, he became a lecturer in applied physics at Harvard, and in 1949 he was promoted to professor of both applied physics and general education. Elected fellow of the American Academy of Arts and Sciences, the Acoustical Society of America, and the Econometric Society, Le Corbeiller was also a member of the American Physical Society and the American Association for the Advancement of Science. == Scientific and educational work == Le Corbeiller's research interests spanned several branches of pure and applied mathematics, as well as electromechanics, control theory, acoustics, and economics. He was a friend of Dutch physicist Balthasar van der Pol, whose work on the nonlinear theory of self-oscillating dynamical systems (see van der Pol oscillator and relaxation oscillator) Le Corbeiller extended and applied to problems in mathematics, engineering, and economics. An important contribution of Le Corbeiller's was to connect the mathematical theory of self-oscillators with the thermodynamics of engines. At Harvard, Le Corbeiller had a major influence on the work of economic theorist Richard M. Goodwin, who used concepts from nonlinear systems to describe the business cycle in macroeconomics. Le Corbeiller also cultivated an interest in the history and philosophy of science, which he combined with his enthusiasm for general and adult education. He was actively involved in the initiative of Harvard President James Bryant Conant to develop a history of science–based general science education, collaborating in that effort with other lecturers such as Edwin C. Kemble, Gerald Holton, I. Bernard Cohen, and Thomas Kuhn. == Personal life == Philippe Le Corbeiller married Dorothy Leeming, a citizen of the United States, in Paris in 1924. They had one son, Jean, who graduated from Harvard in 1948, and who worked as editor of Scientific American magazine and as professor at the Seminar and Lang Colleges of the New School for Social Research, in New York City. 
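The self-oscillators mentioned here can be illustrated with van der Pol's equation x'' − μ(1 − x²)x' + x = 0. The sketch below (illustrative only, not from the source) integrates it with a fixed-step Runge–Kutta scheme; trajectories from different starting points settle onto the same limit cycle of amplitude close to 2, and for large μ that cycle is a relaxation oscillation of slow drifts punctuated by fast jumps:

```python
def vdp_step(state, mu, dt):
    """Advance the van der Pol system (x, v) by one RK4 step."""
    def deriv(s):
        x, v = s
        return (v, mu * (1.0 - x * x) * v - x)
    def nudged(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(nudged(state, k1, dt / 2))
    k3 = deriv(nudged(state, k2, dt / 2))
    k4 = deriv(nudged(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def peak_amplitude(x0, mu=5.0, dt=0.01, steps=20000, tail=5000):
    """Integrate from (x0, 0) and return max |x| over the final steps."""
    state, xs = (x0, 0.0), []
    for _ in range(steps):
        state = vdp_step(state, mu, dt)
        xs.append(state[0])
    return max(abs(x) for x in xs[-tail:])
```

Calling `peak_amplitude(0.1)` and `peak_amplitude(4.0)` returns nearly the same value: small disturbances grow and large ones decay until the motion locks onto the limit cycle, which is the defining behaviour of a self-oscillator.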
In 1952, Philippe Le Corbeiller's mother donated to Harvard's Fogg Museum a bouillon cup and a saucer reportedly used by Marie Antoinette during her imprisonment and passed down through Madame Campan. After retiring from Harvard in 1960 Philippe Le Corbeiller taught briefly at the New School and at Smith College. Widowed in 1962, he married Pietronetta Posthuma, the widow of Balthasar van der Pol, in 1964 in New York City. The couple settled in the Netherlands in 1968. Le Corbeiller died in Wassenaar in 1980. == Selected works == Le Corbeiller, P. (1931). Les systèmes autoentretenus et les oscillations de relaxation. Conférences faites au Conservatoire National des Arts et Métiers les 6 et 7 mai 1931. Paris: Hermann et cie. Décaux, B.; Le Corbeiller, P. (1931). "Sur un système électrique auto-entreténu utilisant un tube à néon". Comptes Rendus de l'Académie des Sciences. 193 (2): 723–725. Le Corbeiller, P. (1932). "Sur l'entretien en oscillations du réseau passif le plus général". Comptes Rendus de l'Académie des Sciences. 194: 1564–1566. Le Corbeiller, P. (1932). "Le mécanisme de la production des oscillations". Annales des Postes, Télégraphes et Téléphones. 21 (1): 697–731. Reprinted in Le Corbeiller, P. (1933). "Le mécanisme de la production des oscillations". Onde Électrique. 12: 116–148. Le Corbeiller, P. (1933). "Les systèmes autoentretenus et les oscillations de relaxation". Econometrica. 11 (3): 328–332. doi:10.2307/1907044. JSTOR 1907044. Le Corbeiller, P. (1936). "The non-linear theory of the maintenance of oscillations". Journal of the Institution of Electrical Engineers. 79 (477): 361–378. doi:10.1049/jiee-1.1936.0162. Le Corbeiller, P. (1939). Électro-acoustique: oscillations et ondes harmoniques, transformateurs électro-mécaniques, transformateurs mécanico-acoustiques, génération d'oscillations acoustiques, acoustique physiologique, mesures. Paris: Étienne Chiron. Friedrichs, K. O.; Le Corbeiller, P.; Levinson, N.; Stoker, J. J. (1943). 
Non-linear Mechanics. Providence, RI: Brown University. Le Corbeiller, P. (1950). Matrix Analysis of Electric Networks. Cambridge, MA: Harvard University Press. Le Corbeiller, P. (1951). "A new pattern in science". Journal of Chemical Education. 28 (10): 553–555. Bibcode:1951JChEd..28..553L. doi:10.1021/ed028p553. Le Corbeiller, P.; Yeung, Y.-W. (1952). "Duality in Mechanics". Journal of the Acoustical Society of America. 24 (4): 643–648. Bibcode:1952ASAJ...24R.451L. doi:10.1121/1.1917501. Cohen, I. B.; Watson, F. D., eds. (1952). "Applications of Science and the Teaching of Science". General Education in Science. Cambridge, MA: Harvard University Press. pp. 133–140. Le Corbeiller, P. (1953). "Crystals and the Future of Physics". Scientific American. 188 (1): 50–56. Bibcode:1953SciAm.188a..50C. doi:10.1038/scientificamerican0153-50. Le Corbeiller, P. (1954). "The Curvature of Space". Scientific American. 191 (5): 80–86. Bibcode:1954SciAm.191e..80C. doi:10.1038/scientificamerican1154-80. Le Corbeiller, P. (1960). "Two-Stroke Oscillators". IRE Transactions on Circuit Theory. 7 (4): 387–398. doi:10.1109/TCT.1960.1086719. Le Corbeiller, P., ed. (1963). The Languages of Science: Nine Eminent Scientists Survey Modern Developments in Scientific Communication. New York: Basic Books. Le Corbeiller, P.; Lukas, A. V. (1966). Dimensional Analysis. New York: Basic Systems. == References == == External links == Mathematics Genealogy Project |
Wikipedia:Philippe Michel (economist)#0 | Philippe Michel (6 October 1937 – 22 July 2004) was a French mathematical economist. == From mathematics to mathematical economics == Philippe Michel earned a PhD in Mathematics from the University of Paris VI in 1972, and became a Professor of Mathematics in 1976 at the University of Paris I. In 1993, he joined the Faculty of Economics at the University of Aix-Marseille II at GREQAM. Philippe Michel's first scientific contributions were in the field of mathematics, with a focus on optimal control theory. As the methods developed in this field are important tools in economics, Philippe Michel progressively moved into economics. A list of his contributions to mathematics is provided in an article by Jean-Paul Penot. == Contributions to economics == Philippe Michel's contributions to economics relate to both macroeconomics and public economics. His reference framework is the growth model, viewed as a succession of different generations. His numerous contributions can be classified under three main topics. First, Philippe Michel studied the problem of choosing a social welfare function that allows the social optimum to be defined. Second, once the social optimum has been characterized, its decentralization can be achieved in various frameworks in which different types of frictions or externalities may arise from the environment, education, money, etc. Finally, applying an economic policy may raise two problems: the inconsistency of the optimal policy, and the neutrality of transfers when agents are altruistic. == Philippe Michel Prize == Philippe Michel died on 22 July 2004. Jean-Michel Grandmont published an obituary stressing that his prolific scientific output reflected his character: original and deep thinking, and intellectual honesty. Through his publications, he retains an important place in economic research.
The role he played in advising numerous young researchers over the years has led his friends to honor him through the award of a young researcher prize for the best paper on economic dynamics on the fifth anniversary of his death. == Books published == Philippe Michel published several books in French. In English: A Theory of Economic Growth, Dynamics and Policy in Overlapping Generations (with D. de la Croix), Cambridge University Press, 2002. == References == |
Wikipedia:Philippe Michel (number theorist)#0 | Philippe Gabriel Michel (born 23 January 1969) is a French mathematician who holds the chair in analytic number theory at the École Polytechnique Fédérale de Lausanne in Switzerland. == Early life, education and career == Michel was born in Lyon. He studied from 1989 to 1993 at the École normale supérieure de Cachan, and then moved to the University of Paris-Sud, where he earned a doctorate in 1995 under the supervision of Étienne Fouvry and then a habilitation in 1998. He was a professor at the University of Montpellier from 1998 to 2008, when he moved to EPFL. == Recognition == In 1999, Michel was awarded the Peccot-Vimont Prize and gave the Peccot Lecture at the Collège de France. In 2006, he was an invited speaker at the International Congress of Mathematicians. In 2011, he was elected to the Academia Europaea. In 2012, he became one of the inaugural fellows of the American Mathematical Society. == References == == External links == Home page |
Wikipedia:Philo line#0 | In geometry, the Philo line is a line segment defined from an angle and a point inside the angle as the shortest line segment through the point that has its endpoints on the two sides of the angle. Also known as the Philon line, it is named after Philo of Byzantium, a Greek writer on mechanical devices, who probably lived during the 1st or 2nd century BC. Philo used the line to double the cube; because doubling the cube cannot be done by a straightedge and compass construction, neither can the Philo line be constructed. == Geometric characterization == The defining point of a Philo line, and the base of a perpendicular from the apex of the angle to the line, are equidistant from the endpoints of the line. That is, suppose that segment D E {\displaystyle DE} is the Philo line for point P {\displaystyle P} and angle D O E {\displaystyle DOE} , and let Q {\displaystyle Q} be the base of a perpendicular line O Q {\displaystyle OQ} to D E {\displaystyle DE} . Then D P = E Q {\displaystyle DP=EQ} and D Q = E P {\displaystyle DQ=EP} . Conversely, if P {\displaystyle P} and Q {\displaystyle Q} are any two points equidistant from the ends of a line segment D E {\displaystyle DE} , and if O {\displaystyle O} is any point on the line through Q {\displaystyle Q} that is perpendicular to D E {\displaystyle DE} , then D E {\displaystyle DE} is the Philo line for angle D O E {\displaystyle DOE} and point P {\displaystyle P} .
== Algebraic construction == The line can be fixed algebraically, given the directions from O {\displaystyle O} to E {\displaystyle E} and from O {\displaystyle O} to D {\displaystyle D} and the location of P {\displaystyle P} in that infinite wedge, by the following algebra: The point O {\displaystyle O} is placed at the origin of the coordinate system, the direction from O {\displaystyle O} to E {\displaystyle E} defines the horizontal x {\displaystyle x} -coordinate, and the direction from O {\displaystyle O} to D {\displaystyle D} defines the line with the equation y = m x {\displaystyle y{=}mx} in the rectilinear coordinate system; m {\displaystyle m} is the tangent of the angle D O E {\displaystyle DOE} . Then P {\displaystyle P} has the Cartesian coordinates ( P x , P y ) {\displaystyle (P_{x},P_{y})} and the task is to find E = ( E x , 0 ) {\displaystyle E=(E_{x},0)} on the horizontal axis and D = ( D x , D y ) = ( D x , m D x ) {\displaystyle D=(D_{x},D_{y})=(D_{x},mD_{x})} on the other side of the angle. The equation of a bundle of lines with inclinations α {\displaystyle \alpha } that run through the point ( x , y ) = ( P x , P y ) {\displaystyle (x,y)=(P_{x},P_{y})} is y = α ( x − P x ) + P y . {\displaystyle y=\alpha (x-P_{x})+P_{y}.} These lines intersect the horizontal axis at α ( x − P x ) + P y = 0 {\displaystyle \alpha (x-P_{x})+P_{y}=0} , which has the solution ( E x , E y ) = ( P x − P y α , 0 ) . {\displaystyle (E_{x},E_{y})=\left(P_{x}-{\frac {P_{y}}{\alpha }},0\right).} These lines intersect the opposite side y = m x {\displaystyle y=mx} at α ( x − P x ) + P y = m x {\displaystyle \alpha (x-P_{x})+P_{y}=mx} , which has the solution ( D x , D y ) = ( α P x − P y α − m , m α P x − P y α − m ) .
{\displaystyle (D_{x},D_{y})=\left({\frac {\alpha P_{x}-P_{y}}{\alpha -m}},m{\frac {\alpha P_{x}-P_{y}}{\alpha -m}}\right).} The squared Euclidean distance between the intersections of the horizontal line and the diagonal is E D 2 = d 2 = ( E x − D x ) 2 + ( E y − D y ) 2 = m 2 ( α P x − P y ) 2 ( 1 + α 2 ) α 2 ( α − m ) 2 . {\displaystyle ED^{2}=d^{2}=(E_{x}-D_{x})^{2}+(E_{y}-D_{y})^{2}={\frac {m^{2}(\alpha P_{x}-P_{y})^{2}(1+\alpha ^{2})}{\alpha ^{2}(\alpha -m)^{2}}}.} The Philo Line is defined by the minimum of that distance at negative α {\displaystyle \alpha } . An arithmetic expression for the location of the minimum is obtained by setting the derivative ∂ d 2 / ∂ α = 0 {\displaystyle \partial d^{2}/\partial \alpha =0} , so − 2 m 2 ( P x α − P y ) [ ( m P x − P y ) α 3 + P x α 2 − 2 P y α + P y m ] α 3 ( α − m ) 3 = 0. {\displaystyle -2m^{2}{\frac {(P_{x}\alpha -P_{y})[(mP_{x}-P_{y})\alpha ^{3}+P_{x}\alpha ^{2}-2P_{y}\alpha +P_{y}m]}{\alpha ^{3}(\alpha -m)^{3}}}=0.} So calculating the root of the polynomial in the numerator, ( m P x − P y ) α 3 + P x α 2 − 2 P y α + P y m = 0 {\displaystyle (mP_{x}-P_{y})\alpha ^{3}+P_{x}\alpha ^{2}-2P_{y}\alpha +P_{y}m=0} determines the slope of the particular line in the line bundle which has the shortest length. [The global minimum at inclination α = P y / P x {\displaystyle \alpha =P_{y}/P_{x}} from the root of the other factor is not of interest; it does not define a triangle but means that the horizontal line, the diagonal and the line of the bundle all intersect at ( 0 , 0 ) {\displaystyle (0,0)} .] − α {\displaystyle -\alpha } is the tangent of the angle O E D {\displaystyle OED} . Inverting the equation above as α 1 = P y / ( P x − E x ) {\displaystyle \alpha _{1}=P_{y}/(P_{x}-E_{x})} and plugging this into the previous equation one finds that E x {\displaystyle E_{x}} is a root of the cubic polynomial m x 3 + ( 2 P y − 3 m P x ) x 2 + 3 P x ( m P x − P y ) x − ( m P x − P y ) ( P x 2 + P y 2 ) . 
{\displaystyle mx^{3}+(2P_{y}-3mP_{x})x^{2}+3P_{x}(mP_{x}-P_{y})x-(mP_{x}-P_{y})(P_{x}^{2}+P_{y}^{2}).} So solving that cubic equation finds the intersection of the Philo line with the horizontal axis. Plugging the same expression into the expression for the squared distance gives d 2 = P y 2 + x 2 − 2 x P x + P x 2 ( P y + m x − m P x ) 2 x 2 m 2 . {\displaystyle d^{2}={\frac {P_{y}^{2}+x^{2}-2xP_{x}+P_{x}^{2}}{(P_{y}+mx-mP_{x})^{2}}}x^{2}m^{2}.} === Location of Q {\displaystyle Q} === Since the line O Q {\displaystyle OQ} is orthogonal to E D {\displaystyle ED} , its slope is − 1 / α {\displaystyle -1/\alpha } , so the points on that line are y = − x / α {\displaystyle y=-x/\alpha } . The coordinates of the point Q = ( Q x , Q y ) {\displaystyle Q=(Q_{x},Q_{y})} are calculated by intersecting this line with the Philo line, y = α ( x − P x ) + P y {\displaystyle y=\alpha (x-P_{x})+P_{y}} . α ( x − P x ) + P y = − x / α {\displaystyle \alpha (x-P_{x})+P_{y}=-x/\alpha } yields Q x = ( α P x − P y ) α 1 + α 2 {\displaystyle Q_{x}={\frac {(\alpha P_{x}-P_{y})\alpha }{1+\alpha ^{2}}}} Q y = − Q x / α = P y − α P x 1 + α 2 {\displaystyle Q_{y}=-Q_{x}/\alpha ={\frac {P_{y}-\alpha P_{x}}{1+\alpha ^{2}}}} With the coordinates ( D x , D y ) {\displaystyle (D_{x},D_{y})} shown above, the squared distance from D {\displaystyle D} to Q {\displaystyle Q} is D Q 2 = ( D x − Q x ) 2 + ( D y − Q y ) 2 = ( α P x − P y ) 2 ( 1 + α m ) 2 ( 1 + α 2 ) ( α − m ) 2 {\displaystyle DQ^{2}=(D_{x}-Q_{x})^{2}+(D_{y}-Q_{y})^{2}={\frac {(\alpha P_{x}-P_{y})^{2}(1+\alpha m)^{2}}{(1+\alpha ^{2})(\alpha -m)^{2}}}} . The squared distance from E {\displaystyle E} to P {\displaystyle P} is E P 2 ≡ ( E x − P x ) 2 + ( E y − P y ) 2 = P y 2 ( 1 + α 2 ) α 2 {\displaystyle EP^{2}\equiv (E_{x}-P_{x})^{2}+(E_{y}-P_{y})^{2}={\frac {P_{y}^{2}(1+\alpha ^{2})}{\alpha ^{2}}}} .
The difference of these two expressions is D Q 2 − E P 2 = [ ( P x m + P y ) α 3 + ( P x − 2 P y m ) α 2 − P y m ] [ ( P x m − P y ) α 3 + P x α 2 − 2 P y α + P y m ] α 2 ( 1 + α 2 ) ( α − m ) 2 {\displaystyle DQ^{2}-EP^{2}={\frac {[(P_{x}m+P_{y})\alpha ^{3}+(P_{x}-2P_{y}m)\alpha ^{2}-P_{y}m][(P_{x}m-P_{y})\alpha ^{3}+P_{x}\alpha ^{2}-2P_{y}\alpha +P_{y}m]}{\alpha ^{2}(1+\alpha ^{2})(\alpha -m)^{2}}}} . Given the cubic equation for α {\displaystyle \alpha } above, whose left-hand side is one of the two cubic polynomials in the numerator, this difference is zero. This is the algebraic proof that the minimization of D E {\displaystyle DE} leads to D Q = E P {\displaystyle DQ=EP} . ==== Special case: right angle ==== The equation of a bundle of lines with inclination α {\displaystyle \alpha } that run through the point ( x , y ) = ( P x , P y ) {\displaystyle (x,y)=(P_{x},P_{y})} , P x , P y > 0 {\displaystyle P_{x},P_{y}>0} , has an intersection with the x {\displaystyle x} -axis given above. If D O E {\displaystyle DOE} is a right angle, the limit m → ∞ {\displaystyle m\to \infty } of the previous section results in the following special case: These lines intersect the y {\displaystyle y} -axis at α ( − P x ) + P y {\displaystyle \alpha (-P_{x})+P_{y}} , which has the solution ( D x , D y ) = ( 0 , P y − α P x ) . {\displaystyle (D_{x},D_{y})=(0,P_{y}-\alpha P_{x}).} The squared Euclidean distance between the intersections with the horizontal and vertical lines is d 2 = ( E x − D x ) 2 + ( E y − D y ) 2 = ( α P x − P y ) 2 ( 1 + α 2 ) α 2 . {\displaystyle d^{2}=(E_{x}-D_{x})^{2}+(E_{y}-D_{y})^{2}={\frac {(\alpha P_{x}-P_{y})^{2}(1+\alpha ^{2})}{\alpha ^{2}}}.} The Philo line is defined by the minimum of that curve (at negative α {\displaystyle \alpha } ).
An arithmetic expression for the location of the minimum is obtained by setting the derivative ∂ d 2 / ∂ α = 0 {\displaystyle \partial d^{2}/\partial \alpha =0} , so 2 ( P x α − P y ) ( P x α 3 + P y ) α 3 = 0 , {\displaystyle 2{\frac {(P_{x}\alpha -P_{y})(P_{x}\alpha ^{3}+P_{y})}{\alpha ^{3}}}=0,} equivalent to α = − P y / P x 3 . {\displaystyle \alpha =-{\sqrt[{3}]{P_{y}/P_{x}}}.} Therefore d = P y − α P x | α | 1 + α 2 = P x [ 1 + ( P y / P x ) 2 / 3 ] 3 / 2 . {\displaystyle d={\frac {P_{y}-\alpha P_{x}}{|\alpha |}}{\sqrt {1+\alpha ^{2}}}=P_{x}[1+(P_{y}/P_{x})^{2/3}]^{3/2}.} Alternatively, inverting the previous equations as α 1 = P y / ( P x − E x ) {\displaystyle \alpha _{1}=P_{y}/(P_{x}-E_{x})} and plugging this into another equation above, one finds E x = P x + P y P x / P y 3 . {\displaystyle E_{x}=P_{x}+P_{y}{\sqrt[{3}]{P_{x}/P_{y}}}.} == Doubling the cube == The Philo line can be used to double the cube, that is, to construct a geometric representation of the cube root of two, and this was Philo's purpose in defining this line. Specifically, let P Q R S {\displaystyle PQRS} be a rectangle whose aspect ratio P Q : Q R {\displaystyle PQ:QR} is 1 : 2 {\displaystyle 1:2} , as in the figure. Let T U {\displaystyle TU} be the Philo line of point P {\displaystyle P} with respect to right angle Q R S {\displaystyle QRS} . Define point V {\displaystyle V} to be the point of intersection of line T U {\displaystyle TU} and of the circle through points P Q R S {\displaystyle PQRS} . Because triangle R V P {\displaystyle RVP} is inscribed in the circle with R P {\displaystyle RP} as diameter, it is a right triangle, and V {\displaystyle V} is the base of a perpendicular from the apex of the angle to the Philo line. Let W {\displaystyle W} be the point where line Q R {\displaystyle QR} crosses a perpendicular line through V {\displaystyle V} .
Then the equalities of segments R S = P Q {\displaystyle RS=PQ} , R W = Q U {\displaystyle RW=QU} , and W U = R Q {\displaystyle WU=RQ} follow from the characteristic property of the Philo line. The similarity of the right triangles P Q U {\displaystyle PQU} , R W V {\displaystyle RWV} , and V W U {\displaystyle VWU} follow by perpendicular bisection of right triangles. Combining these equalities and similarities gives the equality of proportions R S : R W = P Q : Q U = R W : W V = W V : W U = W V : R Q {\displaystyle RS:RW=PQ:QU=RW:WV=WV:WU=WV:RQ} or more concisely R S : R W = R W : W V = W V : R Q {\displaystyle RS:RW=RW:WV=WV:RQ} . Since the first and last terms of these three equal proportions are in the ratio 1 : 2 {\displaystyle 1:2} , the proportions themselves must all be 1 : 2 3 {\displaystyle 1:{\sqrt[{3}]{2}}} , the proportion that is required to double the cube. Since doubling the cube is impossible with a straightedge and compass construction, it is similarly impossible to construct the Philo line with these tools. == Minimizing the area == Given the point P {\displaystyle P} and the angle D O E {\displaystyle DOE} , a variant of the problem may minimize the area of the triangle O E D {\displaystyle OED} . With the expressions for ( E x , E y ) {\displaystyle (E_{x},E_{y})} and ( D x , D y ) {\displaystyle (D_{x},D_{y})} given above, the area is half the product of height and base length, A = D y E x / 2 = m ( α P x − P y ) 2 2 α ( α − m ) {\displaystyle A=D_{y}E_{x}/2={\frac {m(\alpha P_{x}-P_{y})^{2}}{2\alpha (\alpha -m)}}} . Finding the slope α {\displaystyle \alpha } that minimizes the area means to set ∂ A / ∂ α = 0 {\displaystyle \partial A/\partial \alpha =0} , − m ( α P x − P y ) [ ( m P x − 2 P y ) α + P y m ] 2 α 2 ( α − m ) 2 = 0 {\displaystyle -{\frac {m(\alpha P_{x}-P_{y})[(mP_{x}-2P_{y})\alpha +P_{y}m]}{2\alpha ^{2}(\alpha -m)^{2}}}=0} . 
Again discarding the root α = P y / P x {\displaystyle \alpha =P_{y}/P_{x}} which does not define a triangle, the slope is in that case α = − m P y m P x − 2 P y {\displaystyle \alpha =-{\frac {mP_{y}}{mP_{x}-2P_{y}}}} and the minimum area A = 2 P y ( m P x − P y ) m {\displaystyle A={\frac {2P_{y}(mP_{x}-P_{y})}{m}}} . == References == == Further reading == == External links == Weisstein, Eric W. "Philo Line". MathWorld. |
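The closed-form results above lend themselves to a quick numerical spot check. The following sketch is illustrative only: the point P = (1, 1), the side slope m = 2, the scan range, and all helper names are arbitrary choices of this example, not part of any source. It scans slopes of chords through P and confirms that the minimizing slope approximately satisfies the cubic derived above, and that the right-angle closed forms are mutually consistent:

```python
import math

def chord_length(alpha, px, py, m):
    # Chord through P = (px, py) with slope alpha, from its x-axis
    # intercept E = (px - py/alpha, 0) to its intersection with y = m*x
    # at D_x = (alpha*px - py)/(alpha - m), per the formulas in the text.
    ex = px - py / alpha
    dx = (alpha * px - py) / (alpha - m)
    return math.hypot(ex - dx, 0.0 - m * dx)

px, py, m = 1.0, 1.0, 2.0

# Scan negative slopes and keep the one giving the shortest chord.
alphas = [-3.0 + 1e-4 * k for k in range(1, 29000)]   # covers (-3, -0.1)
a_min = min(alphas, key=lambda a: chord_length(a, px, py, m))

# The minimizer should (approximately) satisfy the cubic from the text:
# (m*px - py)*a^3 + px*a^2 - 2*py*a + py*m = 0.
residual = (m * px - py) * a_min**3 + px * a_min**2 - 2 * py * a_min + py * m
assert abs(residual) < 1e-2

# Right-angle special case (m -> infinity): slope and length in closed form.
a_star = -(py / px) ** (1 / 3)
d_star = px * (1 + (py / px) ** (2 / 3)) ** 1.5
d_at_star = math.hypot(px - py / a_star, py - a_star * px)
assert abs(d_at_star - d_star) < 1e-9
```

The brute-force scan is only a sanity check, not a construction; as noted above, the Philo line itself cannot be constructed with straightedge and compass.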
Wikipedia:Philonides of Laodicea#0 | Philonides (Ancient Greek: Φιλωνίδης, c. 200 – c. 130 BCE) of Laodicea in Syria, was an Epicurean philosopher and mathematician who lived in the Seleucid court during the reigns of Antiochus IV Epiphanes and Demetrius I Soter. He is known principally from a Life of Philonides, which was discovered among the charred papyrus scrolls at the Villa of the Papyri at Herculaneum. Philonides was born into a family with good connections with the Seleucid court. He is said to have been taught by Eudemus and Dionysodorus the mathematician. Philonides attempted to convert Antiochus IV Epiphanes to Epicureanism, and later instructed his nephew, Demetrius I Soter, in philosophy. Philonides was highly honoured in the court, and he is also known from various stone inscriptions. He was renowned as a mathematician, and is mentioned by Apollonius of Perga in the preface to the second book of his Conics. Philonides was a zealous collector of the works of Epicurus and his colleagues, and is said to have published over 100 treatises, probably compilations of the works he collected. == Notes == |
Wikipedia:Philosophy of Mathematics Education Journal#0 | The Philosophy of Mathematics Education Journal is a peer-reviewed open-access academic journal published and edited by Paul Ernest (University of Exeter). It publishes articles relevant to the philosophy of mathematics education, a subfield of mathematics education that often draws in issues from the philosophy of mathematics. The journal includes articles by internationally recognized and beginning researchers, graduate student assignments, theses, and other pertinent resources. The journal aims to foster awareness of philosophical aspects of mathematics education and mathematics, understood broadly to include most kinds of theoretical reflection and research; to freely disseminate new thinking and to encourage informal communication, dialogue and international co-operation between teachers, scholars and researchers in mathematics, philosophy and education. Recent authors have included: Brian Greer, David W. Jardine, David W. Stinson, Christopher H. Dubbs, Kathleen Nolan, Margaret Walshaw, Nicolas Balacheff, Ole Skovsmose, Paul Ernest, Tony Brown, Roberto Baldino and Tânia Cabral, Roy Wagner, Sal Restivo, Steven Khan, Ubiratan D’Ambrosio, Yasmine Abtahi. Special issues of the journal have focussed on Self-Based Methodology (issue no. 40, 2023) Dedicated to Ubi D'Ambrosio (issue no. 37, 2021) Mathematics Education and the Living World: Responses to Ecological Crisis (issue no. 32, 2017) Critical Mathematics Education (issue no. 25, Oct. 2010) Mathematics and Art (issue no. 29, Sept, 2009) Social justice issues in mathematics education, part 2 (issue no. 21, 2007) Social justice issues in mathematics education, part 1 (issue no. 20, 2007) Semiotics of mathematics education (issue no. 10, 1997) == See also == List of scientific journals in mathematics education == External links == Official website |
Wikipedia:Physics applications of asymptotically safe gravity#0 | The asymptotic safety approach to quantum gravity provides a nonperturbative notion of renormalization in order to find a consistent and predictive quantum field theory of the gravitational interaction and spacetime geometry. It is based upon a nontrivial fixed point of the corresponding renormalization group (RG) flow such that the running coupling constants approach this fixed point in the ultraviolet (UV) limit. This suffices to avoid divergences in physical observables. Moreover, it has predictive power: generically, an arbitrary starting configuration of coupling constants given at some RG scale does not run into the fixed point for increasing scale, but a subset of configurations might have the desired UV properties. For this reason it is possible that, assuming a particular set of couplings has been measured in an experiment, the requirement of asymptotic safety fixes all remaining couplings in such a way that the UV fixed point is approached. Asymptotic safety, if realized in Nature, has far-reaching consequences in all areas where quantum effects of gravity are to be expected. Their exploration, however, is still in its infancy. By now there are some phenomenological studies concerning the implications of asymptotic safety in particle physics, astrophysics and cosmology, for instance. == Standard Model == === Mass of the Higgs boson === The Standard Model in combination with asymptotic safety might be valid up to arbitrarily high energies. Based on the assumption that this is indeed correct, it is possible to make a statement about the Higgs boson mass. The first concrete results were obtained by Mikhail Shaposhnikov and Christof Wetterich in 2010.
Depending on the sign of the gravity induced anomalous dimension A λ {\displaystyle A_{\lambda }} there are two possibilities: For A λ < 0 {\displaystyle A_{\lambda }<0} the Higgs mass m H {\displaystyle m_{\text{H}}} is restricted to the window 126 GeV < m H < 174 GeV {\displaystyle 126\,{\text{GeV}}<m_{\text{H}}<174\,{\text{GeV}}} . If, on the other hand, A λ > 0 {\displaystyle A_{\lambda }>0} which is the favored possibility, m H {\displaystyle m_{\text{H}}} must take the value m H = 126 GeV , {\displaystyle m_{\text{H}}=126\,{\text{GeV}},} with an uncertainty of a few GeV only. In this spirit one can consider m H {\displaystyle m_{\text{H}}} a prediction of asymptotic safety. The result is in surprisingly good agreement with the latest experimental data measured at CERN in 2013 by the ATLAS and CMS collaborations, where a value of m H = 125.10 ± 0.14 GeV {\displaystyle m_{\text{H}}=125.10\ \pm 0.14\,{\text{GeV}}} has been determined. === Fine structure constant === By taking into account the gravitational correction to the running of the fine structure constant α {\displaystyle \alpha } of quantum electrodynamics, Ulrich Harst and Martin Reuter were able to study the impacts of asymptotic safety on the infrared (renormalized) value of α {\displaystyle \alpha } . They found two fixed points suitable for the asymptotic safety construction both of which imply a well-behaved UV limit, without running into a Landau pole type singularity. The first one is characterized by a vanishing α {\displaystyle \alpha } , and the infrared value α IR {\displaystyle \alpha _{\text{IR}}} is a free parameter. In the second case, however, the fixed point value of α {\displaystyle \alpha } is non-zero, and its infrared value is a computable prediction of the theory. 
In a more recent study, Nicolai Christiansen and Astrid Eichhorn showed that quantum fluctuations of gravity generically generate self-interactions for gauge theories, which have to be included in a discussion of a potential ultraviolet completion. Depending on the gravitational and gauge parameters, they conclude that the fine structure constant α {\displaystyle \alpha } might be asymptotically free and not run into a Landau pole, while the induced coupling for the gauge self-interaction is irrelevant and thus its value can be predicted. This is an explicit example where asymptotic safety solves a problem of the Standard Model, the triviality of the U(1) sector, without introducing new free parameters. == Astrophysics and cosmology == Phenomenological consequences of asymptotic safety can also be expected for astrophysics and cosmology. Alfio Bonanno and Reuter investigated the horizon structure of "renormalization group improved" black holes and computed quantum gravity corrections to the Hawking temperature and the corresponding thermodynamical entropy. By means of an RG improvement of the Einstein–Hilbert action, Reuter and Holger Weyer obtained a modified version of the Einstein equations, which in turn results in a modification of the Newtonian limit, providing a possible explanation for the observed flat galaxy rotation curves without having to postulate the presence of dark matter. As for cosmology, Bonanno and Reuter argued that asymptotic safety modifies the very early Universe, possibly leading to a resolution of the horizon and flatness problems of standard cosmology. Furthermore, asymptotic safety provides the possibility of inflation without the need for an inflaton field (inflation being driven instead by the cosmological constant). It was reasoned that the scale invariance related to the non-Gaussian fixed point underlying asymptotic safety is responsible for the near scale invariance of the primordial density perturbations.
Using different methods, asymptotically safe inflation was analyzed further by Weinberg. == See also == Asymptotic safety in quantum gravity Quantum gravity UV fixed point == References == |
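The fixed-point structure described above can be illustrated with a deliberately simple toy model. The sketch below is purely pedagogical and is not drawn from any of the studies cited here: it integrates a single running coupling g with an assumed beta function β(g) = g(2 − g), whose nontrivial fixed point g* = 2 attracts trajectories as the RG "time" t = ln k grows, mimicking the qualitative UV behavior that asymptotic safety requires.

```python
def rg_flow(g0, t_end=10.0, dt=1e-3):
    """Integrate dg/dt = beta(g) = g*(2 - g) with a classical RK4 step.

    g = 0 is the Gaussian fixed point; g* = 2 is a nontrivial,
    UV-attractive fixed point (toy model, for illustration only).
    """
    def beta(g):
        return g * (2.0 - g)

    g, t = g0, 0.0
    while t < t_end:
        k1 = beta(g)
        k2 = beta(g + 0.5 * dt * k1)
        k3 = beta(g + 0.5 * dt * k2)
        k4 = beta(g + dt * k3)
        g += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return g

# Different starting couplings all flow to the UV fixed point g* = 2,
# while a coupling starting exactly at the Gaussian fixed point stays there.
for g0 in (0.1, 1.0, 3.5):
    assert abs(rg_flow(g0) - 2.0) < 1e-6
assert rg_flow(0.0) == 0.0
```

In the real setting the flow lives in a high-dimensional theory space and only a critical surface of initial conditions reaches the fixed point; this one-dimensional caricature shows only the attraction mechanism.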
Wikipedia:Picard–Lindelöf theorem#0 | In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem. The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy. == Theorem == Let D ⊆ R × R n {\displaystyle D\subseteq \mathbb {R} \times \mathbb {R} ^{n}} be a closed rectangle with ( t 0 , y 0 ) ∈ int D {\displaystyle (t_{0},y_{0})\in \operatorname {int} D} , the interior of D {\displaystyle D} . Let f : D → R n {\displaystyle f:D\to \mathbb {R} ^{n}} be a function that is continuous in t {\displaystyle t} and Lipschitz continuous in y {\displaystyle y} (with Lipschitz constant independent from t {\displaystyle t} ). Then there exists some ε > 0 {\displaystyle \varepsilon >0} such that the initial value problem y ′ ( t ) = f ( t , y ( t ) ) , y ( t 0 ) = y 0 {\displaystyle y'(t)=f(t,y(t)),\qquad y(t_{0})=y_{0}} has a unique solution y ( t ) {\displaystyle y(t)} on the interval [ t 0 − ε , t 0 + ε ] {\displaystyle [t_{0}-\varepsilon ,t_{0}+\varepsilon ]} . == Proof sketch == A standard proof relies on transforming the differential equation into an integral equation, then applying the Banach fixed-point theorem to prove the existence and uniqueness of solutions. Integrating both sides of the differential equation y ′ ( t ) = f ( t , y ( t ) ) {\textstyle y'(t)=f(t,y(t))} shows that any solution to the differential equation must also satisfy the integral equation y ( t ) − y ( t 0 ) = ∫ t 0 t f ( s , y ( s ) ) d s . 
{\displaystyle y(t)-y(t_{0})=\int _{t_{0}}^{t}f(s,y(s))\,ds.} Given the hypotheses that f {\displaystyle f} is continuous in t {\displaystyle t} and Lipschitz continuous in y {\displaystyle y} , this integral operator is a contraction and so the Banach fixed-point theorem proves that a solution can be obtained by fixed-point iteration of successive approximations. In this context, this fixed-point iteration method is known as Picard iteration. Set φ 0 ( t ) = y 0 {\displaystyle \varphi _{0}(t)=y_{0}} and φ k + 1 ( t ) = y 0 + ∫ t 0 t f ( s , φ k ( s ) ) d s . {\displaystyle \varphi _{k+1}(t)=y_{0}+\int _{t_{0}}^{t}f(s,\varphi _{k}(s))\,ds.} It follows from the Banach fixed-point theorem that the sequence of "Picard iterates" φ k {\textstyle \varphi _{k}} is convergent and that its limit is a solution to the original initial value problem. Since the Banach fixed-point theorem states that the fixed-point is unique, the solution found through this iteration is the unique solution to the differential equation given an initial value. == Example of Picard iteration == Let y ( t ) = tan ( t ) , {\displaystyle y(t)=\tan(t),} the solution to the equation y ′ ( t ) = 1 + y ( t ) 2 {\displaystyle y'(t)=1+y(t)^{2}} with initial condition y ( t 0 ) = y 0 = 0 , t 0 = 0. 
{\displaystyle y(t_{0})=y_{0}=0,t_{0}=0.} Starting with φ 0 ( t ) = 0 , {\displaystyle \varphi _{0}(t)=0,} we iterate φ k + 1 ( t ) = ∫ 0 t ( 1 + ( φ k ( s ) ) 2 ) d s {\displaystyle \varphi _{k+1}(t)=\int _{0}^{t}(1+(\varphi _{k}(s))^{2})\,ds} so that φ n ( t ) → y ( t ) {\displaystyle \varphi _{n}(t)\to y(t)} : φ 1 ( t ) = ∫ 0 t ( 1 + 0 2 ) d s = t {\displaystyle \varphi _{1}(t)=\int _{0}^{t}(1+0^{2})\,ds=t} φ 2 ( t ) = ∫ 0 t ( 1 + s 2 ) d s = t + t 3 3 {\displaystyle \varphi _{2}(t)=\int _{0}^{t}(1+s^{2})\,ds=t+{\frac {t^{3}}{3}}} φ 3 ( t ) = ∫ 0 t ( 1 + ( s + s 3 3 ) 2 ) d s = t + t 3 3 + 2 t 5 15 + t 7 63 {\displaystyle \varphi _{3}(t)=\int _{0}^{t}\left(1+\left(s+{\frac {s^{3}}{3}}\right)^{2}\right)\,ds=t+{\frac {t^{3}}{3}}+{\frac {2t^{5}}{15}}+{\frac {t^{7}}{63}}} and so on. Evidently, the functions are computing the Taylor series expansion of our known solution y = tan ( t ) . {\displaystyle y=\tan(t).} Since tan {\displaystyle \tan } has poles at ± π 2 , {\displaystyle \pm {\tfrac {\pi }{2}},} it is not Lipschitz continuous in the neighborhood of those points, and the iteration converges toward a local solution for | t | < π 2 {\displaystyle |t|<{\tfrac {\pi }{2}}} only that is not valid over all of R {\displaystyle \mathbb {R} } . == Example of non-uniqueness == To understand uniqueness of solutions, contrast the following two examples of first order ordinary differential equations for y(t). Both differential equations will possess a single stationary point y = 0. First, the homogeneous linear equation dy/dt = ay ( a < 0 {\displaystyle a<0} ), a stationary solution is y(t) = 0, which is obtained for the initial condition y(0) = 0. Beginning with any other initial condition y(0) = y0 ≠ 0, the solution y ( t ) = y 0 e a t {\displaystyle y(t)=y_{0}e^{at}} tends toward the stationary point y = 0, but it only approaches it in the limit of infinite time, so the uniqueness of solutions over all finite times is guaranteed. 
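The Picard iterates φ₁, φ₂, φ₃ for y′ = 1 + y² computed in the example above can be generated mechanically with exact rational arithmetic. The following sketch (an illustration with helper names of our own choosing, not part of the article) represents each iterate as a list of polynomial coefficients in t:

```python
from fractions import Fraction

def poly_mul(p, q):
    # Product of two coefficient lists (index = power of t).
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def integrate_from_zero(p):
    # Definite integral from 0 to t:  c_k t^k  ->  c_k/(k+1) t^(k+1).
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def picard_step(phi):
    # phi_{k+1}(t) = integral_0^t (1 + phi_k(s)^2) ds  for y' = 1 + y^2.
    integrand = poly_mul(phi, phi)
    integrand[0] += 1
    return integrate_from_zero(integrand)

phi = [Fraction(0)]        # phi_0 = 0
for _ in range(3):
    phi = picard_step(phi)

# phi_3(t) = t + t^3/3 + 2 t^5/15 + t^7/63, matching the text exactly.
assert phi[1] == 1 and phi[3] == Fraction(1, 3)
assert phi[5] == Fraction(2, 15) and phi[7] == Fraction(1, 63)
```

Further steps reproduce more and more terms of the Taylor series of tan(t), consistent with the convergence statement above.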
By contrast for an equation in which the stationary point can be reached after a finite time, uniqueness of solutions does not hold. Consider the homogeneous nonlinear equation dy/dt = ay 2/3, which has at least these two solutions corresponding to the initial condition y(0) = 0: y(t) = 0 and y ( t ) = { ( a t 3 ) 3 t < 0 0 t ≥ 0 , {\displaystyle y(t)={\begin{cases}\left({\tfrac {at}{3}}\right)^{3}&t<0\\\ \ \ \ 0&t\geq 0,\end{cases}}} so the previous state of the system is not uniquely determined by its state at or after t = 0. The uniqueness theorem does not apply because the derivative of the function f (y) = y 2/3 is not bounded in the neighborhood of y = 0 and therefore it is not Lipschitz continuous, violating the hypothesis of the theorem. == Detailed proof == Let L {\displaystyle L} be the Lipschitz constant of ( t , y ) ↦ f ( t , y ) {\displaystyle (t,y)\mapsto f(t,y)} with respect to y . {\displaystyle y.} The function f {\displaystyle f} is continuous as a function of ( t , y ) {\displaystyle (t,y)} . In particular, since t ↦ f ( t , y ) {\displaystyle t\mapsto f(t,y)} is a continuous function of t {\displaystyle t} , we have that for any point ( t 0 , y 0 ) {\displaystyle (t_{0},y_{0})} and ϵ > 0 {\displaystyle \epsilon >0} there exist δ > 0 {\displaystyle \delta >0} such that | f ( t , y 0 ) − f ( t 0 , y 0 ) | < ϵ / 2 {\displaystyle |f(t,y_{0})-f(t_{0},y_{0})|<\epsilon /2} when | t − t 0 | < δ {\displaystyle |t-t_{0}|<\delta } . We have | f ( t , y ) − f ( t 0 , y 0 ) | ≤ | f ( t , y ) − f ( t , y 0 ) | + | f ( t , y 0 ) − f ( t 0 , y 0 ) | < ϵ , {\displaystyle |f(t,y)-f(t_{0},y_{0})|\leq |f(t,y)-f(t,y_{0})|+|f(t,y_{0})-f(t_{0},y_{0})|<\epsilon ,} provided | t − t 0 | < δ {\displaystyle |t-t_{0}|<\delta } and | y − y 0 | < ϵ / 2 L {\displaystyle |y-y_{0}|<\epsilon /2L} , which shows that f {\displaystyle f} is continuous at ( t 0 , y 0 ) {\displaystyle (t_{0},y_{0})} . 
Let a := 1 / 2 L {\displaystyle a:=1/2L} and take any b > 0 {\displaystyle b>0} such that C a , b = I a ( t 0 ) × B b ( y 0 ) {\displaystyle C_{a,b}=I_{a}(t_{0})\times B_{b}(y_{0})} is a subset of D , {\displaystyle D,} where I a ( t 0 ) = [ t 0 − a , t 0 + a ] B b ( y 0 ) = [ y 0 − b , y 0 + b ] . {\displaystyle {\begin{aligned}I_{a}(t_{0})&=[t_{0}-a,t_{0}+a]\\B_{b}(y_{0})&=[y_{0}-b,y_{0}+b].\end{aligned}}} Such a set exists because ( t 0 , y 0 ) {\displaystyle (t_{0},y_{0})} is in the interior of D , {\displaystyle D,} by assumption. Let M = sup ( t , y ) ∈ C a , b ‖ f ( t , y ) ‖ , {\displaystyle M=\sup _{(t,y)\in C_{a,b}}\|f(t,y)\|,} which is the supremum of (the absolute values of) the slopes of the function. The function f {\displaystyle f} attains a maximum on C a , b {\displaystyle C_{a,b}} because f {\displaystyle f} is continuous and C a , b {\displaystyle C_{a,b}} is compact. For a later step in the proof, we need that a < b / M , {\displaystyle a<b/M,} so if a ≥ b / M , {\displaystyle a\geq b/M,} then change a {\displaystyle a} to a := 1 2 min { 1 / L , b / M } , {\displaystyle a:={\tfrac {1}{2}}\min\{1/L,\ b/M\},} and update I a ( t 0 ) , {\displaystyle I_{a}(t_{0}),} B b ( y 0 ) , {\displaystyle B_{b}(y_{0}),} C a , b , {\displaystyle C_{a,b},} and M {\displaystyle M} accordingly (this update will be needed at most once since M {\displaystyle M} cannot increase as a result of restricting C a , b {\displaystyle C_{a,b}} ). Consider C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} , the function space of continuous functions I a ( t 0 ) → B b ( y 0 ) . {\displaystyle I_{a}(t_{0})\to B_{b}(y_{0}).} We will proceed by applying the Banach fixed-point theorem using the metric on C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} induced by the uniform norm. 
Namely, for each continuous function $\varphi : I_a(t_0) \to B_b(y_0)$, the norm of $\varphi$ is

$$\|\varphi\|_\infty = \sup_{t \in I_a} \|\varphi(t)\|.$$

The Picard operator $\Gamma : \mathcal{C}\big(I_a(t_0), B_b(y_0)\big) \to \mathcal{C}\big(I_a(t_0), B_b(y_0)\big)$ is defined for each $\varphi \in \mathcal{C}(I_a(t_0), B_b(y_0))$ by

$$\Gamma\varphi(t) = y_0 + \int_{t_0}^{t} f(s, \varphi(s))\,ds \qquad \forall t \in I_a(t_0).$$

To apply the Banach fixed-point theorem, we must show that $\Gamma$ maps a complete non-empty metric space into itself and is a contraction mapping. We first show that $\Gamma$ takes $B_b(y_0)$ into itself in the space of continuous functions with the uniform norm. Here, $B_b(y_0)$ is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function $y_0$.
Hence we need to show that $\|\varphi - y_0\|_\infty \le b$ implies

$$\|\Gamma\varphi(t) - y_0\| = \left\| \int_{t_0}^{t} f(s, \varphi(s))\,ds \right\| \le \int_{t_0}^{t'} \|f(s, \varphi(s))\|\,ds \le \int_{t_0}^{t'} M\,ds = M|t' - t_0| \le Ma \le b,$$

where $t'$ is some number in $[t_0 - a, t_0 + a]$ where the maximum is achieved. The last inequality in the chain is true since $a < b/M$.

Now let us prove that $\Gamma$ is a contraction mapping, as required to apply the Banach fixed-point theorem. In particular, we want to show that there exists $0 \le q < 1$ such that

$$\|\Gamma\varphi_1 - \Gamma\varphi_2\|_\infty \le q\,\|\varphi_1 - \varphi_2\|_\infty$$

for all $\varphi_1, \varphi_2 \in \mathcal{C}(I_a(t_0), B_b(y_0))$. Let $q = aL$ and take any $\varphi_1, \varphi_2 \in \mathcal{C}(I_a(t_0), B_b(y_0))$. Take $t$ such that $\|\Gamma\varphi_1 - \Gamma\varphi_2\|_\infty = \|(\Gamma\varphi_1 - \Gamma\varphi_2)(t)\|$.
Then, using the definition of $\Gamma$,

$$\begin{aligned}
\|(\Gamma\varphi_1 - \Gamma\varphi_2)(t)\| &= \left\| \int_{t_0}^{t} \big( f(s, \varphi_1(s)) - f(s, \varphi_2(s)) \big)\,ds \right\| \\
&\le \int_{t_0}^{t} \left\| f(s, \varphi_1(s)) - f(s, \varphi_2(s)) \right\| ds \\
&\le L \int_{t_0}^{t} \left\| \varphi_1(s) - \varphi_2(s) \right\| ds && \text{since } f \text{ is Lipschitz continuous} \\
&\le L \int_{t_0}^{t} \left\| \varphi_1 - \varphi_2 \right\|_\infty ds \\
&\le La \left\| \varphi_1 - \varphi_2 \right\|_\infty,
\end{aligned}$$

where $t - t_0 \le a$ because $\varphi_1$ and $\varphi_2$ are both defined on $I_a(t_0)$. By definition, $q = aL$, and $a < 1/L$, so $q < 1$. Therefore, $\Gamma$ is a contraction. We have established that the Picard operator is a contraction on this Banach space with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function $\varphi \in \mathcal{C}(I_a(t_0), B_b(y_0))$ such that $\Gamma\varphi = \varphi$. Thus, $\varphi$ is the unique solution of the initial value problem, valid on the interval $I_a$.
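The fixed-point argument suggests a concrete numerical procedure: sample a candidate $\varphi$ on a grid and apply the Picard operator repeatedly. In the Python sketch below, the ODE $y' = y$, $y(0) = 1$ (whose solution is $e^t$), the grid size, and the iteration count are illustrative choices, not part of the proof:

```python
import math

def picard_step(phi, ts, y0=1.0):
    """One application of the Picard operator for f(t, y) = y,
    i.e. (Gamma phi)(t) = y0 + integral of phi from ts[0] to t,
    using trapezoidal quadrature on the grid ts."""
    out = [y0]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        out.append(out[-1] + 0.5 * dt * (phi[i - 1] + phi[i]))
    return out

n = 1000
ts = [i / n for i in range(n + 1)]   # grid on [0, 1]
phi = [1.0] * (n + 1)                # initial guess: the constant function y0

for _ in range(25):                  # iterate Gamma toward the fixed point
    phi = picard_step(phi, ts)

# After enough iterations, phi approximates the true solution exp(t) on [0, 1].
assert abs(phi[-1] - math.e) < 1e-3
```

Each iteration reproduces one more Taylor term of $e^t$, mirroring the $L^m \alpha^m / m!$ contraction estimate used in the next section.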
== Optimization of the solution's interval ==

We wish to remove the dependence of the interval $I_a$ on $L$. To this end, there is a corollary of the Banach fixed-point theorem: if an operator $T^n$ is a contraction for some $n \in \mathbb{N}$, then $T$ has a unique fixed point. Before applying this theorem to the Picard operator, recall the following lemma: for all $m \ge 1$,

$$\|\Gamma^m\varphi_1(t) - \Gamma^m\varphi_2(t)\| \le \frac{L^m |t - t_0|^m}{m!} \|\varphi_1 - \varphi_2\|.$$

Proof. Induction on $m$. For the base of the induction ($m = 1$) we have already seen this, so suppose the inequality holds for $m - 1$; then we have:

$$\begin{aligned}
\left\| \Gamma^m\varphi_1(t) - \Gamma^m\varphi_2(t) \right\| &= \left\| \Gamma\Gamma^{m-1}\varphi_1(t) - \Gamma\Gamma^{m-1}\varphi_2(t) \right\| \\
&\le \left| \int_{t_0}^{t} \left\| f\left(s, \Gamma^{m-1}\varphi_1(s)\right) - f\left(s, \Gamma^{m-1}\varphi_2(s)\right) \right\| ds \right| \\
&\le L \left| \int_{t_0}^{t} \left\| \Gamma^{m-1}\varphi_1(s) - \Gamma^{m-1}\varphi_2(s) \right\| ds \right| \\
&\le L \left| \int_{t_0}^{t} \frac{L^{m-1}|s - t_0|^{m-1}}{(m-1)!} \left\| \varphi_1 - \varphi_2 \right\| ds \right| \\
&\le \frac{L^m |t - t_0|^m}{m!} \left\| \varphi_1 - \varphi_2 \right\|.
\end{aligned}$$

By taking a supremum over $t \in [t_0 - \alpha, t_0 + \alpha]$ we see that

$$\left\| \Gamma^m\varphi_1 - \Gamma^m\varphi_2 \right\| \le \frac{L^m \alpha^m}{m!} \left\| \varphi_1 - \varphi_2 \right\|.$$

This inequality assures that for some large $m$, $\frac{L^m \alpha^m}{m!} < 1$, and hence $\Gamma^m$ will be a contraction.
So by the previous corollary $\Gamma$ will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking $\alpha = \min\{a,\ b/M\}$. In the end, this result shows that the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value.

== Other existence theorems ==

The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that $f$ is continuous in $y$, instead of Lipschitz continuous. For example, the right-hand side of the equation $dy/dt = y^{1/3}$ with initial condition $y(0) = 0$ is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions:

$$y(t) = 0, \qquad y(t) = \pm\left(\tfrac{2}{3} t\right)^{3/2}.$$

Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on $f$. Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem.

== Global existence of solution ==

The Picard–Lindelöf theorem ensures that solutions to initial value problems exist uniquely within a local interval $[t_0 - \varepsilon, t_0 + \varepsilon]$, possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of $f$ and the domain over which $f$ is defined. For instance, if $f$ is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line and all the solutions are defined over the entire $\mathbb{R}$. If $f$ is only locally Lipschitz, some solutions may not be defined for certain values of $t$, even if $f$ is smooth.
For instance, the differential equation $dy/dt = y^2$ with initial condition $y(0) = 1$ has the solution $y(t) = 1/(1-t)$, which is not defined at $t = 1$. Nevertheless, if $f$ is a differentiable function defined on a compact submanifold of $\mathbb{R}^n$ such that the prescribed derivative is tangent to the given submanifold, then the initial value problem has a unique solution for all time. More generally, in differential geometry: if $f$ is a differentiable vector field defined over a domain which is a compact smooth manifold, then all its trajectories (integral curves) exist for all time.

== See also ==
Cauchy–Kovalevskaya theorem
Complete vector fields
Frobenius theorem (differential topology)
Integrability conditions for differential systems
Newton's method
Euler method
Trapezoidal rule

== Notes ==
== References ==
== External links == |
Wikipedia:Pickover stalk#0 | Pickover stalks are certain kinds of details found empirically in the Mandelbrot set, in the study of fractal geometry. They are named after the researcher Clifford Pickover, whose "epsilon cross" method was instrumental in their discovery. An "epsilon cross" is a cross-shaped orbit trap. According to Vepstas (1997), "Pickover hit on the novel concept of looking to see how closely the orbits of interior points come to the x and y axes. In these pictures, the closer that the point approaches, the higher up the color scale, with red denoting the closest approach. The logarithm of the distance is taken to accentuate the details".

== Biomorphs ==

Biomorphs are biological-looking Pickover stalks. At the end of the 1980s, Pickover developed biological feedback organisms similar to Julia sets and the fractal Mandelbrot set. According to Pickover (1999), in summary, he "described an algorithm that can be used for the creation of diverse and complicated forms resembling invertebrate organisms. The shapes are complicated and difficult to predict before actually experimenting with the mappings." He hoped "these techniques will encourage [others] to explore further and discover new forms, by accident, that are on the edge of science and art".

Pickover developed an algorithm (which uses neither random perturbations nor natural laws) to create very complicated forms resembling invertebrate organisms. The iteration, or recursion, of mathematical transformations is used to generate biological morphologies. He called them "biomorphs". At the same time he coined "biomorph" for these patterns, the famous evolutionary biologist Richard Dawkins used the word to refer to his own set of biological shapes, which were arrived at by a very different procedure. More rigorously, Pickover's "biomorphs" encompass the class of organismic morphologies created by small changes to traditional convergence tests in the field of Julia set theory.
Pickover's biomorphs show self-similarity at different scales, a common feature of dynamical systems with feedback. Real systems, such as shorelines and mountain ranges, also show self-similarity over some scales. A 2-dimensional parametric 0L system can "look" like Pickover's biomorphs.

== Implementation ==

The example below, written in pseudocode, renders a Mandelbrot set colored using a Pickover stalk with a transformation vector and a color dividend. The transformation vector is used to offset the (x, y) position when sampling the point's distance to the horizontal and vertical axes. The color dividend is a float used to determine how thick the stalk is when it is rendered.

== References ==
== Further reading ==
Pickover, Clifford (1987). "Biomorphs: Computer Displays of Biological Forms Generated from Mathematical Feedback Loops". Computer Graphics Forum. 5 (4): 313–316. doi:10.1111/j.1467-8659.1986.tb00317.x.

== External links ==
Apeirographic Explorations: Biomorphs – a random assortment of biomorphs.
Mad Teddy's Biomorphs – detailed write-up on Pickover's algorithm, including examples and source code. |
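A minimal Python sketch of the stalk-coloring approach described in the Implementation section above; this is an illustrative reconstruction, not the article's exact pseudocode. Here `offset` stands in for the transformation vector, `dividend` for the color dividend, and the bailout radius, iteration count, and color mapping are all assumptions:

```python
def stalk_color(c, offset=(0.0, 0.0), dividend=10.0, max_iter=100, bailout=4.0):
    """Iterate z -> z^2 + c and record the minimum distance of the
    (offset) orbit points to the horizontal and vertical axes (the
    "epsilon cross"); return a color intensity clamped to [0, 1]."""
    z = 0j
    min_dist = float("inf")
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            break
        # Distance of the shifted orbit point to each axis.
        min_dist = min(min_dist,
                       abs(z.real + offset[0]),
                       abs(z.imag + offset[1]))
    if min_dist == float("inf"):
        return 1.0
    # Larger dividends make the rendered stalks thicker.
    return min(1.0, min_dist * dividend)

# Sample a point: its orbit's closest axis approach determines the shade.
v = stalk_color(complex(-0.1, 0.65))
assert 0.0 <= v <= 1.0
```

A full renderer would evaluate `stalk_color` over a pixel grid mapped into the complex plane and map the returned intensity through a palette.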
Wikipedia:Picone identity#0 | In the field of ordinary differential equations, the Picone identity, named after Mauro Picone, is a classical result about homogeneous linear second-order differential equations. Since its inception in 1910 it has been used with tremendous success in association with an almost immediate proof of the Sturm comparison theorem, a theorem whose proof took up many pages in Sturm's original memoir of 1836. It is also useful in studying the oscillation of such equations, and has been generalized to other types of differential equations and difference equations. The Picone identity is used to prove the Sturm–Picone comparison theorem.

== Picone identity ==

Suppose that $u$ and $v$ are solutions of the two homogeneous linear second-order differential equations in self-adjoint form

$$(p_1(x)u')' + q_1(x)u = 0$$

and

$$(p_2(x)v')' + q_2(x)v = 0.$$

Then, for all $x$ with $v(x) \ne 0$, the following identity holds:

$$\left( \frac{u}{v}\big(p_1 u'v - p_2 uv'\big) \right)' = (q_2 - q_1)u^2 + (p_1 - p_2)u'^2 + p_2\left(u' - v'\frac{u}{v}\right)^2.$$
=== Proof ===

$$\begin{aligned}
\left( \frac{u}{v}(p_1 u'v - p_2 uv') \right)' &= \left( u p_1 u' - p_2 v' u^2 \frac{1}{v} \right)' \\
&= u'p_1u' + u(p_1u')' - (p_2v')'u^2\frac{1}{v} - p_2v'\,2uu'\frac{1}{v} + p_2v'u^2\frac{v'}{v^2} \\
&= p_1u'^2 - 2p_2\frac{uu'v'}{v} + p_2\frac{u^2v'^2}{v^2} + u(p_1u')' - (p_2v')'\frac{u^2}{v} \\
&= p_1u'^2 - p_2u'^2 + p_2u'^2 - 2p_2u'\frac{uv'}{v} + p_2\left(\frac{uv'}{v}\right)^2 - u(q_1u) + (q_2v)\frac{u^2}{v} \\
&= (p_1 - p_2)u'^2 + p_2\left(u' - v'\frac{u}{v}\right)^2 + (q_2 - q_1)u^2.
\end{aligned}$$

== Notes ==
Picone, Mauro (1910). "Sui valori eccezionali di un parametro da cui dipende un'equazione differenziale lineare del secondo ordine". Ann. Scuola Norm. Sup. Pisa. 11: 1–141.
Swanson, Charles A. (1975). "Picone's Identity". Rendiconti di Matematica. 8 (2): 373–397.

== References == |
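The identity can be spot-checked numerically by choosing explicit solutions of the two equations. In the Python sketch below, the coefficients $p_1 = p_2 = 1$, $q_1 = 1$, $q_2 = 4$ and the solutions $u = \sin x$ (of $u'' + u = 0$) and $v = \sin 2x$ (of $v'' + 4v = 0$) are illustrative choices:

```python
import math

p1 = p2 = 1.0
q1, q2 = 1.0, 4.0
u, du = math.sin, math.cos                 # u = sin x solves u'' + q1*u = 0
v = lambda x: math.sin(2 * x)              # v = sin 2x solves v'' + q2*v = 0
dv = lambda x: 2 * math.cos(2 * x)

def F(x):
    # The quantity being differentiated on the left: (u/v)(p1 u'v - p2 u v').
    return (u(x) / v(x)) * (p1 * du(x) * v(x) - p2 * u(x) * dv(x))

x, h = 0.7, 1e-5                           # a point where v(x) != 0
lhs = (F(x + h) - F(x - h)) / (2 * h)      # central difference approximates F'(x)
rhs = ((q2 - q1) * u(x) ** 2
       + (p1 - p2) * du(x) ** 2
       + p2 * (du(x) - dv(x) * u(x) / v(x)) ** 2)
assert abs(lhs - rhs) < 1e-6
```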
Wikipedia:Piecewise function#0 | In mathematics, a piecewise function (also called a piecewise-defined function, a hybrid function, or a function defined by cases) is a function whose domain is partitioned into several intervals ("subdomains") on which the function may be defined differently. Piecewise definition is actually a way of specifying the function, rather than a characteristic of the resulting function itself, as every function whose domain contains at least two points can be rewritten as a piecewise function. The first three paragraphs of this article only deal with this first meaning of "piecewise".

Terms like piecewise linear, piecewise smooth, piecewise continuous, and others are also very common. The meaning of a function being piecewise $P$, for a property $P$, is roughly that the domain of the function can be partitioned into pieces on which the property $P$ holds, but the term is used slightly differently by different authors. Unlike the first meaning, this is a property of the function itself and not only a way to specify it. Sometimes the term is used in a more global sense involving triangulations; see Piecewise linear manifold.

== Notation and interpretation ==

Piecewise functions can be defined using the common functional notation, where the body of the function is an array of functions and associated subdomains. A semicolon or comma may follow the subfunction or subdomain columns. The "if" or "for" is rarely omitted at the start of the right column. The subdomains together must cover the whole domain; sometimes it is also required that they are pairwise disjoint, i.e. form a partition of the domain. This is enough for a function to be "defined by cases", but in order for the overall function to be "piecewise", the subdomains are typically required to be nonempty intervals (some may be degenerate intervals, i.e.
single points or unbounded intervals) and they are often not allowed to have infinitely many subdomains in any bounded interval. This means that functions with bounded domains will only have finitely many subdomains, while functions with unbounded domains can have infinitely many subdomains, as long as they are appropriately spread out. As an example, consider the piecewise definition of the absolute value function:

$$|x| = \begin{cases} -x, & \text{if } x < 0 \\ +x, & \text{if } x \ge 0. \end{cases}$$

For all values of $x$ less than zero, the first sub-function ($-x$) is used, which negates the sign of the input value, making negative numbers positive. For all values of $x$ greater than or equal to zero, the second sub-function ($x$) is used, which evaluates trivially to the input value itself. The following table documents the absolute value function at certain values of $x$:

In order to evaluate a piecewise-defined function at a given input value, the appropriate subdomain needs to be chosen in order to select the correct sub-function and produce the correct output value.

== Examples ==

A step function or piecewise constant function, composed of constant sub-functions
Piecewise linear function, composed of linear sub-functions
Broken power law, a function composed of power-law sub-functions
Spline, a function composed of polynomial sub-functions, often constrained to be smooth at the joints between pieces
B-spline
PDIFF
$$f(x) = \begin{cases} \exp\left(-\dfrac{1}{1-x^2}\right), & x \in (-1, 1) \\ 0, & \text{otherwise} \end{cases}$$ and some other common bump functions. These are infinitely differentiable, but analyticity holds only piecewise.
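In code, evaluating a piecewise definition amounts to exactly the case selection described above: test which subdomain contains the input, then apply that subdomain's sub-function. A minimal Python sketch of the absolute-value example:

```python
def piecewise_abs(x):
    # Choose the sub-function from the subdomain containing x.
    if x < 0:
        return -x   # first piece: negate on (-inf, 0)
    return x        # second piece: identity on [0, inf)

# The two pieces agree with the usual absolute value.
assert piecewise_abs(-3) == 3
assert piecewise_abs(0) == 0
assert piecewise_abs(2.5) == 2.5
```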
== Continuity and differentiability of piecewise-defined functions == A piecewise-defined function is continuous on a given interval in its domain if the following conditions are met: its sub-functions are continuous on the corresponding intervals (subdomains), there is no discontinuity at an endpoint of any subdomain within that interval. The pictured function, for example, is piecewise-continuous throughout its subdomains, but is not continuous on the entire domain, as it contains a jump discontinuity at x 0 {\displaystyle x_{0}} . The filled circle indicates that the value of the right sub-function is used in this position. For a piecewise-defined function to be differentiable on a given interval in its domain, the following conditions have to fulfilled in addition to those for continuity above: its sub-functions are differentiable on the corresponding open intervals, the one-sided derivatives exist at all intervals' endpoints, at the points where two subintervals touch, the corresponding one-sided derivatives of the two neighboring subintervals coincide. == Applications == In applied mathematical analysis, "piecewise-regular" functions have been found to be consistent with many models of the human visual system, where images are perceived at a first stage as consisting of smooth regions separated by edges (as in a cartoon); a cartoon-like function is a C2 function, smooth except for the existence of discontinuity curves. In particular, shearlets have been used as a representation system to provide sparse approximations of this model class in 2D and 3D. Piecewise defined functions are also commonly used for interpolation, such as in nearest-neighbor interpolation. == See also == Piecewise linear continuation All pages with titles beginning with Piecewise == References == |
Wikipedia:Pieri's formula#0 | In mathematics, Pieri's formula, named after Mario Pieri, describes the product of a Schubert cycle by a special Schubert cycle in the Schubert calculus, or the product of a Schur polynomial by a complete symmetric function. In terms of Schur functions $s_\lambda$ indexed by partitions $\lambda$, it states that

$$s_\mu h_r = \sum_\lambda s_\lambda$$

where $h_r$ is a complete homogeneous symmetric polynomial and the sum is over all partitions $\lambda$ obtained from $\mu$ by adding $r$ elements, no two in the same column. By applying the $\omega$ involution on the ring of symmetric functions, one obtains the dual Pieri rule for multiplying an elementary symmetric polynomial with a Schur polynomial:

$$s_\mu e_r = \sum_\lambda s_\lambda$$

The sum is now taken over all partitions $\lambda$ obtained from $\mu$ by adding $r$ elements, no two in the same row.

Pieri's formula implies Giambelli's formula. The Littlewood–Richardson rule is a generalization of Pieri's formula giving the product of any two Schur functions. Monk's formula is an analogue of Pieri's formula for flag manifolds.

== References ==
Macdonald, I. G. (1995), Symmetric functions and Hall polynomials, Oxford Mathematical Monographs (2nd ed.), The Clarendon Press, Oxford University Press, ISBN 978-0-19-853489-1, MR 1354144, archived from the original on 2012-12-11
Sottile, Frank (2001) [1994], "Schubert calculus", Encyclopedia of Mathematics, EMS Press |
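As a small worked instance of the first rule, take $\mu = (2,1)$ and $r = 1$, so a single box is added (the "no two in the same column" condition is vacuous for $r = 1$):

$$s_{(2,1)}\, h_1 = s_{(3,1)} + s_{(2,2)} + s_{(2,1,1)},$$

the three terms corresponding to adding the box to the first row, the second row, or a new third row of $(2,1)$.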
Wikipedia:Pierre Deligne#0 | Pierre René, Viscount Deligne (French: [dəliɲ]; born 3 October 1944) is a Belgian mathematician. He is best known for work on the Weil conjectures, leading to a complete proof in 1973. He is the winner of the 2013 Abel Prize, 2008 Wolf Prize, 1988 Crafoord Prize, and 1978 Fields Medal.

== Early life and education ==

Deligne was born in Etterbeek, attended school at Athénée Adolphe Max and studied at the Université libre de Bruxelles (ULB), writing a dissertation titled Théorème de Lefschetz et critères de dégénérescence de suites spectrales (Theorem of Lefschetz and criteria of degeneration of spectral sequences). He completed his doctorate at the University of Paris-Sud in Orsay in 1972 under the supervision of Alexander Grothendieck, with a thesis titled Théorie de Hodge.

== Career ==

Starting in 1965, Deligne worked with Grothendieck at the Institut des Hautes Études Scientifiques (IHÉS) near Paris, initially on the generalization within scheme theory of Zariski's main theorem. In 1968, he also worked with Jean-Pierre Serre; their work led to important results on the l-adic representations attached to modular forms, and the conjectural functional equations of L-functions. Deligne also focused on topics in Hodge theory. He introduced the concept of weights and tested them on objects in complex geometry. He also collaborated with David Mumford on a new description of the moduli spaces for curves. Their work came to be seen as an introduction to one form of the theory of algebraic stacks, and recently has been applied to questions arising from string theory. But Deligne's most famous contribution was his proof of the third and last of the Weil conjectures. This proof completed a programme initiated and largely developed by Alexander Grothendieck lasting for more than a decade. As a corollary he proved the celebrated Ramanujan–Petersson conjecture for modular forms of weight greater than one; the weight-one case was proved in his work with Serre.
Deligne's 1974 paper contains the first proof of the Weil conjectures. Deligne's contribution was to supply the estimate of the eigenvalues of the Frobenius endomorphism, considered the geometric analogue of the Riemann hypothesis. It also led to a proof of the Lefschetz hyperplane theorem and the old and new estimates of the classical exponential sums, among other applications. Deligne's 1980 paper contains a much more general version of the Riemann hypothesis. From 1970 until 1984, Deligne was a permanent member of the IHÉS staff. During this time he did much important work outside of his work on algebraic geometry. In joint work with George Lusztig, Deligne applied étale cohomology to construct representations of finite groups of Lie type; with Michael Rapoport, Deligne worked on the moduli spaces from the 'fine' arithmetic point of view, with application to modular forms. He received a Fields Medal in 1978. In 1984, Deligne moved to the Institute for Advanced Study in Princeton. === Hodge cycles === In terms of the completion of some of the underlying Grothendieck program of research, he defined absolute Hodge cycles, as a surrogate for the missing and still largely conjectural theory of motives. This idea allows one to get around the lack of knowledge of the Hodge conjecture, for some applications. The theory of mixed Hodge structures, a powerful tool in algebraic geometry that generalizes classical Hodge theory, was created by applying weight filtration, Hironaka's resolution of singularities and other methods, which he then used to prove the Weil conjectures. He reworked the Tannakian category theory in his 1990 paper for the "Grothendieck Festschrift", employing Beck's theorem – the Tannakian category concept being the categorical expression of the linearity of the theory of motives as the ultimate Weil cohomology. All this is part of the yoga of weights, uniting Hodge theory and the l-adic Galois representations. 
The Shimura variety theory is related, by the idea that such varieties should parametrize not just good (arithmetically interesting) families of Hodge structures, but actual motives. This theory is not yet a finished product, and more recent trends have used K-theory approaches.

=== Perverse sheaves ===

With Alexander Beilinson, Joseph Bernstein, and Ofer Gabber, Deligne made definitive contributions to the theory of perverse sheaves. This theory plays an important role in the recent proof of the fundamental lemma by Ngô Bảo Châu. It was also used by Deligne himself to greatly clarify the nature of the Riemann–Hilbert correspondence, which extends Hilbert's twenty-first problem to higher dimensions. Deligne's paper was preceded by Zoghman Mebkhout's 1980 thesis and by Masaki Kashiwara's work on the problem through the theory of D-modules (published in the 1980s).

=== Other works ===

In 1974 at the IHÉS, Deligne's joint paper with Phillip Griffiths, John Morgan and Dennis Sullivan on the real homotopy theory of compact Kähler manifolds was a major piece of work in complex differential geometry which settled several important questions of both classical and modern significance. The input from the Weil conjectures, Hodge theory, variations of Hodge structures, and many geometric and topological tools was critical to its investigations. His work in complex singularity theory generalized Milnor maps into an algebraic setting and extended the Picard–Lefschetz formula beyond its general format, generating a new method of research in this subject. His paper with Ken Ribet on abelian L-functions and their extensions to Hilbert modular surfaces and p-adic L-functions forms an important part of his work in arithmetic geometry.
Other important research achievements of Deligne include the notion of cohomological descent, motivic L-functions, mixed sheaves, nearby vanishing cycles, central extensions of reductive groups, the geometry and topology of braid groups, the modern axiomatic definition of Shimura varieties, and the work in collaboration with George Mostow on examples of non-arithmetic lattices and the monodromy of hypergeometric differential equations in two- and three-dimensional complex hyperbolic spaces.

== Awards ==

He was awarded the Fields Medal in 1978, the Crafoord Prize in 1988, the Balzan Prize in 2004, the Wolf Prize in 2008, and the Abel Prize in 2013, "for seminal contributions to algebraic geometry and for their transformative impact on number theory, representation theory, and related fields". He was elected a foreign member of the Académie des Sciences de Paris in 1978. In 2006 he was ennobled by the Belgian king as viscount. In 2009, Deligne was elected a foreign member of the Royal Swedish Academy of Sciences and a resident member of the American Philosophical Society. He is a member of the Norwegian Academy of Science and Letters.

== Selected publications ==

Deligne, Pierre (1974). "La conjecture de Weil: I". Publications Mathématiques de l'IHÉS. 43: 273–307. doi:10.1007/bf02684373. S2CID 123139343.
Deligne, Pierre (1980). "La conjecture de Weil : II". Publications Mathématiques de l'IHÉS. 52: 137–252. doi:10.1007/BF02684780. S2CID 189769469.
Deligne, Pierre (1990). "Catégories tannakiennes". Grothendieck Festschrift Vol II. Progress in Mathematics. 87: 111–195.
Deligne, Pierre; Griffiths, Phillip; Morgan, John; Sullivan, Dennis (1975). "Real homotopy theory of Kähler manifolds". Inventiones Mathematicae. 29 (3): 245–274. Bibcode:1975InMat..29..245D. doi:10.1007/BF01389853. MR 0382702. S2CID 1357812.
Deligne, Pierre; Mostow, George Daniel (1993). Commensurabilities among Lattices in PU(1,n). Princeton, N.J.: Princeton University Press.
ISBN 0-691-00096-4.
Quantum fields and strings: a course for mathematicians. Vols. 1, 2. Material from the Special Year on Quantum Field Theory held at the Institute for Advanced Study, Princeton, NJ, 1996–1997. Edited by Pierre Deligne, Pavel Etingof, Daniel S. Freed, Lisa C. Jeffrey, David Kazhdan, John W. Morgan, David R. Morrison and Edward Witten. American Mathematical Society, Providence, RI; Institute for Advanced Study (IAS), Princeton, NJ, 1999. Vol. 1: xxii+723 pp.; Vol. 2: pp. i–xxiv and 727–1501. ISBN 0-8218-1198-3.

== Hand-written letters ==

Deligne wrote multiple hand-written letters to other mathematicians in the 1970s. These include:
"Deligne's letter to Piatetskii-Shapiro (1973)" (PDF). Archived from the original (PDF) on 7 December 2012. Retrieved 15 December 2012.
"Deligne's letter to Jean-Pierre Serre (around 1974)". 15 December 2012.
"Deligne's letter to Looijenga (1974)" (PDF). Retrieved 20 January 2020.
"Deligne's letter to Millson (1986)" (PDF). Retrieved 11 November 2021.

== Concepts named after Deligne ==

The following mathematical concepts are named after Deligne:
Brylinski–Deligne extensions
Deligne torus
Deligne–Lusztig theory
Deligne–Mumford moduli space of curves
Deligne–Mumford stacks
Fourier–Deligne transform
Deligne cohomology
Deligne motive
Deligne tensor product of abelian categories (denoted $\boxtimes$)
Deligne's theorem
Langlands–Deligne local constant
Weil–Deligne group

Additionally, many different conjectures in mathematics have been called the Deligne conjecture:
Deligne's conjecture on Hochschild cohomology.
The Deligne conjecture on special values of L-functions is a formulation of the hope for algebraicity of L(n) where L is an L-function and n is an integer in some set depending on L.
There is a Deligne conjecture on 1-motives arising in the theory of motives in algebraic geometry.
There is a Gross–Deligne conjecture in the theory of complex multiplication.
There is a Deligne conjecture on monodromy, also known as the weight monodromy conjecture, or purity conjecture for the monodromy filtration. There is a Deligne conjecture in the representation theory of exceptional Lie groups. There is a conjecture named the Deligne–Grothendieck conjecture for the discrete Riemann–Roch theorem in characteristic 0. There is a conjecture named the Deligne–Milnor conjecture for the differential interpretation of a formula of Milnor for Milnor fibres, as part of the extension of nearby cycles and their Euler numbers. The Deligne–Milne conjecture is formulated as part of motives and Tannakian categories. There is a Deligne–Langlands conjecture of historical importance in relation with the development of the Langlands philosophy. Deligne's conjecture on the Lefschetz trace formula (now called Fujiwara's theorem for equivariant correspondences). == See also == Brumer–Stark conjecture E7½ Hodge–de Rham spectral sequence Logarithmic form Kodaira vanishing theorem Moduli of algebraic curves Motive (algebraic geometry) Perverse sheaf Riemann–Hilbert correspondence Serre's modularity conjecture Standard conjectures on algebraic cycles == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Pierre Deligne", MacTutor History of Mathematics Archive, University of St Andrews Pierre Deligne at the Mathematics Genealogy Project Roberts, Siobhan (19 June 2012). "Simons Foundation: Pierre Deligne". Simons Foundation. – Biography and extended video interview. Pierre Deligne's home page at Institute for Advanced Study Katz, Nick (June 1980), "The Work of Pierre Deligne", Proceedings of the International Congress of Mathematicians, Helsinki 1978 (PDF), Helsinki, pp. 47–52, ISBN 951-410-352-1, archived from the original (PDF) on 12 July 2012{{citation}}: CS1 maint: location missing publisher (link) An introduction to his work at the time of his Fields medal award. |
Wikipedia:Pierre Dusart#0 | Pierre Dusart is a French mathematician at the Université de Limoges who specializes in number theory. He has published in several countries, especially in South Korea, with his colleague Damien Sauveron, an associate professor in computer science at the Université de Limoges. == External links == Résumé and thesis: Autour de la fonction qui compte le nombre de nombres premiers (French) "The kth prime is greater than k(ln k + ln ln k-1) for k>=2". Mathematics of Computation 68 (1999), pp. 411–415. "Estimates of some functions over primes". == Notes and references == |
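Dusart's 1999 lower bound cited above can be checked numerically for small indices. The sketch below is illustrative only; the helper name `primes_up_to` is chosen here, not taken from Dusart's paper, and the check of course covers only the finitely many primes generated.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # mark multiples of i starting at i*i as composite
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

# Dusart (1999): p_k > k(ln k + ln ln k - 1) for k >= 2
ps = primes_up_to(200_000)
for k in range(2, len(ps) + 1):
    bound = k * (math.log(k) + math.log(math.log(k)) - 1.0)
    assert ps[k - 1] > bound
```

The bound is proved for all k >= 2, so the assertions pass for every prime produced by the sieve.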
Wikipedia:Pierre François Verhulst#0 | Pierre François Verhulst (28 October 1804, in Brussels – 15 February 1849, in Brussels) was a Belgian mathematician who received a doctorate in number theory from the University of Ghent in 1825. He is best known for the logistic growth model. == Logistic equation == Verhulst developed the logistic function in a series of three papers between 1838 and 1847, based on research on modeling population growth that he conducted in the mid 1830s, under the guidance of Adolphe Quetelet; see Logistic function § History for details. Verhulst published in Verhulst (1838) the equation: {\displaystyle {\frac {dN}{dt}}=rN-\alpha N^{2}} where N(t) represents the number of individuals at time t, r the intrinsic growth rate, and {\displaystyle \alpha } the density-dependent crowding effect (also known as intraspecific competition). In this equation, the population equilibrium (sometimes referred to as the carrying capacity, K), {\displaystyle N^{*}} , is {\displaystyle N^{*}={\frac {r}{\alpha }}} . In Verhulst (1845) he named the solution the logistic curve. Later, Raymond Pearl and Lowell Reed popularized the equation, but with a presumed equilibrium, K, as {\displaystyle {\frac {dN}{dt}}=rN\left(1-{\frac {N}{K}}\right)} where K sometimes represents the maximum number of individuals that the environment can support. In relation to the density-dependent crowding effect, {\displaystyle \alpha ={\frac {r}{K}}} . The Pearl-Reed logistic equation can be integrated exactly, and has solution {\displaystyle N(t)={\frac {K}{1+CKe^{-rt}}}} where C = 1/N(0) − 1/K is determined by the initial condition N(0). The solution can also be written as a weighted harmonic mean of the initial condition and the carrying capacity, 
{\displaystyle {\frac {1}{N(t)}}={\frac {1-e^{-rt}}{K}}+{\frac {e^{-rt}}{N(0)}}.} Although the continuous-time logistic equation is often compared to the logistic map because of similarity of form, it is actually more closely related to the Beverton–Holt model of fisheries recruitment. The concept of R/K selection theory derives its name from the competing dynamics of exponential growth and carrying capacity introduced by the equations above. == See also == Population dynamics Logistic map Logistic distribution == Works == Verhulst, Pierre-François (1838). "Notice sur la loi que la population suit dans son accroissement". Correspondance mathématique et physique. 10: 113–121. Retrieved 18 February 2013. Verhulst, Pierre-François (1841). Traité élémentaire des fonctions elliptiques : ouvrage destiné à faire suite aux traités élémentaires de calcul intégral. Bruxelles: Hayez. Retrieved 18 February 2013. Verhulst, Pierre-François (1845). "Recherches mathématiques sur la loi d'accroissement de la population" [Mathematical Researches into the Law of Population Growth Increase]. Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles. 18: 1–42. Retrieved 18 February 2013. Verhulst, Pierre-François (1847). "Deuxième mémoire sur la loi d'accroissement de la population". Mémoires de l'Académie Royale des Sciences, des Lettres et des Beaux-Arts de Belgique. 20: 1–32. Retrieved 18 February 2013. == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Pierre François Verhulst", MacTutor History of Mathematics Archive, University of St Andrews |
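The closed-form Pearl-Reed solution and its harmonic-mean form above can be checked against a direct numerical integration of dN/dt = rN(1 − N/K). This is a minimal sketch under arbitrary parameter values; the function names are chosen here for illustration.

```python
import math

def logistic_closed_form(t, n0, r, K):
    """N(t) = K / (1 + C*K*exp(-r*t)), with C = 1/N(0) - 1/K."""
    C = 1.0 / n0 - 1.0 / K
    return K / (1.0 + C * K * math.exp(-r * t))

def logistic_harmonic(t, n0, r, K):
    """Weighted harmonic mean form: 1/N(t) = (1 - e^{-rt})/K + e^{-rt}/N(0)."""
    w = math.exp(-r * t)
    return 1.0 / ((1.0 - w) / K + w / n0)

def logistic_rk4(t, n0, r, K, steps=10000):
    """Integrate dN/dt = r*N*(1 - N/K) with the classical Runge-Kutta method."""
    f = lambda n: r * n * (1.0 - n / K)
    h = t / steps
    n = n0
    for _ in range(steps):
        k1 = f(n)
        k2 = f(n + 0.5 * h * k1)
        k3 = f(n + 0.5 * h * k2)
        k4 = f(n + h * k3)
        n += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return n

# The two analytic forms coincide, and both agree with the numerical solution.
n0, r, K, t = 10.0, 0.5, 1000.0, 5.0
assert abs(logistic_closed_form(t, n0, r, K) - logistic_harmonic(t, n0, r, K)) < 1e-9
assert abs(logistic_closed_form(t, n0, r, K) - logistic_rk4(t, n0, r, K)) < 1e-6
```

The equivalence of the two analytic forms is pure algebra: dividing 1 + CK e^{-rt} by K and substituting C = 1/N(0) − 1/K gives the harmonic-mean expression directly.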
Wikipedia:Pierre Gabriel#0 | Ronaël Julien Pierre-Gabriel (born 13 June 1998) is a French professional footballer who plays as a right-back for Croatian club Dinamo Zagreb. == Club career == Pierre-Gabriel is a youth exponent from Saint-Étienne. He made his Ligue 1 debut on 29 November 2015 against Guingamp. He started in the first eleven, before being substituted after 61 minutes for Pierre-Yves Polomat. On 23 July 2022, Pierre-Gabriel joined Ligue 1 side Strasbourg on a season-long loan with an option-to-buy. On 18 January 2023, he moved on a new loan until the end of the season to Espanyol in Spain. On 1 August 2023, Ligue 1 side Nantes announced the permanent signing of Pierre-Gabriel on a three-year contract. On 7 February 2024, Pierre-Gabriel moved to Dinamo Zagreb in Croatia on a multi-year deal. == Career statistics == As of match played 27 April 2025 == References == == External links == Ronaël Pierre-Gabriel at the French Football Federation (in French) |
Wikipedia:Pierre Humbert (mathematician)#0 | Pierre Humbert (13 June 1891, Paris – 17 November 1953, Montpellier) was a French mathematician who worked on the theory of elliptic functions and introduced Humbert polynomials. He was the son of the mathematician Georges Humbert and married the daughter of Henri Andoyer. Pierre Humbert was an Invited Speaker of the ICM in 1928 in Bologna. == See also == Humbert series == Publications == Introduction à l'étude des fonctions elliptiques, à l'usage des étudiants des facultés des sciences, Paris, Hermann 1922 with Henri Andoyer: Histoire de la Nation Française. Tome XIV, Histoire des Sciences en France; première partie, Histoire des Mathématiques, de la Mécanique et de l'Astronomie. Paris 1924 Calcul Symbolique, Paris, Hermann 1934 with Serge Colombo: Le calcul symbolique et ses applications à la physique mathématique, Paris, Gauthier-Villars 1949, 2nd edn. 1965 Potentiels et Prépotentiels, Gauthier-Villars 1937 Exercices numériques d'astronomie, Paris 1933 L'Oeuvre astronomique de Gassendi, Hermann 1936 Histoire des découvertes astronomiques, Paris 1948 (book for young people) Pierre Duhem, Paris 1934 Philosophes et Savants, Paris, Flammarion 1953 with Serge Colombo: Introduction mathématique à l'étude des théories électromagnétiques, Gauthier-Villars 1949 == References == O'Connor, John J.; Robertson, Edmund F., "Pierre Humbert", MacTutor History of Mathematics Archive, University of St Andrews |
Wikipedia:Pierre Samuel#0 | Pierre Samuel du Pont de Nemours ( dew-PONT, DEW-pont, French: [pjɛʁ samɥɛl dy pɔ̃ d(ə) nəmuʁ]; 14 December 1739 – 7 August 1817) was a French-American writer, economist, publisher and government official. During the French Revolution, he, his two sons and their families migrated to the United States. His son Éleuthère Irénée du Pont was the founder of E. I. du Pont de Nemours and Company. He was the patriarch and progenitor of one of the United States's most successful and wealthiest business dynasties of the 19th and 20th centuries. == Early life and family == Pierre du Pont was born on 14 December 1739, the son of Samuel du Pont and Anne Alexandrine de Montchanin. His father was a watchmaker and French Protestant, or Huguenot. His mother was a descendant of an impoverished minor noble family from Burgundy. Du Pont married Nicole-Charlotte Marie-Louise le Dée de Rencourt in 1766, also of a minor noble family. They had three sons: Victor Marie (1767–1827), a manufacturer and politician; Paul François (December 1769–January 1770); and Éleuthère Irénée (1771–1834), the founder of E.I. duPont de Nemours and Company in the United States. Nicole-Charlotte died 3 September 1784 of typhoid. == Ancien Régime == With a lively intelligence and high ambition, Pierre became estranged from his father, who wanted him to be a watchmaker. The younger man developed a wide range of acquaintances with access to the French court during the Ancien Régime period. Eventually he became the protégé of Dr. François Quesnay, the personal physician of King Louis XV's mistress, Madame de Pompadour. Quesnay was the leader of a faction known as the économistes, a group of liberals at the court dedicated to economic and agricultural reforms. By the early 1760s, du Pont's writings on the national economy had drawn the attention of intellectuals such as Voltaire and Turgot. 
His 1768 book on physiocracy (Physiocratie, ou Constitution naturelle du gouvernement le plus avantageux au genre humain) advocated low tariffs and free trade among nations, and deeply influenced Adam Smith of Scotland. In 1768, he took over from Nicolas Baudeau, editor of Ephémérides du citoyen, ou Bibliothèque raisonnée des sciences morales et politiques; he published Observations sur l'esclavage des Negres in volume 6. He was invited in 1774 by King Stanisław August Poniatowski (Stanislaus II Augustus) of the Polish–Lithuanian Commonwealth to help organize that country's educational system. The appointment to the Commission of National Education, with which he worked for several months, helped push his career forward, bringing him an appointment within the French government. He served as French inspector general of commerce under Louis XVI. He helped negotiate the treaty of 1783, by which Great Britain formally recognized the independence of the United States, and arranged the terms of a commercial treaty signed by France and England in 1786. In 1784, he was ennobled by lettres patentes from Louis XVI (a process known as noblesse de lettres), which added the de Nemours ('of Nemours') suffix to his name to reflect his residence. == French Revolution == Du Pont initially supported the French Revolution and served as president of the National Constituent Assembly. He and his son Eleuthère were among those who physically defended Louis XVI and Marie Antoinette from a mob besieging the Tuileries Palace in Paris during the insurrection of 10 August 1792. Condemned to the guillotine during the Reign of Terror, du Pont was awaiting execution when Robespierre fell on 9 thermidor an II (27 July 1794), and he was spared. He married Françoise Robin on 5 vendémiaire an IV (27 September 1795). Robin was the daughter of Antoine Robin de Livet, a French aristocrat who lived in Lyon, and the widow of Pierre Poivre, the noted French administrator. 
After du Pont's house was sacked by a mob during the events of 18 Fructidor V (4 September 1797), he, his sons and their families immigrated to the United States in 1799. They hoped (but failed) to found a model community of French exiles. In the United States, du Pont developed strong ties with industry and government, in particular with Thomas Jefferson, with whom he had been acquainted since at least 1787 and who had referred to him as "one of the very great men of the age" and "the ablest man in France." Du Pont engaged in informal diplomacy between the United States and France during the reign of Napoleon. He was the originator of an idea that eventually became the Louisiana Purchase, as a way to avoid French troops landing in New Orleans, and possibly sparking armed conflict with U.S. forces. Eventually, he settled in the U.S. permanently; he died there in 1817. His son Éleuthère, who had studied chemistry in France with Antoine Lavoisier, founded a gunpowder manufacturing plant, based on his experience in France as a chemist. It became one of the largest and most successful American corporations, known today as DuPont. In 1800, he was elected a member of the American Philosophical Society in Philadelphia, Pennsylvania. == See also == Du Pont family for other family members and relationships Commission of National Education == References == == Further reading == du Pont, Pierre S. (1942). Genealogy of the Du Pont Family 1739–1942. Wilmington: Hambleton Printing & Publishing. Dutton, William S. (1942). Du Pont, One Hundred and Forty Years. New York: Charles Scribner's Sons. == External links == DuPont Company DuPont Heritage Pierre Samuel du Pont de Nemours papers at Hagley Museum and Library |
Wikipedia:Pierre Suquet#0 | Pierre Suquet (born 22 October 1954) is a French theoretical mechanician and research director at the CNRS. He is a member of the French Academy of Sciences. == Biography == He attended preparatory classes in Grenoble (Maths Sup) and at Louis-le-Grand (Maths Spé), entered the École Normale Supérieure in 1973, became an agrégé de mathématiques in 1975, and received his doctorate in 1982. From 1983 to 1988 he was Professor at the University of Montpellier, then CNRS Research Director at the Mechanics and Acoustics Laboratory in Marseille, which he directed from 1993 to 1999. From 2000 to 2001 he was Clark Millikan Visiting Professor at the California Institute of Technology. Pierre Suquet is a specialist in continuum mechanics and the behaviour of solid materials. His main research interests are elastoplastic structures, homogenization of non-linear composites and numerical simulation in materials mechanics. == Scientific work == === Existence and regularity of elastic-plastic solutions === In 1978, Pierre Suquet introduced the space of vector fields with bounded deformation and established some of its properties (existence of internal and external traces on any surface, compact embedding...). He showed that the evolution problem for a perfectly plastic elastic body admits a solution in velocity (of displacement) in this space under a safe-load condition, and that there can be an infinite number of solutions, regular or non-regular. === Homogenization of dissipative media === The framework of generalized standard materials, due to Halphen and Nguyen Quoc Son, allows an easy writing of macroscopic behaviour laws. In 1982, Pierre Suquet established homogenization results for media characterized by two potentials (free energy and dissipation potential) and showed in particular that the generalized standard structure is preserved under the change of scale when geometric changes are neglected. 
He noted that the homogenization of short-memory viscoelastic composites can lead to the appearance of long-memory effects (an effect already noted by J. & E. Sanchez-Palencia in 1978). More recently, properties of these long memories have been established in relation to the first- and second-order moments of the local fields. === Homogenization and limit loads === In 1983, Pierre Suquet gave a first upper bound on the strength domain of a heterogeneous medium by solving a limit analysis problem on a unit cell. This result was improved by Bouchitte and Suquet, who showed that the homogenized limit analysis problem splits into two sub-problems: one purely volumetric, for which the strength domain is that given by the limit analysis of a unit cell, and one of surface type, for which a surface homogenization problem (not posed on the unit cell) must be solved. === Bounds for non-linear composites === In 1993, Pierre Suquet proposed a series of bounds for non-linear multiphase composites, using a method different from those available at the time (Willis, 1988, Ponte Castañeda, 1991), then showed in 1995 that Ponte Castañeda's (1991) variational method is a secant method using the second moment per phase of the local fields. === FFT-based numerical method for heterogeneous media === In 1994, H. Moulinec and P. Suquet introduced a numerical method that makes massive use of the Fast Fourier Transform (FFT) and requires only a pixelized image of the microstructure under study (no meshing). By introducing a homogeneous reference medium, the heterogeneity of the medium is transformed into a polarization stress. The Green operator of the reference medium, known explicitly in Fourier space, can be used to iteratively update the polarization field. Several improvements and accelerations have been made to this method, which is now used internationally in dedicated codes. === Homogenization and model reduction === Since 2003, J.C. Michel and P. 
Suquet have been developing a method to reduce the number of internal variables of homogenized behavioural laws. This Nonuniform Transformation Field Analysis (NTFA) model exploits the structure of the microscopic plastic strain fields. A basis of modes is first built by the "snapshot POD" method along learning paths. Then the reduced kinetic equations for the field components on these modes are constructed by approximating the effective potentials with techniques derived from non-linear homogenization. == Books == === Book publishing === 1991 Blanc R., Raous M., Suquet P. (eds.) : Mechanics, Numerical Modeling and Dynamics of Materials, Proceedings of the scientific meetings of the fiftieth anniversary of the LMA. 415 pages. 1994 Buttazzo G., Bouchitte G., Suquet P. (eds.) : Calculus of Variations, Homogenization and Continuum Mechanics, Series in Advances in Mathematics for Applied Sciences (vol 18). World Scientific, Singapore, (ISBN 981-02-1783-8). 296 pages. 1997 Suquet P. (ed.) : Continuum Micromechanics, CISM Lecture Notes No. 377. Springer-Verlag. Wien. 347 pages. 2000 Ponte Castañeda P., Suquet P. (eds) : The J.R. Willis 60th Anniversary Volume, J. Mech. Phys. Solids 48, 6/7, 200 === Participation in synthesis works === 1986 Suquet P. : "A few mathematical aspects of incremental Plasticity". Course Notes at the International Centre for Pure and Applied Mathematics. In Applications of Mathematics to Mechanics. Ed. M. Djaoua. Ed. ENIT. 1987 Suquet P. : "Elements of Homogenization for Inelastic Solid Mechanics". Courses at the International Centre for Mechanical Sciences. Udine. 1985. In E. Sanchez-Palencia, A. Zaoui (eds), Homogenization Techniques for Composite Media. Lecture Notes in Physics No. 272. Springer-Verlag. Berlin. 1987. pp. 193–278. 1988 Suquet P. : "Discontinuities and Plasticity". Course notes from the International Centre for Mechanical Sciences. Udine. Italy. 1987. In Non Smooth Mechanics and Applications. Ed. J.J. Moreau, P.D. Panagiotopoulos. 
CISM Course No. 302. Springer-Verlag. Wien. 1988. 279–340. 1991 Bouchitte G., Suquet P. : "Homogenization, Plasticity and Yield design", in G. Dal Maso and G.F. Dell'Antonio (eds) Composite Media and Homogenization Theory, Birkhäuser, Boston, 1991, pp 107–133. 1994 Bouchitte G., Suquet P. : "Equi-coercivity of variational problems. The role of recession functions". Seminar at the Collège de France. April 1990. In H. Brézis, J.L. Lions (eds.) Non-linear partial differential equations and their applications. College de France Seminar XII. Longman, Harlow, 1994, 31–54. 1997 a. Suquet P. : "Effective properties of nonlinear composites". in Suquet P. (ed.) Continuum Micromechanics. CISM Lecture Notes No. 377. Springer-Verlag. Wien. 1997. pp 197–264. 1997 b. Suquet P., Moulinec H. : "Numerical simulation of the effective properties of a class of cell materials". in K.M. Golden, G.R. Grimmett, R.D. James, G.W. Milton, P.N. Sen (eds.) Mathematics of multiscale materials. IMA Lecture Notes 99. Springer-Verlag, New York, 1997, 277–287. 2000 a. Michel J.C., Galvanetto U., Suquet P. : "Constitutive relations involving internal variables based on a micromechanical analysis", in R. Drouot, G.A. Maugin, F. Sidoroff (eds) Continuum Thermodynamics : The Art and Science of Modeling Material Behaviour, Kluwer Acad. 2000 b. Garajeu M., Suquet P : "Micromechanical models for anisotropic damage in creeping materials". In A. Ben Allal (ed.) Continuous Damage and Fracture, Elsevier, 2000, pp. 117–127. 2001 a. Michel J.C., Moulinec H., Suquet P. : "Composites with periodic microstructure". In M. Bornert, T. Bretheau and P. Gilormini (eds) Homogenization in Materials Mechanics, Hermes Science Publications, 2001, vol. 1, chap. 3, pp. 57–94. 2001 b. Bornert M., Suquet P.: "Non-linear properties of composites: potential approaches." In M. Bornert, T. Bretheau and P. Gilormini (eds) Homogenization in Materials Mechanics, Hermes Science Publications, 2001, vol. 2, chap. 2, pp. 45–90. 2001 c. 
Chaboche J.L., Suquet P., Besson J.: "Damage and change of scale". In M. Bornert, T. Bretheau and P. Gilormini (eds) Homogenization in Materials Mechanics, Hermes Science Publications, 2001, vol. 2, chap. 3, pp. 91–146. 2001 d. Suquet P. : "Nonlinear composites : Secant methods and variational bounds". In J. Lemaître (ed.) Handbook of Materials Behavior Models. Academic Press, 2001, pp. 968–98 == Dissemination of knowledge == 1988 Suquet P. : "Les milieux périodiques". in La Mécanique en 1988. Mail from the CNRS. 1988. 63. 1989 Sanchez-Palencia E., Suquet P.: "Simpler materials through homogenization". La Recherche, 214, 1989, XXIV-XXVI. 1990 Suquet P. : "L'homogénéisation et la Mécanique des Matériaux". The Mecamat Gazette. February 1990. 1992 Guillemain P., Suquet P. : "Waves and Structural Dynamics". Science and Defense. January 1992. == Honours and awards == Henri de Parville Prize from the French Academy of Sciences (1982). Jean Mandel Prize from the École des mines (1988). CNRS Silver medal (1991). Ampère Prize of the French Academy of Sciences (2000). Midwest Mechanics Distinguished Lecturer (2001). French Academy of Sciences: Elected correspondent on 6 June 1994, then member on 30 November 2004 (Section: Mechanical and Computer Sciences). Koiter Medals of ASME (2006). Distinguished International Scholar. University of Pennsylvania (2009). Chevallier of the Palmes Académiques (2010) James K. Knowles Lecture and Caltech Solid Mechanics Symposium (2014). National Academy of Engineering Member (2021) Timoshenko Medal (2024). == References == |
Wikipedia:Pierre-Justin Delort#0 | Pierre-Justin Delort (1758-1835), often anglicized to Peter, was a French priest and academic who was exiled following the French Revolution and moved to Ireland. He was born in Bordeaux in December 1748. A priest in the Archdiocese of Bordeaux in France, he held a Doctor of Laws from the University of Bordeaux. Delort was a professor of philosophy at the Collège de Guyenne before the Revolution. Following the Revolution, he emigrated to London. == Maynooth College == In 1795 he was appointed the first professor of Natural Philosophy and Mathematics at the newly established Royal College of St. Patrick, Maynooth, Ireland. Delort was one of the first four professors present in Maynooth in 1795, the others being former professor of philosophy in Paris, Maurice Aherne (Dogmatic Theology), James Bernard Clinch (Humanity), and John Chetwode Eustace (Rhetoric). Delort's first class contained only three students. Delort was one of the four exiles from France, the others being Francois Anglade (Sorbonne, Paris), André Darré (Toulouse), and Louis-Gilles Delahogue (Sorbonne, Paris), sometimes called the French founding fathers of Maynooth. In 1801, following the concordat between the papacy and the French government, he returned to France, initially for six months on a leave of absence, but he never returned to Maynooth; his fellow Frenchman Darré became chair of natural philosophy and mathematics. == Return to France and Eucharistic miracle of Bordeaux == Delort became canon and secretary to the Archdiocese of Bordeaux; he also served as Chair of Theology at the local seminary. While saying Mass in the Church of St. Eulalia in Bordeaux on 3 February (Septuagesima Sunday) 1822, Abbot Delort, substituting for Venerable Pierre Bienvenu Noaille who usually said Mass for the nuns, consecrated the host. When he looked at it, Jesus appeared in the host. This became known as a Eucharistic miracle. 
== Legacy == The Delort Prize is awarded for outstanding performance in Pure Mathematics in the First Year Examinations at Maynooth University and is named in his honour. == References == |
Wikipedia:Pierre-Simon Laplace#0 | Pierre-Simon, Marquis de Laplace (; French: [pjɛʁ simɔ̃ laplas]; 23 March 1749 – 5 March 1827) was a French polymath, a scholar whose work has been instrumental in the fields of physics, astronomy, mathematics, engineering, statistics, and philosophy. He summarized and extended the work of his predecessors in his five-volume Mécanique céleste (Celestial Mechanics) (1799–1825). This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. Laplace also popularized and further confirmed Sir Isaac Newton's work. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. Laplace formulated Laplace's equation, and pioneered the Laplace transform which appears in many branches of mathematical physics, a field that he took a leading role in forming. The Laplacian differential operator, widely used in mathematics, is also named after him. He restated and developed the nebular hypothesis of the origin of the Solar System and was one of the first scientists to suggest an idea similar to that of a black hole, with Stephen Hawking stating that "Laplace essentially predicted the existence of black holes". He originated Laplace's demon, which is a hypothetical all-predicting intellect. He also refined Newton's calculation of the speed of sound to derive a more accurate measurement. Laplace is regarded as one of the greatest scientists of all time. Sometimes referred to as the French Newton or Newton of France, he has been described as possessing a phenomenal natural mathematical faculty superior to that of almost all of his contemporaries. He was Napoleon's examiner when Napoleon graduated from the École Militaire in Paris in 1785. Laplace became a count of the Empire in 1806 and was named a marquis in 1817, after the Bourbon Restoration. 
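For reference, the two objects named after Laplace in the paragraph above can be written, in modern notation rather than Laplace's own:

```latex
% Laplace's equation for a potential \varphi:
\nabla^2 \varphi \;=\;
  \frac{\partial^2 \varphi}{\partial x^2}
  + \frac{\partial^2 \varphi}{\partial y^2}
  + \frac{\partial^2 \varphi}{\partial z^2} \;=\; 0
% The Laplace transform of a function f(t), defined for t \ge 0:
\mathcal{L}\{f\}(s) \;=\; \int_0^{\infty} f(t)\, e^{-st}\, \mathrm{d}t
```

The first governs gravitational and electrostatic potentials in empty space; the second converts linear differential equations in t into algebraic equations in s.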
== Early years == Some details of Laplace's life are not known, as records of it were burned in 1925 with the family château in Saint Julien de Mailloc, near Lisieux, the home of his great-great-grandson the Comte de Colbert-Laplace. Others had been destroyed earlier, when his house at Arcueil near Paris was looted in 1871. Laplace was born in Beaumont-en-Auge, Normandy on 23 March 1749, a village four miles west of Pont l'Évêque. According to W. W. Rouse Ball, his father, Pierre de Laplace, owned and farmed the small estates of Maarquis. His great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It would seem that from a pupil he became an usher in the school at Beaumont; but, having procured a letter of introduction to d'Alembert, he went to Paris to advance his fortune. However, Karl Pearson is scathing about the inaccuracies in Rouse Ball's account and states: Indeed Caen was probably in Laplace's day the most intellectually active of all the towns of Normandy. It was here that Laplace was educated and was provisionally a professor. It was here he wrote his first paper published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. He did not go to Paris a raw self-taught country lad with only a peasant background! In 1765 at the age of sixteen Laplace left the "School of the Duke of Orleans" in Beaumont and went to the University of Caen, where he appears to have studied for five years and was a member of the Sphinx. The École Militaire of Beaumont did not replace the old school until 1776. His parents, Pierre Laplace and Marie-Anne Sochon, were from comfortable families. The Laplace family was involved in agriculture until at least 1750, but Pierre Laplace senior was also a cider merchant and syndic of the town of Beaumont. 
Pierre Simon Laplace attended a school in the village run at a Benedictine priory, his father intending that he be ordained in the Roman Catholic Church. At sixteen, to further his father's intention, he was sent to the University of Caen to read theology. At the university, he was mentored by two enthusiastic teachers of mathematics, Christophe Gadbled and Pierre Le Canu, who awoke his zeal for the subject. Here Laplace's brilliance as a mathematician was quickly recognised and while still at Caen he wrote a memoir Sur le Calcul integral aux differences infiniment petites et aux differences finies. This provided the first correspondence between Laplace and Lagrange. Lagrange was the senior by thirteen years, and had recently founded in his native city Turin a journal named Miscellanea Taurinensia, in which many of his early works were printed and it was in the fourth volume of this series that Laplace's paper appeared. About this time, recognising that he had no vocation for the priesthood, he resolved to become a professional mathematician. Some sources state that he then broke with the church and became an atheist. Laplace did not graduate in theology but left for Paris with a letter of introduction from Le Canu to Jean le Rond d'Alembert who at that time was supreme in scientific circles. According to his great-great-grandson, d'Alembert received him rather poorly, and to get rid of him gave him a thick mathematics book, saying to come back when he had read it. When Laplace came back a few days later, d'Alembert was even less friendly and did not hide his opinion that it was impossible that Laplace could have read and understood the book. But upon questioning him, he realised that it was true, and from that time he took Laplace under his care. Another account is that Laplace solved overnight a problem that d'Alembert set him for submission the following week, then solved a harder problem the following night. 
D'Alembert was impressed and recommended him for a teaching place in the École Militaire. With a secure income and undemanding teaching, Laplace now threw himself into original research and for the next seventeen years, 1771–1787, he produced much of his original work in astronomy. From 1780 to 1784, Laplace and French chemist Antoine Lavoisier collaborated on several experimental investigations, designing their own equipment for the task. In 1783 they published their joint paper, Memoir on Heat, in which they discussed the kinetic theory of molecular motion. In their experiments they measured the specific heat of various bodies, and the expansion of metals with increasing temperature. They also measured the boiling points of ethanol and ether under pressure. Laplace further impressed the Marquis de Condorcet, and already by 1771 Laplace felt entitled to membership in the French Academy of Sciences. However, that year admission went to Alexandre-Théophile Vandermonde and in 1772 to Jacques Antoine Joseph Cousin. Laplace was disgruntled, and early in 1773 d'Alembert wrote to Lagrange in Berlin to ask if a position could be found for Laplace there. However, Condorcet became permanent secretary of the Académie in February and Laplace was elected associate member on 31 March, at age 24. In 1773 Laplace read his paper on the invariability of planetary motion in front of the Academy des Sciences. That March he was elected to the academy, a place where he conducted the majority of his science. On 15 March 1788, at the age of thirty-nine, Laplace married Marie-Charlotte de Courty de Romanges, an eighteen-year-old girl from a "good" family in Besançon. The wedding was celebrated at Saint-Sulpice, Paris. The couple had a son, Charles-Émile (1789–1874), and a daughter, Sophie-Suzanne (1792–1813). 
== Analysis, probability, and astronomical stability == Laplace's early published work in 1771 started with differential equations and finite differences, but he was already starting to think about the mathematical and philosophical concepts of probability and statistics. However, before his election to the Académie in 1773, he had already drafted two papers that would establish his reputation. The first, Mémoire sur la probabilité des causes par les événements, was ultimately published in 1774, while the second paper, published in 1776, further elaborated his statistical thinking and also began his systematic work on celestial mechanics and the stability of the Solar System. The two disciplines would always be interlinked in his mind. "Laplace took probability as an instrument for repairing defects in knowledge." Laplace's work on probability and statistics is discussed below with his mature work on the analytic theory of probabilities. === Stability of the Solar System === Sir Isaac Newton had published his Philosophiæ Naturalis Principia Mathematica in 1687, in which he gave a derivation of Kepler's laws, which describe the motion of the planets, from his laws of motion and his law of universal gravitation. However, though Newton had privately developed the methods of calculus, all his published work used cumbersome geometric reasoning, unsuitable to account for the more subtle higher-order effects of interactions between the planets. Newton himself had doubted the possibility of a mathematical solution to the whole, even concluding that periodic divine intervention was necessary to guarantee the stability of the Solar System. Dispensing with the hypothesis of divine intervention would be a major activity of Laplace's scientific life.
It is now generally recognised that Laplace's methods on their own, though vital to the development of the theory, are not sufficiently precise to demonstrate the stability of the Solar System; today the Solar System is understood to be generally chaotic at fine scales, although currently fairly stable at coarse scales. One particular problem from observational astronomy was the apparent instability whereby Jupiter's orbit appeared to be shrinking while that of Saturn was expanding. The problem had been tackled by Leonhard Euler in 1748, and by Joseph Louis Lagrange in 1763, but without success. In 1776, Laplace published a memoir in which he first explored the possible influences of a purported luminiferous ether or of a law of gravitation that did not act instantaneously. He ultimately returned to an intellectual investment in Newtonian gravity. Euler and Lagrange had made a practical approximation by ignoring small terms in the equations of motion. Laplace noted that though the terms themselves were small, when integrated over time they could become important. Laplace carried his analysis into the higher-order terms, up to and including the cubic. Using this more exact analysis, Laplace concluded that any two planets and the Sun must be in mutual equilibrium, and thereby launched his work on the stability of the Solar System. Gerald James Whitrow described the achievement as "the most important advance in physical astronomy since Newton". Laplace had a wide knowledge of all sciences and dominated all discussions in the Académie. Laplace seems to have regarded analysis merely as a means of attacking physical problems, though the ability with which he invented the necessary analysis is almost phenomenal.
As long as his results were true he took but little trouble to explain the steps by which he arrived at them; he never studied elegance or symmetry in his processes, and it was sufficient for him if he could by any means solve the particular question he was discussing. == Tidal dynamics == === Dynamic theory of tides === While Newton explained the tides by describing the tide-generating forces and Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides took into account friction, resonance and natural periods of ocean basins. It predicted the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed. The equilibrium theory, based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects, could not explain the real ocean tides. Since measurements have confirmed the theory, many previously puzzling observations now have possible explanations, such as how tides interacting with deep-sea ridges and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium theory predicts a tidal wave height of less than half a meter, while the dynamic theory explains why tides can be up to 15 meters. Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.
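The half-meter figure quoted for the equilibrium theory can be reproduced with a standard order-of-magnitude estimate. A minimal sketch, assuming modern values for the Moon and Earth (none of which appear in the text above):

```python
# Characteristic height of the equilibrium tidal bulge raised by the
# Moon: roughly (M_moon / M_earth) * (R_earth / d)**3 * R_earth.
# All values are modern approximations used only for illustration.
M_moon = 7.35e22      # kg
M_earth = 5.97e24     # kg
R_earth = 6.371e6     # m
d = 3.844e8           # mean Earth-Moon distance, m

h = (M_moon / M_earth) * (R_earth / d) ** 3 * R_earth
print(h)   # roughly a third of a meter, i.e. "less than half a meter"
```

This is only the characteristic scale of the bulge, not a tidal prediction; the point is that the equilibrium picture caps the tide well below the many-meter tides actually observed.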
=== Laplace's tidal equations === In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamic equations, but they can also be derived from energy integrals via Lagrange's equation. For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively), satisfy Laplace's tidal equations: {\displaystyle {\begin{aligned}{\frac {\partial \zeta }{\partial t}}&+{\frac {1}{a\cos(\varphi )}}\left[{\frac {\partial }{\partial \lambda }}(uD)+{\frac {\partial }{\partial \varphi }}\left(vD\cos(\varphi )\right)\right]=0,\\[2ex]{\frac {\partial u}{\partial t}}&-v\left(2\Omega \sin(\varphi )\right)+{\frac {1}{a\cos(\varphi )}}{\frac {\partial }{\partial \lambda }}\left(g\zeta +U\right)=0\qquad {\text{and}}\\[2ex]{\frac {\partial v}{\partial t}}&+u\left(2\Omega \sin(\varphi )\right)+{\frac {1}{a}}{\frac {\partial }{\partial \varphi }}\left(g\zeta +U\right)=0,\end{aligned}}} where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential. William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity. == On the figure of the Earth == During the years 1784–1787 he published some papers of exceptional power.
Prominent among these is one read in 1783, reprinted as Part II of Théorie du Mouvement et de la figure elliptique des planètes in 1784, and in the third volume of the Mécanique céleste. In this work, Laplace completely determined the attraction of a spheroid on a particle outside it. This is memorable for the introduction into analysis of spherical harmonics or Laplace's coefficients, and also for the development of the use of what we would now call the gravitational potential in celestial mechanics. === Spherical harmonics === In 1783, in a paper sent to the Académie, Adrien-Marie Legendre had introduced what are now known as associated Legendre functions. If two points in a plane have polar coordinates (r, θ) and (r′, θ′), where r′ ≥ r, then, by elementary manipulation, the reciprocal of the distance between the points, d, can be written as: {\displaystyle {\frac {1}{d}}={\frac {1}{r'}}\left[1-2\cos(\theta '-\theta ){\frac {r}{r'}}+\left({\frac {r}{r'}}\right)^{2}\right]^{-{\tfrac {1}{2}}}.} This expression can be expanded in powers of r/r′ using Newton's generalised binomial theorem to give: {\displaystyle {\frac {1}{d}}={\frac {1}{r'}}\sum _{k=0}^{\infty }P_{k}^{0}(\cos(\theta '-\theta ))\left({\frac {r}{r'}}\right)^{k}.} The sequence of functions P_k^0(cos(θ′ − θ)) is the set of so-called "associated Legendre functions", and their usefulness arises from the fact that every function of the points on a circle can be expanded as a series of them. Laplace, with scant regard for credit to Legendre, made the non-trivial extension of the result to three dimensions to yield a more general set of functions, the spherical harmonics or Laplace coefficients. The latter term is not in common use now. === Potential theory === This paper is also remarkable for the development of the idea of the scalar potential.
The gravitational force acting on a body is, in modern language, a vector, having magnitude and direction. A potential function is a scalar function that defines how the vectors will behave. A scalar function is computationally and conceptually easier to deal with than a vector function. Alexis Clairaut had first suggested the idea in 1743 while working on a similar problem, though he was using Newtonian-type geometric reasoning. Laplace described Clairaut's work as being "in the class of the most beautiful mathematical productions". However, Rouse Ball alleges that the idea "was appropriated from Joseph Louis Lagrange, who had used it in his memoirs of 1773, 1777 and 1780". The term "potential" itself was due to Daniel Bernoulli, who introduced it in his 1738 work Hydrodynamica. However, according to Rouse Ball, the term "potential function" was not actually used (to refer to a function V of the coordinates of space in Laplace's sense) until George Green's 1828 An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. Laplace applied the language of calculus to the potential function and showed that it always satisfies the differential equation: {\displaystyle \nabla ^{2}V={\partial ^{2}V \over \partial x^{2}}+{\partial ^{2}V \over \partial y^{2}}+{\partial ^{2}V \over \partial z^{2}}=0.} An analogous result for the velocity potential of a fluid had been obtained some years previously by Leonhard Euler. Laplace's subsequent work on gravitational attraction was based on this result. The quantity ∇2V has been termed the concentration of V and its value at any point indicates the "excess" of the value of V there over its mean value in the neighbourhood of the point. Laplace's equation, a special case of Poisson's equation, appears ubiquitously in mathematical physics. The concept of a potential occurs in fluid dynamics, electromagnetism and other areas.
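A quick numerical sanity check of the statement above: away from the origin, the point-mass potential V = 1/r is harmonic. The sketch below uses a central-difference Laplacian, a standard discretisation chosen here purely for illustration:

```python
import math

def V(x, y, z):
    """Newtonian point-mass potential (up to physical constants)."""
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    """Central-difference approximation to nabla^2 f at (x, y, z)."""
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z))
            + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z))
            + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

# Away from the origin the potential satisfies Laplace's equation.
print(abs(laplacian(V, 1.0, 2.0, 2.0)))   # close to zero
```

The residual is dominated by discretisation and rounding error; shrinking h too far makes rounding dominate, a familiar trade-off with finite differences.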
Rouse Ball speculated that it might be seen as "the outward sign" of one of the a priori forms in Kant's theory of perception. The spherical harmonics turn out to be critical to practical solutions of Laplace's equation. Laplace's equation in spherical coordinates, such as are used for mapping the sky, can be simplified, using the method of separation of variables into a radial part, depending solely on distance from the centre point, and an angular or spherical part. The solution to the spherical part of the equation can be expressed as a series of Laplace's spherical harmonics, simplifying practical computation. == Planetary and lunar inequalities == === Jupiter–Saturn great inequality === Laplace presented a memoir on planetary inequalities in three sections, in 1784, 1785, and 1786. This dealt mainly with the identification and explanation of the perturbations now known as the "great Jupiter–Saturn inequality". Laplace solved a longstanding problem in the study and prediction of the movements of these planets. He showed by general considerations, first, that the mutual action of two planets could never cause large changes in the eccentricities and inclinations of their orbits; but then, even more importantly, that peculiarities arose in the Jupiter–Saturn system because of the near approach to commensurability of the mean motions of Jupiter and Saturn. In this context commensurability means that the ratio of the two planets' mean motions is very nearly equal to a ratio between a pair of small whole numbers. Two periods of Saturn's orbit around the Sun almost equal five of Jupiter's. The corresponding difference between multiples of the mean motions, (2nJ − 5nS), corresponds to a period of nearly 900 years, and it occurs as a small divisor in the integration of a very small perturbing force with this same period. 
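The near-900-year period can be checked with a back-of-the-envelope calculation from the near 5:2 commensurability. The sidereal orbital periods below are modern approximate values, assumed here for illustration:

```python
# Sidereal orbital periods in years (modern approximations).
P_J, P_S = 11.862, 29.457
n_J, n_S = 360.0 / P_J, 360.0 / P_S   # mean motions, degrees per year

# The small combination 2*n_J - 5*n_S is a slow angle; it circulates
# once every 360/|2 n_J - 5 n_S| years, the small divisor in the text.
period = 360.0 / abs(2 * n_J - 5 * n_S)
print(round(period))   # close to 900 years
```

Because 2 n_J and 5 n_S nearly cancel, the perturbing term varies extremely slowly, which is exactly why it appears as a small divisor when the perturbation is integrated.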
As a result, the integrated perturbations with this period are disproportionately large, about 0.8 degrees of arc in orbital longitude for Saturn and about 0.3 degrees for Jupiter. Further developments of these theorems on planetary motion were given in his two memoirs of 1788 and 1789, but with the aid of Laplace's discoveries, the tables of the motions of Jupiter and Saturn could at last be made much more accurate. It was on the basis of Laplace's theory that Delambre computed his astronomical tables. === Books === Laplace now set himself the task to write a work which should "offer a complete solution of the great mechanical problem presented by the Solar System, and bring theory to coincide so closely with observation that empirical equations should no longer find a place in astronomical tables." The result is embodied in the Exposition du système du monde and the Mécanique céleste. The former was published in 1796, and gives a general explanation of the phenomena, but omits all details. It contains a summary of the history of astronomy. This summary procured for its author the honour of admission to the forty of the French Academy and is commonly esteemed one of the masterpieces of French literature, though it is not altogether reliable for the later periods of which it treats. Laplace developed the nebular hypothesis of the formation of the Solar System, first suggested by Emanuel Swedenborg and expanded by Immanuel Kant. This hypothesis remains the most widely accepted model in the study of the origin of planetary systems. According to Laplace's description of the hypothesis, the Solar System evolved from a globular mass of incandescent gas rotating around an axis through its centre of mass. As it cooled, this mass contracted, and successive rings broke off from its outer edge. These rings in their turn cooled, and finally condensed into the planets, while the Sun represented the central core which was still left.
On this view, Laplace predicted that the more distant planets would be older than those nearer the Sun. As mentioned, the idea of the nebular hypothesis had been outlined by Immanuel Kant in 1755, who had also suggested "meteoric aggregations" and tidal friction as causes affecting the formation of the Solar System. Laplace was probably aware of this, but, like many writers of his time, he generally did not reference the work of others. Laplace's analytical discussion of the Solar System is given in his Mécanique céleste, published in five volumes. The first two volumes, published in 1799, contain methods for calculating the motions of the planets, determining their figures, and resolving tidal problems. The third and fourth volumes, published in 1802 and 1805, contain applications of these methods, and several astronomical tables. The fifth volume, published in 1825, is mainly historical, but it gives as appendices the results of Laplace's latest researches. The Mécanique céleste contains many of Laplace's own investigations, but many results are appropriated from other writers with little or no acknowledgement. The work's conclusions, which are described by historians as the organised result of a century of work by other writers as well as Laplace, are presented by Laplace as if they were his discoveries alone. Jean-Baptiste Biot, who assisted Laplace in revising it for the press, says that Laplace himself was frequently unable to recover the details in the chain of reasoning, and, if satisfied that the conclusions were correct, he was content to insert the phrase, "Il est aisé à voir que..." ("It is easy to see that..."). The Mécanique céleste is not only the translation of Newton's Principia Mathematica into the language of differential calculus, but it also completes parts whose details Newton had been unable to fill in.
The work was carried forward in a more finely tuned form in Félix Tisserand's Traité de mécanique céleste (1889–1896), but Laplace's treatise remains a standard authority. In the years 1784–1787, Laplace produced some memoirs of exceptional power. The most significant among these was one issued in 1784, and reprinted in the third volume of the Mécanique céleste. In this work he completely determined the attraction of a spheroid on a particle outside it. This is known for the introduction into analysis of the potential, a useful mathematical concept of broad applicability to the physical sciences. == Optics == Laplace was a supporter of Newton's corpuscular theory of light. In the fourth edition of the Mécanique céleste, Laplace assumed that short-ranged molecular forces were responsible for the refraction of the corpuscles of light. Laplace and Étienne-Louis Malus also showed that Huygens' principle of double refraction could be recovered from the principle of least action on light particles. However, in 1815, Augustin-Jean Fresnel presented a new wave theory for diffraction to a commission of the French Academy with the help of François Arago. Laplace was one of the commission members and they ultimately awarded a prize to Fresnel for his new approach. === Influence of gravity on light === Using corpuscular theory, Laplace also came close to propounding the concept of the black hole. He suggested that gravity could influence light and that there could be massive stars whose gravity is so great that not even light could escape from their surface (see escape velocity). However, this insight was so far ahead of its time that it played no role in the history of scientific development. == Arcueil == In 1806, Laplace bought a house in Arcueil, then a village and not yet absorbed into the Paris conurbation.
The chemist Claude Louis Berthollet was a neighbour – their gardens were not separated – and the pair formed the nucleus of an informal scientific circle, latterly known as the Society of Arcueil. Because of their closeness to Napoleon, Laplace and Berthollet effectively controlled advancement in the scientific establishment and admission to the more prestigious offices. The Society built up a complex pyramid of patronage. In 1806, Laplace was also elected a foreign member of the Royal Swedish Academy of Sciences. == Analytic theory of probabilities == In 1812, Laplace issued his Théorie analytique des probabilités, in which he laid down many fundamental results in statistics. The first half of this treatise was concerned with probability methods and problems, the second half with statistical methods and applications. Laplace's proofs are not always rigorous according to the standards of a later day, and his perspective slides back and forth between the Bayesian and non-Bayesian views with an ease that makes some of his investigations difficult to follow, but his conclusions remain basically sound even in those few situations where his analysis goes astray. In 1819, he published a popular account of his work on probability. This book bears the same relation to the Théorie des probabilités that the Système du monde does to the Mécanique céleste. In its emphasis on the analytical importance of probabilistic problems, especially in the context of the approximation of formulas that are functions of large numbers, Laplace's work goes beyond the contemporary view, which almost exclusively considered aspects of practical applicability. Laplace's Théorie analytique remained the most influential book of mathematical probability theory to the end of the 19th century. The general relevance for statistics of Laplacian error theory was appreciated only by the end of the 19th century. However, it influenced the further development of a largely analytically oriented probability theory.
=== Inductive probability === In his Essai philosophique sur les probabilités (1814), Laplace set out a mathematical system of inductive reasoning based on probability, which we would today recognise as Bayesian. He begins the text with a series of principles of probability, the first seven being:

1. Probability is the ratio of the "favoured events" to the total possible events.
2. The first principle assumes equal probabilities for all events. When this is not true, we must first determine the probabilities of each event. Then the probability is the sum of the probabilities of all possible favoured events.
3. For independent events, the probability of the occurrence of all is the probability of each multiplied together.
4. When two events A and B depend on each other, the probability of the compound event is the probability of A multiplied by the probability that, given A, B will occur.
5. The probability that A will occur, given that B has occurred, is the probability of A and B occurring divided by the probability of B.
6. Three corollaries are given for the sixth principle, which amount to Bayes' rule. Where event Ai ∈ {A1, A2, ..., An} exhausts the list of possible causes for event B, Pr(B) = Σj Pr(Aj) Pr(B | Aj). Then {\displaystyle \Pr(A_{i}\mid B)=\Pr(A_{i}){\frac {\Pr(B\mid A_{i})}{\sum _{j}\Pr(A_{j})\Pr(B\mid A_{j})}}.}
7. The probability of a future event C is the sum of the products of the probability of each cause Bi, drawn from the event observed A, by the probability that, this cause existing, the future event will occur. Symbolically, {\displaystyle \Pr(C|A)=\sum _{i}\Pr(C|B_{i})\Pr(B_{i}|A).}

One well-known formula arising from his system is the rule of succession, given as principle seven. Suppose that some trial has only two possible outcomes, labelled "success" and "failure".
Under the assumption that little or nothing is known a priori about the relative plausibilities of the outcomes, Laplace derived a formula for the probability that the next trial will be a success: {\displaystyle \Pr({\text{next outcome is success}})={\frac {s+1}{n+2}}} where s is the number of previously observed successes and n is the total number of observed trials. It is still used as an estimator for the probability of an event if we know the event space, but have only a small number of samples. The rule of succession has been subject to much criticism, partly due to the example which Laplace chose to illustrate it. He calculated that the probability that the sun will rise tomorrow, given that it has never failed to in the past, was {\displaystyle \Pr({\text{sun will rise tomorrow}})={\frac {d+1}{d+2}}} where d is the number of times the sun has risen in the past. This result has been derided as absurd, and some authors have concluded that all applications of the rule of succession are absurd by extension. However, Laplace was fully aware of the absurdity of the result; immediately following the example, he wrote, "But this number [i.e., the probability that the sun will rise tomorrow] is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at the present moment can arrest the course of it." === Probability-generating function === The method of estimating the ratio of the number of favourable cases to the whole number of possible cases had been previously indicated by Laplace in a paper written in 1779. It consists of treating the successive values of any function as the coefficients in the expansion of another function, with reference to a different variable. The latter is therefore called the probability-generating function of the former.
Laplace then shows how, by means of interpolation, these coefficients may be determined from the generating function. Next he attacks the converse problem, and from the coefficients he finds the generating function; this is effected by the solution of a finite difference equation. === Least squares and central limit theorem === The fourth chapter of this treatise includes an exposition of the method of least squares, a remarkable testimony to Laplace's command over the processes of analysis. In 1805 Legendre had published the method of least squares, making no attempt to tie it to the theory of probability. In 1809 Gauss had derived the normal distribution from the principle that the arithmetic mean of observations gives the most probable value for the quantity measured; then, turning this argument back upon itself, he showed that, if the errors of observation are normally distributed, the least squares estimates give the most probable values for the coefficients in regression situations. These two works seem to have spurred Laplace to complete work toward a treatise on probability he had contemplated as early as 1783. In two important papers in 1810 and 1811, Laplace first developed the characteristic function as a tool for large-sample theory and proved the first general central limit theorem. Then in a supplement to his 1810 paper written after he had seen Gauss's work, he showed that the central limit theorem provided a Bayesian justification for least squares: if one were combining observations, each one of which was itself the mean of a large number of independent observations, then the least squares estimates would not only maximise the likelihood function, considered as a posterior distribution, but also minimise the expected posterior error, all this without any assumption as to the error distribution or a circular appeal to the principle of the arithmetic mean. In 1811 Laplace took a different non-Bayesian tack. 
Considering a linear regression problem, he restricted his attention to linear unbiased estimators of the linear coefficients. After showing that members of this class were approximately normally distributed if the number of observations was large, he argued that least squares provided the "best" linear estimators. Here "best" means that it minimised the asymptotic variance, and thus both minimised the expected absolute value of the error and maximised the probability that the estimate would lie in any symmetric interval about the unknown coefficient, no matter what the error distribution. His derivation included the joint limiting distribution of the least squares estimators of two parameters. == Laplace's demon == In 1814, Laplace published what may have been the first scientific articulation of causal determinism: We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present to it. This intellect is often referred to as Laplace's demon (in the same vein as Maxwell's demon) and sometimes Laplace's Superman (after Hans Reichenbach). Laplace himself did not use the word "demon", which was a later embellishment. As translated into English above, he simply referred to: "Une intelligence ... Rien ne serait incertain pour elle, et l'avenir comme le passé, serait présent à ses yeux."
Even though Laplace is generally credited with having first formulated the concept of causal determinism, in a philosophical context the idea was actually widespread at the time, and can be found as early as 1756 in Maupertuis' 'Sur la Divination'. Likewise, the Jesuit scientist Roger Boscovich first proposed a version of scientific determinism very similar to Laplace's in his 1758 book Theoria philosophiae naturalis. == Laplace transforms == As early as 1744, Euler, followed by Lagrange, had started looking for solutions of differential equations in the form: {\displaystyle z=\int X(x)e^{ax}\,dx{\text{ and }}z=\int X(x)x^{a}\,dx.} The Laplace transform has the form: {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt} This integral operator transforms a function of time (t) into a function of a complex variable (s), usually interpreted as a complex frequency. == Other discoveries and accomplishments == === Mathematics === Among the other discoveries of Laplace in pure and applied mathematics are:

Discussion, contemporaneously with Alexandre-Théophile Vandermonde, of the general theory of determinants (1772);
Proof that every equation of an even degree must have at least one real quadratic factor;
Laplace's method for approximating integrals;
Solution of the linear partial differential equation of the second order;
He was the first to consider the difficult problems involved in equations of mixed differences, and to prove that the solution of an equation in finite differences of the first degree and the second order might always be obtained in the form of a continued fraction;
In his theory of probabilities: the de Moivre–Laplace theorem that approximates the binomial distribution with a normal distribution; the evaluation of several common definite integrals; and a general proof of the Lagrange reversion theorem.
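The form of the Laplace transform given above can be illustrated numerically for a simple case. The sketch below approximates the transform of f(t) = e^(-at), whose closed form is F(s) = 1/(s + a); midpoint integration over a truncated interval stands in for the integral to infinity (an implementation choice made for this illustration, not anything from the text):

```python
import math

def laplace_transform(f, s, upper=40.0, steps=50_000):
    """Midpoint-rule approximation of F(s) = integral_0^inf f(t) e^(-s t) dt,
    truncated at t = upper."""
    h = upper / steps
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) * h
               for k in range(steps))

a, s = 1.0, 2.0
approx = laplace_transform(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)   # closed-form transform of e^(-a t)
print(abs(approx - exact))   # small discretisation error
```

Truncating at t = 40 is safe here because the integrand decays like e^(-3t); for slowly decaying functions the upper limit would need to grow accordingly.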
=== Surface tension === Laplace built upon the qualitative work of Thomas Young to develop the theory of capillary action and the Young–Laplace equation. === Speed of sound === Laplace in 1816 was the first to point out that the speed of sound in air depends on the heat capacity ratio. Newton's original theory gave too low a value, because it does not take account of the adiabatic compression of the air which results in a local rise in temperature and pressure. Laplace's investigations in practical physics were confined to those carried on by him jointly with Lavoisier in the years 1782 to 1784 on the specific heat of various bodies. == Politics == === Minister of the Interior === In his early years, Laplace was careful never to become involved in politics, or indeed in life outside the Académie des sciences. He prudently withdrew from Paris during the most violent part of the Revolution. In November 1799, immediately after seizing power in the coup of 18 Brumaire, Napoleon appointed Laplace to the post of Minister of the Interior. The appointment, however, lasted only six weeks, after which Lucien Bonaparte, Napoleon's brother, was given the post. Evidently, once Napoleon's grip on power was secure, there was no need for a prestigious but inexperienced scientist in the government. Napoleon later (in his Mémoires de Sainte Hélène) wrote of Laplace's dismissal as follows: Geometrician of the first rank, Laplace was not long in showing himself a worse than average administrator; from his first actions in office we recognized our mistake. Laplace did not consider any question from the right angle: he sought subtleties everywhere, conceived only problems, and finally carried the spirit of "infinitesimals" into the administration. Grattan-Guinness, however, describes these remarks as "tendentious", since there seems to be no doubt that Laplace "was only appointed as a short-term figurehead, a place-holder while Napoleon consolidated power". 
=== From Bonaparte to the Bourbons === Although Laplace was removed from office, it was desirable to retain his allegiance. He was accordingly raised to the senate, and to the third volume of the Mécanique céleste he prefixed a note that of all the truths therein contained the most precious to the author was the declaration he thus made of his devotion towards the peacemaker of Europe. In copies sold after the Bourbon Restoration this was struck out. (Pearson points out that the censor would not have allowed it anyway.) In 1814 it was evident that the empire was falling; Laplace hastened to tender his services to the Bourbons, and in 1817 during the Restoration he was rewarded with the title of marquis. According to Rouse Ball, the contempt that his more honest colleagues felt for his conduct in the matter may be read in the pages of Paul Louis Courier. His knowledge was useful on the numerous scientific commissions on which he served, and, says Rouse Ball, probably accounts for the manner in which his political insincerity was overlooked. Roger Hahn in his 2005 biography disputes this portrayal of Laplace as an opportunist and turncoat, pointing out that, like many in France, he had followed the debacle of Napoleon's Russian campaign with serious misgivings. The Laplaces, whose only daughter Sophie had died in childbirth in September 1813, were in fear for the safety of their son Émile, who was on the eastern front with the emperor. Napoleon had originally come to power promising stability, but it was clear that he had overextended himself, putting the nation at peril. It was at this point that Laplace's loyalty began to weaken. Although he still had easy access to Napoleon, his personal relations with the emperor cooled considerably. As a grieving father, he was particularly cut to the quick by Napoleon's insensitivity in an exchange related by Jean-Antoine Chaptal: "On his return from the rout in Leipzig, he [Napoleon] accosted Mr Laplace: 'Oh! 
I see that you have grown thin—Sire, I have lost my daughter—Oh! that's not a reason for losing weight. You are a mathematician; put this event in an equation, and you will find that it adds up to zero.'" === Political philosophy === In the second edition (1814) of the Essai philosophique, Laplace added some revealing comments on politics and governance. Since it is, he says, "the practice of the eternal principles of reason, justice and humanity that produce and preserve societies, there is a great advantage to adhere to these principles, and a great inadvisability to deviate from them". Noting "the depths of misery into which peoples have been cast" when ambitious leaders disregard these principles, Laplace makes a veiled criticism of Napoleon's conduct: "Every time a great power intoxicated by the love of conquest aspires to universal domination, the sense of liberty among the unjustly threatened nations breeds a coalition to which it always succumbs." Laplace argues that "in the midst of the multiple causes that direct and restrain various states, natural limits" operate, within which it is "important for the stability as well as the prosperity of empires to remain". States that transgress these limits cannot avoid being "reverted" to them, "just as is the case when the waters of the seas whose floor has been lifted by violent tempests sink back to their level by the action of gravity". About the political upheavals he had witnessed, Laplace formulated a set of principles derived from physics to favour evolutionary over revolutionary change: Let us apply to the political and moral sciences the method founded upon observation and calculation, which has served us so well in the natural sciences. Let us not offer fruitless and often injurious resistance to the inevitable benefits derived from the progress of enlightenment; but let us change our institutions and the usages that we have for a long time adopted only with extreme caution. 
We know from past experience the drawbacks they can cause, but we are unaware of the extent of ills that change may produce. In the face of this ignorance, the theory of probability instructs us to avoid all change, especially to avoid sudden changes which in the moral as well as the physical world never occur without a considerable loss of vital force. In these lines, Laplace expressed the views he had arrived at after experiencing the Revolution and the Empire. He believed that the stability of nature, as revealed through scientific findings, provided the model that best helped to preserve the human species. "Such views," Hahn comments, "were also of a piece with his steadfast character." In the Essai philosophique, Laplace also illustrates the potential of probabilities in political studies by applying the law of large numbers to justify the candidates’ integer-valued ranks used in the Borda method of voting, with which the new members of the Academy of Sciences were elected. Laplace’s verbal argument is so rigorous that it can easily be converted into a formal proof. == Death == Laplace died in Paris on 5 March 1827, which was the same day Alessandro Volta died. His brain was removed by his physician, François Magendie, and kept for many years, eventually being displayed in a roving anatomical museum in Britain. It was reportedly smaller than the average brain. Laplace was buried at Père Lachaise in Paris but in 1888 his remains were moved to Saint Julien de Mailloc in the canton of Orbec and reinterred on the family estate. The tomb is situated on a hill overlooking the village of St Julien de Mailloc, Normandy, France. == Religious opinions == === I had no need of that hypothesis === A frequently cited but potentially apocryphal interaction between Laplace and Napoleon purportedly concerns the existence of God. Although the conversation in question did occur, the exact words Laplace used and his intended meaning are not known. 
A typical version is provided by Rouse Ball: Laplace went in state to Napoleon to present a copy of his work, and the following account of the interview is well authenticated, and so characteristic of all the parties concerned that I quote it in full. Someone had told Napoleon that the book contained no mention of the name of God; Napoleon, who was fond of putting embarrassing questions, received it with the remark, 'M. Laplace, they tell me you have written this large book on the system of the universe, and have never even mentioned its Creator.' Laplace, who, though the most supple of politicians, was as stiff as a martyr on every point of his philosophy, drew himself up and answered bluntly, Je n'avais pas besoin de cette hypothèse-là. ("I had no need of that hypothesis.") Napoleon, greatly amused, told this reply to Lagrange, who exclaimed, Ah! c'est une belle hypothèse; ça explique beaucoup de choses. ("Ah, it is a fine hypothesis; it explains many things.") An earlier report, although without the mention of Laplace's name, is found in Antommarchi's The Last Moments of Napoleon (1825): Je m'entretenais avec L ..... je le félicitais d'un ouvrage qu'il venait de publier et lui demandais comment le nom de Dieu, qui se reproduisait sans cesse sous la plume de Lagrange, ne s'était pas présenté une seule fois sous la sienne. C'est, me répondit-il, que je n'ai pas eu besoin de cette hypothèse. ("While speaking with L ..... I congratulated him on a work which he had just published and asked him how the name of God, which appeared endlessly in the works of Lagrange, didn't occur even once in his. He replied that he had no need of that hypothesis.") In 1884, however, the astronomer Hervé Faye affirmed that this account of Laplace's exchange with Napoleon presented a "strangely transformed" (étrangement transformée) or garbled version of what had actually happened. 
It was not God that Laplace had treated as a hypothesis, but merely his intervention at a determinate point: In fact Laplace never said that. Here, I believe, is what truly happened. Newton, believing that the secular perturbations which he had sketched out in his theory would in the long run end up destroying the Solar System, says somewhere that God was obliged to intervene from time to time to remedy the evil and somehow keep the system working properly. This, however, was a pure supposition suggested to Newton by an incomplete view of the conditions of the stability of our little world. Science was not yet advanced enough at that time to bring these conditions into full view. But Laplace, who had discovered them by a deep analysis, would have replied to the First Consul that Newton had wrongly invoked the intervention of God to adjust from time to time the machine of the world (la machine du monde) and that he, Laplace, had no need of such an assumption. It was not God, therefore, that Laplace treated as a hypothesis, but his intervention in a certain place. Laplace's younger colleague, the astronomer François Arago, who gave his eulogy before the French Academy in 1827, told Faye of an attempt by Laplace to keep the garbled version of his interaction with Napoleon out of circulation. Faye writes: I have it on the authority of M. Arago that Laplace, warned shortly before his death that that anecdote was about to be published in a biographical collection, had requested him [Arago] to demand its deletion by the publisher. It was necessary to either explain or delete it, and the second way was the easiest. But, unfortunately, it was neither deleted nor explained. The Swiss-American historian of mathematics Florian Cajori appears to have been unaware of Faye's research, but in 1893 he came to a similar conclusion. Stephen Hawking said in 1999, "I don't think that Laplace was claiming that God does not exist. 
It's just that he doesn't intervene, to break the laws of Science." The only eyewitness account of Laplace's interaction with Napoleon is from the entry for 8 August 1802 in the diary of the British astronomer Sir William Herschel: The first Consul then asked a few questions relating to Astronomy and the construction of the heavens to which I made such answers as seemed to give him great satisfaction. He also addressed himself to Mr Laplace on the same subject, and held a considerable argument with him in which he differed from that eminent mathematician. The difference was occasioned by an exclamation of the first Consul, who asked in a tone of exclamation or admiration (when we were speaking of the extent of the sidereal heavens): 'And who is the author of all this!' Mons. De la Place wished to shew that a chain of natural causes would account for the construction and preservation of the wonderful system. This the first Consul rather opposed. Much may be said on the subject; by joining the arguments of both we shall be led to 'Nature and nature's God'. Since this makes no mention of Laplace's saying, "I had no need of that hypothesis," Daniel Johnson argues that "Laplace never used the words attributed to him." Arago's testimony, however, appears to imply that he did, only not in reference to the existence of God. === Views on God === Raised a Catholic, Laplace appears in adult life to have inclined to deism (presumably his considered position, since it is the only one found in his writings). However, some of his contemporaries thought he was an atheist, while a number of recent scholars have described him as agnostic. Faye thought that Laplace "did not profess atheism", but Napoleon, on Saint Helena, told General Gaspard Gourgaud, "I often asked Laplace what he thought of God. He owned that he was an atheist." 
Roger Hahn, in his biography of Laplace, mentions a dinner party at which "the geologist Jean-Étienne Guettard was staggered by Laplace's bold denunciation of the existence of God." It appeared to Guettard that Laplace's atheism "was supported by a thoroughgoing materialism." But the chemist Jean-Baptiste Dumas, who knew Laplace well in the 1820s, wrote that Laplace "provided materialists with their specious arguments, without sharing their convictions." Hahn states: "Nowhere in his writings, either public or private, does Laplace deny God's existence." Expressions occur in his private letters that appear inconsistent with atheism. On 17 June 1809, for instance, he wrote to his son, "Je prie Dieu qu'il veille sur tes jours. Aie-Le toujours présent à ta pensée, ainsi que ton père et ta mère [I pray that God watches over your days. Let Him be always present to your mind, as also your father and your mother]." Ian S. Glass, quoting Herschel's account of the celebrated exchange with Napoleon, writes that Laplace was "evidently a deist like Herschel". In Exposition du système du monde, Laplace quotes Newton's assertion that "the wondrous disposition of the Sun, the planets and the comets, can only be the work of an all-powerful and intelligent Being." This, says Laplace, is a "thought in which he [Newton] would be even more confirmed, if he had known what we have shown, namely that the conditions of the arrangement of the planets and their satellites are precisely those which ensure its stability." By showing that the "remarkable" arrangement of the planets could be entirely explained by the laws of motion, Laplace had eliminated the need for the "supreme intelligence" to intervene, as Newton had "made" it do. Laplace cites with approval Leibniz's criticism of Newton's invocation of divine intervention to restore order to the Solar System: "This is to have very narrow ideas about the wisdom and the power of God." 
He evidently shared Leibniz's astonishment at Newton's belief "that God has made his machine so badly that unless he affects it by some extraordinary means, the watch will very soon cease to go." In a group of manuscripts, preserved in relative secrecy in a black envelope in the library of the Académie des sciences and published for the first time by Hahn, Laplace mounted a deist critique of Christianity. It is, he writes, the "first and most infallible of principles ... to reject miraculous facts as untrue." As for the doctrine of transubstantiation, it "offends at the same time reason, experience, the testimony of all our senses, the eternal laws of nature, and the sublime ideas that we ought to form of the Supreme Being." It is the sheerest absurdity to suppose that "the sovereign lawgiver of the universe would suspend the laws that he has established, and which he seems to have maintained invariably." Laplace also ridiculed the use of probability in theology. Even following Pascal's reasoning presented in Pascal's wager, it is not worth making a bet, for the hope of profit – equal to the product of the value of the testimonies (infinitely small) and the value of the happiness they promise (which is significant but finite) – must necessarily be infinitely small. In old age, Laplace remained curious about the question of God and frequently discussed Christianity with the Swiss astronomer Jean-Frédéric-Théodore Maurice. He told Maurice that "Christianity is quite a beautiful thing" and praised its civilising influence. Maurice thought that the basis of Laplace's beliefs was, little by little, being modified, but that he held fast to his conviction that the invariability of the laws of nature did not permit of supernatural events. After Laplace's death, Poisson told Maurice, "You know that I do not share your [religious] opinions, but my conscience forces me to recount something that will surely please you." 
When Poisson had complimented Laplace about his "brilliant discoveries", the dying man had fixed him with a pensive look and replied, "Ah! We chase after phantoms [chimères]." These were his last words, interpreted by Maurice as a realisation of the ultimate "vanity" of earthly pursuits. Laplace received the last rites from the curé of the Missions Étrangères (in whose parish he was to be buried) and the curé of Arcueil. According to his biographer, Roger Hahn, it is "not credible" that Laplace "had a proper Catholic end", and he "remained a skeptic" to the very end of his life. Laplace in his last years has been described as an agnostic. === Excommunication of a comet === In 1470 the humanist scholar Bartolomeo Platina wrote that Pope Callixtus III had asked for prayers for deliverance from the Turks during a 1456 appearance of Halley's Comet. Platina's account does not accord with Church records, which do not mention the comet. Laplace is alleged to have embellished the story by claiming the Pope had "excommunicated" Halley's comet. What Laplace actually said, in Exposition du système du monde (1796), was that the Pope had ordered the comet to be "exorcised" (conjuré). It was Arago, in Des Comètes en général (1832), who first spoke of an excommunication. == Honors == Correspondent of the Royal Institute of the Netherlands in 1809. Foreign Honorary Member of the American Academy of Arts and Sciences in 1822. The asteroid 4628 Laplace is named for Laplace. A spur of the Montes Jura on the Moon is known as Promontorium Laplace. His name is one of the 72 names inscribed on the Eiffel Tower. The tentative working name of the European Space Agency Europa Jupiter System Mission is the "Laplace" space probe. A train station in the RER B in Arcueil bears his name. A street in Verkhnetemernitsky (near Rostov-on-Don, Russia). 
The Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society's Early Career Technical Achievement Award is named in his honor. == Quotations == I had no need of that hypothesis. ("Je n'avais pas besoin de cette hypothèse-là", allegedly as a reply to Napoleon, who had asked why he hadn't mentioned God in his book on astronomy.) It is therefore obvious that ... (Frequently used in the Celestial Mechanics when he had proved something and mislaid the proof, or found it clumsy. Notorious as a signal for something true, but hard to prove.) If we seek a cause wherever we perceive symmetry, it is not that we regard a symmetrical event as less possible than the others, but, since this event ought to be the effect of a regular cause or that of chance, the first of these suppositions is more probable than the second. The more extraordinary the event, the greater the need of its being supported by strong proofs. "We are so far from knowing all the agents of nature and their diverse modes of action that it would not be philosophical to deny phenomena solely because they are inexplicable in the actual state of our knowledge. But we ought to examine them with an attention all the more scrupulous as it appears more difficult to admit them." This is restated in Theodore Flournoy's work From India to the Planet Mars as the Principle of Laplace or, "The weight of the evidence should be proportioned to the strangeness of the facts." Most often repeated as "The weight of evidence for an extraordinary claim must be proportioned to its strangeness." (see also: Sagan standard) This simplicity of ratios will not appear astonishing if we consider that all the effects of nature are only mathematical results of a small number of immutable laws. Infinitely varied in her effects, nature is only simple in her causes. What we know is little, and what we are ignorant of is immense. 
(Fourier comments: "This was at least the meaning of his last words, which were articulated with difficulty.") One sees in this essay that the theory of probabilities is basically only common sense reduced to a calculus. It makes one estimate accurately what right-minded people feel by a sort of instinct, often without being able to give a reason for it. == List of works == Traité de mécanique céleste (in French). Vol. 1. Paris: Charles Crapelet. 1799. Traité de mécanique céleste (in French). Vol. 2. Paris: Charles Crapelet. 1799. Traité de mécanique céleste (in French). Vol. 3. Paris: Charles Crapelet. 1802. Traité de mécanique céleste (in French). Vol. 4. Paris: Charles Crapelet. 1805. Traité de mécanique céleste (in French). Vol. 5. Paris: Charles Louis Étienne Bachelier. 1852. Précis de l'histoire de l'astronomie (in Italian). Milano: Angelo Stanislao Brambilla. 1823. Exposition du système du monde (in French). Paris: Charles Louis Étienne Bachelier. 1824. == Bibliography == Œuvres complètes de Laplace, 14 vol. (1878–1912), Paris: Gauthier-Villars (copy from Gallica in French) Théorie du mouvement et de la figure elliptique des planètes (1784) Paris (not in Œuvres complètes) Précis de l'histoire de l'astronomie Alphonse Rebière, Mathématiques et mathématiciens, 3rd edition Paris, Nony & Cie, 1898. === English translations === Bowditch, N. (trans.) (1829–1839) Mécanique céleste, 4 vols, Boston. New edition by Reprint Services, ISBN 0-7812-2022-X – [1829–1839] (1966–1969) Celestial Mechanics, 5 vols, including the original French Pound, J. (trans.) (1809) The System of the World, 2 vols, London: Richard Phillips – [1809] (2007) The System of the World, vol.1, Kessinger, ISBN 1-4326-5367-9 Toplis, J. (trans.) (1814) A treatise upon analytical mechanics, Nottingham: H. Barnett Laplace, Pierre Simon Marquis De (2007) [1902]. A Philosophical Essay on Probabilities. Translated by Truscott, F.W. & Emory, F.L.
Cosimo. ISBN 978-1-60206-328-0., translated from the French 6th ed. (1840) A Philosophical Essay on Probabilities (1902) at the Internet Archive Dale, Andrew I.; Laplace, Pierre-Simon (1995). Philosophical Essay on Probabilities. Sources in the History of Mathematics and Physical Sciences. Vol. 13. Translated by Andrew I. Dale. Springer. doi:10.1007/978-1-4612-4184-3. hdl:2027/coo1.ark:/13960/t3126f008. ISBN 978-1-4612-8689-9., translated from the French 5th ed. (1825) == See also == History of the metre Laplace–Bayes estimator Ratio estimator Seconds pendulum List of things named after Pierre-Simon Laplace Pascal's wager == References == === Citations === === General sources === == External links == "Laplace, Pierre (1749–1827)". Eric Weisstein's World of Scientific Biography. Wolfram Research. Retrieved 24 August 2007. "Pierre-Simon Laplace" in the MacTutor History of Mathematics archive. "Bowditch's English translation of Laplace's preface". Mécanique Céleste. The MacTutor History of Mathematics archive. Retrieved 4 September 2007. Guide to the Pierre Simon Laplace Papers at The Bancroft Library Pierre-Simon Laplace at the Mathematics Genealogy Project English translation Archived 27 December 2012 at the Wayback Machine of a large part of Laplace's work in probability and statistics, provided by Richard Pulskamp Archived 29 October 2012 at the Wayback Machine Pierre-Simon Laplace – Œuvres complètes (last 7 volumes only) Gallica-Math "Sur le mouvement d'un corps qui tombe d'une grande hauteur" (Laplace 1803), online and analysed on BibNum Archived 2 April 2015 at the Wayback Machine (English). |
Wikipedia:Piers Bohl#0 | Piers Bohl (23 October 1865 – 25 December 1921) was a Latvian mathematician who worked in differential equations, topology and quasiperiodic functions. == Biography == He was born in 1865 in Walk, Livonia, in the family of a poor Baltic German merchant. In 1884, after graduating from a German school in Viljandi, he entered the faculty of physics and mathematics at the University of Tartu. In 1893 Bohl was awarded his Master's degree for an investigation of quasi-periodic functions. The notion of quasi-periodic functions was generalised still further by Harald Bohr when he introduced almost periodic functions. He was the first to prove the three-dimensional case of the Brouwer fixed-point theorem, but his work was not noticed at the time. == Polynomial result on trinomial equations == In 1908, Bohl established a general theorem for locating the roots of complex trinomials of the form {\displaystyle P(z)=z^{k}+a\,z^{\ell }+b}, where {\displaystyle k} and {\displaystyle \ell } are positive integers with {\displaystyle k>\ell }, and {\displaystyle a} and {\displaystyle b} are nonzero complex numbers. Rather than relying on heavy algebraic manipulations, he employed an elementary geometric construction: by interpreting the magnitudes of the coefficients {\displaystyle |a|}, {\displaystyle |b|} and the chosen radius (for instance, the unit circle) as the sides of a triangle, one can associate two angles that, together with the arguments of {\displaystyle a} and {\displaystyle b}, yield explicit bounds. These bounds determine exactly how many roots lie inside the circle, either by simple inequalities when one coefficient dominates, or by counting the integers in a specific interval when all three lengths can form a triangle.
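The count that Bohl's theorem delivers can be cross-checked by brute force. The sketch below is a hypothetical helper that uses numpy.roots rather than Bohl's geometric criterion; the chosen example is in the regime where one coefficient dominates on the unit circle:

```python
import numpy as np

def trinomial_roots_in_unit_disc(k, l, a, b):
    """Count the roots of z^k + a z^l + b lying inside the unit circle.

    Brute-force numerical check via numpy.roots; Bohl's theorem gives
    the same count from a geometric criterion on |a|, |b| and the radius.
    """
    coeffs = np.zeros(k + 1, dtype=complex)
    coeffs[0] = 1.0    # leading term z^k
    coeffs[k - l] = a  # coefficient of z^l
    coeffs[k] = b      # constant term
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots) < 1.0))

# z^3 + 0.5 z + 0.25: on |z| = 1 the leading term dominates (1 > 0.5 + 0.25),
# so all three roots lie inside the unit circle.
print(trinomial_roots_in_unit_disc(3, 1, 0.5, 0.25))  # 3
```

Conversely, making the constant term dominant (e.g. b = 10) pushes every root outside the unit circle, so the count drops to zero.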
Bohl's result not only unifies numerous special‐case criteria (such as those later attributed to Schur, Cohn or Jury) but also provides direct formulas that apply regardless of the relative sizes or orientations of the coefficients. Although his work went largely unnoticed for many decades, it anticipates modern applications in the stability analysis of differential and difference equations, where knowing whether all characteristic roots lie within the unit circle is essential for determining asymptotic behaviour. == References == == External links == Piers Bohl at the Mathematics Genealogy Project Bohl biography at www-history.mcs.st-and.ac.uk http://www.mathematics.lv/lms_10_years_after.pdf |
Wikipedia:Pietro Cossali#0 | Pietro Cossali (29 June 1748 — 20 December 1815) was an Italian mathematician, physicist and astronomer. From 1787 to 1805, he taught physics at the University of Parma. In 1805, Napoleon named Cossali a professor of higher calculus at the University of Padua. From 1797 to 1799, he wrote Origin, Transmission to Italy, and Early Progress of Algebra There (Italian: Origine, transporto in Italia, primi progressi in essa dell'algebra), in which he describes mathematical achievements from the emergence of algebra due to Fibonacci to the new research on casus irreducibilis in the 18th century. This work can be considered the first professional text on the history of Italian mathematics. In this work, Cossali corrects some factual mistakes made earlier by Jean Paul de Gua de Malves, John Wallis and Jean-Étienne Montucla, although he makes another important error in attributing everything after Fibonacci and before Luca Pacioli to the latter. Besides his works on mathematics and its history, Cossali also wrote on astronomy. His articles were published in Ephémérides astronomiques. == References == == External links == Baldini, Ugo (1984). "COSSALI, Pietro". Dizionario Biografico degli Italiani, Volume 30: Cosattini–Crispolto (in Italian). Rome: Istituto dell'Enciclopedia Italiana. ISBN 978-8-81200032-6. |
Wikipedia:Pietro Mengoli#0 | Pietro Mengoli (1626, Bologna – June 7, 1686, Bologna) was an Italian mathematician and clergyman from Bologna, where he studied with Bonaventura Cavalieri at the University of Bologna, and succeeded him in 1647. He remained as professor there for the next 39 years of his life. Mengoli was a pivotal figure in the development of calculus. He established the divergence of the harmonic series nearly forty years before Jacob Bernoulli, to whom the discovery is generally attributed; he gave a development in series of logarithms thirteen years before Nicholas Mercator published his famous treatise Logarithmotechnia. Mengoli also gave a definition of the definite integral which is not substantially different from that given more than a century later by Augustin-Louis Cauchy. == Biography == Born in 1626, Pietro Mengoli studied mathematics and mechanics at the University of Bologna. After the death of his teacher, Bonaventura Cavalieri (1647), Mengoli became a lecturer in the new chair of mechanics from 1649–50 and subsequently taught mathematics at the University of Bologna in the years from 1678 to 1685. He was awarded a doctorate in philosophy in 1650, and, three years later, in civil and canon law. Novae quadraturae arithmeticae (1650), Via regia ad mathematicas (1655) and Geometria (1659), his earliest writings, earned him a wide reputation in Europe, especially in academic circles in London. In 1660 he was ordained a Catholic priest. A decade of silence followed until, in 1670, the Speculationi di musica and Refrattioni e parallasse solare were published. During the 1670s Mengoli devoted himself to constructing a theory of metaphysics, in which he tried to demonstrate revealed truths more geometrico. Circolo (1672), Anno (1673), Arithmetica rationalis (1674) and Il mese (1681) are works devoted to the topics of "middle mathematics", cosmology and biblical chronology, logic and metaphysics.
Mengoli also wrote a treatise on music theory, Speculazioni di musica [Speculations on music], much appreciated in his time and reviewed and partly translated by Henry Oldenburg in the Philosophical Transactions of the Royal Society. Mengoli died in Bologna in 1686. == Contributions == Mengoli first posed the famous Basel problem in 1650; it was solved in 1735 by Leonhard Euler. In 1650, he also proved that the sum of the alternating harmonic series is equal to the natural logarithm of 2. He also proved that the harmonic series has no upper bound, and provided a proof that Wallis' product for {\displaystyle \pi } is correct. Mengoli anticipated the modern idea of the limit of a sequence with his study of quasi-proportions in Geometriae speciosae elementa (1659). He used the term quasi-infinite for unbounded and quasi-null for vanishing. Mengoli proves theorems starting from clear hypotheses and explicitly stated properties, showing everything necessary ... proceeds to a step-by-step demonstration. In the margin he notes the theorems used in each line. Indeed, the work bears many similarities to a modern book and shows that Mengoli was ahead of his time in treating his subject with a high degree of rigor.: 261 == Six square problem == Mengoli became enthralled with a Diophantine problem posed by Jacques Ozanam called the six-square problem: find three integers such that their differences are squares and that the differences of their squares are also three squares. At first he thought that there was no solution, and in 1674 published his reasoning in Theorema Arithmeticum. But Ozanam then exhibited a solution: x = 2,288,168, y = 1,873,432, and z = 2,399,057. Humbled by his error, Mengoli made a study of Pythagorean triples to uncover the basis of this solution.
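Ozanam's triple can be verified directly: the three pairwise differences, and the three differences of squares, must all be perfect squares. A quick sketch using the numbers quoted above:

```python
from math import isqrt

def is_square(n):
    """True if n is a perfect square (integer n >= 0)."""
    return n >= 0 and isqrt(n) ** 2 == n

# Ozanam's solution to the six-square problem, as quoted above.
x, y, z = 2_288_168, 1_873_432, 2_399_057

pair_diffs = [x - y, z - x, z - y]               # 644^2, 333^2, 725^2
square_diffs = [x*x - y*y, z*z - x*x, z*z - y*y]  # differences of the squares
print(all(is_square(d) for d in pair_diffs + square_diffs))  # True
```

All six quantities are indeed squares, confirming that Mengoli's initial impossibility claim was mistaken.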
He first solved an auxiliary Diophantine problem: find four numbers such that the sum of the first two is a square, the sum of the third and fourth is a square, their product is a square, and the ratio of the first two is greater than the ratio of the third to the fourth. He found two solutions: (112, 15, 35, 12) and (364, 27, 84, 13). Using these quadruples and algebraic identities, he gave two solutions to the six-square problem beyond Ozanam's solution. Jacques de Billy also provided six-square problem solutions. == Works == Pietro Mengoli's works were all published in Bologna: 1650: Novae quadraturae arithmeticae seu de additione fractionum on infinite series 1659: Geometriae speciosae elementa on quasi-proportions to extend Euclid's proportionality of his Book 5; six definitions yield 61 theorems on quasi-proportion 1670: Refrattioni e parallasse solare 1670: Speculationi di musica 1672: Circolo 1675: Anno on Biblical chronology 1681: Mese on cosmology Mese (in Italian). Bologna: eredi Vittorio Benacci. 1681. 1674: Arithmetica rationalis on logic 1675: Arithmetica realis on metaphysics == References == == Bibliography == Natucci, Alpinolo (1974). "Mengoli, Pietro". In Charles Coulston Gillispie (ed.). Dictionary of Scientific Biography. Vol. IX. New York: Charles Scribner's Sons. pp. 303–304. Retrieved 12 August 2023. G. Baroncini; M. Cavazza, eds. (1986). La Corrispondenza di Pietro Mengoli. Florence: Leo S. Olschki. ISBN 9788822234049. "Mengoli, Pietro". Biographical Encyclopedia of Scientists. CRC Press. 2008. p. 518. Retrieved 13 August 2023. == External links == Gozza, Paolo (1990). "Atomi, spiritus, suoni. le Speculationi di musica (1670) del galileiano Pietro Mengoli". Nuncius. 5 (2): 75–98. doi:10.1163/182539190X00039. O'Connor, John J.; Robertson, Edmund F., "Pietro Mengoli", MacTutor History of Mathematics Archive, University of St Andrews Giusti, Enrico (1991). "Le Prime Ricerche di Pietro Mengoli: La Somma delle Serie".
Proceedings of the international meeting “Geometry and complex variables”. New York: Dekker: 195–213. Bagni, Giorgio Tomaso (2001). "Le relazioni simul e ordo di Pietro Mengoli introdotte nell'Arithmetica realis (1675): un'algebra di Lindenbaum nel XVII secolo" (PDF). Amicitiae causa. Scritti in memoria di mons. Luigi Pesce: 214–220. ISBN 9788887073300. Cavazza, Marta (2009). "Mengoli, Pietro". Dizionario Biografico degli Italiani, Volume 73: Meda–Messadaglia (in Italian). Rome: Istituto dell'Enciclopedia Italiana. ISBN 978-8-81200032-6. Massa Esteve, M.R.; Delshams, A. (2009). "Euler's beta integral in Pietro Mengoli's works". Archive for History of Exact Sciences. 63: 325–356. doi:10.1007/s00407-009-0042-5. Massa, M.R. (2015). "The Role of Indivisibles in Mengoli's Quadratures". Seventeenth-Century Indivisibles Revisited. Basel: Birkhäuser: 285–306. doi:10.1007/978-3-319-00131-9_13. hdl:2117/28047. Bell, Jordan; Blåsjö, Viktor (2018). "Pietro Mengoli's 1650 Proof that the Harmonic Series Diverges". Mathematics Magazine. 91 (5): 341–347. JSTOR 48665556. |
Wikipedia:Plancherel–Rotach asymptotics#0 | The Plancherel–Rotach asymptotics are asymptotic results for orthogonal polynomials. They are named after the Swiss mathematicians Michel Plancherel and his PhD student Walter Rotach, who first derived the asymptotics for the Hermite polynomial and Laguerre polynomial. Nowadays asymptotic expansions of this kind for orthogonal polynomials are referred to as Plancherel–Rotach asymptotics or of Plancherel–Rotach type. The case for the associated Laguerre polynomial was derived by the Swiss mathematician Egon Möcklin, another PhD student of Plancherel and George Pólya at ETH Zurich. == Hermite polynomials == Let H n ( x ) {\displaystyle H_{n}(x)} denote the n-th Hermite polynomial. Let ϵ {\displaystyle \epsilon } and ω {\displaystyle \omega } be positive and fixed, then for x = ( 2 n + 1 ) 1 / 2 cos φ {\displaystyle x=(2n+1)^{1/2}\cos \varphi } and ϵ ≤ φ ≤ π − ϵ {\displaystyle \epsilon \leq \varphi \leq \pi -\epsilon } e − x 2 / 2 H n ( x ) = 2 n / 2 + 1 / 4 ( n ! ) 1 / 2 ( π n ) − 1 / 4 ( sin φ ) − 1 / 2 { sin [ ( n 2 + 1 4 ) ( sin 2 φ − 2 φ ) + 3 π 4 ] + O ( n − 1 ) } {\displaystyle e^{-x^{2}/2}H_{n}(x)=2^{n/2+1/4}(n!)^{1/2}(\pi n)^{-1/4}(\sin \varphi )^{-1/2}{\bigg \{}\sin \left[\left({\tfrac {n}{2}}+{\tfrac {1}{4}}\right)(\sin 2\varphi -2\varphi )+3{\tfrac {\pi }{4}}\right]+{\mathcal {O}}(n^{-1}){\bigg \}}} for x = ( 2 n + 1 ) 1 / 2 cosh φ {\displaystyle x=(2n+1)^{1/2}\cosh \varphi } and ϵ ≤ φ ≤ ω {\displaystyle \epsilon \leq \varphi \leq \omega } e − x 2 / 2 H n ( x ) = 2 n / 2 − 3 / 4 ( n ! 
) 1 / 2 ( π n ) − 1 / 4 ( sinh φ ) − 1 / 2 exp [ ( n 2 + 1 4 ) ( 2 φ − sinh 2 φ ) ] { 1 + O ( n − 1 ) } {\displaystyle e^{-x^{2}/2}H_{n}(x)=2^{n/2-3/4}(n!)^{1/2}(\pi n)^{-1/4}(\sinh \varphi )^{-1/2}\exp \left[\left({\tfrac {n}{2}}+{\tfrac {1}{4}}\right)(2\varphi -\sinh 2\varphi )\right]{\big \{}1+{\mathcal {O}}(n^{-1}){\big \}}} for x = ( 2 n + 1 ) 1 / 2 − 2 − 1 / 2 3 − 1 / 3 n − 1 / 6 t {\displaystyle x=(2n+1)^{1/2}-2^{-1/2}3^{-1/3}n^{-1/6}t} and t {\displaystyle t} complex and bounded e − x 2 / 2 H n ( x ) = 3 1 / 3 π − 3 / 4 2 n / 2 + 1 / 4 ( n ! ) 1 / 2 n − 1 / 12 { A ( t ) + O ( n − 2 / 3 ) } {\displaystyle e^{-x^{2}/2}H_{n}(x)=3^{1/3}\pi ^{-3/4}2^{n/2+1/4}(n!)^{1/2}n^{-1/12}{\bigg \{}A(t)+{\mathcal {O}}\left(n^{-{2/3}}\right){\bigg \}}} where A ( t ) = π Ai ( − 3 − 1 / 3 t ) {\displaystyle A(t)=\pi \operatorname {Ai} (-3^{-1/3}t)} and Ai {\displaystyle \operatorname {Ai} } denotes the Airy function. == (Associated) Laguerre polynomials == Let L n ( α ) ( x ) {\displaystyle L_{n}^{(\alpha )}(x)} denote the n-th associate Laguerre polynomial. 
Let α {\displaystyle \alpha } be arbitrary and real, ϵ {\displaystyle \epsilon } and ω {\displaystyle \omega } be positive and fixed, then for x = ( 4 n + 2 α + 2 ) cos 2 φ {\displaystyle x=(4n+2\alpha +2)\cos ^{2}\varphi } and ϵ ≤ φ ≤ π 2 − ϵ n − 1 / 2 {\displaystyle \epsilon \leq \varphi \leq {\tfrac {\pi }{2}}-\epsilon n^{-1/2}} e − x / 2 L n ( α ) ( x ) = ( − 1 ) n ( π sin φ ) − 1 / 2 x − α / 2 − 1 / 4 n α / 2 − 1 / 4 { sin [ ( n + α + 1 2 ) ( sin 2 φ − 2 φ ) + 3 π / 4 ] + ( n x ) − 1 / 2 O ( 1 ) } {\displaystyle e^{-x/2}L_{n}^{(\alpha )}(x)=(-1)^{n}(\pi \sin \varphi )^{-1/2}x^{-\alpha /2-1/4}n^{\alpha /2-1/4}{\big \{}\sin \left[\left(n+{\tfrac {\alpha +1}{2}}\right)(\sin 2\varphi -2\varphi )+3\pi /4\right]+(nx)^{-1/2}{\mathcal {O}}(1){\big \}}} for x = ( 4 n + 2 α + 2 ) cosh 2 φ {\displaystyle x=(4n+2\alpha +2)\cosh ^{2}\varphi } and ϵ ≤ φ ≤ ω {\displaystyle \epsilon \leq \varphi \leq \omega } e − x / 2 L n ( α ) ( x ) = 1 2 ( − 1 ) n ( π sinh φ ) − 1 / 2 x − α / 2 − 1 / 4 n α / 2 − 1 / 4 exp [ ( n + α + 1 2 ) ( 2 φ − sinh 2 φ ) ] { 1 + O ( n − 1 ) } {\displaystyle e^{-x/2}L_{n}^{(\alpha )}(x)={\tfrac {1}{2}}(-1)^{n}(\pi \sinh \varphi )^{-1/2}x^{-\alpha /2-1/4}n^{\alpha /2-1/4}\exp \left[\left(n+{\tfrac {\alpha +1}{2}}\right)(2\varphi -\sinh 2\varphi )\right]\{1+{\mathcal {O}}\left(n^{-1}\right)\}} for x = 4 n + 2 α + 2 − 2 ( 2 n / 3 ) 1 / 3 t {\displaystyle x=4n+2\alpha +2-2(2n/3)^{1/3}t} and t {\displaystyle t} complex and bounded e − x / 2 L n ( α ) ( x ) = ( − 1 ) n π − 1 2 − α − 1 / 3 3 1 / 3 n − 1 / 3 { A ( t ) + O ( n − 2 / 3 ) } {\displaystyle e^{-x/2}L_{n}^{(\alpha )}(x)=(-1)^{n}\pi ^{-1}2^{-\alpha -1/3}3^{1/3}n^{-1/3}{\bigg \{}A(t)+{\mathcal {O}}\left(n^{-2/3}\right){\bigg \}}} where A ( t ) = π Ai ( − 3 − 1 / 3 t ) {\displaystyle A(t)=\pi \operatorname {Ai} (-3^{-1/3}t)} and Ai {\displaystyle \operatorname {Ai} } denotes the Airy function. == Literature == Szegő, Gábor (1975). Orthogonal polynomials. Vol. 4. 
Providence, Rhode Island: American Mathematical Society. ISBN 0-8218-1023-5. == References == |
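The oscillatory Hermite expansion above (the cos φ case) can be probed numerically. A minimal sketch in pure Python, using the three-term recurrence for the physicists' Hermite polynomials; the choice n = 100, φ = π/3 and the 10% tolerance on the O(n⁻¹) error term are our assumptions:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1 if n >= 1 else h0

n, phi = 100, math.pi / 3
x = math.sqrt(2 * n + 1) * math.cos(phi)
lhs = math.exp(-x * x / 2) * hermite(n, x)
phase = (n / 2 + 0.25) * (math.sin(2 * phi) - 2 * phi) + 3 * math.pi / 4
rhs = (2 ** (n / 2 + 0.25) * math.sqrt(math.factorial(n))
       * (math.pi * n) ** -0.25 * math.sin(phi) ** -0.5 * math.sin(phase))
assert abs(lhs / rhs - 1) < 0.1   # leading term agrees to O(1/n)
```

At φ = π/2 (so x = 0) the formula reduces exactly to the known value H_{2m}(0) = ±(2m)!/m!, which is a useful independent check.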
Wikipedia:Planisphaerium#0 | The Planisphaerium is a work by Ptolemy. The title can be translated as "celestial plane" or "star chart". In this work Ptolemy explored the mathematics of mapping figures inscribed in the celestial sphere onto a plane by what is now known as stereographic projection. This method of projection maps circles on the sphere to circles (or, for circles through the point of projection, straight lines) in the plane. == Publication == Originally written in Ancient Greek, Planisphaerium was one of many scientific works which survived from antiquity in Arabic translation. One reason why Planisphaerium attracted interest was that stereographic projection was the mathematical basis of the plane astrolabe, an instrument which was widely used in the medieval Islamic world. In the 12th century the work was translated from Arabic into Latin by Herman of Carinthia, who also translated commentaries by Maslamah Ibn Ahmad al-Majriti. The oldest known translation is in Arabic, done by an unknown scholar as part of the Translation Movement in Baghdad. == Planisphere == The word planisphere (Latin planisphaerium) was originally used in the second century by Ptolemy to describe the representation of a spherical Earth by a map drawn in the plane. == Editions and translations == Commandino, Federico, ed. (1558). Ptolemaei Planisphaerium. Iordani Planisphaerium. Federici Commandini Vrbinatis in Ptolemaei Planisphaerium commentarius (in Latin). Venice: Paulus Manutius. == References == == External links == "Ptolemy on Astrolabes"
Wikipedia:Plato's number#0 | Plato's number is a number enigmatically referred to by Plato in his dialogue the Republic (8.546b). The text is notoriously difficult to understand and its corresponding translations do not allow an unambiguous interpretation. There is no real agreement either about the meaning or the value of the number. It also has been called the "geometrical number" or the "nuptial number" (the "number of the bride"). The passage in which Plato introduced the number has been discussed ever since it was written, with no consensus in the debate. As for the number's actual value, 216 is the most frequently proposed value for it, but 3,600 or 12,960,000 are also commonly considered. An incomplete list of authors who mention or discourse about it includes Aristotle and Proclus in antiquity; Ficino and Cardano during the Renaissance; and Zeller, Friedrich Schleiermacher, Paul Tannery and Friedrich Hultsch in the 19th century. New names are still being added. Further in the Republic (9.587b) another number is mentioned, known as the "Number of the Tyrant". == Plato's text == Great lexical and syntactical differences are easily noted between the many translations of the Republic.
Below is a typical text from a relatively recent translation of Republic 546b–c: Now for divine begettings there is a period comprehended by a perfect number, and for mortal by the first in which augmentations dominating and dominated when they have attained to three distances and four limits of the assimilating and the dissimilating, the waxing and the waning, render all things conversable and commensurable [546c] with one another, whereof a basal four-thirds wedded to the pempad yields two harmonies at the third augmentation, the one the product of equal factors taken one hundred times, the other of equal length one way but oblong,-one dimension of a hundred numbers determined by the rational diameters of the pempad lacking one in each case, or of the irrational lacking two; the other dimension of a hundred cubes of the triad. And this entire geometrical number is determinative of this thing, of better and inferior births. The 'entire geometrical number', mentioned shortly before the end of this text, is understood to be Plato's number. The introductory words mention (a period comprehended by) 'a perfect number' which is taken to be a reference to Plato's perfect year mentioned in his Timaeus (39d). The words are presented as uttered by the muses, so the whole passage is sometimes called the 'speech of the muses' or something similar. Indeed, Philip Melanchthon compared it to the proverbial obscurity of the Sibyls. Cicero famously described it as 'obscure' but others have seen some playfulness in its tone. == Interpretations == Shortly after Plato's time his meaning apparently did not cause puzzlement as Aristotle's casual remark attests. Half a millennium later, however, it was an enigma for the Neoplatonists, who had a somewhat mystic penchant and wrote frequently about it, proposing geometrical and numerical interpretations. 
Next, for nearly a thousand years, Plato's texts disappeared from Western Europe and it is only in the Renaissance that the enigma briefly resurfaced. During the 19th century, when classical scholars restored original texts, the problem reappeared. Schleiermacher interrupted his edition of Plato for a decade while attempting to make sense of the paragraph. Victor Cousin inserted a note in his French translation of Plato's works advising that the passage be skipped. In the early 20th century, scholarly findings suggested a Babylonian origin for the topic. Most interpreters argue that the value of Plato's number is 216 because it is the cube of 6, i.e. 6³ = 216, which is remarkable for also being the sum of the cubes of the Pythagorean triple (3, 4, 5): 3³ + 4³ + 5³ = 6³. Such considerations tend to ignore the second part of the text where some other numbers and their relations are described. The opinions tend to converge about their values being 480,000 and 270,000 but there is little agreement about the details. It has been noted that 6⁴ yields 1296 and that 48 × 27 = 36 × 36 = 1296. Instead of multiplication some interpretations consider the sum of these factors: 48 + 27 = 75. Other values that have been proposed include: 17,500 = 100 × 100 + 4800 + 2700, by Otto Weber (1862). 760,000 = 750,000 + 10,000 = 19 × 4 × 10000, 19 being obtained from (4/3 + 5) × 3 and being the number of years in the Metonic cycle. 8128 = 2⁶ × (2⁷ − 1), a perfect number proposed by Cardano. It is known that such numbers can be decomposed into the sum of consecutive odd cubes, so 8128 = 1³ + 3³ + 5³ + ... + 15³. 1728 = 12³ = 8 × 12 × 18, by Marsilio Ficino (1496). 5040 = 144 × 35 = (3 + 4 + 5)² × (2³ + 3³), by Jacob Friedrich Fries (1823). == See also == Euler's sum of powers conjecture == References == == Further reading == Donaldson J., "On Plato's Number", Proceedings of the Philological Society, vol.1, iss. 8, p.
81-90, April 7, 1843 Adam J., The nuptial number of Plato: its solution and significance, London: C.J. Clay and Sons, 1891. Laird, A.G., Plato's Geometrical Number and the Comment of Proclus, The Collegiate Press, George Banta Publishing Company, Menasha, Wisconsin. 1918 Diès A., Le Nombre de Platon: Essai d'exégèse et d'Histoire, Paris 1936 Allen M., Nuptial Arithmetic: Marsilio Ficino's Commentary on the Fatal Number in Book VIII of Plato's Republic, UCLA 1994 Dumbrill R., Four Mathematical Texts from the Temple Library of Nippur: a source for Plato's number, ARANE 1 (2009): 27-37 == External links == Five translations of Rep. 8.546 and 9.587 Weisstein, Eric W. "Plato's Numbers". MathWorld. Ramanujan And The Cubic Equation 3³ + 4³ + 5³ = 6³ MathWorld: Diophantine Equation, 3rd Powers Sum of Consecutive Cubes Equals a Cube
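The arithmetic behind the most frequently cited candidate values is easy to verify. A short Python sketch checking the facts quoted above:

```python
# 216 = 6^3 is also the sum of the cubes of the Pythagorean triple (3, 4, 5)
assert 3**3 + 4**3 + 5**3 == 6**3 == 216

# Cardano's 8128 = 2^6 * (2^7 - 1) is a perfect number (equal to the sum of
# its proper divisors) and decomposes as 1^3 + 3^3 + 5^3 + ... + 15^3
n = 2**6 * (2**7 - 1)
assert n == 8128
assert sum(d for d in range(1, n) if n % d == 0) == n
assert sum(k**3 for k in range(1, 16, 2)) == n

# Ficino's and Fries's values
assert 12**3 == 8 * 12 * 18 == 1728
assert 144 * 35 == (3 + 4 + 5)**2 * (2**3 + 3**3) == 5040
```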
Wikipedia:Plethysm#0 | In algebra, plethysm is an operation on symmetric functions introduced by Dudley E. Littlewood, who denoted it by {λ} ⊗ {μ}. The word "plethysm" for this operation (after the Greek word πληθυσμός meaning "multiplication") was introduced later by Littlewood (1950, p. 289, 1950b, p.274), who said that the name was suggested by M. L. Clark. If symmetric functions are identified with operations in lambda rings, then plethysm corresponds to composition of operations. == In representation theory == Let V be a vector space over the complex numbers, considered as a representation of the general linear group GL(V). Each Young diagram λ corresponds to a Schur functor Lλ(-) on the category of GL(V)-representations. Given two Young diagrams λ and μ, consider the decomposition of Lλ(Lμ(V)) into a direct sum of irreducible representations of the group. By the representation theory of the general linear group we know that each summand is isomorphic to L ν ( V ) {\displaystyle L_{\nu }(V)} for a Young diagram ν {\displaystyle \nu } . So for some nonnegative multiplicities a λ , μ , ν {\displaystyle a_{\lambda ,\mu ,\nu }} there is an isomorphism L λ ( L μ ( V ) ) = ⨁ ν L ν ( V ) ⊕ a λ , μ , ν . {\displaystyle L_{\lambda }(L_{\mu }(V))=\bigoplus _{\nu }L_{\nu }(V)^{\oplus a_{\lambda ,\mu ,\nu }}.} The problem of (outer) plethysm is to find an expression for the multiplicities a λ , μ , ν {\displaystyle a_{\lambda ,\mu ,\nu }} . This formulation is closely related to the classical question. The character of the GL(V)-representation Lλ(V) is a symmetric function in dim(V) variables, known as the Schur polynomial sλ corresponding to the Young diagram λ. Schur polynomials form a basis in the space of symmetric functions. Hence to understand the plethysm of two symmetric functions it would be enough to know their expressions in that basis and an expression for a plethysm of two arbitrary Schur polynomials {sλ}⊗{sμ} . 
The second piece of data is precisely the character of Lλ(Lμ(V)). == References == Littlewood, D. E. (1936), "Polynomial concomitants and invariant matrices", J. London Math. Soc., 11 (1): 49–55, doi:10.1112/jlms/s1-11.1.49, Zbl 0013.14602 Littlewood, D. E. (1944), "Invariant theory, tensors and group characters", Philosophical Transactions of the Royal Society A, 239 (807): 305–365, doi:10.1098/rsta.1944.0001, JSTOR 91389, MR 0010594 Littlewood, Dudley E. (1950), The theory of group characters and matrix representations of groups, AMS Chelsea Publishing, Providence, RI, ISBN 978-0-8218-4067-2, MR 0002127 Littlewood, D. E. (1950b), A University Algebra, Melbourne, London, Toronto: William Heinemann, Ltd., MR 0045079
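A small sanity check on the multiplicities is to compare dimensions. For the classical decomposition Sym²(Sym²(V)) = L₍₄₎(V) ⊕ L₍₂,₂₎(V), the dimensions on both sides can be computed in closed form (the formula d²(d² − 1)/12 for the (2,2)-Schur functor follows from the hook content formula). A minimal Python sketch, with helper names of our own:

```python
from math import comb

def dim_sym2(m):
    """Dimension of Sym^2 of an m-dimensional space."""
    return comb(m + 1, 2)

def dim_schur_4(d):
    """dim L_(4)(V) = Sym^4(V) for dim V = d."""
    return comb(d + 3, 4)

def dim_schur_22(d):
    """dim L_(2,2)(V) for dim V = d, via the hook content formula."""
    return d * d * (d * d - 1) // 12

# Sym^2(Sym^2(V)) = L_(4)(V) + L_(2,2)(V), as dimensions, for any dim V
for d in range(1, 10):
    assert dim_sym2(dim_sym2(d)) == dim_schur_4(d) + dim_schur_22(d)
```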
Wikipedia:Plethystic exponential#0 | In mathematics, the plethystic exponential is a certain operator defined on (formal) power series which, like the usual exponential function, translates addition into multiplication. This exponential operator appears naturally in the theory of symmetric functions, as a concise relation between the generating series for elementary, complete and power sums homogeneous symmetric polynomials in many variables. Its name comes from the operation called plethysm, defined in the context of so-called lambda rings. In combinatorics, the plethystic exponential is a generating function for many well studied sequences of integers, polynomials or power series, such as the number of integer partitions. It is also an important technique in the enumerative combinatorics of unlabelled graphs, and many other combinatorial objects. In geometry and topology, the plethystic exponential of a certain geometric/topologic invariant of a space, determines the corresponding invariant of its symmetric products. The inverse operator of the plethystic exponential is the plethystic logarithm. == Definition, main properties and basic examples == Let R [ [ x ] ] {\displaystyle R[[x]]} be a ring of formal power series in the variable x {\displaystyle x} , with coefficients in a commutative ring R {\displaystyle R} . Denote by R 0 [ [ x ] ] ⊂ R [ [ x ] ] {\displaystyle R^{0}[[x]]\subset R[[x]]} the ideal consisting of power series without constant term. Then, given f ( x ) ∈ R 0 [ [ x ] ] {\displaystyle f(x)\in R^{0}[[x]]} , its plethystic exponential PE [ f ] {\displaystyle {\text{PE}}[f]} is given by PE [ f ] ( x ) = exp ( ∑ k = 1 ∞ f ( x k ) k ) {\displaystyle {\text{PE}}[f](x)=\exp \left(\sum _{k=1}^{\infty }{\frac {f(x^{k})}{k}}\right)} where exp ( ⋅ ) {\displaystyle \exp(\cdot )} is the usual exponential function. 
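Truncated to a fixed degree, the definition above can be evaluated directly with exact rational arithmetic: first form the series Σ f(xᵏ)/k, then exponentiate it with the standard recurrence for the exponential of a power series. A minimal Python sketch (the function name is ours), which also recovers the generating function for integer partitions, PE[x/(1−x)]:

```python
from fractions import Fraction

def pe(f, N):
    """Plethystic exponential of a series f (list of coefficients, f[0] = 0),
    truncated at degree N: PE[f] = exp(sum_{k>=1} f(x^k)/k)."""
    # g(x) = sum_{k>=1} f(x^k)/k, truncated at degree N
    g = [Fraction(0)] * (N + 1)
    for k in range(1, N + 1):
        for d in range(1, len(f)):
            if k * d <= N:
                g[k * d] += Fraction(f[d], k)
    # b = exp(g) via the recurrence n*b_n = sum_{j=1}^{n} j*g_j*b_{n-j}
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(j * g[j] * b[n - j] for j in range(1, n + 1)) / n
    return b

# PE[x] = 1/(1-x): all coefficients equal 1
assert pe([0, 1], 5) == [1, 1, 1, 1, 1, 1]
# PE[x/(1-x)] generates the partition numbers p(n)
assert pe([0] + [1] * 10, 10) == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```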
It is readily verified that (writing simply PE [ f ] {\displaystyle {\text{PE}}[f]} when the variable is understood): PE [ 0 ] = 1 PE [ f + g ] = PE [ f ] PE [ g ] PE [ − f ] = PE [ f ] − 1 {\displaystyle {\begin{aligned}[ll]{\text{PE}}[0]&=1\\{\text{PE}}[f+g]&={\text{PE}}[f]{\text{PE}}[g]\\{\text{PE}}[-f]&={\text{PE}}[f]^{-1}\end{aligned}}} Some basic examples are: PE [ x n ] = 1 1 − x n , n ∈ N PE [ x 1 − x ] = 1 + ∑ n ≥ 1 p ( n ) x n {\displaystyle {\begin{aligned}[ll]{\text{PE}}[x^{n}]&={\frac {1}{1-x^{n}}},n\in \mathbb {N} \\{\text{PE}}\left[{\frac {x}{1-x}}\right]&=1+\sum _{n\geq 1}p(n)x^{n}\end{aligned}}} In this last example, p ( n ) {\displaystyle p(n)} is number of partitions of n ∈ N {\displaystyle n\in \mathbb {N} } . The plethystic exponential can be also defined for power series rings in many variables. == Product-sum formula == The plethystic exponential can be used to provide innumerous product-sum identities. This is a consequence of a product formula for plethystic exponentials themselves. If f ( x ) = ∑ k = 1 ∞ a k x k {\displaystyle f(x)=\sum _{k=1}^{\infty }a_{k}x^{k}} denotes a formal power series with real coefficients a k {\displaystyle a_{k}} , then it is not difficult to show that: PE [ f ] ( x ) = ∏ k = 1 ∞ ( 1 − x k ) − a k {\displaystyle {\text{PE}}[f](x)=\prod _{k=1}^{\infty }(1-x^{k})^{-a_{k}}} The analogous product expression also holds in the many variables case. One particularly interesting case is its relation to integer partitions and to the cycle index of the symmetric group. == Relation with symmetric functions == Working with variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} , denote by h k {\displaystyle h_{k}} the complete homogeneous symmetric polynomial, that is the sum of all monomials of degree k in the variables x i {\displaystyle x_{i}} , and by e k {\displaystyle e_{k}} the elementary symmetric polynomials. 
Then, the h k {\displaystyle h_{k}} and the e k {\displaystyle e_{k}} are related to the power sum polynomials: p k = x 1 k + ⋯ + x n k {\displaystyle p_{k}=x_{1}^{k}+\cdots +x_{n}^{k}} by Newton's identities, that can succinctly be written, using plethystic exponentials, as: ∑ n = 0 ∞ h n t n = PE [ p 1 t ] = PE [ x 1 t + ⋯ + x n t ] {\displaystyle \sum _{n=0}^{\infty }h_{n}\,t^{n}={\text{PE}}[p_{1}\,t]={\text{PE}}[x_{1}t+\cdots +x_{n}t]} ∑ n = 0 ∞ ( − 1 ) n e n t n = PE [ − p 1 t ] = PE [ − x 1 t − ⋯ − x n t ] {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}e_{n}\,t^{n}={\text{PE}}[-p_{1}\,t]={\text{PE}}[-x_{1}t-\cdots -x_{n}t]} == Macdonald's formula for symmetric products == Let X be a finite CW complex, of dimension d, with Poincaré polynomial P X ( t ) = ∑ k = 0 d b k ( X ) t k {\displaystyle P_{X}(t)=\sum _{k=0}^{d}b_{k}(X)\,t^{k}} where b k ( X ) {\displaystyle b_{k}(X)} is its kth Betti number. Then the Poincaré polynomial of the nth symmetric product of X, denoted Sym n ( X ) {\displaystyle \operatorname {Sym} ^{n}(X)} , is obtained from the series expansion: PE [ P X ( − t ) x ] = ∏ k = 0 d ( 1 − t k x ) ( − 1 ) k + 1 b k ( X ) = ∑ n ≥ 0 P Sym n ( X ) ( − t ) x n {\displaystyle {\text{PE}}[P_{X}(-t)\,x]=\prod _{k=0}^{d}\left(1-t^{k}x\right)^{(-1)^{k+1}b_{k}(X)}=\sum _{n\geq 0}P_{\operatorname {Sym} ^{n}(X)}(-t)\,x^{n}} == The plethystic programme in physics == In a series of articles, a group of theoretical physicists, including Bo Feng, Amihay Hanany and Yang-Hui He, proposed a programme for systematically counting single and multi-trace gauge invariant operators of supersymmetric gauge theories. In the case of quiver gauge theories of D-branes probing Calabi–Yau singularities, this count is codified in the plethystic exponential of the Hilbert series of the singularity. == See also == Plethystic logarithm == References == |
Wikipedia:Plethystic substitution#0 | Plethystic substitution is a shorthand notation for a common kind of substitution in the algebra of symmetric functions and that of symmetric polynomials. It is essentially basic substitution of variables, but allows for a change in the number of variables used. == Definition == The formal definition of plethystic substitution relies on the fact that the ring of symmetric functions Λ R ( x 1 , x 2 , … ) {\displaystyle \Lambda _{R}(x_{1},x_{2},\ldots )} is generated as an R-algebra by the power sum symmetric functions p k = x 1 k + x 2 k + x 3 k + ⋯ . {\displaystyle p_{k}=x_{1}^{k}+x_{2}^{k}+x_{3}^{k}+\cdots .} For any symmetric function f {\displaystyle f} and any formal sum of monomials A = a 1 + a 2 + ⋯ {\displaystyle A=a_{1}+a_{2}+\cdots } , the plethystic substitution f[A] is the formal series obtained by making the substitutions p k ⟶ a 1 k + a 2 k + a 3 k + ⋯ {\displaystyle p_{k}\longrightarrow a_{1}^{k}+a_{2}^{k}+a_{3}^{k}+\cdots } in the decomposition of f {\displaystyle f} as a polynomial in the pk's. == Examples == If X {\displaystyle X} denotes the formal sum X = x 1 + x 2 + ⋯ {\displaystyle X=x_{1}+x_{2}+\cdots } , then f [ X ] = f ( x 1 , x 2 , … ) {\displaystyle f[X]=f(x_{1},x_{2},\ldots )} . One can write 1 / ( 1 − t ) {\displaystyle 1/(1-t)} to denote the formal sum 1 + t + t 2 + t 3 + ⋯ {\displaystyle 1+t+t^{2}+t^{3}+\cdots } , and so the plethystic substitution f [ 1 / ( 1 − t ) ] {\displaystyle f[1/(1-t)]} is simply the result of setting x i = t i − 1 {\displaystyle x_{i}=t^{i-1}} for each i. That is, f [ 1 1 − t ] = f ( 1 , t , t 2 , t 3 , … ) {\displaystyle f\left[{\frac {1}{1-t}}\right]=f(1,t,t^{2},t^{3},\ldots )} . 
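As a concrete illustration of the definition, take f = e₂ = (p₁² − p₂)/2 and the finite sum A = 1 + t + t²; then e₂[A] must equal e₂(1, t, t²) = t + t² + t³. A minimal Python sketch, with polynomials in t stored as exponent-to-coefficient dicts (helper names are ours):

```python
# p_k[A] for A = 1 + t + t^2: each monomial is raised to the k-th power,
# giving 1 + t^k + t^(2k)
def p_of_A(k):
    return {0: 1, k: 1, 2 * k: 1}

def mul(a, b):
    r = {}
    for i, x in a.items():
        for j, y in b.items():
            r[i + j] = r.get(i + j, 0) + x * y
    return r

def add_scaled(a, b, c):  # a + c*b, dropping zero coefficients
    r = dict(a)
    for j, y in b.items():
        r[j] = r.get(j, 0) + c * y
    return {k: v for k, v in r.items() if v != 0}

p1, p2 = p_of_A(1), p_of_A(2)
e2_of_A = add_scaled(mul(p1, p1), p2, -1)       # p_1[A]^2 - p_2[A]
e2_of_A = {k: v // 2 for k, v in e2_of_A.items()}
# e_2(1, t, t^2) = t + t^2 + t^3
assert e2_of_A == {1: 1, 2: 1, 3: 1}
```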
Plethystic substitution can also be used to change the number of variables: if X = x 1 + x 2 + ⋯ , x n {\displaystyle X=x_{1}+x_{2}+\cdots ,x_{n}} , then f [ X ] = f ( x 1 , … , x n ) {\displaystyle f[X]=f(x_{1},\ldots ,x_{n})} is the corresponding symmetric function in the ring Λ R ( x 1 , … , x n ) {\displaystyle \Lambda _{R}(x_{1},\ldots ,x_{n})} of symmetric functions in n variables. Several other common substitutions are listed below. In all of the following examples, X = x 1 + x 2 + ⋯ {\displaystyle X=x_{1}+x_{2}+\cdots } and Y = y 1 + y 2 + ⋯ {\displaystyle Y=y_{1}+y_{2}+\cdots } are formal sums. If f {\displaystyle f} is a homogeneous symmetric function of degree d {\displaystyle d} , then f [ t X ] = t d f ( x 1 , x 2 , … ) {\displaystyle f[tX]=t^{d}f(x_{1},x_{2},\ldots )} If f {\displaystyle f} is a homogeneous symmetric function of degree d {\displaystyle d} , then f [ − X ] = ( − 1 ) d ω f ( x 1 , x 2 , … ) {\displaystyle f[-X]=(-1)^{d}\omega f(x_{1},x_{2},\ldots )} , where ω {\displaystyle \omega } is the well-known involution on symmetric functions that sends a Schur function s λ {\displaystyle s_{\lambda }} to the conjugate Schur function s λ ∗ {\displaystyle s_{\lambda ^{\ast }}} . The substitution S : f ↦ f [ − X ] {\displaystyle S:f\mapsto f[-X]} is the antipode for the Hopf algebra structure on the Ring of symmetric functions. p n [ X + Y ] = p n [ X ] + p n [ Y ] {\displaystyle p_{n}[X+Y]=p_{n}[X]+p_{n}[Y]} The map Δ : f ↦ f [ X + Y ] {\displaystyle \Delta :f\mapsto f[X+Y]} is the coproduct for the Hopf algebra structure on the ring of symmetric functions. h n [ X ( 1 − t ) ] {\displaystyle h_{n}\left[X(1-t)\right]} is the alternating Frobenius series for the exterior algebra of the defining representation of the symmetric group, where h n {\displaystyle h_{n}} denotes the complete homogeneous symmetric function of degree n {\displaystyle n} . 
h n [ X / ( 1 − t ) ] {\displaystyle h_{n}\left[X/(1-t)\right]} is the Frobenius series for the symmetric algebra of the defining representation of the symmetric group. == External links == Combinatorics, Symmetric Functions, and Hilbert Schemes (Haiman, 2002) == References == M. Haiman, Combinatorics, Symmetric Functions, and Hilbert Schemes, Current Developments in Mathematics 2002, no. 1 (2002), pp. 39–111. |
Wikipedia:Plimpton 322#0 | Plimpton 322 is a Babylonian clay tablet, believed to have been written around 1800 BC, that contains a mathematical table written in cuneiform script. Each row of the table relates to a Pythagorean triple, that is, a triple of integers ( s , ℓ , d ) {\displaystyle (s,\ell ,d)} that satisfies the Pythagorean theorem, s 2 + ℓ 2 = d 2 {\displaystyle s^{2}+\ell ^{2}=d^{2}} , the rule that equates the sum of the squares of the legs of a right triangle to the square of the hypotenuse. The era in which Plimpton 322 was written was roughly 13 to 15 centuries prior to the era in which the major Greek discoveries in geometry were made. At the time that Otto Neugebauer and Abraham Sachs first realized the mathematical significance of the tablet in the 1940s, a few Old Babylonian tablets making use of the Pythagorean rule were already known. In addition to providing further evidence that Mesopotamian scribes knew and used the rule, Plimpton 322 strongly suggested that they had a systematic method for generating Pythagorean triples as some of the triples are very large and unlikely to have been discovered by ad hoc methods. Row 4 of the table, for example, relates to the triple (12709,13500,18541). The table exclusively lists triples ( s , ℓ , d ) {\displaystyle (s,\ell ,d)} in which the longer leg, ℓ {\displaystyle \ell } , (which is not given on the tablet) is a regular number, that is a number whose prime factors are 2, 3, or 5. As a consequence, the ratios s ℓ {\displaystyle {\tfrac {s}{\ell }}} and d ℓ {\displaystyle {\tfrac {d}{\ell }}} of the other two sides to the long leg have exact, terminating representations in the Mesopotamians' sexagesimal (base-60) number system. 
The first column most likely contains the square of the latter ratio, d 2 ℓ 2 {\displaystyle {\tfrac {d^{2}}{\ell ^{2}}}} , and is in descending order, starting with a number close to 2, the value for the isosceles right triangle with angles 45 ∘ {\displaystyle 45^{\circ }} , 45 ∘ {\displaystyle 45^{\circ }} , 90 ∘ {\displaystyle 90^{\circ }} , and ending with the ratio for a triangle with angles roughly 32 ∘ {\displaystyle 32^{\circ }} , 58 ∘ {\displaystyle 58^{\circ }} , 90 ∘ {\displaystyle 90^{\circ }} . The Babylonians, however, are believed not to have made use of the concept of measured angle. Columns 2 and 3 are most commonly interpreted as containing the short side and hypotenuse. Due to some errors in the table and damage to the tablet, variant interpretations, still related to right triangles, are possible. Neugebauer and Sachs saw Plimpton 322 as a study of solutions to the Pythagorean equation in whole numbers, and suggested a number-theoretic motivation. They proposed that the table was compiled by means of a rule similar to the one used by Euclid in Elements. Many later scholars have favored a different proposal, in which a number x {\displaystyle x} , greater than 1, with regular numerator and denominator, is used to form the quantity 1 2 ( x + 1 x ) {\displaystyle {\tfrac {1}{2}}\left(x+{\tfrac {1}{x}}\right)} . This quantity has a finite sexagesimal representation and has the key property that if it is squared and 1 subtracted, the result has a rational square root also with a finite sexagesimal representation. This square root, in fact, equals 1 2 ( x − 1 x ) {\displaystyle {\tfrac {1}{2}}\left(x-{\tfrac {1}{x}}\right)} . The result is that ( 1 2 ( x − 1 x ) , 1 , 1 2 ( x + 1 x ) ) {\displaystyle \left({\frac {1}{2}}\left(x-{\frac {1}{x}}\right),1,{\frac {1}{2}}\left(x+{\frac {1}{x}}\right)\right)} is a rational Pythagorean triple, from which an integer Pythagorean triple can be obtained by rescaling. 
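The reciprocal-pair construction just described is easy to run. A minimal Python sketch (the function name is ours), which reproduces Rows 1 and 4 of the tablet from the regular ratios x = 12/5 and x = 125/54:

```python
from fractions import Fraction
from math import lcm

def row(x):
    """From a rational x > 1, form the rational triple
    (1/2 (x - 1/x), 1, 1/2 (x + 1/x)) and rescale it to integers (s, l, d)."""
    s, d = (x - 1 / x) / 2, (x + 1 / x) / 2
    l = lcm(s.denominator, d.denominator)
    return s.numerator * (l // s.denominator), l, d.numerator * (l // d.denominator)

# x = 12/5 gives Row 1: (119, 120, 169); x = 125/54 gives Row 4
assert row(Fraction(12, 5)) == (119, 120, 169)
assert row(Fraction(125, 54)) == (12709, 13500, 18541)
s, l, d = row(Fraction(12, 5))
assert s * s + l * l == d * d
```

In both cases the long leg (120 = 2³·3·5 and 13500 = 2²·3³·5³) is a regular number, as the tablet requires.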
The column headings on the tablet, as well as the existence of tablets YBC 6967, MS 3052, and MS 3971 that contain related calculations, provide support for this proposal. The purpose of Plimpton 322 is not known. Most current scholars consider a number-theoretic motivation to be anachronistic, given what is known of Babylonian mathematics as a whole. The proposal that Plimpton 322 is a trigonometric table is ruled out for similar reasons, given that the Babylonians appear not to have had the concept of angle measure. Various proposals have been made, including that the tablet had some practical purpose in architecture or surveying, that it was a geometrical investigation motivated by mathematical interest, or that it was a compilation of parameters to enable a teacher to set problems for students. With regard to the latter proposal, Creighton Buck, reporting on never-published work of D. L. Voils, raises the possibility that the tablet may have only an incidental relation to right triangles, its primary purpose being to help set problems relating to reciprocal pairs, akin to modern-day quadratic-equation problems. Other scholars, such as Jöran Friberg and Eleanor Robson, who also favor the teacher's aid interpretation, state that the intended problems probably did relate to right triangles. == Provenance and dating == Plimpton 322 is partly broken, approximately 13 cm wide, 9 cm tall, and 2 cm thick. New York publisher George Arthur Plimpton purchased the tablet from an archaeological dealer, Edgar J. Banks, in about 1922, and bequeathed it with the rest of his collection to Columbia University in the mid-1930s. According to Banks, the tablet came from Senkereh, a site in southern Iraq corresponding to the ancient city of Larsa.
The tablet is believed to have been written around 1800 BC (using the middle chronology), based in part on the style of handwriting used for its cuneiform script: Robson (2002) writes that this handwriting "is typical of documents from southern Iraq of 4000–3500 years ago." More specifically, based on formatting similarities with other tablets from Larsa that have explicit dates written on them, Plimpton 322 might well be from the period 1822–1784 BC. Robson points out that Plimpton 322 was written in the same format as other administrative, rather than mathematical, documents of the period. == Content == The main content of Plimpton 322 is a table of numbers, with four columns and fifteen rows, in Babylonian sexagesimal notation. The fourth column is just a row number, in order from 1 to 15. The second and third columns are completely visible in the surviving tablet. However, the edge of the first column has been broken off, and there are two consistent extrapolations for what the missing digits could be; these interpretations differ only in whether or not each number starts with an additional digit equal to 1. With the differing extrapolations shown in parentheses, damaged portions of the first and fourth columns whose content is surmised shown in italics, and six presumed errors shown in boldface along with the generally proposed corrections in square brackets underneath, these numbers are Two possible alternatives for the correction in Row 15 are shown: either 53 in the third column should be replaced with twice its value, 1 46, or 56 in the second column should be replaced with half its value, 28. It is possible that additional columns were present in the broken-off part of the tablet to the left of these columns. Babylonian sexagesimal notation did not specify the power of 60 multiplying each number, which makes the interpretation of these numbers ambiguous. The numbers in the second and third columns are generally taken to be integers. 
The numbers in the first column can only be understood as fractions, and their values all lie between 1 and 2 (assuming the initial 1 is present—they lie between 0 and 1 if it is absent). These fractions are exact, not truncations or rounded-off approximations. The decimal translation of the tablet under these assumptions is shown below. Most of the exact sexagesimal fractions in the first column do not have terminating decimal expansions and have been rounded to seven decimal places. *As before, an alternative possible correction to Row 15 has 28 in the second column and 53 in the third column. The entries in the second and third columns of Row 11, unlike those of all other rows except possibly Row 15, contain a common factor. It is possible that 45 and 1 15 are to be understood as 3/4 and 5/4, which is consistent with the standard (0.75, 1, 1.25) scaling of the familiar (3, 4, 5) right triangle in Babylonian mathematics. In each row, the number in the second column can be interpreted as the shorter side s of a right triangle, and the number in the third column can be interpreted as the hypotenuse d of the triangle. In all cases, the longer side l is also an integer, making s and d two elements of a Pythagorean triple. The number in the first column is either the fraction s²/l² (if the "1" is not included) or d²/l² = 1 + s²/l² (if the "1" is included). In every case, the long side l is a regular number, that is, an integer divisor of a power of 60 or, equivalently, a product of powers of 2, 3, and 5. It is for this reason that the numbers in the first column are exact, as dividing an integer by a regular number produces a terminating sexagesimal number.
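These relationships can be checked with exact rational arithmetic. A minimal sketch in modern notation, using Row 1's generally accepted values (short side 119, long side 120, hypotenuse 169) and its Column 1 reading 1;59,00,15, with the initial 1 assumed present:

```python
from fractions import Fraction

def sexagesimal_fraction(digits):
    """Interpret sexagesimal digits d0;d1,d2,... as d0 + d1/60 + d2/60^2 + ..."""
    return sum(Fraction(d, 60**i) for i, d in enumerate(digits))

s, l, d = 119, 120, 169           # Row 1: short side, long side, hypotenuse
assert s**2 + l**2 == d**2        # (119, 120, 169) is a Pythagorean triple

# Column 1 (with the initial 1) reads 1;59,00,15 and equals (d/l)^2 exactly
col1 = sexagesimal_fraction([1, 59, 0, 15])
assert col1 == Fraction(d, l)**2

# l = 120 is regular: it divides a power of 60, so d^2/l^2 terminates in base 60
assert 60**2 % l == 0
```

The exact match between the sexagesimal digits and the rational value d²/l² illustrates why only regular values of l produce the terminating fractions seen on the tablet.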
For instance, line 1 of the table can be interpreted as describing a triangle with short side 119 and hypotenuse 169, implying long side √(169² − 119²) = 120, which is a regular number (2³·3·5). The number in Column 1 is either (169/120)² or (119/120)². === Column headings === Each column has a heading, written in the Akkadian language. Some words are Sumerian logograms, which would have been understood by readers as standing for Akkadian words. These include ÍB.SI8, for Akkadian mithartum ("square"), MU.BI.IM, for Akkadian šumšu ("its line"), and SAG, for Akkadian pūtum ("width"). Each number in the fourth column is preceded by the Sumerogram KI, which, according to Neugebauer & Sachs (1945), "gives them the character of ordinal numbers." In the sexagesimal table above, italicized words and parts of words represent portions of the text that are unreadable due to damage to the tablet or illegibility, and that have been reconstructed by modern scholars. The terms ÍB.SI8 and takiltum have been left untranslated as there is ongoing debate about their precise meaning. The headings of Columns 2 and 3 could be translated as "square of the width" and "square of the diagonal", but Robson (2001) (pp. 173–174) argues that the term ÍB.SI8 can refer either to the area of the square or the side of the square, and that in this case it should be understood as "'square-side' or perhaps 'square root'". Similarly, Britton, Proust & Shnider (2011) (p. 526) observe that the term often appears in problems where completing the square is used to solve what would now be understood as quadratic equations, in which context it refers to the side of the completed square, but that it might also serve to indicate "that a linear dimension or line segment is meant". Neugebauer & Sachs (1945) (pp.
35, 39), on the other hand, exhibit instances where the term refers to outcomes of a wide variety of different mathematical operations and propose the translation "'solving number of the width (or the diagonal).'" Similarly, Friberg (1981) (p. 300) proposes the translation "root". In Column 1, the first parts of both lines of the heading are damaged. Neugebauer & Sachs (1945) reconstructed the first word as takilti (a form of takiltum), a reading that has been accepted by most subsequent researchers. The heading was generally regarded as untranslatable until Robson (2001) proposed inserting a 1 in the broken-off part of line 2 and succeeded in deciphering the illegible final word, producing the reading given in the table above. Based on a detailed linguistic analysis, Robson proposes translating takiltum as "holding square". Britton, Proust & Shnider (2011) survey the relatively few known occurrences of the word in Old Babylonian mathematics. While they note that, in almost all cases, it refers to the linear dimension of the auxiliary square added to a figure in the process of completing the square, and is the quantity subtracted in the last step of solving a quadratic, they agree with Robson that in this instance it is to be understood as referring to the area of a square. Friberg (2007), on the other hand, proposes that in the broken-off portion of the heading takiltum may have been preceded by a-ša ("area"). There is now widespread agreement that the heading describes the relationship between the squares on the width (short side) and diagonal of a rectangle with length (long side) 1: subtracting ("tearing out") area 1 from the square on the diagonal leaves the area of the square on the width. === Errors === As indicated in the table above, most scholars believe that the tablet contains six errors, and, with the exception of the two possible corrections in Row 15, there is widespread agreement as to what the correct values should be.
There is less agreement about how the errors occurred and what they imply with regard to the method of the tablet's computation. A summary of the errors follows. The errors in Row 2, Column 1 (neglecting to leave spaces between 50 and 6 for absent 1s and 10s) and Row 9, Column 2 (writing 9 for 8) are universally regarded as minor errors in copying from a work tablet (or possibly from an earlier copy of the table). The error in Row 8, Column 1 (replacing the two sexagesimal digits 45 14 by their sum, 59) appears not to have been noticed in some of the early papers on the tablet. It has sometimes been regarded (for example in Robson (2001)) as a simple mistake made by the scribe in the process of copying from a work tablet. As discussed in Britton, Proust & Shnider (2011), however, a number of scholars have proposed that this error is much more plausibly explained as an error in the calculation leading up to the number, for example, the scribe's overlooking a medial zero (blank space representing a zero digit) when performing a multiplication. This explanation of the error is compatible with both of the main proposals for the method of construction of the table. (See below.) The remaining three errors have implications for the manner in which the tablet was computed. The number 7 12 1 in Row 13, Column 2, is the square of the correct value, 2 41. Assuming either that the lengths in Column 2 were computed by taking the square root of the area of the corresponding square, or that the length and the area were computed together, this error might be explained either as neglecting to take the square root, or copying the wrong number from a work tablet. If the error in Row 15 is understood as having written 56 instead of 28 in Column 2, then the error can be explained as a result of improper application of the trailing part algorithm, which is required if the table was computed by means of reciprocal pairs as described below. 
This error amounts to applying an iterative procedure for removing regular factors common to the numbers in Columns 2 and 3 an improper number of times in one of the columns. The number in Row 2, Column 3 has no obvious relationship to the correct number, and all explanations of how this number was obtained postulate multiple errors. Bruins (1957) observed that 3 12 01 might have been a simple miscopying of 3 13. If this were the case, then the explanation for the incorrect number 3 13 is similar to the explanation of the error in Row 15. An exception to the general consensus is Friberg (2007), where, in a departure from the earlier analysis by the same author (Friberg (1981)), it is hypothesized that the numbers in Row 15 are not in error, but were written as intended, and that the only error in Row 2, Column 3 was miswriting 3 13 as 3 12 01. Under this hypothesis, it is necessary to reinterpret Columns 2 and 3 as "the factor-reduced cores of the front and diagonal". The factor-reduced core of a number is the number with perfect-square regular factors removed; computing the factor-reduced core was part of the process of calculating square roots in Old Babylonian mathematics. According to Friberg, "it was never the intention of the author of Plimpton 322 to reduce his series of normalized diagonal triples (with length equal to 1 in each triple) to a corresponding series of primitive diagonal triples (with the front, length, and the diagonal equal to integers without common factors)." == Construction of the table == Scholars still differ on how these numbers were generated. Buck (1980) and Robson (2001) both identify two main proposals for the method of construction of the table: the method of generating pairs, proposed in Neugebauer & Sachs (1945), and the method of reciprocal pairs, proposed by Bruins and elaborated on by Voils, Schmidt (1980), and Friberg.
=== Generating pairs === To use modern terminology, if p and q are natural numbers such that p > q, then (p² − q², 2pq, p² + q²) forms a Pythagorean triple. The triple is primitive, that is, the three triangle sides have no common factor, if p and q are coprime and not both odd. Neugebauer and Sachs propose the tablet was generated by choosing p and q to be coprime regular numbers (but both may be odd—see Row 15) and computing d = p² + q², s = p² − q², and l = 2pq (so that l is also a regular number). For example, line 1 would be generated by setting p = 12 and q = 5. Buck and Robson both note that the presence of Column 1 is mysterious in this proposal, as it plays no role in the construction, and that the proposal does not explain why the rows of the table are ordered as they are, rather than, say, according to the value of p or q, which, under this hypothesis, might have been listed on columns to the left in the broken-off portion of the tablet. Robson also argues that the proposal does not explain how the errors in the table could have plausibly arisen and is not in keeping with the mathematical culture of the time. === Reciprocal pairs === In the reciprocal-pair proposal, the starting point is a single regular sexagesimal fraction x along with its reciprocal, 1/x. "Regular sexagesimal fraction" means that x is a product of (possibly negative) powers of 2, 3, and 5. The quantities (x − 1/x)/2, 1, and (x + 1/x)/2 then form what would now be called a rational Pythagorean triple. Moreover, the three sides all have finite sexagesimal representations.
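The two constructions can be compared directly. A minimal sketch in modern rational arithmetic, using Row 1's parameters (p = 12, q = 5, equivalently x = p/q = 12/5):

```python
from fractions import Fraction

# Generating-pair method (Neugebauer & Sachs): p = 12, q = 5
p, q = 12, 5
s, l, d = p**2 - q**2, 2*p*q, p**2 + q**2
assert (s, l, d) == (119, 120, 169)      # Row 1 of the tablet
assert s**2 + l**2 == d**2

# Reciprocal-pair method: x = p/q gives the same triangle, scaled down by l
x = Fraction(p, q)
short = (x - 1/x) / 2
diag = (x + 1/x) / 2
assert short**2 + 1 == diag**2           # rational triple (short, 1, diag)
assert (short * l, diag * l) == (s, d)   # rescaling recovers the integer triple
```

As the final assertion shows, the two proposals differ in procedure but not in the triples they produce, which is why other kinds of evidence are needed to distinguish them.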
Advocates of this proposal point out that regular reciprocal pairs (x, 1/x) show up in a different problem from roughly the same time and place as Plimpton 322, namely the problem of finding the sides of a rectangle of area 1 whose long side exceeds its short side by a given length c (which nowadays might be computed as the solutions to the quadratic equation x − 1/x = c). Robson (2002) analyzes the tablet YBC 6967, in which such a problem is solved by calculating a sequence of intermediate values v1 = c/2, v2 = v1², v3 = 1 + v2, and v4 = √v3, from which one can calculate x = v4 + v1 and 1/x = v4 − v1. While the need to compute the square root of v3 will, in general, result in answers that do not have finite sexagesimal representations, the problem on YBC 6967 was set up—meaning the value of c was suitably chosen—to give a nice answer. This is, in fact, the origin of the specification above that x be a regular sexagesimal fraction: choosing x in this way ensures that both x and 1/x have finite sexagesimal representations. To engineer a problem with a nice answer, the problem setter would simply need to choose such an x and let the initial datum c equal x − 1/x. As a side effect, this produces a rational Pythagorean triple, with legs v1 and 1 and hypotenuse v4. Robson notes that the problem on YBC 6967 actually solves the equation x − 1 00/x = x − 60/x = c, which entails replacing the expression for v3 above with v3 = 60 + v2. The side effect of obtaining a rational triple is thereby lost, as the sides become v1, √60, and v4. In this proposal it must be assumed that the Babylonians were familiar with both variants of the problem.
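The YBC 6967 procedure can be sketched step by step. A minimal sketch, using a hypothetical regular value x = 12/5 (so c = x − 1/x) rather than the tablet's own data:

```python
from fractions import Fraction
from math import sqrt, isclose

x = Fraction(12, 5)        # a regular sexagesimal fraction (hypothetical choice)
c = x - 1/x                # the given excess of the long side over the short side

v1 = c / 2
v2 = v1**2
v3 = 1 + v2
v4 = sqrt(v3)              # "nice" because c was chosen to make v3 a rational square

assert isclose(v4 + v1, float(x))     # recovers x
assert isclose(v4 - v1, float(1/x))   # recovers 1/x

# side effect: (v1, 1, v4) is a rational Pythagorean triple
assert isclose(v1**2 + 1, v4**2)
```

With this choice of x, v1 = 119/120 and v4 = 169/120, i.e. the Row 1 triple before rescaling.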
Robson argues that the columns of Plimpton 322 can be interpreted as: v3 = ((x + 1/x)/2)² = 1 + (c/2)² in the first column, a·v1 = a·(x − 1/x)/2 for a suitable multiplier a in the second column, and a·v4 = a·(x + 1/x)/2 in the third column. In this interpretation, x and 1/x (or possibly v1 and v4) would have appeared on the tablet in the broken-off portion to the left of the first column. The presence of Column 1 is therefore explained as an intermediate step in the calculation, and the ordering of rows is by descending values of x (or v1). The multiplier a used to compute the values in Columns 2 and 3, which can be thought of as a rescaling of the side lengths, arises from application of the "trailing part algorithm", in which both values are repeatedly multiplied by the reciprocal of any regular factor common to the last sexagesimal digits of both, until no such common factor remains. As discussed above, the errors in the tablet all have natural explanations in the reciprocal-pair proposal. On the other hand, Robson points out that the role of Columns 2 and 3 and the need for the multiplier a remain unexplained by this proposal, and suggests that the goal of the tablet's author was to provide parameters not for quadratic problems of the type solved on YBC 6967, but rather "for some sort of right-triangle problems." She also notes that the method used to generate the table and the use for which it was intended need not be the same. Strong additional support for the idea that the numbers on the tablet were generated using reciprocal pairs comes from two tablets, MS 3052 and MS 3971, from the Schøyen Collection. Jöran Friberg translated and analyzed the two tablets and discovered that both contain examples of the calculation of the diagonal and side lengths of a rectangle using reciprocal pairs as the starting point. The two tablets are both Old Babylonian, of approximately the same age as Plimpton 322, and both are believed to come from Uruk, near Larsa.
Further analysis of the two tablets was carried out in Britton, Proust & Shnider (2011). MS 3971 contains a list of five problems, the third of which begins with "In order for you to see five diagonals" and concludes with "five diagonals". The given data for each of the five parts of the problem consist of a reciprocal pair. For each part, the lengths of both the diagonal and the width (short side) of a rectangle are computed. The length (long side) is not stated, but the calculation implies that it is taken to be 1. In modern terms, the calculation proceeds as follows: given x and 1/x, first compute (x + 1/x)/2, the diagonal. Then compute √([(x + 1/x)/2]² − 1), the width. Due to damage to the part of the tablet containing the first of the five parts, the statement of the problem for this part, apart from traces of the initial data, and the solution have been lost. The other four parts are, for the most part, intact, and all contain very similar text. The reason for taking the diagonal to be half the sum of the reciprocal pair is not stated in the intact text. The computation of the width is equivalent to (x − 1/x)/2, but this more direct method of computation has not been used, the rule relating the square of the diagonal to the sum of the squares of the sides having been preferred. The text of the second problem of MS 3052 has also been badly damaged, but what remains is structured similarly to the five parts of MS 3971, Problem 3. The problem contains a figure, which, according to Friberg, is likely a "rectangle without any diagonals". Britton, Proust & Shnider (2011) emphasize that the preserved portions of the text explicitly state the length to be 1 and explicitly compute the 1 that gets subtracted from the square of the diagonal in the process of calculating the width as the square of the length.
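The MS 3971 procedure can be sketched in the same modern terms. A minimal sketch, using a hypothetical regular reciprocal pair x = 16/15, 1/x = 15/16 (not one of the tablet's own pairs):

```python
from fractions import Fraction
from math import sqrt, isclose

x = Fraction(16, 15)             # hypothetical regular reciprocal pair
diagonal = (x + 1/x) / 2         # the tablet's first step
width = sqrt(diagonal**2 - 1)    # via the rule d^2 = w^2 + l^2 with length l = 1

# equivalent to the more direct (x - 1/x)/2, which the tablet does not use
assert isclose(width, (x - 1/x) / 2)
assert diagonal == Fraction(481, 480) and isclose(width, 31/480)
```

For a regular starting pair, the square root comes out exact (here 31/480), just as in the five parts of MS 3971, Problem 3.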
The initial data and computed width and diagonal for the six problems on the two tablets are given in the table below. The parameters of MS 3971 § 3a are uncertain due to damage to the tablet. The parameters of the problem from MS 3052 correspond to a rescaling of the standard (3,4,5) right triangle, which appears as Row 11 of Plimpton 322. None of the parameters in the problems from MS 3971 match any of the rows of Plimpton 322. As discussed below, all of the rows of Plimpton 322 have x ≥ 9/5, while all the problems on MS 3971 have x < 9/5. The parameters of MS 3971 do, however, all correspond to rows of de Solla Price's proposed extension of the table of Plimpton 322, also discussed below. The role of the reciprocal pair is different in the problem on YBC 6967 than on MS 3052 and MS 3971 (and by extension, on Plimpton 322). In the problem of YBC 6967, the members of the reciprocal pair are the lengths of the sides of a rectangle of area 1. The geometric meaning of x and 1/x is not stated in the surviving text of the problems on MS 3052 and MS 3971. The goal appears to have been to apply a known procedure for producing rectangles with finite sexagesimal width and diagonal. The trailing part algorithm was not used to rescale the side lengths in these problems. === Comparison of the proposals === The quantity x in the reciprocal-pair proposal corresponds to the ratio p/q in the generating-pair proposal. Indeed, while the two proposals differ in calculation method, there is little mathematical difference between the results, as both produce the same triples, apart from an overall factor of 2 in the case where p and q are both odd. (Unfortunately, the only place where this occurs in the tablet is in Row 15, which contains an error and cannot therefore be used to distinguish between the proposals.)
Proponents of the reciprocal-pair proposal differ on whether x was computed from an underlying p and q, with only the combinations p/q and q/p used in tablet computations, or whether x was obtained directly from other sources, such as reciprocal tables. One difficulty with the latter hypothesis is that some of the needed values of x or 1/x are four-place sexagesimal numbers, and no four-place reciprocal tables are known. Neugebauer and Sachs had, in fact, noted the possibility of using reciprocal pairs in their original work, and rejected it for this reason. Robson, however, argues that known sources and computational methods of the Old Babylonian period can account for all values of x used. === Selection of pairs === Neugebauer and Sachs note that the triangle dimensions in the tablet range from those of a nearly isosceles right triangle (with short leg, 119, nearly equal to long leg, 120) to those of a right triangle with acute angles close to 30° and 60°, and that the angle decreases in a fairly uniform fashion in steps of approximately 1°. They suggest that the pairs p, q were chosen deliberately with this goal in mind. It was observed by de Solla Price (1964), working within the generating-pair framework, that every row of the table is generated by a q that satisfies 1 ≤ q < 60, that is, that q is always a single-digit sexagesimal number. The ratio p/q takes its greatest value, 12/5 = 2.4, in Row 1 of the table, and is therefore always less than √2 + 1 ≈ 2.414, a condition which guarantees that p² − q² is the short leg and 2pq is the long leg of the triangle and which, in modern terms, implies that the angle opposite the leg of length p² − q² is less than 45°. The ratio is least in Row 15, where p/q = 9/5, for an angle of about 31.9°.
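These constraints can be checked by brute force. A minimal sketch enumerating coprime regular pairs with a single-digit sexagesimal q and 9/5 ≤ p/q ≤ 12/5 (the bounds just described); it finds exactly the fifteen pairs corresponding to the rows of the tablet:

```python
from math import gcd, atan2, degrees

def is_regular(n):
    """A regular number divides a power of 60: its only prime factors are 2, 3, 5."""
    for f in (2, 3, 5):
        while n % f == 0:
            n //= f
    return n == 1

pairs = [(p, q)
         for q in range(1, 60) if is_regular(q)
         for p in range(q + 1, 3 * q) if is_regular(p)
         if gcd(p, q) == 1 and 9 * q <= 5 * p <= 12 * q]

assert len(pairs) == 15                        # one pair per row of the tablet
assert (12, 5) in pairs and (9, 5) in pairs    # Rows 1 and 15

# angle opposite the short leg p^2 - q^2, in degrees, for each pair
angles = sorted(degrees(atan2(p*p - q*q, 2*p*q)) for p, q in pairs)
assert 31.8 < angles[0] < 32.0 and 44.7 < angles[-1] < 45.0
```

The computed angles run from about 31.9° up to about 44.8°, matching the range Neugebauer and Sachs observed.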
Furthermore, there are exactly 15 regular ratios between 9/5 and 12/5 inclusive for which q is a single-digit sexagesimal number, and these are in one-to-one correspondence with the rows of the tablet. He also points out that the even spacing of the numbers might not have been by design: it could also have arisen merely from the density of regular-number ratios in the range of numbers considered in the table. It was argued by de Solla Price that the natural lower bound for the ratio would be 1, which corresponds to an angle of 0°. He found that, maintaining the requirement that q be a single-digit sexagesimal number, there are 23 pairs in addition to the ones represented by the tablet, for a total of 38 pairs. He notes that the vertical scoring between columns on the tablet has been continued onto the back, suggesting that the scribe might have intended to extend the table. He claims that the available space would correctly accommodate 23 additional rows. Proponents of the reciprocal-pair proposal have also advocated this scheme. Robson (2001) does not directly address this proposal, but does agree that the table was not "full". She notes that in the reciprocal-pair proposal, every x represented in the tablet is at most a four-place sexagesimal number with at most a four-place reciprocal, and that the total number of places in x and 1/x together is never more than 7. If these properties are taken as requirements, there are exactly three values of x "missing" from the tablet, which she argues might have been omitted because they are unappealing in various ways. She admits the "shockingly ad hoc" nature of this scheme, which serves mainly as a rhetorical device for criticizing all attempts at divining the selection criteria of the tablet's author. == Purpose and authorship == Otto E. 
Neugebauer (1957) argued for a number-theoretic interpretation, but also believed that the entries in the table were the result of a deliberate selection process aimed at achieving the fairly regular decrease of the values in Column 1 within some specified bounds. Buck (1980) and Robson (2002) both mention the existence of a trigonometric explanation, which Robson attributes to the authors of various general histories and unpublished works, but which may derive from the observation in Neugebauer & Sachs (1945) that the values of the first column can be interpreted as the squared secant or tangent (depending on the missing digit) of the angle opposite the short side of the right triangle described by each row, and the rows are sorted by these angles in roughly one-degree increments. In other words, if one takes the number in the first column, discounting the (1), derives its square root, and then divides the number in Column 2 by the result, one obtains the length of the long side of the triangle. Consequently, the square root of the number (minus the one) in the first column is what we would today call the tangent of the angle opposite the short side. If the (1) is included, the square root of that number is the secant. In contrast with these earlier explanations of the tablet, Robson (2002) claims that historical, cultural and linguistic evidence all reveal the tablet to be more likely constructed from "a list of regular reciprocal pairs." Robson argues on linguistic grounds that the trigonometric theory is "conceptually anachronistic": it depends on too many other ideas not present in the record of Babylonian mathematics from that time. In 2003, the MAA awarded Robson the Lester R. Ford Award for her work, stating that it is "unlikely that the author of Plimpton 322 was either a professional or amateur mathematician. More likely he seems to have been a teacher and Plimpton 322 a set of exercises."
Robson takes an approach that in modern terms would be characterized as algebraic, though she describes it in concrete geometric terms and argues that the Babylonians would also have interpreted this approach geometrically. Thus, the tablet can be interpreted as giving a sequence of worked-out exercises. It makes use of mathematical methods typical of scribal schools of the time, and it is written in a document format used by administrators in that period. Therefore, Robson argues that the author was probably a scribe, a bureaucrat in Larsa. The repetitive mathematical set-up of the tablet, and of similar tablets such as BM 80209, would have been useful in allowing a teacher to set problems in the same format as each other but with different data. == See also == IM 67118 Moscow Mathematical Papyrus Rhind Mathematical Papyrus YBC 7289 == Notes == == References == == Further reading == Abdulaziz, Abdulrahman Ali (2010), The Plimpton 322 Tablet and the Babylonian Method of Generating Pythagorean Triples, arXiv:1004.0025, Bibcode:2010arXiv1004.0025A Casselman, Bill (2003), The Babylonian tablet Plimpton 322, University of British Columbia Kirby, Laurence (2011), Plimpton 322: The Ancient Roots of Modern Mathematics (Half-hour video documentary), Baruch College, City University of New York Kleb, Jens (2023), "270 valid triples below, between and above the lines 1-15 of Plimpton 322", CDLN 2023:5, Cuneiform Digital Library Notes, 2023-02-22, ISSN 1546-6566 === Exhibitions === "Before Pythagoras: The Culture of Old Babylonian Mathematics", Institute for the Study of the Ancient World, New York University, November 12 - December 17, 2010. Includes photo and description of Plimpton 322. Rothstein, Edward (November 27, 2010). "Masters of Math, From Old Babylon". New York Times. Retrieved 28 November 2010. Review of "Before Pythagoras" exhibit, mentioning controversy over Plimpton 322.
"Jewels in Her Crown: Treasures from the Special Collections of Columbia’s Libraries", Rare Book & Manuscript Library, Columbia University, October 8, 2004 - January 28, 2005. Photo and description of Item 158: Plimpton Cuneiform 322. == External links == Cuneiform Digital Library Initiative (CDLI) catalog, including high-quality digital images: Plimpton 322, CDLI wiki article YBC 6967 MS 3052 MS 3971
Wikipedia:Plotting algorithms for the Mandelbrot set#0 | There are many programs and algorithms used to plot the Mandelbrot set and other fractals, some of which are described in fractal-generating software. These programs use a variety of algorithms to determine the color of individual pixels efficiently. == Escape time algorithm == The simplest algorithm for generating a representation of the Mandelbrot set is known as the "escape time" algorithm. A repeating calculation is performed for each x, y point in the plot area, and based on the behavior of that calculation, a color is chosen for that pixel. === Unoptimized naïve escape time algorithm === In both the unoptimized and optimized escape time algorithms, the x and y locations of each point are used as starting values in a repeating, or iterating, calculation (described in detail below). The result of each iteration is used as the starting values for the next. The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next x, y point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very close to but not in the set, it may take hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The programmer or user must choose how many iterations – or how much "depth" – they wish to examine. The higher the maximal number of iterations, the more detail and subtlety emerge in the final image, but the longer it will take to calculate the fractal image. Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient exceeds 2.
A more computationally complex method that detects escapes sooner is to compute the distance from the origin using the Pythagorean theorem, i.e., to determine the absolute value, or modulus, of the complex number. If this value exceeds 2, or equivalently, when the sum of the squares of the real and imaginary parts exceeds 4, the point has reached escape. More computationally intensive rendering variations include the Buddhabrot method, which finds escaping points and plots their iterated coordinates. The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition. To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let c be the midpoint of that pixel. We now iterate the critical point 0 under P_c, checking at each step whether the orbit point has modulus larger than 2. When this is the case, we know that c does not belong to the Mandelbrot set, and we color our pixel according to the number of iterations used to find out. Otherwise, we keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black. In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations.
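As an illustration of that simplification, a minimal sketch in Python, whose built-in complex type handles the arithmetic (the function name and sample points are illustrative; the iteration cap matches the pseudocode that follows):

```python
def escape_time(c, max_iteration=1000):
    """Iterate z -> z*z + c from z = 0; return the iteration count at escape,
    or max_iteration if |z| never exceeds 2 (c is then presumed in the set)."""
    z = 0
    for iteration in range(max_iteration):
        if abs(z) > 2:
            return iteration
        z = z*z + c
    return max_iteration

assert escape_time(0) == 1000    # 0 is in the set: the orbit stays at 0
assert escape_time(1) < 10       # 1 escapes quickly: 0, 1, 2, 5, 26, ...
assert escape_time(-1) == 1000   # -1 cycles between -1 and 0, never escaping
```

A full plot would loop over pixels, map each (Px, Py) into the region [−2.00, 0.47] × [−1.12, 1.12], and index a palette by the returned count.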
for each pixel (Px, Py) on the screen do
    x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.00, 0.47))
    y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1.12, 1.12))
    x := 0.0
    y := 0.0
    iteration := 0
    max_iteration := 1000
    while (x*x + y*y ≤ 2*2 AND iteration < max_iteration) do
        xtemp := x*x - y*y + x0
        y := 2*x*y + y0
        x := xtemp
        iteration := iteration + 1
    color := palette[iteration]
    plot(Px, Py, color)

Here, relating the pseudocode to c, z and P_c: z = x + iy and z² = x² − y² + 2ixy, while c = x0 + iy0. And so, as can be seen in the pseudocode in the computation of x and y: x = Re(z² + c) = x² − y² + x0 and y = Im(z² + c) = 2xy + y0. To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). One practical way, without slowing down calculations, is to use the number of executed iterations as an entry to a palette initialized at startup. If the color table has, for instance, 500 entries, then the color selection is n mod 500, where n is the number of iterations. === Optimized escape time algorithms === The code in the previous section uses an unoptimized inner while loop for clarity. In the unoptimized version, one must perform five multiplications per iteration.
To reduce the number of multiplications, the following code for the inner while loop may be used instead: x2:= 0 y2:= 0 w:= 0 while (x2 + y2 ≤ 4 and iteration < max_iteration) do x:= x2 - y2 + x0 y:= w - x2 - y2 + y0 x2:= x * x y2:= y * y w:= (x + y) * (x + y) iteration:= iteration + 1 The above code works via some algebraic simplification of the complex multiplication: ( i y + x ) 2 = − y 2 + 2 i y x + x 2 = x 2 − y 2 + 2 i y x {\displaystyle {\begin{aligned}(iy+x)^{2}&=-y^{2}+2iyx+x^{2}\\&=x^{2}-y^{2}+2iyx\end{aligned}}} Using the above identity, the number of multiplications can be reduced to three instead of five. The above inner while loop can be further optimized by expanding w to w = x 2 + 2 x y + y 2 {\displaystyle w=x^{2}+2xy+y^{2}} Substituting w into y = w − x 2 − y 2 + y 0 {\displaystyle y=w-x^{2}-y^{2}+y_{0}} yields y = 2 x y + y 0 {\displaystyle y=2xy+y_{0}} and hence calculating w is no longer needed. The further optimized pseudocode for the above is: x:= 0 y:= 0 x2:= 0 y2:= 0 while (x2 + y2 ≤ 4 and iteration < max_iteration) do y:= 2 * x * y + y0 x:= x2 - y2 + x0 x2:= x * x y2:= y * y iteration:= iteration + 1 Note that in the above pseudocode, 2 x y {\displaystyle 2xy} seems to increase the number of multiplications by 1, but since 2 is the multiplier the code can be optimized via ( x + x ) y {\displaystyle (x+x)y} . === Derivative Bailout or "derbail" === It is common to check the magnitude of z after every iteration, but there is another method we can use that can converge faster and reveal structure within Julia sets.
Instead of checking if the magnitude of z after every iteration is larger than a given value, we can instead check if the sum of the derivatives of z up to the current iteration step is larger than a given bailout value: z n ′ := ( 2 ∗ z n − 1 ′ ∗ z n − 1 ) + 1 {\displaystyle z_{n}^{\prime }:=(2*z_{n-1}^{\prime }*z_{n-1})+1} Very large bailout ("dbail") values can enhance the detail of the structures revealed by this method. It is possible to find the derivatives automatically by leveraging automatic differentiation and computing the iterations using dual numbers. Rendering fractals with the derbail technique can often require a large number of samples per pixel, as precision issues produce fine detail that can result in noisy images even with samples in the hundreds or thousands. Python code: == Coloring algorithms == In addition to plotting the set, a variety of algorithms have been developed to efficiently color the set in an aesthetically pleasing way and to show structures of the data (scientific visualisation). === Histogram coloring === A more complex coloring method involves using a histogram which pairs each pixel with said pixel's maximum iteration count before escape/bailout. This method will equally distribute colors to the same overall area, and, importantly, is independent of the maximum number of iterations chosen. This algorithm has four passes. The first pass involves calculating the iteration counts associated with each pixel (but without any pixels being plotted). These are stored in an array: IterationCounts[x][y], where x and y are the x and y coordinates of said pixel on the screen respectively. The first step of the second pass is to create an array of size n, which is the maximum iteration count: NumIterationsPerPixel. Next, one must iterate over the array of pixel-iteration count pairs, IterationCounts[][], and retrieve each pixel's saved iteration count, i, via e.g. i = IterationCounts[x][y].
After each pixel's iteration count i is retrieved, it is necessary to index NumIterationsPerPixel by i and increment the indexed value (which is initially zero), e.g. NumIterationsPerPixel[i] = NumIterationsPerPixel[i] + 1. for (x = 0; x < width; x++) do for (y = 0; y < height; y++) do i:= IterationCounts[x][y] NumIterationsPerPixel[i]++ The third pass iterates through the NumIterationsPerPixel array and adds up all the stored values, saving them in total. The entry at each array index represents the number of pixels that reached that iteration count before bailout. total:= 0 for (i = 0; i < max_iterations; i++) do total += NumIterationsPerPixel[i] After this, the fourth pass begins: for each pixel, its iteration count i is read from the IterationCounts array, and the values NumIterationsPerPixel[1] through NumIterationsPerPixel[i] are summed. This sum is then normalized by dividing it by the total value computed earlier. hue[][]:= 0.0 for (x = 0; x < width; x++) do for (y = 0; y < height; y++) do iteration:= IterationCounts[x][y] for (i = 0; i <= iteration; i++) do hue[x][y] += NumIterationsPerPixel[i] / total /* Must be floating-point division. */ ... color = palette[hue[x][y]] ... Finally, the computed value is used, e.g. as an index to a color palette. This method may be combined with the smooth coloring method below for more aesthetically pleasing images. === Continuous (smooth) coloring === The escape time algorithm is popular for its simplicity. However, it creates bands of color, which, as a type of aliasing, can detract from an image's aesthetic value. This can be improved using an algorithm known as "normalized iteration count", which provides a smooth transition of colors between iterations. The algorithm associates a real number ν {\displaystyle \nu } with each value of z by using the connection of the iteration number with the potential function.
This function is given by ϕ ( z ) = lim n → ∞ ( log | z n | / P n ) , {\displaystyle \phi (z)=\lim _{n\to \infty }(\log |z_{n}|/P^{n}),} where zn is the value after n iterations and P is the power to which z is raised in the Mandelbrot set equation (z_{n+1} = z_{n}^{P} + c, where P is generally 2). If we choose a large bailout radius N (e.g., 10^100), we have that log | z n | / P n = log ( N ) / P ν ( z ) {\displaystyle \log |z_{n}|/P^{n}=\log(N)/P^{\nu (z)}} for some real number ν ( z ) {\displaystyle \nu (z)} , and this is ν ( z ) = n − log P ( log | z n | / log ( N ) ) , {\displaystyle \nu (z)=n-\log _{P}(\log |z_{n}|/\log(N)),} and as n is the first iteration number such that |zn| > N, the number we subtract from n is in the interval [0, 1). For the coloring we must have a cyclic scale of colors (constructed mathematically, for instance) containing H colors numbered from 0 to H − 1 (H = 500, for instance). We multiply the real number ν ( z ) {\displaystyle \nu (z)} by a fixed real number determining the density of the colors in the picture, take the integral part of this number modulo H, and use it to look up the corresponding color in the color table. For example, modifying the above pseudocode and also using the concept of linear interpolation would yield for each pixel (Px, Py) on the screen do x0:= scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1)) y0:= scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1)) x:= 0.0 y:= 0.0 iteration:= 0 max_iteration:= 1000 // Here N = 2^8 is chosen as a reasonable bailout radius. while x*x + y*y ≤ (1 << 16) and iteration < max_iteration do xtemp:= x*x - y*y + x0 y:= 2*x*y + y0 x:= xtemp iteration:= iteration + 1 // Used to avoid floating point issues with points inside the set. if iteration < max_iteration then // sqrt of inner term removed using log simplification rules. log_zn:= log(x*x + y*y) / 2 nu:= log(log_zn / log(2)) / log(2) // Rearranging the potential function.
// Dividing log_zn by log(2) instead of log(N = 1<<8) // because we want the entire palette to range from the // center to radius 2, NOT our bailout radius. iteration:= iteration + 1 - nu color1:= palette[floor(iteration)] color2:= palette[floor(iteration) + 1] // iteration % 1 = fractional part of iteration. color:= linear_interpolate(color1, color2, iteration % 1) plot(Px, Py, color) === Exponentially mapped and cyclic iterations === Typically, when we render a fractal, the range of where colors from a given palette appear along the fractal is static. If we desire to offset the location from the border of the fractal, or adjust our palette to cycle in a specific way, there are a few simple changes we can make when taking the final iteration count before passing it along to choose an item from our palette. When we have obtained the iteration count, we can make the range of colors non-linear. Raising a value normalized to the range [0,1] to a power S maps a linear range to an exponential range, which in our case can nudge the appearance of colors along the outside of the fractal, and allow us to bring out other colors, or pull the entire palette in closer to the border. v = ( ( i / m a x i ) S N ) 1.5 mod N {\displaystyle v=((\mathbf {i} /max_{i})^{\mathbf {S} }\mathbf {N} )^{1.5}{\bmod {\mathbf {N} }}} where i is our iteration count after bailout, max_i is our iteration limit, S is the exponent we are raising iters to, and N is the number of items in our palette. This scales the iteration count non-linearly and scales the palette to cycle approximately proportionally to the zoom. We can then plug v into whatever algorithm we desire for generating a color. === Passing iterations into a color directly === One thing we may want to consider is avoiding having to deal with a palette or color blending at all. There are actually a handful of methods we can leverage to generate smooth, consistent coloring by constructing the color on the spot.
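Tying the last two sections together, here is a brief Python sketch of the normalized iteration count and the exponential cyclic mapping; the function names and default parameters are our own illustrative choices:

```python
import math

def smooth_iteration(x0, y0, max_iteration=1000):
    """Escape-time loop with bailout radius N = 2^8 (so |z|^2 is
    compared against 2^16) and the normalized-iteration correction."""
    x = y = 0.0
    iteration = 0
    while x * x + y * y <= (1 << 16) and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    if iteration == max_iteration:
        return float(max_iteration)      # treated as inside the set
    log_zn = math.log(x * x + y * y) / 2     # log |z_n|
    nu = math.log(log_zn / math.log(2)) / math.log(2)
    return iteration + 1 - nu

def exp_cyclic(i, max_i, S=2.0, N=256):
    """v = ((i / max_i)^S * N)^1.5 mod N, as in the formula above."""
    return ((i / max_i) ** S * N) ** 1.5 % N
```

The float returned by `smooth_iteration` can then be fed either into palette interpolation (using its integer and fractional parts) or into `exp_cyclic` before constructing a color directly.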
Here, v refers to a normalized exponentially mapped cyclic iteration count, and f(v) refers to the sRGB transfer function. A naive method for generating a color in this way is by directly scaling v to 255 and passing it into RGB as such rgb = [v * 255, v * 255, v * 255] One flaw with this is that RGB is non-linear due to gamma; consider linear sRGB instead. Going from RGB to sRGB uses an inverse companding function on the channels. This makes the gamma linear, and allows us to properly sum the colors for sampling. srgb = [f(v) * 255, f(v) * 255, f(v) * 255] === HSV coloring === HSV coloring can be accomplished by mapping the iteration count from [0, max_iter) to [0, 360), taking it to the power of 1.5, and then taking the result modulo 360. We can then use the exponentially mapped iteration count for the value channel and return hsv = [powf((i / max) * 360, 1.5) % 360, 100, (i / max) * 100] This method applies to HSL as well, except we pass a saturation of 50% instead. hsl = [powf((i / max) * 360, 1.5) % 360, 50, (i / max) * 100] === LCH coloring === One of the most perceptually uniform coloring methods involves passing the processed iteration count into LCH. If we utilize the exponentially mapped and cyclic method above, we can take the result of that into the Luma and Chroma channels. We can also exponentially map the iteration count and scale it to 360, and pass this modulo 360 into the hue. x ∈ Q + s i = ( i / m a x i ) x v = 1.0 − cos 2 ( π s i ) L = 75 − ( 75 v ) C = 28 + ( 75 − 75 v ) H = ( 360 s i ) 1.5 mod 360 {\textstyle {\begin{array}{lcl}x&\in &\mathbb {Q+} \\s_{i}&=&(i/max_{i})^{\mathbf {x} }\\v&=&1.0-\cos ^{2}(\pi s_{i})\\L&=&75-(75v)\\C&=&28+(75-75v)\\H&=&(360s_{i})^{1.5}{\bmod {360}}\end{array}}} One issue we wish to avoid here is out-of-gamut colors, which can be handled with a little trick based on how in-gamut colors change relative to luma and chroma: as we increase luma, we must decrease chroma to stay within gamut.
s = iters/max_i; v = 1.0 - powf(cos(pi * s), 2.0); LCH = [75 - (75 * v), 28 + (75 - (75 * v)), powf(360 * s, 1.5) % 360]; == Advanced plotting algorithms == In addition to the simple and slow escape time algorithms already discussed, there are many other more advanced algorithms that can be used to speed up the plotting process. === Distance estimates === One can compute the distance from point c (in the exterior or interior) to the nearest point on the boundary of the Mandelbrot set. ==== Exterior distance estimation ==== The proof of the connectedness of the Mandelbrot set in fact gives a formula for the uniformizing map of the complement of M {\displaystyle M} (and the derivative of this map). By the Koebe quarter theorem, one can then estimate the distance between the midpoint of our pixel and the Mandelbrot set up to a factor of 4. In other words, provided that the maximal number of iterations is sufficiently high, one obtains a picture of the Mandelbrot set with the following properties: Every pixel that contains a point of the Mandelbrot set is colored black. Every pixel that is colored black is close to the Mandelbrot set.
The upper bound b for the distance estimate of a pixel c (a complex number) from the Mandelbrot set is given by b = lim n → ∞ 2 ⋅ | P c n ( c ) | ⋅ ln | P c n ( c ) | | ∂ ∂ c P c n ( c ) | , {\displaystyle b=\lim _{n\to \infty }{\frac {2\cdot |P_{c}^{n}(c)|\cdot \ln |P_{c}^{n}(c)|}{|{\frac {\partial }{\partial {c}}}P_{c}^{n}(c)|}},} where P c ( z ) {\displaystyle P_{c}(z)\,} stands for the complex quadratic polynomial, P c n ( c ) {\displaystyle P_{c}^{n}(c)} stands for n iterations of P c ( z ) → z {\displaystyle P_{c}(z)\to z} or z 2 + c → z {\displaystyle z^{2}+c\to z} , starting with z = c {\displaystyle z=c} : P c 0 ( c ) = c {\displaystyle P_{c}^{0}(c)=c} , P c n + 1 ( c ) = P c n ( c ) 2 + c {\displaystyle P_{c}^{n+1}(c)=P_{c}^{n}(c)^{2}+c} ; and ∂ ∂ c P c n ( c ) {\displaystyle {\frac {\partial }{\partial {c}}}P_{c}^{n}(c)} is the derivative of P c n ( c ) {\displaystyle P_{c}^{n}(c)} with respect to c. This derivative can be found by starting with ∂ ∂ c P c 0 ( c ) = 1 {\displaystyle {\frac {\partial }{\partial {c}}}P_{c}^{0}(c)=1} and then ∂ ∂ c P c n + 1 ( c ) = 2 ⋅ P c n ( c ) ⋅ ∂ ∂ c P c n ( c ) + 1 {\displaystyle {\frac {\partial }{\partial {c}}}P_{c}^{n+1}(c)=2\cdot {}P_{c}^{n}(c)\cdot {\frac {\partial }{\partial {c}}}P_{c}^{n}(c)+1} . This can easily be verified by using the chain rule for the derivative. The idea behind this formula is simple: when the equipotential lines for the potential function ϕ ( z ) {\displaystyle \phi (z)} lie close together, the number | ϕ ′ ( z ) | {\displaystyle |\phi '(z)|} is large, and conversely, so the equipotential lines for the function ϕ ( z ) / | ϕ ′ ( z ) | {\displaystyle \phi (z)/|\phi '(z)|} should lie approximately regularly spaced. From a mathematician's point of view, this formula only works in the limit as n goes to infinity, but very reasonable estimates can be found with just a few additional iterations after the main loop exits.
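The recurrences above can be sketched in Python; the function name, escape radius, and iteration cap are our own illustrative choices:

```python
import math

def exterior_distance(c, max_iteration=100, escape_radius=1e10):
    """Sketch of the exterior distance estimate: iterate
    z = P_c^n(c) together with dz = d/dc P_c^n(c), then apply
    b = 2 |z| ln|z| / |dz|.  Returns None when the orbit does not
    escape within max_iteration (point presumed inside, or undecided)."""
    z = c          # P_c^0(c) = c
    dz = 1 + 0j    # d/dc P_c^0(c) = 1
    for _ in range(max_iteration):
        if abs(z) > escape_radius:
            break
        dz = 2 * z * dz + 1    # derivative recurrence from above
        z = z * z + c
    else:
        return None
    r = abs(z)
    return 2 * r * math.log(r) / abs(dz)
```

A large escape radius is used because, as noted above, the formula is exact only in the limit; a few extra iterations past the usual bailout sharpen the estimate considerably.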
Once b is found, by the Koebe 1/4-theorem, we know that there is no point of the Mandelbrot set with distance from c smaller than b/4. The distance estimation can be used for drawing the boundary of the Mandelbrot set; see the article Julia set. In this approach, pixels that are sufficiently close to M are drawn using a different color. This creates drawings where the thin "filaments" of the Mandelbrot set can be easily seen. This technique is used to good effect in the B&W images of Mandelbrot sets in the books "The Beauty of Fractals" and "The Science of Fractal Images". Here is a sample B&W image rendered using distance estimates: Distance estimation can also be used to render 3D images of Mandelbrot and Julia sets. ==== Interior distance estimation ==== It is also possible to estimate the distance from a point in a hyperbolic component (i.e., a point whose orbit converges to an attracting periodic cycle) to the boundary of the Mandelbrot set. The upper bound b for the distance estimate is given by b = 1 − | ∂ ∂ z P c p ( z 0 ) | 2 | ∂ ∂ c ∂ ∂ z P c p ( z 0 ) + ∂ ∂ z ∂ ∂ z P c p ( z 0 ) ∂ ∂ c P c p ( z 0 ) 1 − ∂ ∂ z P c p ( z 0 ) | , {\displaystyle b={\frac {1-\left|{{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})}\right|^{2}}{\left|{{\frac {\partial }{\partial {c}}}{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})+{\frac {\partial }{\partial {z}}}{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0}){\frac {{\frac {\partial }{\partial {c}}}P_{c}^{p}(z_{0})}{1-{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})}}}\right|}},} where p {\displaystyle p} is the period, c {\displaystyle c} is the point to be estimated, P c ( z ) {\displaystyle P_{c}(z)} is the complex quadratic polynomial P c ( z ) = z 2 + c {\displaystyle P_{c}(z)=z^{2}+c} P c p ( z 0 ) {\displaystyle P_{c}^{p}(z_{0})} is the p {\displaystyle p} -fold iteration of P c ( z ) → z {\displaystyle P_{c}(z)\to z} , starting with P c 0 ( z ) = z 0 {\displaystyle P_{c}^{0}(z)=z_{0}} z 0 {\displaystyle z_{0}} is any of the p {\displaystyle p} points that make the
attractor of the iterations of P c ( z ) → z {\displaystyle P_{c}(z)\to z} starting with P c 0 ( z ) = c {\displaystyle P_{c}^{0}(z)=c} ; z 0 {\displaystyle z_{0}} satisfies z 0 = P c p ( z 0 ) {\displaystyle z_{0}=P_{c}^{p}(z_{0})} , ∂ ∂ c ∂ ∂ z P c p ( z 0 ) {\displaystyle {\frac {\partial }{\partial {c}}}{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})} , ∂ ∂ z ∂ ∂ z P c p ( z 0 ) {\displaystyle {\frac {\partial }{\partial {z}}}{\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})} , ∂ ∂ c P c p ( z 0 ) {\displaystyle {\frac {\partial }{\partial {c}}}P_{c}^{p}(z_{0})} and ∂ ∂ z P c p ( z 0 ) {\displaystyle {\frac {\partial }{\partial {z}}}P_{c}^{p}(z_{0})} are various derivatives of P c p ( z ) {\displaystyle P_{c}^{p}(z)} , evaluated at z 0 {\displaystyle z_{0}} . Analogous to the exterior case, once b is found, we know that all points within the distance of b/4 from c are inside the Mandelbrot set. There are two practical problems with the interior distance estimate: first, we need to find z 0 {\displaystyle z_{0}} precisely, and second, we need to find p {\displaystyle p} precisely. The problem with z 0 {\displaystyle z_{0}} is that the convergence to z 0 {\displaystyle z_{0}} by iterating P c ( z ) {\displaystyle P_{c}(z)} requires, theoretically, an infinite number of operations. The problem with any given p {\displaystyle p} is that, sometimes, due to rounding errors, a period is falsely identified as an integer multiple of the real period (e.g., a period of 86 is detected, while the real period is only 43 = 86/2). In such a case, the distance is overestimated, i.e., the reported radius could contain points outside the Mandelbrot set. === Cardioid / bulb checking === One way to improve calculations is to find out beforehand whether the given point lies within the cardioid or in the period-2 bulb.
Before passing the complex value through the escape time algorithm, first check that: p = ( x − 1 4 ) 2 + y 2 {\displaystyle p={\sqrt {\left(x-{\frac {1}{4}}\right)^{2}+y^{2}}}} , x ≤ p − 2 p 2 + 1 4 {\displaystyle x\leq p-2p^{2}+{\frac {1}{4}}} , ( x + 1 ) 2 + y 2 ≤ 1 16 {\displaystyle (x+1)^{2}+y^{2}\leq {\frac {1}{16}}} , where x represents the real value of the point and y the imaginary value. The first two equations determine that the point is within the cardioid, the last the period-2 bulb. The cardioid test can equivalently be performed without the square root: q = ( x − 1 4 ) 2 + y 2 , {\displaystyle q=\left(x-{\frac {1}{4}}\right)^{2}+y^{2},} q ( q + ( x − 1 4 ) ) ≤ 1 4 y 2 . {\displaystyle q\left(q+\left(x-{\frac {1}{4}}\right)\right)\leq {\frac {1}{4}}y^{2}.} 3rd- and higher-order bulbs do not have equivalent tests, because they are not perfectly circular. However, it is possible to find whether the points are within circles inscribed within these higher-order bulbs, preventing many, though not all, of the points in the bulb from being iterated. === Periodicity checking === To prevent having to do huge numbers of iterations for points inside the set, one can perform periodicity checking, which checks whether a point reached while iterating a pixel has been reached before. If so, the pixel cannot diverge and must be in the set. Periodicity checking is a trade-off, as the need to remember points costs data management instructions and memory, but saves computational instructions. However, checking against only one previous iteration can detect many periods with little performance overhead.
For example, within the while loop of the pseudocode above, make the following modifications: xold := 0 yold := 0 period := 0 while (x*x + y*y ≤ 2*2 and iteration < max_iteration) do xtemp := x*x - y*y + x0 y := 2*x*y + y0 x := xtemp iteration := iteration + 1 if x ≈ xold and y ≈ yold then iteration := max_iteration /* Set to max for the color plotting */ break /* We are inside the Mandelbrot set, leave the while loop */ period:= period + 1 if period > 20 then period := 0 xold := x yold := y The above code stores away a new x and y value on every 20th iteration, thus it can detect periods that are up to 20 points long. === Border tracing / edge checking === Because the Mandelbrot set is full, any point enclosed by a closed shape whose borders lie entirely within the Mandelbrot set must itself be in the Mandelbrot set. Border tracing works by following the lemniscates of the various iteration levels (colored bands) all around the set, and then filling the entire band at once. This also provides a speed increase because large numbers of points can now be skipped. In the animation shown, points outside the set are colored with a 1000-iteration escape time algorithm. Tracing the set border and filling it, rather than iterating the interior points, reduces the total number of iterations by 93.16%. With a higher iteration limit the benefit would be even greater. === Rectangle checking === Rectangle checking is an older and simpler method for plotting the Mandelbrot set. The basic idea of rectangle checking is that if every pixel in a rectangle's border shares the same number of iterations, then the rectangle can be safely filled using that number of iterations. There are several variations of the rectangle checking method; however, all of them are slower than the border tracing method because they end up calculating more pixels.
One variant just calculates the corner pixels of each rectangle; however, this causes damaged pictures more often than calculating the entire border, so it only works reasonably well if only small boxes of around 6x6 pixels are used, with no recursing in from bigger boxes (the Fractint method). The simplest rectangle checking method checks the borders of equally sized rectangles, resembling a grid pattern (Mariani's algorithm). A faster and slightly more advanced variant is to first calculate a bigger box, say 25x25 pixels. If the entire box border has the same color, then just fill the box with the same color. If not, then split the box into four boxes of 13x13 pixels, reusing the already calculated pixels as the outer border, and sharing the inner "cross" pixels between the inner boxes. Again, fill in those boxes that have only one border color, split those boxes that don't into four 7x7 pixel boxes, and then split those that "fail" into 4x4 boxes (the Mariani-Silver algorithm). Even faster is to split the boxes in half instead of into four boxes. Then it might be optimal to use boxes with a 1.4:1 aspect ratio, so they can be split like how A3 paper is folded into A4 and A5 (the DIN approach). As with border tracing, rectangle checking only works on areas with one discrete color. But even if the outer area uses smooth/continuous coloring, rectangle checking will still speed up the costly inner area of the Mandelbrot set, unless the inner area also uses some smooth coloring method, for instance interior distance estimation. === Symmetry utilization === The horizontal symmetry of the Mandelbrot set allows for portions of the rendering process to be skipped upon the presence of the real axis in the final image. However, regardless of the portion that gets mirrored, the same number of points will be rendered. Julia sets have symmetry around the origin.
This means that quadrants 1 and 3 are symmetric, and quadrants 2 and 4 are symmetric. Supporting symmetry for both Mandelbrot and Julia sets requires handling symmetry differently for the two different types of graphs. === Multithreading === Escape-time rendering of Mandelbrot and Julia sets lends itself extremely well to parallel processing. On multi-core machines the area to be plotted can be divided into a series of rectangular areas which can then be provided as a set of tasks to be rendered by a pool of rendering threads. This is an embarrassingly parallel computing problem. (Note that one gets the best speed-up by first excluding symmetric areas of the plot, and then dividing the remaining unique regions into rectangular areas.) Here is a short video showing the Mandelbrot set being rendered using multithreading and symmetry, but without boundary following: Finally, here is a video showing the same Mandelbrot set image being rendered using multithreading, symmetry, and boundary following: === Perturbation theory and series approximation === Very highly magnified images require more than the standard 64–128 or so bits of precision that most hardware floating-point units provide, requiring renderers to use slow "BigNum" or "arbitrary-precision" math libraries to calculate. However, this can be sped up by the exploitation of perturbation theory. Given z n + 1 = z n 2 + c {\displaystyle z_{n+1}=z_{n}^{2}+c} as the iteration, and a small epsilon and delta, it is the case that ( z n + ϵ ) 2 + ( c + δ ) = z n 2 + 2 z n ϵ + ϵ 2 + c + δ , {\displaystyle (z_{n}+\epsilon )^{2}+(c+\delta )=z_{n}^{2}+2z_{n}\epsilon +\epsilon ^{2}+c+\delta ,} or = z n + 1 + 2 z n ϵ + ϵ 2 + δ , {\displaystyle =z_{n+1}+2z_{n}\epsilon +\epsilon ^{2}+\delta ,} so if one defines ϵ n + 1 = 2 z n ϵ n + ϵ n 2 + δ , {\displaystyle \epsilon _{n+1}=2z_{n}\epsilon _{n}+\epsilon _{n}^{2}+\delta ,} one can calculate a single point (e.g.
the center of an image) using high-precision arithmetic (z), giving a reference orbit, and then compute many points around it in terms of various initial offsets delta plus the above iteration for epsilon, where epsilon-zero is set to 0. For most iterations, epsilon does not need more than 16 significant figures, and consequently hardware floating-point may be used to get a mostly accurate image. There will often be some areas where the orbits of points diverge enough from the reference orbit that extra precision is needed on those points, or else additional local high-precision-calculated reference orbits are needed. By measuring the orbit distance between the reference point and the point calculated with low precision, it can be detected that it is not possible to calculate the point correctly, and the calculation can be stopped. These incorrect points can later be re-calculated e.g. from another closer reference point. Further, it is possible to approximate the starting values for the low-precision points with a truncated Taylor series, which often enables a significant number of iterations to be skipped. Renderers implementing these techniques are publicly available and offer speedups for highly magnified images by around two orders of magnitude. An alternate explanation of the above: For the central point in the disc c {\displaystyle c} and its iterations z n {\displaystyle z_{n}} , and an arbitrary point in the disc c + δ {\displaystyle c+\delta } and its iterations z n ′ {\displaystyle z'_{n}} , it is possible to define the following iterative relationship: z n ′ = z n + ϵ n {\displaystyle z'_{n}=z_{n}+\epsilon _{n}} With ϵ 1 = δ {\displaystyle \epsilon _{1}=\delta } .
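A minimal Python sketch of this scheme, using the standard library's Decimal as a stand-in for an arbitrary-precision "BigNum" type (function names and parameters are our own illustrative choices):

```python
from decimal import Decimal, getcontext

def reference_orbit(cx, cy, n, digits=50):
    """High-precision orbit of the reference point c: returns
    [z_1, z_2, ..., z_n], rounded down to ordinary complex numbers."""
    getcontext().prec = digits
    x, y = Decimal(0), Decimal(0)
    cx, cy = Decimal(cx), Decimal(cy)
    orbit = []
    for _ in range(n):
        x, y = x * x - y * y + cx, 2 * x * y + cy
        orbit.append(complex(float(x), float(y)))
    return orbit

def perturbed_escape(orbit, delta, bailout=4.0):
    """Low-precision iteration of the offset point c + delta using
    eps_{n+1} = 2 z_n eps_n + eps_n^2 + delta, with eps_1 = delta.
    Returns the escape iteration, or len(orbit) if no escape."""
    eps = delta
    for n, z in enumerate(orbit, start=1):
        zp = z + eps                       # z'_n = z_n + eps_n
        if zp.real * zp.real + zp.imag * zp.imag > bailout:
            return n
        eps = 2 * z * eps + eps * eps + delta
    return len(orbit)
```

One expensive reference orbit thus serves many nearby pixels, each iterated entirely in hardware floating point.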
Successive iterations of ϵ n {\displaystyle \epsilon _{n}} can be found using the following: z n + 1 ′ = z n ′ 2 + ( c + δ ) {\displaystyle z'_{n+1}={z'_{n}}^{2}+(c+\delta )} z n + 1 ′ = ( z n + ϵ n ) 2 + c + δ {\displaystyle z'_{n+1}=(z_{n}+\epsilon _{n})^{2}+c+\delta } z n + 1 ′ = z n 2 + c + 2 z n ϵ n + ϵ n 2 + δ {\displaystyle z'_{n+1}={z_{n}}^{2}+c+2z_{n}\epsilon _{n}+{\epsilon _{n}}^{2}+\delta } z n + 1 ′ = z n + 1 + 2 z n ϵ n + ϵ n 2 + δ {\displaystyle z'_{n+1}=z_{n+1}+2z_{n}\epsilon _{n}+{\epsilon _{n}}^{2}+\delta } Now from the original definition: z n + 1 ′ = z n + 1 + ϵ n + 1 {\displaystyle z'_{n+1}=z_{n+1}+\epsilon _{n+1}} , It follows that: ϵ n + 1 = 2 z n ϵ n + ϵ n 2 + δ {\displaystyle \epsilon _{n+1}=2z_{n}\epsilon _{n}+{\epsilon _{n}}^{2}+\delta } As the iterative relationship relates an arbitrary point to the central point by a very small change δ {\displaystyle \delta } , then most of the iterations of ϵ n {\displaystyle \epsilon _{n}} are also small and can be calculated using floating point hardware. However, for every arbitrary point in the disc it is possible to calculate a value for a given ϵ n {\displaystyle \epsilon _{n}} without having to iterate through the sequence from ϵ 0 {\displaystyle \epsilon _{0}} , by expressing ϵ n {\displaystyle \epsilon _{n}} as a power series of δ {\displaystyle \delta } . ϵ n = A n δ + B n δ 2 + C n δ 3 + … {\displaystyle \epsilon _{n}=A_{n}\delta +B_{n}\delta ^{2}+C_{n}\delta ^{3}+\dotsc } With A 1 = 1 , B 1 = 0 , C 1 = 0 , … {\displaystyle A_{1}=1,B_{1}=0,C_{1}=0,\dotsc } . 
Now given the iteration equation of ϵ {\displaystyle \epsilon } , it is possible to calculate the coefficients of the power series for each ϵ n {\displaystyle \epsilon _{n}} : ϵ n + 1 = 2 z n ϵ n + ϵ n 2 + δ {\displaystyle \epsilon _{n+1}=2z_{n}\epsilon _{n}+{\epsilon _{n}}^{2}+\delta } ϵ n + 1 = 2 z n ( A n δ + B n δ 2 + C n δ 3 + … ) + ( A n δ + B n δ 2 + C n δ 3 + … ) 2 + δ {\displaystyle \epsilon _{n+1}=2z_{n}(A_{n}\delta +B_{n}\delta ^{2}+C_{n}\delta ^{3}+\dotsc )+(A_{n}\delta +B_{n}\delta ^{2}+C_{n}\delta ^{3}+\dotsc )^{2}+\delta } ϵ n + 1 = ( 2 z n A n + 1 ) δ + ( 2 z n B n + A n 2 ) δ 2 + ( 2 z n C n + 2 A n B n ) δ 3 + … {\displaystyle \epsilon _{n+1}=(2z_{n}A_{n}+1)\delta +(2z_{n}B_{n}+{A_{n}}^{2})\delta ^{2}+(2z_{n}C_{n}+2A_{n}B_{n})\delta ^{3}+\dotsc } Therefore, it follows that: A n + 1 = 2 z n A n + 1 {\displaystyle A_{n+1}=2z_{n}A_{n}+1} B n + 1 = 2 z n B n + A n 2 {\displaystyle B_{n+1}=2z_{n}B_{n}+{A_{n}}^{2}} C n + 1 = 2 z n C n + 2 A n B n {\displaystyle C_{n+1}=2z_{n}C_{n}+2A_{n}B_{n}} ⋮ {\displaystyle \vdots } The coefficients in the power series can be calculated as iterative series using only values from the central point's iterations z {\displaystyle z} , and do not change for any arbitrary point in the disc. If δ {\displaystyle \delta } is very small, ϵ n {\displaystyle \epsilon _{n}} should be calculable to sufficient accuracy using only a few terms of the power series. As the Mandelbrot escape contours are 'continuous' over the complex plane, if a point's escape time has been calculated, then the escape time of that point's neighbours should be similar. Interpolation of the neighbouring points should provide a good estimation of where to start in the ϵ n {\displaystyle \epsilon _{n}} series. Further, separate interpolation of both real axis points and imaginary axis points should provide both an upper and lower bound for the point being calculated. If both results are the same (i.e.
both escape or do not escape) then the difference Δ n {\displaystyle \Delta n} can be used to recurse until both an upper and lower bound can be established. If floating point hardware can be used to iterate the ϵ {\displaystyle \epsilon } series, then there exists a relation between how many iterations can be achieved in the time it takes to use BigNum software to compute a given ϵ n {\displaystyle \epsilon _{n}} . If the difference between the bounds is greater than the number of iterations, it is possible to perform binary search using BigNum software, successively halving the gap until it becomes more time-efficient to find the escape value using floating point hardware. == References ==
Wikipedia:Plus Magazine#0 | Plus Magazine is an online popular mathematics magazine run under the Millennium Mathematics Project at the University of Cambridge. Plus contains: feature articles on all aspects of mathematics; reviews of popular maths books and events; a news section; mathematical puzzles and games; interviews with people in maths-related careers; and the Plus Podcast – Maths on the Move. == History == Plus was initially named PASS Maths (Public Awareness and Schools Support for Maths) in 1997, when it was a project of the Interactive Courseware Research and Development Group, based jointly at the University of Cambridge and Keele University. Plus is now part of the Millennium Mathematics Project, a long-term national initiative based in Cambridge and active across the UK and internationally. Authors of articles in Plus include Stephen Hawking and Marcus du Sautoy. Plus won the 2001 Webby for Best Science Site on the Web, and has been described as "an excellent site put together by those with a real love for the subject". In 2006 the Millennium Mathematics Project, of which Plus is a part, won the Queen's Anniversary Prize for Higher Education. == References == == External links == Official website Millennium Mathematics Project official website at the Wayback Machine (archived 25 November 1999)
Wikipedia:Pohlke's theorem#0 | Pohlke's theorem is the fundamental theorem of axonometry. It was established in 1853 by the German painter and teacher of descriptive geometry Karl Wilhelm Pohlke. The first proof of the theorem was published in 1864 by the German mathematician Hermann Amandus Schwarz, who was a student of Pohlke. The theorem is therefore sometimes also called the theorem of Pohlke and Schwarz. == The theorem == Three arbitrary line segments O U ¯ , O V ¯ , O W ¯ {\displaystyle {\overline {OU}},{\overline {OV}},{\overline {OW}}} in a plane originating at point O ¯ {\displaystyle {\overline {O}}} , which are not contained in a line, can be considered as the parallel projection of three edges O U , O V , O W {\displaystyle OU,OV,OW} of a cube. For a mapping of a unit cube, one has to apply an additional scaling either in the space or in the plane. Because a parallel projection and a scaling preserve ratios, one can map an arbitrary point P = ( x , y , z ) {\displaystyle P=(x,y,z)} by the axonometric procedure below. Pohlke's theorem can be stated in terms of linear algebra as: Any affine mapping of the 3-dimensional space onto a plane can be considered as the composition of a similarity and a parallel projection. == Application to axonometry == Pohlke's theorem is the justification for the following easy procedure to construct a scaled parallel projection of a 3-dimensional object using coordinates: Choose the images of the coordinate axes, not contained in a line. Choose for each coordinate axis foreshortenings v x , v y , v z > 0.
{\displaystyle v_{x},v_{y},v_{z}>0.} The image P ¯ {\displaystyle {\overline {P}}} of a point P = ( x , y , z ) {\displaystyle P=(x,y,z)} is then determined by the following steps, starting at point O ¯ {\displaystyle {\overline {O}}} : go v x ⋅ x {\displaystyle v_{x}\cdot x} in x ¯ {\displaystyle {\overline {x}}} -direction, then go v y ⋅ y {\displaystyle v_{y}\cdot y} in y ¯ {\displaystyle {\overline {y}}} -direction, then go v z ⋅ z {\displaystyle v_{z}\cdot z} in z ¯ {\displaystyle {\overline {z}}} -direction, and finally mark the resulting point as P ¯ {\displaystyle {\overline {P}}} . In order to get undistorted pictures, one has to choose the images of the axes and the foreshortenings carefully (see Axonometry). In order to get an orthographic projection, only the images of the axes are free and the foreshortenings are determined (see de:orthogonale Axonometrie). == Remarks on Schwarz's proof == Schwarz formulated and proved the more general statement: The vertices of any quadrilateral can be considered as an oblique parallel projection of the vertices of a tetrahedron that is similar to a given tetrahedron. and used a theorem of L’Huilier: Every triangle can be considered as the orthographic projection of a triangle of a given shape. == Notes == == References == K. Pohlke: Zehn Tafeln zur darstellenden Geometrie. Gaertner-Verlag, Berlin 1876 (Google Books.) Schwarz, H. A.: Elementarer Beweis des Pohlkeschen Fundamentalsatzes der Axonometrie, J. reine angew. Math. 63, 309–314, 1864. Arnold Emch: Proof of Pohlke's Theorem and Its Generalizations by Affinity, American Journal of Mathematics, Vol. 40, No. 4 (Oct., 1918), pp. 366–374 == External links == F. Klein: The fundamental Theorem of Pohlke, in Elementary Mathematics from a Higher Standpoint: Volume II: Geometry, p. 97, Christoph J. Scriba, Peter Schreiber: 5000 Years of Geometry: Mathematics in History and Culture, p. 398. Pohlke–Schwarz theorem, Encyclopedia of Mathematics. |
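The step-by-step axonometric construction above can be sketched in code (a minimal illustration; the function name and argument layout are ours, and the particular axis directions and foreshortenings in the example are free choices, as Pohlke's theorem guarantees):

```python
import numpy as np

def axonometric_image(P, O_bar, dir_x, dir_y, dir_z, v):
    """Carry out the construction described above: starting from the image
    O_bar of the origin, walk v_x * x along the chosen 2D image direction of
    the x-axis, then v_y * y and v_z * z along the images of the y- and
    z-axes. Any such choice yields a scaled parallel projection."""
    x, y, z = P
    vx, vy, vz = v
    return (np.asarray(O_bar, dtype=float)
            + vx * x * np.asarray(dir_x, dtype=float)
            + vy * y * np.asarray(dir_y, dtype=float)
            + vz * z * np.asarray(dir_z, dtype=float))

# Example: a cabinet-style drawing, with the z-axis drawn diagonally into
# the picture and foreshortened by one half.
img = axonometric_image((2.0, 3.0, 1.0),
                        O_bar=(0.0, 0.0),
                        dir_x=(1.0, 0.0), dir_y=(0.0, 1.0),
                        dir_z=(-0.35, -0.35), v=(1.0, 1.0, 0.5))
```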
Wikipedia:Poincaré inequality#0 | In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality. == Statement of the inequality == === The classical Poincaré inequality === Let 1 ≤ p < ∞ and let Ω be a subset that is bounded in at least one direction. Then there exists a constant C, depending only on Ω and p, so that, for every function u of the Sobolev space W01,p(Ω) of zero-trace (a.k.a. zero on the boundary) functions, ‖ u ‖ L p ( Ω ) ≤ C ‖ ∇ u ‖ L p ( Ω ) . {\displaystyle \|u\|_{L^{p}(\Omega )}\leq C\|\nabla u\|_{L^{p}(\Omega )}.} === Poincaré–Wirtinger inequality === Assume that 1 ≤ p ≤ ∞ and that Ω is a bounded connected open subset of the n-dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} with a Lipschitz boundary (i.e., Ω is a Lipschitz domain). Then there exists a constant C, depending only on Ω and p, such that for every function u in the Sobolev space W1,p(Ω), ‖ u − u Ω ‖ L p ( Ω ) ≤ C ‖ ∇ u ‖ L p ( Ω ) , {\displaystyle \|u-u_{\Omega }\|_{L^{p}(\Omega )}\leq C\|\nabla u\|_{L^{p}(\Omega )},} where u Ω = 1 | Ω | ∫ Ω u ( y ) d y {\displaystyle u_{\Omega }={\frac {1}{|\Omega |}}\int _{\Omega }u(y)\,\mathrm {d} y} is the average value of u over Ω, with |Ω| standing for the Lebesgue measure of the domain Ω. When Ω is a ball, the above inequality is called a (p,p)-Poincaré inequality; for more general domains Ω, the above is more familiarly known as a Sobolev inequality. The necessity to subtract the average value can be seen by considering constant functions, for which the derivative is zero while, without subtracting the average, the integral of the function can be made as large as we wish.
Instead of subtracting the average, one can impose other conditions that handle the issue with constant functions, for example requiring zero trace, or subtracting the average over some proper subset of the domain. The constant C in the Poincaré inequality may differ from condition to condition. Note also that the issue is not limited to constant functions: adding a constant value to a function can increase its integral while the integral of its derivative remains the same, so simply excluding the constant functions does not resolve the issue. === Generalizations === In the context of metric measure spaces, the definition of a Poincaré inequality is slightly different. One definition is: a metric measure space supports a (q,p)-Poincaré inequality for some 1 ≤ q , p < ∞ {\displaystyle 1\leq q,p<\infty } if there are constants C and λ ≥ 1 so that for each ball B in the space, μ ( B ) − 1 q ‖ u − u B ‖ L q ( B ) ≤ C rad ( B ) μ ( B ) − 1 p ‖ ∇ u ‖ L p ( λ B ) . {\displaystyle \mu (B)^{-{\frac {1}{q}}}\left\|u-u_{B}\right\|_{L^{q}(B)}\leq C\operatorname {rad} (B)\mu (B)^{-{\frac {1}{p}}}\|\nabla u\|_{L^{p}(\lambda B)}.} Note the enlarged ball on the right-hand side. In the context of metric measure spaces, ‖ ∇ u ‖ {\displaystyle \|\nabla u\|} is the minimal p-weak upper gradient of u in the sense of Heinonen and Koskela. Whether a space supports a Poincaré inequality has turned out to have deep connections to the geometry and analysis of the space. For example, Cheeger has shown that a doubling space satisfying a Poincaré inequality admits a notion of differentiation. Such spaces include sub-Riemannian manifolds and Laakso spaces. There exist other generalizations of the Poincaré inequality to other Sobolev spaces. For example, consider the Sobolev space H1/2(T2), i.e.
the space of functions u in the L2 space of the unit torus T2 with Fourier transform û satisfying [ u ] H 1 / 2 ( T 2 ) 2 = ∑ k ∈ Z 2 | k | | u ^ ( k ) | 2 < + ∞ . {\displaystyle [u]_{H^{1/2}(\mathbf {T} ^{2})}^{2}=\sum _{k\in \mathbf {Z} ^{2}}|k|\left|{\hat {u}}(k)\right|^{2}<+\infty .} In this context, the Poincaré inequality says: there exists a constant C such that, for every u ∈ H1/2(T2) with u identically zero on an open set E ⊆ T2, ∫ T 2 | u ( x ) | 2 d x ≤ C ( 1 + 1 cap ( E × { 0 } ) ) [ u ] H 1 / 2 ( T 2 ) 2 , {\displaystyle \int _{\mathbf {T} ^{2}}|u(x)|^{2}\,\mathrm {d} x\leq C\left(1+{\frac {1}{\operatorname {cap} (E\times \{0\})}}\right)[u]_{H^{1/2}(\mathbf {T} ^{2})}^{2},} where cap(E × {0}) denotes the harmonic capacity of E × {0} when thought of as a subset of R 3 {\displaystyle \mathbb {R} ^{3}} . Yet another generalization involves weighted Poincaré inequalities, where the Lebesgue measure is replaced by a weighted version. == The Poincaré constant == The optimal constant C in the Poincaré inequality is sometimes known as the Poincaré constant for the domain Ω. Determining the Poincaré constant is, in general, a very hard task that depends upon the value of p and the geometry of the domain Ω. Certain special cases are tractable, however. For example, if Ω is a bounded, convex, Lipschitz domain with diameter d, then the Poincaré constant is at most d/2 for p = 1 and d / π {\displaystyle d/\pi } for p = 2, and this is the best possible estimate on the Poincaré constant in terms of the diameter alone. For smooth functions, this can be understood as an application of the isoperimetric inequality to the function's level sets. In one dimension, this is Wirtinger's inequality for functions. However, in some special cases the constant C can be determined concretely. For example, for p = 2, it is well known that over the unit isosceles right triangle, C = 1/π ( < d/π where d = 2 {\displaystyle d={\sqrt {2}}} ).
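The sharpness of the constant d/π for p = 2 can be illustrated numerically in one dimension (our illustration, not part of the article): on Ω = (0, d), the function u(x) = cos(πx/d) is an extremal function for the Poincaré–Wirtinger inequality, so the ratio of the two norms should come out to exactly d/π.

```python
import numpy as np

# Midpoint-rule check that ||u - u_Ω|| / ||u'|| = d/π for the extremal
# function u(x) = cos(πx/d) on the interval (0, d).
d = 2.0
N = 20000
x = (np.arange(N) + 0.5) * d / N            # midpoint grid on (0, d)
dx = d / N
u = np.cos(np.pi * x / d)
du = -(np.pi / d) * np.sin(np.pi * x / d)   # exact derivative of u

u_mean = np.sum(u) * dx / d                 # average value u_Ω (here ≈ 0)
lhs = np.sqrt(np.sum((u - u_mean) ** 2) * dx)   # ||u - u_Ω||_{L²(Ω)}
rhs = np.sqrt(np.sum(du ** 2) * dx)             # ||u'||_{L²(Ω)}
ratio = lhs / rhs                               # ≈ d/π
```

For non-extremal functions the ratio is strictly smaller than d/π, consistent with d/π being the optimal constant.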
Furthermore, for a smooth, bounded domain Ω, since the Rayleigh quotient for the Laplace operator in the space W 0 1 , 2 ( Ω ) {\displaystyle W_{0}^{1,2}(\Omega )} is minimized by the eigenfunction corresponding to the minimal eigenvalue λ1 of the (negative) Laplacian, it is a simple consequence that, for any u ∈ W 0 1 , 2 ( Ω ) {\displaystyle u\in W_{0}^{1,2}(\Omega )} , ‖ u ‖ L 2 2 ≤ λ 1 − 1 ‖ ∇ u ‖ L 2 2 {\displaystyle \|u\|_{L^{2}}^{2}\leq \lambda _{1}^{-1}\left\|\nabla u\right\|_{L^{2}}^{2}} and furthermore, that the constant λ1 is optimal. == Poincaré inequality on metric-measure spaces == Since the 1990s there have been several fruitful ways to make sense of Sobolev functions on general metric measure spaces (metric spaces equipped with a measure that is often compatible with the metric in certain senses). For example, the approach based on "upper gradients" leads to the Newtonian–Sobolev space of functions. Thus, it makes sense to say that a space "supports a Poincaré inequality". It turns out that whether a space supports any Poincaré inequality and, if so, the critical exponent for which it does, is closely tied to the geometry of the space. For example, a space that supports a Poincaré inequality must be path-connected. Indeed, between any pair of points there must exist a rectifiable path with length comparable to the distance of the points. Much deeper connections have been found, e.g. through the notion of modulus of path families. A good and rather recent reference is the monograph "Sobolev Spaces on Metric Measure Spaces, an approach based on upper gradients" written by Heinonen et al.
== Sobolev–Slobodeckij Spaces and the Poincaré Inequality == Given 0 < s < 1 {\displaystyle 0<s<1} and p ∈ [ 1 , ∞ ) {\displaystyle p\in [1,\infty )} , the Sobolev–Slobodeckij space W s , p ( Ω ) {\displaystyle W^{s,p}(\Omega )} is defined as the set of all functions u {\displaystyle u} such that u ∈ L p ( Ω ) {\displaystyle u\in L^{p}(\Omega )} and the seminorm [ u ] s , p {\displaystyle [u]_{s,p}} is finite. The seminorm [ u ] s , p {\displaystyle [u]_{s,p}} is defined by: [ u ] s , p = ( ∫ Ω ∫ Ω | u ( x ) − u ( y ) | p | x − y | n + s p d x d y ) 1 / p {\displaystyle [u]_{s,p}=\left(\int _{\Omega }\int _{\Omega }{\frac {|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}}\,dx\,dy\right)^{1/p}} The Poincaré inequality in this context can be generalized as follows: ‖ u − u Ω ‖ L p ( Ω ) ≤ C [ u ] s , p {\displaystyle \|u-u_{\Omega }\|_{L^{p}(\Omega )}\leq C[u]_{s,p}} where u Ω {\displaystyle u_{\Omega }} is the average of u {\displaystyle u} over Ω {\displaystyle \Omega } and C {\displaystyle C} is a constant dependent on s , p {\displaystyle s,p} , and Ω {\displaystyle \Omega } . This inequality holds for every bounded Ω {\displaystyle \Omega } . === Proof of the Poincaré Inequality === The proof follows that of Irene Drelichman and Ricardo G. Durán. Let f Ω = 1 | Ω | ∫ Ω f ( x ) d x {\displaystyle f_{\Omega }={\frac {1}{|\Omega |}}\int _{\Omega }f(x)\,dx} .
By applying Jensen's inequality, we obtain: ‖ f − f Ω ‖ L p ( Ω ) p = ‖ 1 | Ω | ∫ Ω ( f ( y ) − f ( x ) ) d y ‖ L p p = ∫ Ω | 1 | Ω | ∫ Ω f ( y ) − f ( x ) d y | p d x {\displaystyle \|f-f_{\Omega }\|_{L^{p}(\Omega )}^{p}=\left\|{\frac {1}{|\Omega |}}\int _{\Omega }(f(y)-f(x))\,dy\right\|_{L^{p}}^{p}=\int _{\Omega }\left|{\frac {1}{|\Omega |}}\int _{\Omega }f(y)-f(x)\,dy\right|^{p}\,dx} ≤ 1 | Ω | ∫ Ω ∫ Ω | f ( y ) − f ( x ) | p d y d x {\displaystyle \leq {\frac {1}{|\Omega |}}\int _{\Omega }\int _{\Omega }|f(y)-f(x)|^{p}\,dy\,dx} By exploiting the boundedness of Ω {\displaystyle \Omega } and further estimates: 1 | Ω | ∫ Ω ∫ Ω | f ( y ) − f ( x ) | p d y d x {\displaystyle {\frac {1}{|\Omega |}}\int _{\Omega }\int _{\Omega }|f(y)-f(x)|^{p}\,dy\,dx} ≤ diam ( Ω ) n + s p | Ω | ∫ Ω ∫ Ω | f ( y ) − f ( x ) | p | y − x | n + s p d y d x {\displaystyle \leq {\frac {{\text{diam}}(\Omega )^{n+sp}}{|\Omega |}}\int _{\Omega }\int _{\Omega }{\frac {|f(y)-f(x)|^{p}}{|y-x|^{n+sp}}}\,dy\,dx} It follows that the constant C {\displaystyle C} is given as C = diam ( Ω ) n p + s | Ω | 1 p {\displaystyle C={\frac {{\text{diam}}(\Omega )^{{\frac {n}{p}}+s}}{|\Omega |^{\frac {1}{p}}}}} ; however, Theorem 1 of the cited reference indicates that this is not the optimal constant. === Poincaré Inequality on Balls === We can derive a similar inequality, with an explicit growth constant, for balls. The relationship is given by the following inequality: ‖ u − u B R ( x ) ‖ L p ( B R ( x ) ) ≤ C R s [ u ] W ˙ s , p ( B R ( x ) ) {\displaystyle \|u-u_{B_{R}(x)}\|_{L^{p}(B_{R}(x))}\leq CR^{s}[u]_{{\dot {W}}^{s,p}(B_{R}(x))}} ==== Sketch of the Proof ==== The proof proceeds similarly to the classical one, by using the scaling u R ( x ) = u ( R x ) {\displaystyle u_{R}(x)=u(Rx)} . Then, by using a form of chain rule for the fractional derivative, we get R s {\displaystyle R^{s}} as a result. == See also == Friedrichs' inequality Korn's inequality Spectral gap == References == |
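The fractional Poincaré inequality above, with the constant C = diam(Ω)^(n/p+s)/|Ω|^(1/p) obtained in the proof, can be sanity-checked numerically (our illustration, not from the article): on Ω = (0, 1) with p = 2, s = 1/2 and u(x) = x, the Gagliardo integrand |u(x) − u(y)|^p / |x − y|^(n+sp) equals 1, so [u]_{s,p} = 1 exactly, while ‖u − u_Ω‖_{L²} = √(1/12) and C = 1.

```python
import numpy as np

# Midpoint-rule evaluation of the Gagliardo seminorm and the left-hand side
# of the fractional Poincaré inequality for u(x) = x on Ω = (0, 1).
n, p, s = 1, 2, 0.5
N = 400
x = (np.arange(N) + 0.5) / N          # midpoint grid on (0, 1)
dx = 1.0 / N

X, Y = np.meshgrid(x, x)
off_diag = ~np.eye(N, dtype=bool)     # the diagonal x = y contributes nothing
diff = np.abs(X - Y)[off_diag]        # |x - y|, and also |u(x) - u(y)| here
seminorm = (np.sum(diff ** p / diff ** (n + s * p)) * dx * dx) ** (1.0 / p)

u_mean = x.mean()                                  # u_Ω = 1/2
lhs = np.sqrt(np.sum((x - u_mean) ** 2) * dx)      # ||u - u_Ω||_{L²(Ω)} ≈ √(1/12)
C = 1.0       # diam(Ω)^(n/p + s) / |Ω|^(1/p) for the unit interval
```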
Wikipedia:Poincaré space#0 | In algebraic topology, a Poincaré space is an n-dimensional topological space with a distinguished element μ of its nth homology group such that taking the cap product with an element of the kth cohomology group yields an isomorphism to the (n − k)th homology group. The space is essentially one for which Poincaré duality is valid; more precisely, one whose singular chain complex forms a Poincaré complex with respect to the distinguished element μ. For example, any closed, orientable, connected manifold M is a Poincaré space, where the distinguished element is the fundamental class [ M ] . {\displaystyle [M].} Poincaré spaces are used in surgery theory to analyze and classify manifolds. Not every Poincaré space is a manifold, but the difference can be studied, first by having a normal map from a manifold, and then via obstruction theory. == Other uses == Sometimes, Poincaré space means a homology sphere with non-trivial fundamental group—for instance, the Poincaré dodecahedral space in 3 dimensions. == See also == Stable normal bundle == References == |
Wikipedia:Poincaré transformation#0 | The Poincaré group, named after Henri Poincaré (1905), was first defined by Hermann Minkowski (1908) as the isometry group of Minkowski spacetime. It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics. == Overview == The Poincaré group consists of all coordinate transformations of Minkowski space that do not change the spacetime interval between events. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stopwatch that you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift. In total, there are ten degrees of freedom for such transformations. They may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a "boost" in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with rotations being produced as the composition of an even number of reflections. In classical physics, the Galilean group is a comparable ten-parameter group that acts on absolute time and space. Instead of boosts, it features shear mappings to relate co-moving frames of reference. In general relativity, i.e. under the effects of gravity, Poincaré symmetry applies only locally. A treatment of symmetries in general relativity is not in the scope of this article. == Poincaré symmetry == Poincaré symmetry is the full symmetry of special relativity. 
It includes: translations (displacements) in time and space, forming the abelian Lie group of spacetime translations (P); rotations in space, forming the non-abelian Lie group of three-dimensional rotations (J); boosts, transformations connecting two uniformly moving bodies (K). The last two symmetries, J and K, together make the Lorentz group (see also Lorentz invariance); the semi-direct product of the spacetime translations group and the Lorentz group then produces the Poincaré group. Objects that are invariant under this group are then said to possess Poincaré invariance or relativistic invariance. The 10 generators (in four spacetime dimensions) associated with the Poincaré symmetry imply, by Noether's theorem, 10 conservation laws: 1 for the energy – associated with translations through time 3 for the momentum – associated with translations through spatial dimensions 3 for the angular momentum – associated with rotations between spatial dimensions 3 for a quantity involving the velocity of the center of mass – associated with hyperbolic rotations between each spatial dimension and time == Poincaré group == The Poincaré group is the group of Minkowski spacetime isometries. It is a ten-dimensional noncompact Lie group. The four-dimensional abelian group of spacetime translations is a normal subgroup, while the six-dimensional Lorentz group is also a subgroup, the stabilizer of the origin. The Poincaré group itself is the minimal subgroup of the affine group which includes all translations and Lorentz transformations. More precisely, it is a semidirect product of the spacetime translations group and the Lorentz group, R 1 , 3 ⋊ O ( 1 , 3 ) , {\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {O} (1,3)\,,} with group multiplication ( α , f ) ⋅ ( β , g ) = ( α + f ⋅ β , f ⋅ g ) {\displaystyle (\alpha ,f)\cdot (\beta ,g)=(\alpha +f\cdot \beta ,\;f\cdot g)} .
Another way of putting this is that the Poincaré group is a group extension of the Lorentz group by a vector representation of it; it is sometimes dubbed, informally, as the inhomogeneous Lorentz group. In turn, it can also be obtained as a group contraction of the de Sitter group SO(4, 1) ~ Sp(2, 2), as the de Sitter radius goes to infinity. Its positive energy unitary irreducible representations are indexed by mass (nonnegative number) and spin (integer or half integer) and are associated with particles in quantum mechanics (see Wigner's classification). In accordance with the Erlangen program, the geometry of Minkowski space is defined by the Poincaré group: Minkowski space is considered as a homogeneous space for the group. In quantum field theory, the universal cover of the Poincaré group R 1 , 3 ⋊ SL ( 2 , C ) , {\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {SL} (2,\mathbf {C} ),} which may be identified with the double cover R 1 , 3 ⋊ Spin ( 1 , 3 ) , {\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {Spin} (1,3),} is more important, because representations of SO ( 1 , 3 ) {\displaystyle \operatorname {SO} (1,3)} are not able to describe fields with spin 1/2; i.e. fermions. Here SL ( 2 , C ) {\displaystyle \operatorname {SL} (2,\mathbf {C} )} is the group of complex 2 × 2 {\displaystyle 2\times 2} matrices with unit determinant, isomorphic to the Lorentz-signature spin group Spin ( 1 , 3 ) {\displaystyle \operatorname {Spin} (1,3)} . == Poincaré algebra == The Poincaré algebra is the Lie algebra of the Poincaré group. It is a Lie algebra extension of the Lie algebra of the Lorentz group. 
More specifically, the proper ( det Λ = 1 {\textstyle \det \Lambda =1} ), orthochronous ( Λ 0 0 ≥ 1 {\textstyle {\Lambda ^{0}}_{0}\geq 1} ) part of the Lorentz subgroup (its identity component), S O ( 1 , 3 ) + ↑ {\textstyle \mathrm {SO} (1,3)_{+}^{\uparrow }} , is connected to the identity and is thus provided by the exponentiation exp ( i a μ P μ ) exp ( i 2 ω μ ν M μ ν ) {\textstyle \exp \left(ia_{\mu }P^{\mu }\right)\exp \left({\frac {i}{2}}\omega _{\mu \nu }M^{\mu \nu }\right)} of this Lie algebra. In component form, the Poincaré algebra is given by the commutation relations [ P μ , P ν ] = 0 , 1 i [ M μ ν , P ρ ] = η μ ρ P ν − η ν ρ P μ , 1 i [ M μ ν , M ρ σ ] = η μ ρ M ν σ − η μ σ M ν ρ − η ν ρ M μ σ + η ν σ M μ ρ , {\displaystyle {\begin{aligned}[][P_{\mu },P_{\nu }]&=0\,,\\[]{\tfrac {1}{i}}[M_{\mu \nu },P_{\rho }]&=\eta _{\mu \rho }P_{\nu }-\eta _{\nu \rho }P_{\mu }\,,\\[]{\tfrac {1}{i}}[M_{\mu \nu },M_{\rho \sigma }]&=\eta _{\mu \rho }M_{\nu \sigma }-\eta _{\mu \sigma }M_{\nu \rho }-\eta _{\nu \rho }M_{\mu \sigma }+\eta _{\nu \sigma }M_{\mu \rho }\,,\end{aligned}}} where P {\displaystyle P} is the generator of translations, M {\displaystyle M} is the generator of Lorentz transformations, and η {\displaystyle \eta } is the ( + , − , − , − ) {\displaystyle (+,-,-,-)} Minkowski metric (see Sign convention). The bottom commutation relation corresponds to the ("homogeneous") Lorentz group, consisting of rotations, J i = 1 2 ϵ i m n M m n {\textstyle J_{i}={\frac {1}{2}}\epsilon _{imn}M^{mn}} , and boosts, K i = M i 0 {\textstyle K_{i}=M_{i0}} . In this notation, the entire Poincaré algebra is expressible in noncovariant (but more practical) language as [ J m , P n ] = i ϵ m n k P k , [ J i , P 0 ] = 0 , [ K i , P k ] = i η i k P 0 , [ K i , P 0 ] = − i P i , [ J m , J n ] = i ϵ m n k J k , [ J m , K n ] = i ϵ m n k K k , [ K m , K n ] = − i ϵ m n k J k , {\displaystyle {\begin{aligned}[][J_{m},P_{n}]&=i\epsilon _{mnk}P_{k}~,\\[][J_{i},P_{0}]&=0~,\\[][K_{i},P_{k}]&=i\eta _{ik}P_{0}~,\\[][K_{i},P_{0}]&=-iP_{i}~,\\[][J_{m},J_{n}]&=i\epsilon _{mnk}J_{k}~,\\[][J_{m},K_{n}]&=i\epsilon _{mnk}K_{k}~,\\[][K_{m},K_{n}]&=-i\epsilon _{mnk}J_{k}~,\end{aligned}}} where the bottom line commutator of two boosts is often referred to as a "Wigner rotation".
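The rotation and boost commutators in the noncovariant relations above can be checked numerically in the 4×4 vector representation of the Lorentz group acting on coordinates (t, x, y, z) (a sketch of ours; translations are omitted, since they are not linear maps in this representation):

```python
import numpy as np

def gen(entries):
    """Build a 4x4 generator from (row, col, value) triples."""
    M = np.zeros((4, 4), dtype=complex)
    for i, j, v in entries:
        M[i, j] = v
    return M

# J_k generates rotations in the spatial plane orthogonal to axis k;
# K_k generates boosts along axis k (index 0 is time).
J = [gen([(2, 3, -1j), (3, 2, 1j)]),   # J_1: (y, z) plane
     gen([(3, 1, -1j), (1, 3, 1j)]),   # J_2: (z, x) plane
     gen([(1, 2, -1j), (2, 1, 1j)])]   # J_3: (x, y) plane
K = [gen([(0, k, 1j), (k, 0, 1j)]) for k in (1, 2, 3)]

def comm(A, B):
    return A @ B - B @ A

# [J_m, J_n] = i ε_mnk J_k;  [J_m, K_n] = i ε_mnk K_k;  [K_m, K_n] = -i ε_mnk J_k
assert np.allclose(comm(J[0], J[1]), 1j * J[2])
assert np.allclose(comm(J[2], K[0]), 1j * K[1])
assert np.allclose(comm(K[0], K[1]), -1j * J[2])   # two boosts close into a rotation
```

The last assertion exhibits the "Wigner rotation" structure: the commutator of two boost generators is a rotation generator.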
The simplification [ J m + i K m , J n − i K n ] = 0 {\textstyle [J_{m}+iK_{m},\,J_{n}-iK_{n}]=0} permits reduction of the Lorentz subalgebra to s u ( 2 ) ⊕ s u ( 2 ) {\textstyle {\mathfrak {su}}(2)\oplus {\mathfrak {su}}(2)} and efficient treatment of its associated representations. In terms of the physical parameters, we have [ H , p i ] = 0 [ H , L i ] = 0 [ H , K i ] = i ℏ c p i [ p i , p j ] = 0 [ p i , L j ] = i ℏ ϵ i j k p k [ p i , K j ] = i ℏ c H δ i j [ L i , L j ] = i ℏ ϵ i j k L k [ L i , K j ] = i ℏ ϵ i j k K k [ K i , K j ] = − i ℏ ϵ i j k L k {\displaystyle {\begin{aligned}\left[{\mathcal {H}},p_{i}\right]&=0\\\left[{\mathcal {H}},L_{i}\right]&=0\\\left[{\mathcal {H}},K_{i}\right]&=i\hbar cp_{i}\\\left[p_{i},p_{j}\right]&=0\\\left[p_{i},L_{j}\right]&=i\hbar \epsilon _{ijk}p_{k}\\\left[p_{i},K_{j}\right]&={\frac {i\hbar }{c}}{\mathcal {H}}\delta _{ij}\\\left[L_{i},L_{j}\right]&=i\hbar \epsilon _{ijk}L_{k}\\\left[L_{i},K_{j}\right]&=i\hbar \epsilon _{ijk}K_{k}\\\left[K_{i},K_{j}\right]&=-i\hbar \epsilon _{ijk}L_{k}\end{aligned}}} The Casimir invariants of this algebra are P μ P μ {\textstyle P_{\mu }P^{\mu }} and W μ W μ {\textstyle W_{\mu }W^{\mu }} where W μ {\textstyle W_{\mu }} is the Pauli–Lubanski pseudovector; they serve as labels for the representations of the group. The Poincaré group is the full symmetry group of any relativistic field theory. As a result, all elementary particles fall in representations of this group. These are usually specified by the four-momentum squared of each particle (i.e. its mass squared) and the intrinsic quantum numbers J P C {\textstyle J^{PC}} , where J {\displaystyle J} is the spin quantum number, P {\displaystyle P} is the parity and C {\displaystyle C} is the charge-conjugation quantum number. In practice, charge conjugation and parity are violated by many quantum field theories; where this occurs, P {\displaystyle P} and C {\displaystyle C} are forfeited. 
Since CPT symmetry is conserved in quantum field theory, a time-reversal quantum number may be constructed from those given. As a topological space, the group has four connected components: the component of the identity; the time-reversed component; the spatial inversion component; and the component which is both time-reversed and spatially inverted. == Other dimensions == The definitions above can be generalized to arbitrary dimensions in a straightforward manner. The d-dimensional Poincaré group is analogously defined by the semi-direct product IO ( 1 , d − 1 ) := R 1 , d − 1 ⋊ O ( 1 , d − 1 ) {\displaystyle \operatorname {IO} (1,d-1):=\mathbf {R} ^{1,d-1}\rtimes \operatorname {O} (1,d-1)} with the analogous multiplication ( α , f ) ⋅ ( β , g ) = ( α + f ⋅ β , f ⋅ g ) {\displaystyle (\alpha ,f)\cdot (\beta ,g)=(\alpha +f\cdot \beta ,\;f\cdot g)} . The Lie algebra retains its form, with indices µ and ν now taking values between 0 and d − 1. The alternative representation in terms of Ji and Ki has no analogue in higher dimensions. == See also == Euclidean group Galilean group Representation theory of the Poincaré group Wigner's classification Symmetry in quantum mechanics Pauli–Lubanski pseudovector Particle physics and representation theory Continuous spin particle super-Poincaré algebra == Notes == == References == Wu-Ki Tung (1985). Group Theory in Physics. World Scientific Publishing. ISBN 9971-966-57-3. Weinberg, Steven (1995). The Quantum Theory of Fields. Vol. 1. Cambridge: Cambridge University Press. ISBN 978-0-521-55001-7. L.H. Ryder (1996). Quantum Field Theory (2nd ed.). Cambridge University Press. p. 62. ISBN 0-52147-8146. |
Wikipedia:Point reflection#0 | In geometry, a point reflection (also called a point inversion or central inversion) is a geometric transformation of affine space in which every point is reflected across a designated inversion center, which remains fixed. In Euclidean or pseudo-Euclidean spaces, a point reflection is an isometry (preserves distance). In the Euclidean plane, a point reflection is the same as a half-turn rotation (180° or π radians), while in three-dimensional Euclidean space a point reflection is an improper rotation which preserves distances but reverses orientation. A point reflection is an involution: applying it twice is the identity transformation. An object that is invariant under a point reflection is said to possess point symmetry (also called inversion symmetry or central symmetry). A point group including a point reflection among its symmetries is called centrosymmetric. Inversion symmetry is found in many crystal structures and molecules, and has a major effect upon their physical properties. == Terminology == The term reflection is loose, and considered by some an abuse of language, with inversion preferred; however, point reflection is widely used. Such maps are involutions, meaning that they have order 2 – they are their own inverse: applying them twice yields the identity map – which is also true of other maps called reflections. More narrowly, a reflection refers to a reflection in a hyperplane ( n − 1 {\displaystyle n-1} dimensional affine subspace – a point on the line, a line in the plane, a plane in 3-space), with the hyperplane being fixed, but more broadly reflection is applied to any involution of Euclidean space, and the fixed set (an affine space of dimension k, where 1 ≤ k ≤ n − 1 {\displaystyle 1\leq k\leq n-1} ) is called the mirror. In dimension 1 these coincide, as a point is a hyperplane in the line. 
In terms of linear algebra, assuming the origin is fixed, involutions are exactly the diagonalizable maps with all eigenvalues either 1 or −1. Reflection in a hyperplane has a single −1 eigenvalue (and multiplicity n − 1 {\displaystyle n-1} on the 1 eigenvalue), while point reflection has only the −1 eigenvalue (with multiplicity n). The term inversion should not be confused with inversive geometry, where inversion is defined with respect to a circle. == Examples == In two dimensions, a point reflection is the same as a rotation of 180 degrees. In three dimensions, a point reflection can be described as a 180-degree rotation composed with reflection across the plane of rotation, perpendicular to the axis of rotation. In dimension n, point reflections are orientation-preserving if n is even, and orientation-reversing if n is odd. == Formula == Given a vector a in the Euclidean space Rn, the formula for the reflection of a across the point p is R e f p ( a ) = 2 p − a . {\displaystyle \mathrm {Ref} _{\mathbf {p} }(\mathbf {a} )=2\mathbf {p} -\mathbf {a} .} In the case where p is the origin, point reflection is simply the negation of the vector a. In Euclidean geometry, the inversion of a point X with respect to a point P is a point X* such that P is the midpoint of the line segment with endpoints X and X*. In other words, the vector from X to P is the same as the vector from P to X*. The formula for the inversion in P is x* = 2p − x where p, x and x* are the position vectors of P, X and X* respectively. This mapping is an isometric involutive affine transformation which has exactly one fixed point, which is P. == Point reflection as a special case of uniform scaling or homothety == When the inversion point P coincides with the origin, point reflection is equivalent to a special case of uniform scaling: uniform scaling with scale factor equal to −1. This is an example of linear transformation. 
When P does not coincide with the origin, point reflection is equivalent to a special case of homothetic transformation: homothety with homothetic center coinciding with P, and scale factor −1. (This is an example of a non-linear affine transformation.) == Point reflection group == The composition of two point reflections is a translation. Specifically, point reflection at p followed by point reflection at q is translation by the vector 2(q − p). The set consisting of all point reflections and translations is a Lie subgroup of the Euclidean group. It is a semidirect product of Rn with a cyclic group of order 2, the latter acting on Rn by negation. It is precisely the subgroup of the Euclidean group that fixes the line at infinity pointwise. In the case n = 1, the point reflection group is the full isometry group of the line. == Point reflections in mathematics == Point reflection across the center of a sphere yields the antipodal map. A symmetric space is a Riemannian manifold with an isometric reflection across each point. Symmetric spaces play an important role in the study of Lie groups and Riemannian geometry.
== Point reflection in analytic geometry == Given the point P ( x , y ) {\displaystyle P(x,y)} and its reflection P ′ ( x ′ , y ′ ) {\displaystyle P'(x',y')} with respect to the point C ( x c , y c ) {\displaystyle C(x_{c},y_{c})} , the center C is the midpoint of the segment P P ′ ¯ {\displaystyle {\overline {PP'}}} ; { x c = x + x ′ 2 y c = y + y ′ 2 {\displaystyle {\begin{cases}x_{c}={\frac {x+x'}{2}}\\y_{c}={\frac {y+y'}{2}}\end{cases}}} Hence, the equations to find the coordinates of the reflected point are { x ′ = 2 x c − x y ′ = 2 y c − y {\displaystyle {\begin{cases}x'=2x_{c}-x\\y'=2y_{c}-y\end{cases}}} A particular case is the one in which the point C has coordinates ( 0 , 0 ) {\displaystyle (0,0)} (see the paragraph below): { x ′ = − x y ′ = − y {\displaystyle {\begin{cases}x'=-x\\y'=-y\end{cases}}} == Properties == In even-dimensional Euclidean space, say 2N-dimensional space, the inversion in a point P is equivalent to N rotations over angles π in each plane of an arbitrary set of N mutually orthogonal planes intersecting at P. These rotations are mutually commutative. Therefore, inversion in a point in even-dimensional space is an orientation-preserving isometry or direct isometry. In odd-dimensional Euclidean space, say (2N + 1)-dimensional space, it is equivalent to N rotations over π in each plane of an arbitrary set of N mutually orthogonal planes intersecting at P, combined with the reflection in the 2N-dimensional subspace spanned by these rotation planes. Therefore, it reverses rather than preserves orientation; it is an indirect isometry. Geometrically in 3D it amounts to rotation about an axis through P by an angle of 180°, combined with reflection in the plane through P which is perpendicular to the axis; the result does not depend on the orientation (in the other sense) of the axis. Notations for the type of operation, or the type of group it generates, are 1 ¯ {\displaystyle {\overline {1}}} , Ci, S2, and 1×.
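The coordinate formulas above, together with the involution and composition properties stated earlier in the article, can be checked directly (a small illustration of ours):

```python
import numpy as np

def point_reflect(x, c):
    """Reflect the point x through the center c: x' = 2c - x."""
    return 2.0 * np.asarray(c, dtype=float) - np.asarray(x, dtype=float)

p = np.array([1.0, 2.0])
c = np.array([3.0, -1.0])
q = np.array([0.5, 0.5])

r = point_reflect(p, c)            # the reflected point (x', y') = (2xc - x, 2yc - y)
back = point_reflect(r, c)         # reflecting twice returns p (involution)
composed = point_reflect(point_reflect(p, c), q)
# `composed` equals p translated by 2(q - c): the composition of two point
# reflections is a translation, as stated in the "Point reflection group" section.
```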
The group type is one of the three symmetry group types in 3D without any pure rotational symmetry, see cyclic symmetries with n = 1. The following point groups in three dimensions contain inversion: Cnh and Dnh for even n S2n and Dnd for odd n Th, Oh, and Ih Closely related to inversion in a point is reflection with respect to a plane, which can be thought of as an "inversion in a plane". == Inversion centers in crystals and molecules == Inversion symmetry plays a major role in the properties of materials, as also do other symmetry operations. Some molecules contain an inversion center when a point exists through which all atoms can reflect while retaining symmetry. In many cases they can be considered as polyhedra, categorized by their coordination number and bond angles. For example, four-coordinate polyhedra are classified as tetrahedra, while five-coordinate environments can be square pyramidal or trigonal bipyramidal depending on the bonding angles. Six-coordinate octahedra are an example of centrosymmetric polyhedra, as the central atom acts as an inversion center through which the six bonded atoms retain symmetry. Tetrahedra, on the other hand, are non-centrosymmetric as an inversion through the central atom would result in a reversal of the polyhedron. Polyhedra with an odd (versus even) coordination number are not centrosymmetric. Polyhedra containing inversion centers are known as centrosymmetric, while those without are non-centrosymmetric. The presence or absence of an inversion center has a strong influence on the optical properties; for instance molecules without inversion symmetry have a dipole moment and can directly interact with photons, while those with inversion have no dipole moment and only interact via Raman scattering. The latter is named after C. V. Raman, who was awarded the 1930 Nobel Prize in Physics for his discovery.
In addition, in crystallography, the presence of inversion centers for periodic structures distinguishes between centrosymmetric and non-centrosymmetric compounds. All crystalline compounds come from a repetition of an atomic building block known as a unit cell, and these unit cells define which polyhedra form and in what order. In many materials such as oxides these polyhedra can link together via corner-, edge- or face sharing, depending on which atoms share common bonds and also the valence. In other cases such as for metals and alloys the structures are better considered as arrangements of close-packed atoms. Crystals which do not have inversion symmetry also display the piezoelectric effect. The presence or absence of inversion symmetry also has numerous consequences for the properties of solids, as do the mathematical relationships between the different crystal symmetries. Real polyhedra in crystals often lack the uniformity anticipated in their bonding geometry. Common irregularities found in crystallography include distortions and disorder. Distortion involves the warping of polyhedra due to nonuniform bonding lengths, often due to differing electrostatic interactions between heteroatoms or electronic effects such as Jahn–Teller distortions. For instance, a titanium center will likely bond evenly to six oxygens in an octahedron, but distortion would occur if one of the oxygens were replaced with a more electronegative fluorine. Distortions will not change the inherent geometry of the polyhedra: a distorted octahedron is still classified as an octahedron, but strong enough distortions can have an effect on the centrosymmetry of a compound. Disorder involves a split occupancy over two or more sites, in which an atom will occupy one crystallographic position in a certain percentage of polyhedra and the other in the remaining positions.
Disorder can influence the centrosymmetry of certain polyhedra as well, depending on whether or not the occupancy is split over an already-present inversion center. Centrosymmetry applies to the crystal structure as a whole, not just individual polyhedra. Crystals are classified into thirty-two crystallographic point groups which describe how the different polyhedra arrange themselves in space in the bulk structure. Of these thirty-two point groups, eleven are centrosymmetric. The presence of noncentrosymmetric polyhedra does not guarantee that the point group will be noncentrosymmetric: two non-centrosymmetric shapes can be oriented in space in a manner which contains an inversion center between the two. Two tetrahedra facing each other can have an inversion center in the middle, because the orientation allows for each atom to have a reflected pair. The converse is also true, as multiple centrosymmetric polyhedra can be arranged to form a noncentrosymmetric point group. == Inversion with respect to the origin == Inversion with respect to the origin corresponds to additive inversion of the position vector, and also to scalar multiplication by −1. The operation commutes with every other linear transformation, but not with translation: it is in the center of the general linear group. "Inversion" without indicating "in a point", "in a line" or "in a plane", means this inversion; in physics 3-dimensional reflection through the origin is also called a parity transformation. In mathematics, reflection through the origin refers to the point reflection of Euclidean space Rn across the origin of the Cartesian coordinate system. Reflection through the origin is an orthogonal transformation corresponding to scalar multiplication by − 1 {\displaystyle -1} , and can also be written as − I {\displaystyle -I} , where I {\displaystyle I} is the identity matrix. In three dimensions, this sends ( x , y , z ) ↦ ( − x , − y , − z ) {\displaystyle (x,y,z)\mapsto (-x,-y,-z)} , and so forth.
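The algebraic facts in the last paragraph admit a quick numerical illustration (a sketch using NumPy; the matrix and vectors are arbitrary examples): reflection through the origin is the scalar matrix −I, which commutes with every linear map but not with translations.

```python
import numpy as np

n = 3
neg_I = -np.eye(n)                 # reflection through the origin, -I

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])    # an arbitrary linear transformation

# a scalar matrix commutes with every linear transformation ...
assert np.allclose(neg_I @ A, A @ neg_I)

# ... but reflection through the origin does not commute with a translation
v = np.array([1.0, 2.0, 3.0])
t = np.array([1.0, 0.0, 0.0])
assert not np.allclose(neg_I @ (v + t), neg_I @ v + t)

# in three dimensions it sends (x, y, z) to (-x, -y, -z)
assert np.allclose(neg_I @ v, -v)
```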
=== Representations === As a scalar matrix, it is represented in every basis by a matrix with − 1 {\displaystyle -1} on the diagonal, and, together with the identity, is the center of the orthogonal group O ( n ) {\displaystyle O(n)} . It is a product of n orthogonal reflections (reflection through the axes of any orthogonal basis); note that orthogonal reflections commute. In 2 dimensions, it is in fact rotation by 180 degrees, and in dimension 2 n {\displaystyle 2n} , it is rotation by 180 degrees in n orthogonal planes; note again that rotations in orthogonal planes commute. === Properties === It has determinant ( − 1 ) n {\displaystyle (-1)^{n}} (from the representation by a matrix or as a product of reflections). Thus it is orientation-preserving in even dimension, thus an element of the special orthogonal group SO(2n), and it is orientation-reversing in odd dimension, thus not an element of SO(2n + 1) and instead providing a splitting of the map O ( 2 n + 1 ) → ± 1 {\displaystyle O(2n+1)\to \pm 1} , showing that O ( 2 n + 1 ) = S O ( 2 n + 1 ) × { ± I } {\displaystyle O(2n+1)=SO(2n+1)\times \{\pm I\}} as an internal direct product. Together with the identity, it forms the center of the orthogonal group. It preserves every quadratic form, meaning Q ( − v ) = Q ( v ) {\displaystyle Q(-v)=Q(v)} , and thus is an element of every indefinite orthogonal group as well. It equals the identity if and only if the characteristic is 2. It is the longest element of the Coxeter group of signed permutations. Analogously, it is a longest element of the orthogonal group, with respect to the generating set of reflections: elements of the orthogonal group all have length at most n with respect to the generating set of reflections, and reflection through the origin has length n, though it is not unique in this: other maximal combinations of rotations (and possibly reflections) also have maximal length. 
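The decomposition of −I into 180° rotations in orthogonal planes, and the determinant formula (−1)^n, can both be verified directly (an illustrative sketch; the helper function is not standard library code):

```python
import numpy as np

def rotation_pi(n, i, j):
    """180-degree rotation in the coordinate plane spanned by axes i and j."""
    R = np.eye(n)
    R[i, i] = R[j, j] = -1.0       # cos(pi) = -1, sin(pi) = 0
    return R

# in dimension 4, -I is the product of 180-degree rotations in two orthogonal planes
R = rotation_pi(4, 0, 1) @ rotation_pi(4, 2, 3)
assert np.allclose(R, -np.eye(4))

# det(-I) = (-1)^n: orientation-preserving in even dimension, reversing in odd
assert np.isclose(np.linalg.det(-np.eye(4)), 1.0)
assert np.isclose(np.linalg.det(-np.eye(3)), -1.0)
```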
=== Geometry === In SO(2r), reflection through the origin is the farthest point from the identity element with respect to the usual metric. In O(2r + 1), reflection through the origin is not in SO(2r+1) (it is in the non-identity component), and there is no natural sense in which it is a "farther point" than any other point in the non-identity component, but it does provide a base point in the other component. === Clifford algebras and spin groups === It should not be confused with the element − 1 ∈ S p i n ( n ) {\displaystyle -1\in \mathrm {Spin} (n)} in the spin group. This is particularly confusing for even spin groups, as − I ∈ S O ( 2 n ) {\displaystyle -I\in SO(2n)} , and thus in Spin ( n ) {\displaystyle \operatorname {Spin} (n)} there is both − 1 {\displaystyle -1} and 2 lifts of − I {\displaystyle -I} . Reflection through the identity extends to an automorphism of a Clifford algebra, called the main involution or grade involution. Reflection through the identity lifts to a pseudoscalar. == See also == Affine involution Circle inversion Clifford algebra Congruence (geometry) Estermann measure Euclidean group Kovner–Besicovitch measure Orthogonal group Parity (physics) Reflection (mathematics) Riemannian symmetric space Spin group == Notes == == References == |
Wikipedia:Poisson summation formula#0 | In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation. For a smooth, complex valued function s ( x ) {\displaystyle s(x)} on R {\displaystyle \mathbb {R} } which decays at infinity with all derivatives (Schwartz function), the simplest version of the Poisson summation formula states that ∑ n = − ∞ ∞ s ( n ) = ∑ k = − ∞ ∞ S ( k ) {\textstyle \sum _{n=-\infty }^{\infty }s(n)=\sum _{k=-\infty }^{\infty }S(k)} (Eq.1), where S {\displaystyle S} is the Fourier transform of s {\displaystyle s} , i.e., S ( f ) ≜ ∫ − ∞ ∞ s ( x ) e − i 2 π f x d x . {\textstyle S(f)\triangleq \int _{-\infty }^{\infty }s(x)\ e^{-i2\pi fx}\,dx.} The summation formula can be restated in many equivalent ways, but a simple one is the following. Suppose that f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} (L1 for L1 space) and Λ {\displaystyle \Lambda } is a unimodular lattice in R n {\displaystyle \mathbb {R} ^{n}} . Then the periodization of f {\displaystyle f} , which is defined as the sum f Λ ( x ) = ∑ λ ∈ Λ f ( x + λ ) , {\textstyle f_{\Lambda }(x)=\sum _{\lambda \in \Lambda }f(x+\lambda ),} converges in the L 1 {\displaystyle L^{1}} norm of R n / Λ {\displaystyle \mathbb {R} ^{n}/\Lambda } to an L 1 ( R n / Λ ) {\displaystyle L^{1}(\mathbb {R} ^{n}/\Lambda )} function having Fourier series f Λ ( x ) ∼ ∑ λ ′ ∈ Λ ′ f ^ ( λ ′ ) e 2 π i λ ′ x {\displaystyle f_{\Lambda }(x)\sim \sum _{\lambda '\in \Lambda '}{\hat {f}}(\lambda ')e^{2\pi i\lambda 'x}} where Λ ′ {\displaystyle \Lambda '} is the dual lattice to Λ {\displaystyle \Lambda } .
(Note that the Fourier series on the right-hand side need not converge in L 1 {\displaystyle L^{1}} or otherwise.) == Periodization of a function == Let s ( x ) {\textstyle s\left(x\right)} be a smooth, complex valued function on R {\displaystyle \mathbb {R} } which decays at infinity with all derivatives (Schwartz function), and its Fourier transform S ( f ) {\displaystyle S\left(f\right)} , defined as S ( f ) = ∫ − ∞ ∞ s ( x ) e − 2 π i x f d x . {\displaystyle S(f)=\int _{-\infty }^{\infty }s(x)e^{-2\pi ixf}dx.} Then S ( f ) {\displaystyle S(f)} is also a Schwartz function, and we have the reciprocal relationship that s ( x ) = ∫ − ∞ ∞ S ( f ) e 2 π i x f d f . {\displaystyle s(x)=\int _{-\infty }^{\infty }S(f)e^{2\pi ixf}df.} The periodization of s ( x ) {\displaystyle s(x)} with period P > 0 {\displaystyle P>0} is given by s P ( x ) ≜ ∑ n = − ∞ ∞ s ( x + n P ) . {\displaystyle s_{_{P}}(x)\triangleq \sum _{n=-\infty }^{\infty }s(x+nP).} Likewise, the periodization of S ( f ) {\displaystyle S(f)} with period 1 / T {\displaystyle 1/T} , where T > 0 {\displaystyle T>0} , is S 1 / T ( f ) ≜ ∑ k = − ∞ ∞ S ( f + k / T ) . {\displaystyle S_{1/T}(f)\triangleq \sum _{k=-\infty }^{\infty }S(f+k/T).} Then Eq.1, ∑ n = − ∞ ∞ s ( n ) = ∑ k = − ∞ ∞ S ( k ) , {\displaystyle \sum _{n=-\infty }^{\infty }s(n)=\sum _{k=-\infty }^{\infty }S(k),} is a special case (P=1, x=0) of this generalization: s P ( x ) = ∑ k = − ∞ ∞ 1 P ⋅ S ( k P ) e i 2 π k P x {\textstyle s_{_{P}}(x)=\sum _{k=-\infty }^{\infty }{\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right)\ e^{i2\pi {\frac {k}{P}}x}} (Eq.2), which is a Fourier series expansion with coefficients that are samples of the function S ( f ) . {\displaystyle S(f).} Conversely, Eq.2 follows from Eq.1 by applying the known behavior of the Fourier transform under translations (see the Fourier transform properties time scaling and shifting). Similarly: S 1 / T ( f ) = ∑ n = − ∞ ∞ T ⋅ s ( n T ) e − i 2 π n T f {\textstyle S_{1/T}(f)=\sum _{n=-\infty }^{\infty }T\cdot s(nT)\ e^{-i2\pi nTf}} (Eq.3), also known as the important Discrete-time Fourier transform.
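As a numerical sanity check of Eq.1 (not part of the article's derivation), one can compare both sides for a scaled Gaussian, whose Fourier transform is known in closed form; the truncation range ±50 is arbitrary and far more than enough:

```python
import math

def s(x, t):
    """Scaled Gaussian exp(-pi t x^2)."""
    return math.exp(-math.pi * t * x * x)

def S(f, t):
    """Its Fourier transform: t^(-1/2) exp(-pi f^2 / t)."""
    return math.exp(-math.pi * f * f / t) / math.sqrt(t)

t = 0.5
lhs = sum(s(n, t) for n in range(-50, 51))   # sum of samples of s
rhs = sum(S(k, t) for k in range(-50, 51))   # sum of samples of S
assert abs(lhs - rhs) < 1e-12                # Eq.1, up to negligible truncation
```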
== Derivations == We prove that, if s ∈ L 1 ( R ) {\displaystyle s\in L^{1}(\mathbb {R} )} , then the (possibly divergent) Fourier series of s P ( x ) {\displaystyle s_{P}(x)} is s P ( x ) ∼ ∑ k = − ∞ ∞ 1 P S ( k P ) e i 2 π k x / P . {\displaystyle s_{_{P}}(x)\sim \sum _{k=-\infty }^{\infty }{\frac {1}{P}}S\left({\frac {k}{P}}\right)e^{i2\pi kx/P}.} When s ( x ) {\displaystyle s(x)} is a Schwartz function, this establishes equality in Eq.2 of the previous section. First, the periodization s P ( x ) {\displaystyle s_{P}(x)} converges in L 1 {\displaystyle L^{1}} norm to an L 1 ( [ 0 , P ] ) {\displaystyle L^{1}([0,P])} function which is periodic on R {\displaystyle \mathbb {R} } , and therefore integrable on any interval of length P . {\displaystyle P.} We must therefore show that the Fourier series coefficients of s P ( x ) {\displaystyle s_{_{P}}(x)} are 1 P S ( k P ) {\textstyle {\frac {1}{P}}S\left({\frac {k}{P}}\right)} where S ( f ) {\textstyle S\left(f\right)} is the Fourier transform of s ( x ) {\textstyle s\left(x\right)} . (Not S [ k ] {\textstyle S\left[k\right]} , which is the Fourier coefficient of s P ( x ) {\displaystyle s_{_{P}}(x)} .) Proceeding from the definition of the Fourier coefficients we have: S [ k ] ≜ 1 P ∫ 0 P s P ( x ) ⋅ e − i 2 π k P x d x = 1 P ∫ 0 P ( ∑ n = − ∞ ∞ s ( x + n P ) ) ⋅ e − i 2 π k P x d x = 1 P ∑ n = − ∞ ∞ ∫ 0 P s ( x + n P ) ⋅ e − i 2 π k P x d x , {\displaystyle {\begin{aligned}S[k]\ &\triangleq \ {\frac {1}{P}}\int _{0}^{P}s_{_{P}}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx\\&=\ {\frac {1}{P}}\int _{0}^{P}\left(\sum _{n=-\infty }^{\infty }s(x+nP)\right)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx\\&=\ {\frac {1}{P}}\sum _{n=-\infty }^{\infty }\int _{0}^{P}s(x+nP)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx,\end{aligned}}} where the interchange of summation with integration is justified by dominated convergence.
With a change of variables ( τ = x + n P {\displaystyle \tau =x+nP} ), this becomes the following, completing the proof of Eq.2: S [ k ] = 1 P ∑ n = − ∞ ∞ ∫ n P ( n + 1 ) P s ( τ ) e − i 2 π k P τ e i 2 π k n ⏟ 1 d τ = 1 P ∫ − ∞ ∞ s ( τ ) e − i 2 π k P τ d τ ≜ 1 P ⋅ S ( k P ) . {\displaystyle {\begin{aligned}S[k]={\frac {1}{P}}\sum _{n=-\infty }^{\infty }\int _{nP}^{(n+1)P}s(\tau )\ e^{-i2\pi {\frac {k}{P}}\tau }\ \underbrace {e^{i2\pi kn}} _{1}\,d\tau \ =\ {\frac {1}{P}}\int _{-\infty }^{\infty }s(\tau )\ e^{-i2\pi {\frac {k}{P}}\tau }d\tau \triangleq {\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right)\end{aligned}}.} This proves Eq.2 for L 1 {\displaystyle L^{1}} functions, in the sense that the right-hand side is the (possibly divergent) Fourier series of the left-hand side. Similarly, if S ( f ) {\displaystyle S(f)} is in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , a similar proof shows the corresponding version of Eq.3. Finally, if s P ( x ) {\displaystyle s_{_{P}}(x)} has an absolutely convergent Fourier series, then Eq.2 holds as an equality almost everywhere. This is the case, in particular, when s ( x ) {\displaystyle s(x)} is a Schwartz function. Similarly, Eq.3 holds when S ( f ) {\displaystyle S(f)} is a Schwartz function. == Distributional formulation == These equations can be interpreted in the language of distributions: §7.2 for a function s {\displaystyle s} whose derivatives are all rapidly decreasing (see Schwartz function). The Poisson summation formula arises as a particular case of the Convolution Theorem on tempered distributions, using the Dirac comb distribution and its Fourier series: ∑ n = − ∞ ∞ δ ( x − n T ) ≡ ∑ k = − ∞ ∞ 1 T ⋅ e − i 2 π k T x ⟺ F 1 T ⋅ ∑ k = − ∞ ∞ δ ( f − k / T ) . 
{\displaystyle \sum _{n=-\infty }^{\infty }\delta (x-nT)\equiv \sum _{k=-\infty }^{\infty }{\frac {1}{T}}\cdot e^{-i2\pi {\frac {k}{T}}x}\quad {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\quad {\frac {1}{T}}\cdot \sum _{k=-\infty }^{\infty }\delta (f-k/T).} In other words, the periodization of a Dirac delta δ , {\displaystyle \delta ,} resulting in a Dirac comb, corresponds to the discretization of its spectrum which is constantly one. Hence, this again is a Dirac comb but with reciprocal increments. For the case T = 1 , {\displaystyle T=1,} Eq.1 readily follows: ∑ k = − ∞ ∞ S ( k ) = ∑ k = − ∞ ∞ ( ∫ − ∞ ∞ s ( x ) e − i 2 π k x d x ) = ∫ − ∞ ∞ s ( x ) ( ∑ k = − ∞ ∞ e − i 2 π k x ) ⏟ ∑ n = − ∞ ∞ δ ( x − n ) d x = ∑ n = − ∞ ∞ ( ∫ − ∞ ∞ s ( x ) δ ( x − n ) d x ) = ∑ n = − ∞ ∞ s ( n ) . {\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(k)&=\sum _{k=-\infty }^{\infty }\left(\int _{-\infty }^{\infty }s(x)\ e^{-i2\pi kx}dx\right)=\int _{-\infty }^{\infty }s(x)\underbrace {\left(\sum _{k=-\infty }^{\infty }e^{-i2\pi kx}\right)} _{\sum _{n=-\infty }^{\infty }\delta (x-n)}dx\\&=\sum _{n=-\infty }^{\infty }\left(\int _{-\infty }^{\infty }s(x)\ \delta (x-n)\ dx\right)=\sum _{n=-\infty }^{\infty }s(n).\end{aligned}}} Similarly: ∑ k = − ∞ ∞ S ( f − k / T ) = ∑ k = − ∞ ∞ F { s ( x ) ⋅ e i 2 π k T x } = F { s ( x ) ∑ k = − ∞ ∞ e i 2 π k T x ⏟ T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ δ ( x − n T ) } = ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ F { δ ( x − n T ) } = ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ e − i 2 π n T f . 
{\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(f-k/T)&=\sum _{k=-\infty }^{\infty }{\mathcal {F}}\left\{s(x)\cdot e^{i2\pi {\frac {k}{T}}x}\right\}\\&={\mathcal {F}}{\bigg \{}s(x)\underbrace {\sum _{k=-\infty }^{\infty }e^{i2\pi {\frac {k}{T}}x}} _{T\sum _{n=-\infty }^{\infty }\delta (x-nT)}{\bigg \}}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot \delta (x-nT)\right\}\\&=\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot {\mathcal {F}}\left\{\delta (x-nT)\right\}=\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot e^{-i2\pi nTf}.\end{aligned}}} Or:: 143 ∑ k = − ∞ ∞ S ( f − k / T ) = S ( f ) ∗ ∑ k = − ∞ ∞ δ ( f − k / T ) = S ( f ) ∗ F { T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { s ( x ) ⋅ T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ δ ( x − n T ) } as above . {\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(f-k/T)&=S(f)*\sum _{k=-\infty }^{\infty }\delta (f-k/T)\\&=S(f)*{\mathcal {F}}\left\{T\sum _{n=-\infty }^{\infty }\delta (x-nT)\right\}\\&={\mathcal {F}}\left\{s(x)\cdot T\sum _{n=-\infty }^{\infty }\delta (x-nT)\right\}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot \delta (x-nT)\right\}\quad {\text{as above}}.\end{aligned}}} The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as 0 → Z → R → R / Z → 0. {\textstyle 0\to \mathbb {Z} \to \mathbb {R} \to \mathbb {R} /\mathbb {Z} \to 0.} == Applicability == Eq.2 holds provided s ( x ) {\displaystyle s(x)} is a continuous integrable function which satisfies | s ( x ) | + | S ( x ) | ≤ C ( 1 + | x | ) − 1 − δ {\textstyle |s(x)|+|S(x)|\leq C(1+|x|)^{-1-\delta }} for some C > 0 , δ > 0 {\displaystyle C>0,\delta >0} and every x . 
{\displaystyle x.} Note that such s ( x ) {\displaystyle s(x)} is uniformly continuous; this, together with the decay assumption on s {\displaystyle s} , shows that the series defining s P {\displaystyle s_{_{P}}} converges uniformly to a continuous function. Eq.2 holds in the strong sense that both sides converge uniformly and absolutely to the same limit. Eq.2 holds in a pointwise sense under the strictly weaker assumption that s {\displaystyle s} has bounded variation and 2 ⋅ s ( x ) = lim ε → 0 s ( x + ε ) + lim ε → 0 s ( x − ε ) . {\displaystyle 2\cdot s(x)=\lim _{\varepsilon \to 0}s(x+\varepsilon )+\lim _{\varepsilon \to 0}s(x-\varepsilon ).} The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums. As shown above, Eq.2 holds under the much less restrictive assumption that s ( x ) {\displaystyle s(x)} is in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of s P ( x ) . {\displaystyle s_{_{P}}(x).} In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, Eq.2, in the case x = 0 , {\displaystyle x=0,} holds under the less restrictive conditions that s ( x ) {\displaystyle s(x)} is integrable and 0 is a point of continuity of s P ( x ) {\displaystyle s_{_{P}}(x)} . However, Eq.2 may fail to hold even when both s {\displaystyle s} and S {\displaystyle S} are integrable and continuous, and the sums converge absolutely. == Applications == === Method of images === In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images.
Here the heat kernel on R 2 {\displaystyle \mathbb {R} ^{2}} is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. In one dimension, the resulting solution is called a theta function. In electrodynamics, the method is also used to accelerate the computation of periodic Green's functions. === Sampling === In the statistical study of time-series, if s {\displaystyle s} is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function s {\displaystyle s} is band-limited, meaning that there is some cutoff frequency f o {\displaystyle f_{o}} such that S ( f ) {\displaystyle S(f)} is zero for frequencies exceeding the cutoff: S ( f ) = 0 {\displaystyle S(f)=0} for | f | > f o . {\displaystyle |f|>f_{o}.} For band-limited functions, choosing the sampling rate 1 T > 2 f o {\displaystyle {\tfrac {1}{T}}>2f_{o}} guarantees that no information is lost: since S {\displaystyle S} can be reconstructed from these sampled values. Then, by Fourier inversion, so can s . {\displaystyle s.} This leads to the Nyquist–Shannon sampling theorem. === Ewald summation === Computationally, the Poisson summation formula is useful since a slowly converging summation in real space is guaranteed to be converted into a quickly converging equivalent summation in Fourier space. (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation. === Approximations of integrals === The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. 
Consider an approximation of S ( 0 ) = ∫ − ∞ ∞ d x s ( x ) {\textstyle S(0)=\int _{-\infty }^{\infty }dx\,s(x)} as δ ∑ n = − ∞ ∞ s ( n δ ) {\textstyle \delta \sum _{n=-\infty }^{\infty }s(n\delta )} , where δ ≪ 1 {\displaystyle \delta \ll 1} is the size of the bin. Then, according to Eq.2 this approximation coincides with ∑ k = − ∞ ∞ S ( k / δ ) {\textstyle \sum _{k=-\infty }^{\infty }S(k/\delta )} . The error in the approximation can then be bounded as | ∑ k ≠ 0 S ( k / δ ) | ≤ ∑ k ≠ 0 | S ( k / δ ) | {\textstyle \left|\sum _{k\neq 0}S(k/\delta )\right|\leq \sum _{k\neq 0}|S(k/\delta )|} . This is particularly useful when the Fourier transform of s ( x ) {\displaystyle s(x)} decays rapidly, since then the bound is small whenever 1 / δ ≫ 1 {\displaystyle 1/\delta \gg 1} . === Lattice points inside a sphere === The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points inside a large Euclidean sphere. It can also be used to show that if s {\displaystyle s} is an integrable function and both s {\displaystyle s} and S {\displaystyle S} have compact support, then s = 0. {\displaystyle s=0.} === Number theory === In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function. One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put q = e i π τ {\displaystyle q=e^{i\pi \tau }} , for τ {\displaystyle \tau } a complex number in the upper half plane, and define the theta function: θ ( τ ) = ∑ n q n 2 . {\displaystyle \theta (\tau )=\sum _{n}q^{n^{2}}.} The relation between θ ( − 1 / τ ) {\displaystyle \theta (-1/\tau )} and θ ( τ ) {\displaystyle \theta (\tau )} turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form.
By choosing s ( x ) = e − π x 2 {\displaystyle s(x)=e^{-\pi x^{2}}} and using the fact that S ( f ) = e − π f 2 , {\displaystyle S(f)=e^{-\pi f^{2}},} one can conclude: θ ( − 1 τ ) = τ i θ ( τ ) , {\displaystyle \theta \left({-1 \over \tau }\right)={\sqrt {\tau \over i}}\theta (\tau ),} by putting 1 / λ = τ / i . {\displaystyle {1/\lambda }={\sqrt {\tau /i}}.} It follows from this that θ 8 {\displaystyle \theta ^{8}} has a simple transformation property under τ ↦ − 1 / τ {\displaystyle \tau \mapsto {-1/\tau }} and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight perfect squares. === Sphere packings === Cohn & Elkies proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to a proof of optimal sphere packings in dimension 8 and 24. === Other === Let s ( x ) = e − a x {\displaystyle s(x)=e^{-ax}} for 0 ≤ x {\displaystyle 0\leq x} and s ( x ) = 0 {\displaystyle s(x)=0} for x < 0 {\displaystyle x<0} to get coth ( x ) = x ∑ n ∈ Z 1 x 2 + π 2 n 2 = 1 x + 2 x ∑ n ∈ Z + 1 x 2 + π 2 n 2 . {\displaystyle \coth(x)=x\sum _{n\in \mathbb {Z} }{\frac {1}{x^{2}+\pi ^{2}n^{2}}}={\frac {1}{x}}+2x\sum _{n\in \mathbb {Z} _{+}}{\frac {1}{x^{2}+\pi ^{2}n^{2}}}.} It can be used to prove the functional equation for the theta function. Poisson's summation formula appears in Ramanujan's notebooks and can be used to prove some of his formulas, in particular it can be used to prove one of the formulas in Ramanujan's first letter to Hardy. It can be used to calculate the quadratic Gauss sum. == Generalizations == The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let Λ {\displaystyle \Lambda } be the lattice in R d {\displaystyle \mathbb {R} ^{d}} consisting of points with integer coordinates. 
For a function s {\displaystyle s} in L 1 ( R d ) {\displaystyle L^{1}(\mathbb {R} ^{d})} , consider the series given by summing the translates of s {\displaystyle s} by elements of Λ {\displaystyle \Lambda } : P s ( x ) = ∑ ν ∈ Λ s ( x + ν ) . {\displaystyle \mathbb {P} s(x)=\sum _{\nu \in \Lambda }s(x+\nu ).} Theorem For s {\displaystyle s} in L 1 ( R d ) {\displaystyle L^{1}(\mathbb {R} ^{d})} , the above series converges pointwise almost everywhere, and defines a Λ {\displaystyle \Lambda } -periodic function on R d {\displaystyle \mathbb {R} ^{d}} , hence a function P s ( x ¯ ) {\displaystyle \mathbb {P} s({\bar {x}})} on the torus R d / Λ . {\displaystyle \mathbb {R} ^{d}/\Lambda .} a.e. P s {\displaystyle \mathbb {P} s} lies in L 1 ( R d / Λ ) {\displaystyle L^{1}(\mathbb {R} ^{d}/\Lambda )} with ‖ P s ‖ L 1 ( R d / Λ ) ≤ ‖ s ‖ L 1 ( R ) . {\displaystyle \|\mathbb {P} s\|_{L_{1}(\mathbb {R} ^{d}/\Lambda )}\leq \|s\|_{L_{1}(\mathbb {R} )}.} Moreover, for all ν {\displaystyle \nu } in Λ , {\displaystyle \Lambda ,} P S ( ν ) = ∫ R d / Λ P s ( x ¯ ) e − i 2 π ν ⋅ x ¯ d x ¯ {\displaystyle \mathbb {P} S(\nu )=\int _{\mathbb {R} ^{d}/\Lambda }\mathbb {P} s({\bar {x}})e^{-i2\pi \nu \cdot {\bar {x}}}d{\bar {x}}} (the Fourier transform of P s {\displaystyle \mathbb {P} s} on the torus R d / Λ {\displaystyle \mathbb {R} ^{d}/\Lambda } ) equals S ( ν ) = ∫ R d s ( x ) e − i 2 π ν ⋅ x d x {\displaystyle S(\nu )=\int _{\mathbb {R} ^{d}}s(x)e^{-i2\pi \nu \cdot x}\,dx} (the Fourier transform of s {\displaystyle s} on R d {\displaystyle \mathbb {R} ^{d}} ). When s {\displaystyle s} is in addition continuous, and both s {\displaystyle s} and S {\displaystyle S} decay sufficiently fast at infinity, then one can "invert" the Fourier series back to their domain R d {\displaystyle \mathbb {R} ^{d}} and make a stronger statement. 
More precisely, if | s ( x ) | + | S ( x ) | ≤ C ( 1 + | x | ) − d − δ {\displaystyle |s(x)|+|S(x)|\leq C(1+|x|)^{-d-\delta }} for some C, δ > 0, then: VII §2 ∑ ν ∈ Λ s ( x + ν ) = ∑ ν ∈ Λ S ( ν ) e i 2 π ν ⋅ x , {\displaystyle \sum _{\nu \in \Lambda }s(x+\nu )=\sum _{\nu \in \Lambda }S(\nu )e^{i2\pi \nu \cdot x},} where both series converge absolutely and uniformly on Λ. When d = 1 and x = 0, this gives Eq.1 above. More generally, a version of the statement holds if Λ is replaced by a more general lattice in a finite dimensional vector space V {\displaystyle V} . Choose a translation invariant measure m {\displaystyle m} on V {\displaystyle V} . It is unique up to positive scalar. Again for a function s ∈ L 1 ( V , m ) {\displaystyle s\in L_{1}(V,m)} we define the periodisation P s ( x ) = ∑ ν ∈ Λ s ( x + ν ) {\displaystyle \mathbb {P} s(x)=\sum _{\nu \in \Lambda }s(x+\nu )} as above. The dual lattice Λ ′ {\displaystyle \Lambda '} is defined as a subset of the dual vector space V ′ {\displaystyle V'} that evaluates to integers on the lattice Λ {\displaystyle \Lambda } or alternatively, by Pontryagin duality, as the characters of V {\displaystyle V} that contain Λ {\displaystyle \Lambda } in the kernel. 
Then the statement is that for all ν ∈ Λ ′ {\displaystyle \nu \in \Lambda '} the Fourier transform P S {\displaystyle \mathbb {P} S} of the periodisation P s {\displaystyle \mathbb {P} s} as a function on V / Λ {\displaystyle V/\Lambda } and the Fourier transform S {\displaystyle S} of s {\displaystyle s} on V {\displaystyle V} itself are related by proper normalisation P S ( ν ) = 1 m ( V / Λ ) ∫ V / Λ P s ( x ¯ ) e − i 2 π ⟨ ν , x ¯ ⟩ m ( d x ¯ ) = 1 m ( V / Λ ) ∫ V s ( x ) e − i 2 π ⟨ ν , x ⟩ m ( d x ) = 1 m ( V / Λ ) S ( ν ) {\displaystyle {\begin{aligned}\mathbb {P} S(\nu )&={\frac {1}{m(V/\Lambda )}}\int _{V/\Lambda }\mathbb {P} s({\bar {x}})e^{-i2\pi \langle \nu ,{\bar {x}}\rangle }m(d{\bar {x}})\\&={\frac {1}{m(V/\Lambda )}}\int _{V}s(x)e^{-i2\pi \langle \nu ,x\rangle }m(dx)\\&={\frac {1}{m(V/\Lambda )}}S(\nu )\end{aligned}}} Note that the right-hand side is independent of the choice of invariant measure μ {\displaystyle \mu } . If s {\displaystyle s} and S {\displaystyle S} are continuous and tend to zero faster than 1 / r dim ( V ) + δ {\displaystyle 1/r^{\dim(V)+\delta }} then ∑ λ ∈ Λ s ( λ + x ) = ∑ ν ∈ Λ ′ P S ( ν ) e i 2 π ⟨ ν , x ⟩ = 1 m ( V / Λ ) ∑ ν ∈ Λ ′ S ( ν ) e i 2 π ⟨ ν , x ⟩ {\displaystyle \sum _{\lambda \in \Lambda }s(\lambda +x)=\sum _{\nu \in \Lambda '}\mathbb {P} S(\nu )e^{i2\pi \langle \nu ,x\rangle }={\frac {1}{m(V/\Lambda )}}\sum _{\nu \in \Lambda '}S(\nu )e^{i2\pi \langle \nu ,x\rangle }} In particular ∑ λ ∈ Λ s ( λ ) = 1 m ( V / Λ ) ∑ ν ∈ Λ ′ S ( ν ) {\displaystyle \sum _{\lambda \in \Lambda }s(\lambda )={\frac {1}{m(V/\Lambda )}}\sum _{\nu \in \Lambda '}S(\nu )} This is applied in the theory of theta functions and is a possible method in geometry of numbers. 
In fact in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so that the LHS of the summation formula is what is sought and the RHS is something that can be attacked by mathematical analysis. === Selberg trace formula === Further generalization to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula but takes on a much deeper character. A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups G {\displaystyle G} with a discrete subgroup Γ {\displaystyle \Gamma } such that G / Γ {\displaystyle G/\Gamma } has finite volume. For example, G {\displaystyle G} can be the real points of S L n {\displaystyle SL_{n}} and Γ {\displaystyle \Gamma } can be the integral points of S L n {\displaystyle SL_{n}} . In this setting, G {\displaystyle G} plays the role of the real number line in the classical version of Poisson summation, and Γ {\displaystyle \Gamma } plays the role of the integers n {\displaystyle n} that appear in the sum. The generalised version of Poisson summation is called the Selberg Trace Formula and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of G {\displaystyle G} , and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of Γ {\displaystyle \Gamma } , and is called "the geometric side." The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.
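The lattice form of the summation formula discussed above can be illustrated numerically for the self-dual lattice Z², where the covolume is 1, the dual lattice is again Z², and a scaled Gaussian has a Gaussian Fourier transform (a sketch evaluated at x = 0; the truncation bound N = 30 is arbitrary):

```python
import math

def theta2(t, N=30):
    """Sum of exp(-pi t |v|^2) over lattice points v in Z^2 with |coords| <= N."""
    return sum(math.exp(-math.pi * t * (m * m + n * n))
               for m in range(-N, N + 1) for n in range(-N, N + 1))

t = 0.5
# Poisson summation over Z^2 at x = 0: sum_v s(v) = (1 / covol) * sum_v' S(v'),
# where covol = 1 and S(xi) = (1/t) exp(-pi |xi|^2 / t)
assert abs(theta2(t) - theta2(1.0 / t) / t) < 1e-10
```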
=== Semiclassical trace formula === The Selberg trace formula was later generalized to more general smooth manifolds (without any algebraic structure) by Gutzwiller, Balian-Bloch, Chazarain, Colin de Verdière, Duistermaat-Guillemin, Uribe, Guillemin-Melrose, Zelditch and others. The "wave trace" or "semiclassical trace" formula relates geometric and spectral properties of the underlying topological space. The spectral side is the trace of a unitary group of operators (e.g., the Schrödinger or wave propagator) which encodes the spectrum of a differential operator and the geometric side is a sum of distributions which are supported at the lengths of periodic orbits of a corresponding Hamiltonian system. The Hamiltonian is given by the principal symbol of the differential operator which generates the unitary group. For the Laplacian, the "wave trace" has singular support contained in the set of lengths of periodic geodesics; this is called the Poisson relation. === Convolution theorem === The Poisson summation formula is a particular case of the convolution theorem on tempered distributions. If one of the two factors is the Dirac comb, one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform, the function that is constantly 1, this yields the Dirac comb identity. == See also == Fourier analysis § Summary Post's inversion formula Voronoi formula Discrete-time Fourier transform Explicit formulae for L-functions == References == |
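The classical one-dimensional case behind these generalisations can be checked numerically. A minimal sketch (not from the article), using the Gaussian s(x) = exp(−πx²), which is its own Fourier transform, and the lattice Λ = aZ with dual Λ′ = (1/a)Z and covolume m(V/Λ) = a:

```python
import math

def s(x):
    # Gaussian normalised so that it equals its own Fourier transform S
    return math.exp(-math.pi * x * x)

def lattice_sum(a, terms=50):
    # left-hand side: sum of s over the lattice a*Z
    return sum(s(a * n) for n in range(-terms, terms + 1))

def dual_sum(a, terms=50):
    # right-hand side: (1/a) * sum of S over the dual lattice (1/a)*Z,
    # where the covolume m(V/Lambda) equals a
    return sum(s(n / a) for n in range(-terms, terms + 1)) / a

for a in (0.5, 0.7, 1.3):
    assert abs(lattice_sum(a) - dual_sum(a)) < 1e-12
```

Both sums converge so quickly that fifty terms already agree to machine precision; this is the numerical face of the theta-function identity mentioned above.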
Wikipedia:Pokhozhaev's identity#0 | Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein–Gordon equation. It was obtained by S.I. Pokhozhaev and is similar to the virial theorem. This relation is also known as G.H. Derrick's theorem. Similar identities can be derived for other equations of mathematical physics. == The Pokhozhaev identity for the stationary nonlinear Schrödinger equation == Here is a general form due to H. Berestycki and P.-L. Lions. Let g ( s ) {\displaystyle g(s)} be continuous and real-valued, with g ( 0 ) = 0 {\displaystyle g(0)=0} . Denote G ( s ) = ∫ 0 s g ( t ) d t {\displaystyle G(s)=\int _{0}^{s}g(t)\,dt} . Let u ∈ L l o c ∞ ( R n ) , ∇ u ∈ L 2 ( R n ) , G ( u ) ∈ L 1 ( R n ) , n ∈ N , {\displaystyle u\in L_{\mathrm {loc} }^{\infty }(\mathbb {R} ^{n}),\qquad \nabla u\in L^{2}(\mathbb {R} ^{n}),\qquad G(u)\in L^{1}(\mathbb {R} ^{n}),\qquad n\in \mathbb {N} ,} be a solution to the equation − ∇ 2 u = g ( u ) {\displaystyle -\nabla ^{2}u=g(u)} , in the sense of distributions. Then u {\displaystyle u} satisfies the relation n − 2 2 ∫ R n | ∇ u ( x ) | 2 d x = n ∫ R n G ( u ( x ) ) d x . {\displaystyle {\frac {n-2}{2}}\int _{\mathbb {R} ^{n}}|\nabla u(x)|^{2}\,dx=n\int _{\mathbb {R} ^{n}}G(u(x))\,dx.} == The Pokhozhaev identity for the stationary nonlinear Dirac equation == There is a form of the virial identity for the stationary nonlinear Dirac equation in three spatial dimensions (and also the Maxwell-Dirac equations) and in arbitrary spatial dimension. Let n ∈ N , N ∈ N {\displaystyle n\in \mathbb {N} ,\,N\in \mathbb {N} } and let α i , 1 ≤ i ≤ n {\displaystyle \alpha ^{i},\,1\leq i\leq n} and β {\displaystyle \beta } be the self-adjoint Dirac matrices of size N × N {\displaystyle N\times N} : α i α j + α j α i = 2 δ i j I N , β 2 = I N , α i β + β α i = 0 , 1 ≤ i , j ≤ n . 
{\displaystyle \alpha ^{i}\alpha ^{j}+\alpha ^{j}\alpha ^{i}=2\delta _{ij}I_{N},\quad \beta ^{2}=I_{N},\quad \alpha ^{i}\beta +\beta \alpha ^{i}=0,\quad 1\leq i,j\leq n.} Let D 0 = − i α ⋅ ∇ = − i ∑ i = 1 n α i ∂ ∂ x i {\displaystyle D_{0}=-\mathrm {i} \alpha \cdot \nabla =-\mathrm {i} \sum _{i=1}^{n}\alpha ^{i}{\frac {\partial }{\partial x^{i}}}} be the massless Dirac operator. Let g ( s ) {\displaystyle g(s)} be continuous and real-valued, with g ( 0 ) = 0 {\displaystyle g(0)=0} . Denote G ( s ) = ∫ 0 s g ( t ) d t {\displaystyle G(s)=\int _{0}^{s}g(t)\,dt} . Let ϕ ∈ L l o c ∞ ( R n , C N ) {\displaystyle \phi \in L_{\mathrm {loc} }^{\infty }(\mathbb {R} ^{n},\mathbb {C} ^{N})} be a spinor-valued solution that satisfies the stationary form of the nonlinear Dirac equation, ω ϕ = D 0 ϕ + g ( ϕ ∗ β ϕ ) β ϕ , {\displaystyle \omega \phi =D_{0}\phi +g(\phi ^{\ast }\beta \phi )\beta \phi ,} in the sense of distributions, with some ω ∈ R {\displaystyle \omega \in \mathbb {R} } . Assume that ϕ ∈ H 1 ( R n , C N ) , G ( ϕ ∗ β ϕ ) ∈ L 1 ( R n ) . {\displaystyle \phi \in H^{1}(\mathbb {R} ^{n},\mathbb {C} ^{N}),\qquad G(\phi ^{\ast }\beta \phi )\in L^{1}(\mathbb {R} ^{n}).} Then ϕ {\displaystyle \phi } satisfies the relation ω ∫ R n ϕ ( x ) ∗ ϕ ( x ) d x = n − 1 n ∫ R n ϕ ( x ) ∗ D 0 ϕ ( x ) d x + ∫ R n G ( ϕ ( x ) ∗ β ϕ ( x ) ) d x . {\displaystyle \omega \int _{\mathbb {R} ^{n}}\phi (x)^{\ast }\phi (x)\,dx={\frac {n-1}{n}}\int _{\mathbb {R} ^{n}}\phi (x)^{\ast }D_{0}\phi (x)\,dx+\int _{\mathbb {R} ^{n}}G(\phi (x)^{\ast }\beta \phi (x))\,dx.} == See also == Virial theorem Derrick's theorem == References == |
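In one dimension (n = 1) the Pokhozhaev identity for the stationary NLS reads −(1/2)∫|u′|² dx = ∫G(u) dx, and it can be checked against the explicit solution u(x) = √2 sech x of −u″ = u³ − u, i.e. g(s) = s³ − s and G(s) = s⁴/4 − s²/2. A numerical sketch (the specific nonlinearity and solution are illustrative choices, not from the article):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
u = np.sqrt(2.0) / np.cosh(x)          # u(x) = sqrt(2) sech(x) solves -u'' = u^3 - u
du = -u * np.tanh(x)                   # exact derivative of sqrt(2) sech(x)
G = u**4 / 4 - u**2 / 2                # G(s) = s^4/4 - s^2/2 for g(s) = s^3 - s

def trapezoid(y, x):
    # plain trapezoidal rule, kept explicit to stay independent of the NumPy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

lhs = -0.5 * trapezoid(du**2, x)       # (n-2)/2 * integral of |u'|^2 with n = 1
rhs = trapezoid(G, x)                  # n * integral of G(u) with n = 1
assert abs(lhs - rhs) < 1e-8           # both sides equal -2/3
assert abs(lhs + 2.0 / 3.0) < 1e-8
```

The exponential decay of sech makes the truncation to [−20, 20] and the trapezoidal rule accurate far beyond the tolerance used here.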
Wikipedia:Polarization identity#0 | In linear algebra, a branch of mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space. If a norm arises from an inner product then the polarization identity can be used to express this inner product entirely in terms of the norm. The polarization identity shows that a norm can arise from at most one inner product; however, there exist norms that do not arise from any inner product. The norm associated with any inner product space satisfies the parallelogram law: ‖ x + y ‖ 2 + ‖ x − y ‖ 2 = 2 ‖ x ‖ 2 + 2 ‖ y ‖ 2 . {\displaystyle \|x+y\|^{2}+\|x-y\|^{2}=2\|x\|^{2}+2\|y\|^{2}.} In fact, as observed by John von Neumann, the parallelogram law characterizes those norms that arise from inner products. Given a normed space ( H , ‖ ⋅ ‖ ) {\displaystyle (H,\|\cdot \|)} , the parallelogram law holds for ‖ ⋅ ‖ {\displaystyle \|\cdot \|} if and only if there exists an inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } on H {\displaystyle H} such that ‖ x ‖ 2 = ⟨ x , x ⟩ {\displaystyle \|x\|^{2}=\langle x,\ x\rangle } for all x ∈ H , {\displaystyle x\in H,} in which case this inner product is uniquely determined by the norm via the polarization identity. == Polarization identities == Any inner product on a vector space induces a norm by the equation ‖ x ‖ = ⟨ x , x ⟩ . {\displaystyle \|x\|={\sqrt {\langle x,x\rangle }}.} The polarization identities reverse this relationship, recovering the inner product from the norm. Every inner product satisfies: ‖ x + y ‖ 2 = ‖ x ‖ 2 + ‖ y ‖ 2 + 2 Re ⟨ x , y ⟩ for all vectors x , y . {\displaystyle \|x+y\|^{2}=\|x\|^{2}+\|y\|^{2}+2\operatorname {Re} \langle x,y\rangle \qquad {\text{ for all vectors }}x,y.} Solving for Re ⟨ x , y ⟩ {\displaystyle \operatorname {Re} \langle x,y\rangle } gives the formula Re ⟨ x , y ⟩ = 1 2 ( ‖ x + y ‖ 2 − ‖ x ‖ 2 − ‖ y ‖ 2 ) . 
{\displaystyle \operatorname {Re} \langle x,y\rangle ={\frac {1}{2}}\left(\|x+y\|^{2}-\|x\|^{2}-\|y\|^{2}\right).} If the inner product is real then Re ⟨ x , y ⟩ = ⟨ x , y ⟩ {\displaystyle \operatorname {Re} \langle x,y\rangle =\langle x,y\rangle } and this formula becomes a polarization identity for real inner products. === Real vector spaces === If the vector space is over the real numbers then the polarization identities are: ⟨ x , y ⟩ = 1 4 ( ‖ x + y ‖ 2 − ‖ x − y ‖ 2 ) = 1 2 ( ‖ x + y ‖ 2 − ‖ x ‖ 2 − ‖ y ‖ 2 ) = 1 2 ( ‖ x ‖ 2 + ‖ y ‖ 2 − ‖ x − y ‖ 2 ) . {\displaystyle {\begin{alignedat}{4}\langle x,y\rangle &={\frac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}\right)\\[3pt]&={\frac {1}{2}}\left(\|x+y\|^{2}-\|x\|^{2}-\|y\|^{2}\right)\\[3pt]&={\frac {1}{2}}\left(\|x\|^{2}+\|y\|^{2}-\|x-y\|^{2}\right).\\[3pt]\end{alignedat}}} These various forms are all equivalent by the parallelogram law: 2 ‖ x ‖ 2 + 2 ‖ y ‖ 2 = ‖ x + y ‖ 2 + ‖ x − y ‖ 2 . {\displaystyle 2\|x\|^{2}+2\|y\|^{2}=\|x+y\|^{2}+\|x-y\|^{2}.} This further implies that the space L p {\displaystyle L^{p}} is not a Hilbert space whenever p ≠ 2 {\displaystyle p\neq 2} , as the parallelogram law is not satisfied. As a counterexample, consider x = 1 A {\displaystyle x=1_{A}} and y = 1 B {\displaystyle y=1_{B}} for any two disjoint subsets A , B {\displaystyle A,B} of positive finite measure in a domain Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} , and check that the parallelogram law fails. === Complex vector spaces === For vector spaces over the complex numbers, the above formulas are not quite correct because they do not describe the imaginary part of the (complex) inner product. However, an analogous expression does ensure that both real and imaginary parts are retained. The imaginary part of the inner product depends on whether it is antilinear in the first or the second argument.
The notation ⟨ x | y ⟩ , {\displaystyle \langle x|y\rangle ,} which is commonly used in physics will be assumed to be antilinear in the first argument while ⟨ x , y ⟩ , {\displaystyle \langle x,\,y\rangle ,} which is commonly used in mathematics, will be assumed to be antilinear in its second argument. They are related by the formula: ⟨ x , y ⟩ = ⟨ y | x ⟩ for all x , y ∈ H . {\displaystyle \langle x,\,y\rangle =\langle y\,|\,x\rangle \quad {\text{ for all }}x,y\in H.} The real part of any inner product (no matter which argument is antilinear and no matter if it is real or complex) is a symmetric bilinear map that for any x , y ∈ H {\displaystyle x,y\in H} is always equal to: R ( x , y ) : = Re ⟨ x ∣ y ⟩ = Re ⟨ x , y ⟩ = 1 4 ( ‖ x + y ‖ 2 − ‖ x − y ‖ 2 ) = 1 2 ( ‖ x + y ‖ 2 − ‖ x ‖ 2 − ‖ y ‖ 2 ) = 1 2 ( ‖ x ‖ 2 + ‖ y ‖ 2 − ‖ x − y ‖ 2 ) . {\displaystyle {\begin{alignedat}{4}R(x,y):&=\operatorname {Re} \langle x\mid y\rangle =\operatorname {Re} \langle x,y\rangle \\&={\frac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}\right)\\&={\frac {1}{2}}\left(\|x+y\|^{2}-\|x\|^{2}-\|y\|^{2}\right)\\[3pt]&={\frac {1}{2}}\left(\|x\|^{2}+\|y\|^{2}-\|x-y\|^{2}\right).\\[3pt]\end{alignedat}}} It is always a symmetric map, meaning that R ( x , y ) = R ( y , x ) for all x , y ∈ H , {\displaystyle R(x,y)=R(y,x)\quad {\text{ for all }}x,y\in H,} and it also satisfies: R ( i x , y ) = − R ( x , i y ) for all x , y ∈ H , {\displaystyle R(ix,y)=-R(x,iy)\quad {\text{ for all }}x,y\in H,} which in plain English says that to move a factor of i {\displaystyle i} to the other argument, introduce a negative sign. These properties can be proven either from the properties of inner products directly or from properties of norms by using the polarization identity. Unlike its real part, the imaginary part of a complex inner product depends on which argument is antilinear. 
Antilinear in first argument The polarization identities for the inner product ⟨ x | y ⟩ , {\displaystyle \langle x\,|\,y\rangle ,} which is antilinear in the first argument, are ⟨ x | y ⟩ = 1 4 ( ‖ x + y ‖ 2 − ‖ x − y ‖ 2 − i ‖ x + i y ‖ 2 + i ‖ x − i y ‖ 2 ) = 1 4 ∑ k = 0 3 i k ‖ x + ( − i ) k y ‖ 2 = R ( x , y ) − i R ( x , i y ) = R ( x , y ) + i R ( i x , y ) {\displaystyle {\begin{alignedat}{4}\langle x\,|\,y\rangle &={\frac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}-i\|x+iy\|^{2}+i\|x-iy\|^{2}\right)\\&={\frac {1}{4}}\sum _{k=0}^{3}i^{k}\|x+(-i)^{k}y\|^{2}\\&=R(x,y)-iR(x,iy)\\&=R(x,y)+iR(ix,y)\\\end{alignedat}}} where x , y ∈ H . {\displaystyle x,y\in H.} The second to last equality is similar to the formula expressing a linear functional φ {\displaystyle \varphi } in terms of its real part: φ ( y ) = Re φ ( y ) − i ( Re φ ) ( i y ) . {\displaystyle \varphi (y)=\operatorname {Re} \varphi (y)-i(\operatorname {Re} \varphi )(iy).} Antilinear in second argument The polarization identities for the inner product ⟨ x , y ⟩ , {\displaystyle \langle x,\ y\rangle ,} which is antilinear in the second argument, follows from that of ⟨ x | y ⟩ {\displaystyle \langle x\,|\,y\rangle } by the relationship: ⟨ x , y ⟩ := ⟨ y | x ⟩ = ⟨ x | y ⟩ ¯ for all x , y ∈ H . {\displaystyle \langle x,\ y\rangle :=\langle y\,|\,x\rangle ={\overline {\langle x\,|\,y\rangle }}\quad {\text{ for all }}x,y\in H.} So for any x , y ∈ H , {\displaystyle x,y\in H,} ⟨ x , y ⟩ = 1 4 ( ‖ x + y ‖ 2 − ‖ x − y ‖ 2 + i ‖ x + i y ‖ 2 − i ‖ x − i y ‖ 2 ) = R ( x , y ) + i R ( x , i y ) = R ( x , y ) − i R ( i x , y ) . {\displaystyle {\begin{alignedat}{4}\langle x,\,y\rangle &={\frac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}+i\|x+iy\|^{2}-i\|x-iy\|^{2}\right)\\&=R(x,y)+iR(x,iy)\\&=R(x,y)-iR(ix,y).\\\end{alignedat}}} This expression can be phrased symmetrically as: ⟨ x , y ⟩ = 1 4 ∑ k = 0 3 i k ‖ x + i k y ‖ 2 . 
{\displaystyle \langle x,y\rangle ={\frac {1}{4}}\sum _{k=0}^{3}i^{k}\left\|x+i^{k}y\right\|^{2}.} Summary of both cases Thus if R ( x , y ) + i I ( x , y ) {\displaystyle R(x,y)+iI(x,y)} denotes the real and imaginary parts of some inner product's value at the point ( x , y ) ∈ H × H {\displaystyle (x,y)\in H\times H} of its domain, then its imaginary part will be: I ( x , y ) = { R ( i x , y ) if antilinear in the 1 st argument R ( x , i y ) if antilinear in the 2 nd argument {\displaystyle I(x,y)~=~{\begin{cases}~R({\color {red}i}x,y)&\qquad {\text{ if antilinear in the }}{\color {red}1}{\text{st argument}}\\~R(x,{\color {blue}i}y)&\qquad {\text{ if antilinear in the }}{\color {blue}2}{\text{nd argument}}\\\end{cases}}} where the scalar i {\displaystyle i} is always located in the same argument that the inner product is antilinear in. Using R ( i x , y ) = − R ( x , i y ) {\displaystyle R(ix,y)=-R(x,iy)} , the above formula for the imaginary part becomes: I ( x , y ) = { − R ( x , i y ) if antilinear in the 1 st argument − R ( i x , y ) if antilinear in the 2 nd argument {\displaystyle I(x,y)~=~{\begin{cases}-R(x,{\color {black}i}y)&\qquad {\text{ if antilinear in the }}{\color {black}1}{\text{st argument}}\\-R({\color {black}i}x,y)&\qquad {\text{ if antilinear in the }}{\color {black}2}{\text{nd argument}}\\\end{cases}}} == Reconstructing the inner product == In a normed space ( H , ‖ ⋅ ‖ ) , {\displaystyle (H,\|\cdot \|),} if the parallelogram law ‖ x + y ‖ 2 + ‖ x − y ‖ 2 = 2 ‖ x ‖ 2 + 2 ‖ y ‖ 2 {\displaystyle \|x+y\|^{2}~+~\|x-y\|^{2}~=~2\|x\|^{2}+2\|y\|^{2}} holds, then there exists a unique inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\ \cdot \rangle } on H {\displaystyle H} such that ‖ x ‖ 2 = ⟨ x , x ⟩ {\displaystyle \|x\|^{2}=\langle x,\ x\rangle } for all x ∈ H . 
{\displaystyle x\in H.} Another necessary and sufficient condition for there to exist an inner product that induces a given norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} is for the norm to satisfy Ptolemy's inequality, which is: ‖ x − y ‖ ‖ z ‖ + ‖ y − z ‖ ‖ x ‖ ≥ ‖ x − z ‖ ‖ y ‖ for all vectors x , y , z . {\displaystyle \|x-y\|\,\|z\|~+~\|y-z\|\,\|x\|~\geq ~\|x-z\|\,\|y\|\qquad {\text{ for all vectors }}x,y,z.} == Applications and consequences == If H {\displaystyle H} is a complex Hilbert space then ⟨ x ∣ y ⟩ {\displaystyle \langle x\mid y\rangle } is real if and only if its imaginary part is 0 = R ( x , i y ) = 1 4 ( ‖ x + i y ‖ 2 − ‖ x − i y ‖ 2 ) {\displaystyle 0=R(x,iy)={\frac {1}{4}}\left(\Vert x+iy\Vert ^{2}-\Vert x-iy\Vert ^{2}\right)} , which happens if and only if ‖ x + i y ‖ = ‖ x − i y ‖ {\displaystyle \Vert x+iy\Vert =\Vert x-iy\Vert } . Similarly, ⟨ x ∣ y ⟩ {\displaystyle \langle x\mid y\rangle } is (purely) imaginary if and only if ‖ x + y ‖ = ‖ x − y ‖ {\displaystyle \Vert x+y\Vert =\Vert x-y\Vert } . For example, from ‖ x + i x ‖ = | 1 + i | ‖ x ‖ = 2 ‖ x ‖ = | 1 − i | ‖ x ‖ = ‖ x − i x ‖ {\displaystyle \|x+ix\|=|1+i|\|x\|={\sqrt {2}}\|x\|=|1-i|\|x\|=\|x-ix\|} it can be concluded that ⟨ x | x ⟩ {\displaystyle \langle x|x\rangle } is real and that ⟨ x | i x ⟩ {\displaystyle \langle x|ix\rangle } is purely imaginary. === Isometries === If A : H → Z {\displaystyle A:H\to Z} is a linear isometry between two Hilbert spaces (so ‖ A h ‖ = ‖ h ‖ {\displaystyle \|Ah\|=\|h\|} for all h ∈ H {\displaystyle h\in H} ) then ⟨ A h , A k ⟩ Z = ⟨ h , k ⟩ H for all h , k ∈ H ; {\displaystyle \langle Ah,Ak\rangle _{Z}=\langle h,k\rangle _{H}\quad {\text{ for all }}h,k\in H;} that is, linear isometries preserve inner products. If A : H → Z {\displaystyle A:H\to Z} is instead an antilinear isometry then ⟨ A h , A k ⟩ Z = ⟨ h , k ⟩ H ¯ = ⟨ k , h ⟩ H for all h , k ∈ H . 
{\displaystyle \langle Ah,Ak\rangle _{Z}={\overline {\langle h,k\rangle _{H}}}=\langle k,h\rangle _{H}\quad {\text{ for all }}h,k\in H.} === Relation to the law of cosines === The second form of the polarization identity can be written as ‖ u − v ‖ 2 = ‖ u ‖ 2 + ‖ v ‖ 2 − 2 ( u ⋅ v ) . {\displaystyle \|{\textbf {u}}-{\textbf {v}}\|^{2}=\|{\textbf {u}}\|^{2}+\|{\textbf {v}}\|^{2}-2({\textbf {u}}\cdot {\textbf {v}}).} This is essentially a vector form of the law of cosines for the triangle formed by the vectors u {\displaystyle {\textbf {u}}} , v {\displaystyle {\textbf {v}}} , and u − v {\displaystyle {\textbf {u}}-{\textbf {v}}} . In particular, u ⋅ v = ‖ u ‖ ‖ v ‖ cos θ , {\displaystyle {\textbf {u}}\cdot {\textbf {v}}=\|{\textbf {u}}\|\,\|{\textbf {v}}\|\cos \theta ,} where θ {\displaystyle \theta } is the angle between the vectors u {\displaystyle {\textbf {u}}} and v {\displaystyle {\textbf {v}}} . The equation is numerically unstable if u and v are similar because of catastrophic cancellation and should be avoided for numeric computation. === Derivation === The basic relation between the norm and the dot product is given by the equation ‖ v ‖ 2 = v ⋅ v . {\displaystyle \|{\textbf {v}}\|^{2}={\textbf {v}}\cdot {\textbf {v}}.} Then ‖ u + v ‖ 2 = ( u + v ) ⋅ ( u + v ) = ( u ⋅ u ) + ( u ⋅ v ) + ( v ⋅ u ) + ( v ⋅ v ) = ‖ u ‖ 2 + ‖ v ‖ 2 + 2 ( u ⋅ v ) , {\displaystyle {\begin{aligned}\|{\textbf {u}}+{\textbf {v}}\|^{2}&=({\textbf {u}}+{\textbf {v}})\cdot ({\textbf {u}}+{\textbf {v}})\\[3pt]&=({\textbf {u}}\cdot {\textbf {u}})+({\textbf {u}}\cdot {\textbf {v}})+({\textbf {v}}\cdot {\textbf {u}})+({\textbf {v}}\cdot {\textbf {v}})\\[3pt]&=\|{\textbf {u}}\|^{2}+\|{\textbf {v}}\|^{2}+2({\textbf {u}}\cdot {\textbf {v}}),\end{aligned}}} and similarly ‖ u − v ‖ 2 = ‖ u ‖ 2 + ‖ v ‖ 2 − 2 ( u ⋅ v ) . 
{\displaystyle \|{\textbf {u}}-{\textbf {v}}\|^{2}=\|{\textbf {u}}\|^{2}+\|{\textbf {v}}\|^{2}-2({\textbf {u}}\cdot {\textbf {v}}).} The second and third forms of the polarization identity now follow by solving these equations for u ⋅ v {\displaystyle {\textbf {u}}\cdot {\textbf {v}}} , while the first form follows from subtracting these two equations. (Adding these two equations together gives the parallelogram law.) == Generalizations == === Jordan–von Neumann theorems === The standard Jordan–von Neumann theorem, as stated previously, is that if a norm satisfies the parallelogram law, then it can be induced by an inner product defined by the polarization identity. There are variants of the theorem. Define various senses of orthogonality: isosceles: ‖ x + y ‖ = ‖ x − y ‖ {\textstyle \|x+y\|=\|x-y\|} Roberts’: ‖ x + t y ‖ = ‖ x − t y ‖ {\textstyle \left\|x+ty\right\|=\left\|x-ty\right\|} for every scalar t {\textstyle t} . Pythagorean: ‖ x + y ‖ 2 = ‖ x ‖ 2 + ‖ y ‖ 2 {\textstyle \left\|x+y\right\|^{2}=\|x\|^{2}+\left\|y\right\|^{2}} Birkhoff–James: ‖ x ‖ ≤ ‖ x + t y ‖ {\textstyle \|x\|\leq \|x+ty\|} for every scalar t {\textstyle t} . Let V {\textstyle V} be a vector space over the real or complex numbers. Let ‖ ⋅ ‖ {\textstyle \|\cdot \|} be a norm over V {\textstyle V} . We consider conditions for which the norm is induced by an inner product. In the following statements, whenever a scalar appears, the scalar may be restricted to be merely real, even when V {\textstyle V} is over the complex numbers. (von Neumann–Jordan condition) The norm satisfies the parallelogram identity. (weakened von Neumann–Jordan condition) ‖ x + y ‖ 2 + ‖ x − y ‖ 2 = 4 {\textstyle \|x+y\|^{2}+\|x-y\|^{2}=4} for all unit vectors x , y {\textstyle x,y} . That is, the norm satisfies the parallelogram identity for unit vectors. For any x , y ∈ V {\textstyle x,y\in V} , the set of points equidistant to x , y {\textstyle x,y} is flat, that is, an affine subspace.
Orthogonality in either isosceles or Roberts’ sense is either additive or homogeneous in one variable. For every two-dimensional subspace W ⊂ V {\textstyle W\subset V} , for every x ∈ W {\textstyle x\in W} , there exists y ∈ W {\textstyle y\in W} that is Roberts’ orthogonal to x {\textstyle x} . Isosceles orthogonality implies Pythagorean orthogonality. Pythagorean orthogonality implies isosceles orthogonality. If x , y {\textstyle x,y} are Pythagorean orthogonal, then so are x , − y {\textstyle x,-y} . Birkhoff–James orthogonality is symmetric. If ‖ x ‖ = ‖ y ‖ {\textstyle \|x\|=\|y\|} and t , s {\textstyle t,s} are real, then ‖ t x + s y ‖ = ‖ s x + t y ‖ {\textstyle \|tx+sy\|=\|sx+ty\|} . For real vector spaces, there is also the condition: Any two-dimensional slice of the unit sphere is an ellipse, that is, parameterizable as { x cos θ + y sin θ : θ ∈ [ 0 , 2 π ] } {\textstyle \{x\cos \theta +y\sin \theta :\theta \in [0,2\pi ]\}} , for some unit vectors x , y {\textstyle x,y} . The Banach–Mazur rotation problem: Given a separable Banach space V {\textstyle V} such that for any two unit vectors x , y , {\textstyle x,y,} there exists a linear surjective isometry T {\textstyle T} such that T ( x ) = y {\textstyle T(x)=y} or T ( y ) = x {\textstyle T(y)=x} , is V {\textstyle V} isometrically isomorphic to a Hilbert space? The general case of the problem is open. When the space is finite-dimensional, the answer is yes. In other words, given a finite-dimensional normed vector space over the real or complex numbers, if any point on the unit sphere can be mapped (rotated) to any other point by a linear isometry, then the norm is induced by an inner product. === Symmetric bilinear forms === The polarization identities are not restricted to inner products.
If B {\displaystyle B} is any symmetric bilinear form on a vector space, and Q {\displaystyle Q} is the quadratic form defined by Q ( v ) = B ( v , v ) , {\displaystyle Q(v)=B(v,v),} then 2 B ( u , v ) = Q ( u + v ) − Q ( u ) − Q ( v ) , 2 B ( u , v ) = Q ( u ) + Q ( v ) − Q ( u − v ) , 4 B ( u , v ) = Q ( u + v ) − Q ( u − v ) . {\displaystyle {\begin{aligned}2B(u,v)&=Q(u+v)-Q(u)-Q(v),\\2B(u,v)&=Q(u)+Q(v)-Q(u-v),\\4B(u,v)&=Q(u+v)-Q(u-v).\end{aligned}}} The so-called symmetrization map generalizes the latter formula, replacing Q {\displaystyle Q} by a homogeneous polynomial of degree k {\displaystyle k} defined by Q ( v ) = B ( v , … , v ) , {\displaystyle Q(v)=B(v,\ldots ,v),} where B {\displaystyle B} is a symmetric k {\displaystyle k} -linear map. The formulas above even apply in the case where the field of scalars has characteristic two, though the left-hand sides are all zero in this case. Consequently, in characteristic two there is no formula for a symmetric bilinear form in terms of a quadratic form, and they are in fact distinct notions, a fact which has important consequences in L-theory; for brevity, in this context "symmetric bilinear forms" are often referred to as "symmetric forms". These formulas also apply to bilinear forms on modules over a commutative ring, though again one can only solve for B ( u , v ) {\displaystyle B(u,v)} if 2 is invertible in the ring, and otherwise these are distinct notions. For example, over the integers, one distinguishes integral quadratic forms from integral symmetric forms, which are a narrower notion. More generally, in the presence of a ring involution or where 2 is not invertible, one distinguishes ε {\displaystyle \varepsilon } -quadratic forms and ε {\displaystyle \varepsilon } -symmetric forms; a symmetric form defines a quadratic form, and the polarization identity (without a factor of 2) from a quadratic form to a symmetric form is called the "symmetrization map", and is not in general an isomorphism. 
This has historically been a subtle distinction: over the integers it was not until the 1950s that the relation between "twos out" (integral quadratic form) and "twos in" (integral symmetric form) was understood – see discussion at integral quadratic form; and in the algebraization of surgery theory, Mishchenko originally used symmetric L-groups, rather than the correct quadratic L-groups (as in Wall and Ranicki) – see discussion at L-theory. === Homogeneous polynomials of higher degree === Finally, in any of these contexts these identities may be extended to homogeneous polynomials (that is, algebraic forms) of arbitrary degree, where it is known as the polarization formula, and is reviewed in greater detail in the article on the polarization of an algebraic form. == See also == Inner product space – Vector space with generalized dot product Law of cosines – Generalization of Pythagorean theorem Mazur–Ulam theorem – Surjective isometries are affine mappings Minkowski distance – Vector distance using pth powers Parallelogram law – Sides and diagonals have equal sums of squares Ptolemy's inequality – Relation between distances of four points == Notes and references == == Bibliography == Lax, Peter D. (2002). Functional Analysis (PDF). Pure and Applied Mathematics. New York: Wiley-Interscience. ISBN 978-0-471-55604-6. OCLC 47767143. Retrieved July 22, 2020. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. |
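As a concrete check of the complex polarization identities, the inner product of random complex vectors can be recovered from norms alone. A sketch under the assumption of the standard inner product on C^n (`np.vdot` is antilinear in its first argument, i.e. the ⟨x|y⟩ convention):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
y = rng.standard_normal(6) + 1j * rng.standard_normal(6)

def norm2(v):
    # squared norm ||v||^2, guaranteed real
    return np.vdot(v, v).real

# <x|y> = (1/4) * sum_{k=0}^{3} i^k ||x + (-i)^k y||^2  (antilinear in the 1st argument)
recovered = sum(1j**k * norm2(x + (-1j)**k * y) for k in range(4)) / 4
assert abs(recovered - np.vdot(x, y)) < 1e-12

# the convention antilinear in the 2nd argument is the complex conjugate:
# <x, y> = (1/4) * sum_{k=0}^{3} i^k ||x + i^k y||^2
recovered2 = sum(1j**k * norm2(x + 1j**k * y) for k in range(4)) / 4
assert abs(recovered2 - np.conj(np.vdot(x, y))) < 1e-12
```

Swapping (−i)^k for i^k inside the norm is exactly the switch between the two antilinearity conventions described above.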
Wikipedia:Polarization of an algebraic form#0 | In mathematics, in particular in algebra, polarization is a technique for expressing a homogeneous polynomial in a simpler fashion by adjoining more variables. Specifically, given a homogeneous polynomial, polarization produces a unique symmetric multilinear form from which the original polynomial can be recovered by evaluating along a certain diagonal. Although the technique is deceptively simple, it has applications in many areas of abstract mathematics: in particular to algebraic geometry, invariant theory, and representation theory. Polarization and related techniques form the foundations for Weyl's invariant theory. == The technique == The fundamental ideas are as follows. Let f ( u ) {\displaystyle f(\mathbf {u} )} be a polynomial in n {\displaystyle n} variables u = ( u 1 , u 2 , … , u n ) . {\displaystyle \mathbf {u} =\left(u_{1},u_{2},\ldots ,u_{n}\right).} Suppose that f {\displaystyle f} is homogeneous of degree d , {\displaystyle d,} which means that f ( t u ) = t d f ( u ) for all t . {\displaystyle f(t\mathbf {u} )=t^{d}f(\mathbf {u} )\quad {\text{ for all }}t.} Let u ( 1 ) , u ( 2 ) , … , u ( d ) {\displaystyle \mathbf {u} ^{(1)},\mathbf {u} ^{(2)},\ldots ,\mathbf {u} ^{(d)}} be a collection of indeterminates with u ( i ) = ( u 1 ( i ) , u 2 ( i ) , … , u n ( i ) ) , {\displaystyle \mathbf {u} ^{(i)}=\left(u_{1}^{(i)},u_{2}^{(i)},\ldots ,u_{n}^{(i)}\right),} so that there are d n {\displaystyle dn} variables altogether. The polar form of f {\displaystyle f} is a polynomial F ( u ( 1 ) , u ( 2 ) , … , u ( d ) ) {\displaystyle F\left(\mathbf {u} ^{(1)},\mathbf {u} ^{(2)},\ldots ,\mathbf {u} ^{(d)}\right)} which is linear separately in each u ( i ) {\displaystyle \mathbf {u} ^{(i)}} (that is, F {\displaystyle F} is multilinear), symmetric in the u ( i ) , {\displaystyle \mathbf {u} ^{(i)},} and such that F ( u , u , … , u ) = f ( u ) . 
{\displaystyle F\left(\mathbf {u} ,\mathbf {u} ,\ldots ,\mathbf {u} \right)=f(\mathbf {u} ).} The polar form of f {\displaystyle f} is given by the following construction F ( u ( 1 ) , … , u ( d ) ) = 1 d ! ∂ ∂ λ 1 … ∂ ∂ λ d f ( λ 1 u ( 1 ) + ⋯ + λ d u ( d ) ) | λ = 0 . {\displaystyle F\left({\mathbf {u} }^{(1)},\dots ,{\mathbf {u} }^{(d)}\right)={\frac {1}{d!}}{\frac {\partial }{\partial \lambda _{1}}}\dots {\frac {\partial }{\partial \lambda _{d}}}f(\lambda _{1}{\mathbf {u} }^{(1)}+\dots +\lambda _{d}{\mathbf {u} }^{(d)})|_{\lambda =0}.} In other words, F {\displaystyle F} is a constant multiple of the coefficient of λ 1 λ 2 … λ d {\displaystyle \lambda _{1}\lambda _{2}\ldots \lambda _{d}} in the expansion of f ( λ 1 u ( 1 ) + ⋯ + λ d u ( d ) ) . {\displaystyle f\left(\lambda _{1}\mathbf {u} ^{(1)}+\cdots +\lambda _{d}\mathbf {u} ^{(d)}\right).} == Examples == A quadratic example. Suppose that x = ( x , y ) {\displaystyle \mathbf {x} =(x,y)} and f ( x ) {\displaystyle f(\mathbf {x} )} is the quadratic form f ( x ) = x 2 + 3 x y + 2 y 2 . {\displaystyle f(\mathbf {x} )=x^{2}+3xy+2y^{2}.} Then the polarization of f {\displaystyle f} is a function in x ( 1 ) = ( x ( 1 ) , y ( 1 ) ) {\displaystyle \mathbf {x} ^{(1)}=(x^{(1)},y^{(1)})} and x ( 2 ) = ( x ( 2 ) , y ( 2 ) ) {\displaystyle \mathbf {x} ^{(2)}=(x^{(2)},y^{(2)})} given by F ( x ( 1 ) , x ( 2 ) ) = x ( 1 ) x ( 2 ) + 3 2 x ( 2 ) y ( 1 ) + 3 2 x ( 1 ) y ( 2 ) + 2 y ( 1 ) y ( 2 ) . {\displaystyle F\left(\mathbf {x} ^{(1)},\mathbf {x} ^{(2)}\right)=x^{(1)}x^{(2)}+{\frac {3}{2}}x^{(2)}y^{(1)}+{\frac {3}{2}}x^{(1)}y^{(2)}+2y^{(1)}y^{(2)}.} More generally, if f {\displaystyle f} is any quadratic form then the polarization of f {\displaystyle f} agrees with the conclusion of the polarization identity. A cubic example. Let f ( x , y ) = x 3 + 2 x y 2 . 
{\displaystyle f(x,y)=x^{3}+2xy^{2}.} Then the polarization of f {\displaystyle f} is given by F ( x ( 1 ) , y ( 1 ) , x ( 2 ) , y ( 2 ) , x ( 3 ) , y ( 3 ) ) = x ( 1 ) x ( 2 ) x ( 3 ) + 2 3 x ( 1 ) y ( 2 ) y ( 3 ) + 2 3 x ( 3 ) y ( 1 ) y ( 2 ) + 2 3 x ( 2 ) y ( 3 ) y ( 1 ) . {\displaystyle F\left(x^{(1)},y^{(1)},x^{(2)},y^{(2)},x^{(3)},y^{(3)}\right)=x^{(1)}x^{(2)}x^{(3)}+{\frac {2}{3}}x^{(1)}y^{(2)}y^{(3)}+{\frac {2}{3}}x^{(3)}y^{(1)}y^{(2)}+{\frac {2}{3}}x^{(2)}y^{(3)}y^{(1)}.} == Mathematical details and consequences == The polarization of a homogeneous polynomial of degree d {\displaystyle d} is valid over any commutative ring in which d ! {\displaystyle d!} is a unit. In particular, it holds over any field of characteristic zero or whose characteristic is strictly greater than d . {\displaystyle d.} === The polarization isomorphism (by degree) === For simplicity, let k {\displaystyle k} be a field of characteristic zero and let A = k [ x ] {\displaystyle A=k[\mathbf {x} ]} be the polynomial ring in n {\displaystyle n} variables over k . {\displaystyle k.} Then A {\displaystyle A} is graded by degree, so that A = ⨁ d A d . {\displaystyle A=\bigoplus _{d}A_{d}.} The polarization of algebraic forms then induces an isomorphism of vector spaces in each degree A d ≅ Sym d k n {\displaystyle A_{d}\cong \operatorname {Sym} ^{d}k^{n}} where Sym d {\displaystyle \operatorname {Sym} ^{d}} is the d {\displaystyle d} -th symmetric power. These isomorphisms can be expressed independently of a basis as follows. If V {\displaystyle V} is a finite-dimensional vector space and A {\displaystyle A} is the ring of k {\displaystyle k} -valued polynomial functions on V {\displaystyle V} graded by homogeneous degree, then polarization yields an isomorphism A d ≅ Sym d V ∗ . 
{\displaystyle A_{d}\cong \operatorname {Sym} ^{d}V^{*}.} === The algebraic isomorphism === Furthermore, the polarization is compatible with the algebraic structure on A {\displaystyle A} , so that A ≅ Sym ∙ V ∗ {\displaystyle A\cong \operatorname {Sym} ^{\bullet }V^{*}} where Sym ∙ V ∗ {\displaystyle \operatorname {Sym} ^{\bullet }V^{*}} is the full symmetric algebra over V ∗ . {\displaystyle V^{*}.} === Remarks === For fields of positive characteristic p , {\displaystyle p,} the foregoing isomorphisms apply if the graded algebras are truncated at degree p − 1. {\displaystyle p-1.} There do exist generalizations when V {\displaystyle V} is an infinite-dimensional topological vector space. == See also == Homogeneous function – Function with a multiplicative scaling behaviour == References == Claudio Procesi (2007) Lie Groups: an approach through invariants and representations, Springer, ISBN 9780387260402 . |
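The cubic example can be verified directly: a short sketch in plain Python (the test vectors are arbitrary illustrative choices) checking that the polar form F of f(x, y) = x³ + 2xy² is symmetric, multilinear, and restricts to f on the diagonal:

```python
# f and its polar form F, taken from the cubic example above
def f(x, y):
    return x**3 + 2 * x * y**2

def F(p, q, r):
    # each argument is an (x, y) pair
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return (x1 * x2 * x3
            + (2 / 3) * (x1 * y2 * y3 + x3 * y1 * y2 + x2 * y3 * y1))

u, v, w = (1.5, -0.5), (2.0, 1.0), (-1.0, 3.0)

# evaluating F along the diagonal recovers f
assert abs(F(u, u, u) - f(*u)) < 1e-12
# F is symmetric in its three arguments
assert abs(F(u, v, w) - F(w, u, v)) < 1e-12
# F is linear in each slot (checked here in the first slot)
a, b = 0.7, -1.3
comb = (a * u[0] + b * v[0], a * u[1] + b * v[1])
assert abs(F(comb, v, w) - (a * F(u, v, w) + b * F(v, v, w))) < 1e-12
```

The 2/3 coefficients are the multinomial normalisation from the d! = 3! in the differentiation formula, as in the article's computation.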
Wikipedia:Polish School of Mathematics#0 | The Polish School of Mathematics was the mathematics community that flourished in Poland in the 20th century, particularly during the Interbellum between World Wars I and II. == Overview == The Polish School of Mathematics subsumed: the Lwów School of Mathematics, mostly focused on functional analysis; the Warsaw School of Mathematics, mostly focused on set theory, mathematical logic and topology; and the Kraków School of Mathematics, mostly focused on differential equations, analytic functions, and differential geometry. === Nomenclature === Polish mathematicians gave their name to Polish notation and the Polish space. === Background === It has been debated what stimulated the exceptional efflorescence of mathematics in Poland after World War I. Important preparatory work had been done by the Polish "Positivists" following the disastrous January 1863 Uprising. The Positivists extolled science and technology, and popularized slogans of "organic work" and "building from the foundations." In the 20th century, mathematics was a field of endeavor that could be successfully pursued even with the limited resources that Poland commanded in the interbellum period. === Historical influences === Over the centuries, Polish mathematicians have influenced the course of history. Copernicus used mathematics to buttress his revolutionary heliocentric theory. Four hundred years later, in December 1932, Marian Rejewski, subsequently assisted by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, first broke the German Enigma machine cipher, thus laying the foundations for the British World War II reading of Enigma ciphers ("Ultra"). After the war, Stanisław Ulam showed Edward Teller how to construct a practicable hydrogen bomb. == See also == Lwów-Warsaw School of Logic. == References == Kazimierz Kuratowski (1980) A Half Century of Polish Mathematics: Remembrances and Reflections, Oxford, Pergamon Press, ISBN 0-08-023046-6.
Roman Murawski (2014) The Philosophy and Mathematics of Logic in the 1920s and 1930s in Poland, Maria Kantor translator, Birkhäuser ISBN 978-3-0348-0830-9 |
Wikipedia:Pollard's kangaroo algorithm#0 | In computational number theory and computational algebra, Pollard's kangaroo algorithm (also Pollard's lambda algorithm, see Naming below) is an algorithm for solving the discrete logarithm problem. The algorithm was introduced in 1978 by the number theorist John M. Pollard, in the same paper as his better-known Pollard's rho algorithm for solving the same problem. Although Pollard described the application of his algorithm to the discrete logarithm problem in the multiplicative group of units modulo a prime p, it is in fact a generic discrete logarithm algorithm—it will work in any finite cyclic group. == Algorithm == Suppose G {\displaystyle G} is a finite cyclic group of order n {\displaystyle n} which is generated by the element α {\displaystyle \alpha } , and we seek to find the discrete logarithm x {\displaystyle x} of the element β {\displaystyle \beta } to the base α {\displaystyle \alpha } . In other words, one seeks x ∈ Z n {\displaystyle x\in Z_{n}} such that α x = β {\displaystyle \alpha ^{x}=\beta } . The lambda algorithm allows one to search for x {\displaystyle x} in some interval [ a , … , b ] ⊂ Z n {\displaystyle [a,\ldots ,b]\subset Z_{n}} . One may search the entire range of possible logarithms by setting a = 0 {\displaystyle a=0} and b = n − 1 {\displaystyle b=n-1} . 1. Choose a set S {\displaystyle S} of positive integers of mean roughly b − a {\displaystyle {\sqrt {b-a}}} and define a pseudorandom map f : G → S {\displaystyle f:G\rightarrow S} . 2. Choose an integer N {\displaystyle N} and compute a sequence of group elements { x 0 , x 1 , … , x N } {\displaystyle \{x_{0},x_{1},\ldots ,x_{N}\}} according to: x 0 = α b {\displaystyle x_{0}=\alpha ^{b}\,} x i + 1 = x i α f ( x i ) for i = 0 , 1 , … , N − 1 {\displaystyle x_{i+1}=x_{i}\alpha ^{f(x_{i})}{\text{ for }}i=0,1,\ldots ,N-1} 3. Compute d = ∑ i = 0 N − 1 f ( x i ) . 
{\displaystyle d=\sum _{i=0}^{N-1}f(x_{i}).} Observe that: x N = x 0 α d = α b + d . {\displaystyle x_{N}=x_{0}\alpha ^{d}=\alpha ^{b+d}\,.} 4. Begin computing a second sequence of group elements { y 0 , y 1 , … } {\displaystyle \{y_{0},y_{1},\ldots \}} according to: y 0 = β {\displaystyle y_{0}=\beta \,} y i + 1 = y i α f ( y i ) for i = 0 , 1 , … , N − 1 {\displaystyle y_{i+1}=y_{i}\alpha ^{f(y_{i})}{\text{ for }}i=0,1,\ldots ,N-1} and a corresponding sequence of integers { d 0 , d 1 , … } {\displaystyle \{d_{0},d_{1},\ldots \}} according to: d n = ∑ i = 0 n − 1 f ( y i ) {\displaystyle d_{n}=\sum _{i=0}^{n-1}f(y_{i})} . Observe that: y i = y 0 α d i = β α d i for i = 0 , 1 , … , N − 1 {\displaystyle y_{i}=y_{0}\alpha ^{d_{i}}=\beta \alpha ^{d_{i}}{\mbox{ for }}i=0,1,\ldots ,N-1} 5. Stop computing terms of { y i } {\displaystyle \{y_{i}\}} and { d i } {\displaystyle \{d_{i}\}} when either of the following conditions are met: A) y j = x N {\displaystyle y_{j}=x_{N}} for some j {\displaystyle j} . If the sequences { x i } {\displaystyle \{x_{i}\}} and { y j } {\displaystyle \{y_{j}\}} "collide" in this manner, then we have: x N = y j ⇒ α b + d = β α d j ⇒ β = α b + d − d j ⇒ x ≡ b + d − d j ( mod n ) {\displaystyle x_{N}=y_{j}\Rightarrow \alpha ^{b+d}=\beta \alpha ^{d_{j}}\Rightarrow \beta =\alpha ^{b+d-d_{j}}\Rightarrow x\equiv b+d-d_{j}{\pmod {n}}} and so we are done. B) d i > b − a + d {\displaystyle d_{i}>b-a+d} . If this occurs, then the algorithm has failed to find x {\displaystyle x} . Subsequent attempts can be made by changing the choice of S {\displaystyle S} and/or f {\displaystyle f} . == Complexity == Pollard gives the time complexity of the algorithm as O ( b − a ) {\displaystyle O({\sqrt {b-a}})} , using a probabilistic argument based on the assumption that f {\displaystyle f} acts pseudorandomly. 
Since a , b {\displaystyle a,b} can be represented using O ( log b ) {\displaystyle O(\log b)} bits, this is exponential in the problem size (though still a significant improvement over the trivial brute-force algorithm that takes time O ( b − a ) {\displaystyle O(b-a)} ). For an example of a subexponential time discrete logarithm algorithm, see the index calculus algorithm. == Naming == The algorithm is well known by two names. The first is "Pollard's kangaroo algorithm". This name is a reference to an analogy used in the paper presenting the algorithm, where the algorithm is explained in terms of using a tame kangaroo to trap a wild kangaroo. Pollard has explained that this analogy was inspired by a "fascinating" article published in the same issue of Scientific American as an exposition of the RSA public key cryptosystem. The article described an experiment in which a kangaroo's "energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill". The second is "Pollard's lambda algorithm". Much like the name of another of Pollard's discrete logarithm algorithms, Pollard's rho algorithm, this name refers to the similarity between a visualisation of the algorithm and the Greek letter lambda ( λ {\displaystyle \lambda } ). The shorter stroke of the letter lambda corresponds to the sequence { x i } {\displaystyle \{x_{i}\}} , since it starts from the position b to the right of x. Accordingly, the longer stroke corresponds to the sequence { y i } {\displaystyle \{y_{i}\}} , which "collides with" the first sequence (just like the strokes of a lambda intersect) and then follows it subsequently. Pollard has expressed a preference for the name "kangaroo algorithm", as this avoids confusion with some parallel versions of his rho algorithm, which have also been called "lambda algorithms". 
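The five numbered steps above can be sketched in Python for the multiplicative group of integers modulo a prime p. This is an illustrative toy implementation, not an optimized one: the jump set S of powers of two follows Pollard's suggestion, but the exact choice of k and the heuristic jump count N below are assumptions of this sketch.

```python
import math

def kangaroo(alpha, beta, p, n, a, b):
    """Pollard's kangaroo in the multiplicative group mod p: look for
    x in [a, b] with alpha^x = beta (mod p), where alpha has order n.
    Returns x mod n, or None if this attempt fails."""
    # Step 1: jump sizes S = {1, 2, 4, ..., 2^(k-1)} with mean near sqrt(b - a).
    k = max(1, math.isqrt(b - a).bit_length())
    f = lambda g: 1 << (g % k)               # pseudorandom map f : G -> S

    # Steps 2-3: the tame kangaroo jumps from alpha^b, recording distance d.
    N = 4 * (math.isqrt(b - a) + 1)          # heuristic number of tame jumps
    x, d = pow(alpha, b, p), 0
    for _ in range(N):
        step = f(x)
        d += step
        x = x * pow(alpha, step, p) % p      # now x = alpha^(b + d), the trap

    # Steps 4-5: the wild kangaroo jumps from beta until it lands on the
    # trap (case A) or overshoots the interval (case B).
    y, dy = beta, 0
    while dy <= b - a + d:
        if y == x:                           # collision: x = b + d - dy (mod n)
            return (b + d - dy) % n
        step = f(y)
        dy += step
        y = y * pow(alpha, step, p) % p
    return None                              # failure; retry with another f or S

# Toy parameters: 2 is a primitive root mod 83, so it generates a group of order 82.
p, n, alpha = 83, 82, 2
beta = pow(alpha, 57, p)                     # discrete log 57 lies in [40, 70]
result = kangaroo(alpha, beta, p, n, a=40, b=70)
if result is not None:                       # the attempt may fail for some f
    assert pow(alpha, result, p) == beta
```

Note that any value the function returns is a correct logarithm modulo n, because a collision y = x forces the exponent congruence b + d ≡ x + dy (mod n); only the event of a collision is heuristic.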
== See also == Dynkin's card trick Kruskal count Rainbow table == References == == Further reading == Montenegro, Ravi [at Wikidata]; Tetali, Prasad V. (2010-11-07) [2009-05-31]. How Long Does it Take to Catch a Wild Kangaroo? (PDF). Proceedings of the forty-first annual ACM symposium on Theory of computing (STOC 2009). pp. 553–560. arXiv:0812.0789. doi:10.1145/1536414.1536490. S2CID 12797847. Archived (PDF) from the original on 2023-08-20. Retrieved 2023-08-20. |
Wikipedia:Polykay#0 | In statistics, a polykay, or generalised k-statistic, (denoted k r , s {\displaystyle k_{r,s}} ) is a statistic defined as a linear combination of sample moments. == Etymology == The word polykay was coined by American mathematician John Tukey in 1956, from poly, "many" or "much", and kay, the phonetic spelling of the letter "k", as in k-statistic. == References == |
Wikipedia:Polylogarithmic function#0 | In mathematics, a polylogarithmic function in n is a polynomial in the logarithm of n, a k ( log n ) k + a k − 1 ( log n ) k − 1 + ⋯ + a 1 ( log n ) + a 0 . {\displaystyle a_{k}(\log n)^{k}+a_{k-1}(\log n)^{k-1}+\cdots +a_{1}(\log n)+a_{0}.} The notation logkn is often used as a shorthand for (log n)k, analogous to sin2θ for (sin θ)2. In computer science, polylogarithmic functions occur as the order of the running time of some data structure operations. Additionally, the exponential function of a polylogarithmic function produces a function with quasi-polynomial growth, and algorithms with this as their time complexity are said to take quasi-polynomial time. All polylogarithmic functions of n are o(nε) for every exponent ε > 0 (for the meaning of this symbol, see small o notation); that is, a polylogarithmic function grows more slowly than any positive power of n. This observation is the basis for the soft O notation Õ(n). == References ==
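The o(nε) claim above can be illustrated numerically: a polylogarithmic function can dominate a small power of n at first, but is eventually overtaken. The particular choices (log n)^3 and n^0.1 below are arbitrary examples, not from the article:

```python
import math

def polylog(n, coeffs):
    """Evaluate a_k (log n)^k + ... + a_1 log n + a_0 (natural log);
    coeffs[i] is the coefficient of (log n)^i."""
    L = math.log(n)
    return sum(a * L**k for k, a in enumerate(coeffs))

f = lambda n: polylog(n, [0, 0, 0, 1])   # (log n)^3
g = lambda n: n ** 0.1                   # a small positive power of n
assert f(10**3) > g(10**3)               # small n: the polylog is larger
assert f(10**80) < g(10**80)             # large n: n^0.1 dominates
```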
Wikipedia:Polynomial#0 | In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate x is x2 − 4x + 7. An example with three indeterminates is x3 + 2xyz2 − yz + 1. Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry. == Etymology == The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century. == Notation and terminology == The x occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably. A polynomial P in the indeterminate x is commonly denoted either as P or as P(x). 
Formally, the name of the polynomial is P, not P(x), but the use of the functional notation P(x) dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let P(x) be a polynomial" is a shorthand for "let P be a polynomial in the indeterminate x". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial. The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials. If a denotes a number, a variable, another polynomial, or, more generally, any expression, then P(a) denotes, by convention, the result of substituting a for x in P. Thus, the polynomial P defines the function a ↦ P ( a ) , {\displaystyle a\mapsto P(a),} which is the polynomial function associated to P. Frequently, when using this notation, one supposes that a is a number. However, one may use it over any domain where addition and multiplication are defined (that is, any ring). In particular, if a is a polynomial then P(a) is also a polynomial. More specifically, when a is the indeterminate x, then the image of x by this function is the polynomial P itself (substituting x for x does not change anything). In other words, P ( x ) = P , {\displaystyle P(x)=P,} which justifies formally the existence of two notations for the same polynomial. == Definition == A polynomial expression is an expression that can be built from constants and symbols called variables or indeterminates by means of addition, multiplication and exponentiation to a non-negative integer power. 
The constants are generally numbers, but may be any expressions that do not involve the indeterminates and represent mathematical objects that can be added and multiplied. Two polynomial expressions are considered as defining the same polynomial if they may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication. For example, ( x − 1 ) ( x − 2 ) {\displaystyle (x-1)(x-2)} and x 2 − 3 x + 2 {\displaystyle x^{2}-3x+2} are two polynomial expressions that represent the same polynomial; so, one has the equality ( x − 1 ) ( x − 2 ) = x 2 − 3 x + 2 {\displaystyle (x-1)(x-2)=x^{2}-3x+2} . A polynomial in a single indeterminate x can always be written (or rewritten) in the form a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 , {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0},} where a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} are constants that are called the coefficients of the polynomial, and x {\displaystyle x} is the indeterminate. The word "indeterminate" means that x {\displaystyle x} represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function, called a polynomial function. This can be expressed more concisely by using summation notation: ∑ k = 0 n a k x k {\displaystyle \sum _{k=0}^{n}a_{k}x^{k}} That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a number – called the coefficient of the term – and a finite number of indeterminates, raised to non-negative integer powers.
== Classification == The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient. Because x = x1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below). For example: − 5 x 2 y {\displaystyle -5x^{2}y} is a term. The coefficient is −5, the indeterminates are x and y, the degree of x is two, while the degree of y is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is 2 + 1 = 3. Forming a sum of several terms produces a polynomial. For example, the following is a polynomial: 3 x 2 ⏟ t e r m 1 − 5 x ⏟ t e r m 2 + 4 ⏟ t e r m 3 . {\displaystyle \underbrace {_{\,}3x^{2}} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {1} \end{smallmatrix}}\underbrace {-_{\,}5x} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {2} \end{smallmatrix}}\underbrace {+_{\,}4} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {3} \end{smallmatrix}}.} It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero. Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial, or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. 
The names for the degrees may be applied to the polynomial or to its terms. For example, the term 2x in x2 + 2x + 1 is a linear term in a quadratic polynomial. The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞). The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots. The graph of the zero polynomial, f(x) = 0, is the x-axis. In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of degree n if all of its non-zero terms have degree n. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined. For example, x3y2 + 7x2y3 − 3x5 is homogeneous of degree 5. For more details, see Homogeneous polynomial. The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial 3x2 − 5x + 4 is written in descending powers of x. The first term has coefficient 3, indeterminate x, and exponent 2. In the second term, the coefficient is −5. The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two. Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0. 
Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial. A real polynomial is a polynomial with real coefficients. When it is used to define a function, the domain is not so restricted. However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial. Similarly, an integer polynomial is a polynomial with integer coefficients, and a complex polynomial is a polynomial with complex coefficients. A polynomial in one indeterminate is called a univariate polynomial, a polynomial in more than one indeterminate is called a multivariate polynomial. A polynomial with two indeterminates is called a bivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in x, y, and z", listing the indeterminates allowed. == Operations == === Addition and subtraction === Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms. 
For example, if P = 3 x 2 − 2 x + 5 x y − 2 {\displaystyle P=3x^{2}-2x+5xy-2} and Q = − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle Q=-3x^{2}+3x+4y^{2}+8} then the sum P + Q = 3 x 2 − 2 x + 5 x y − 2 − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle P+Q=3x^{2}-2x+5xy-2-3x^{2}+3x+4y^{2}+8} can be reordered and regrouped as P + Q = ( 3 x 2 − 3 x 2 ) + ( − 2 x + 3 x ) + 5 x y + 4 y 2 + ( 8 − 2 ) {\displaystyle P+Q=(3x^{2}-3x^{2})+(-2x+3x)+5xy+4y^{2}+(8-2)} and then simplified to P + Q = x + 5 x y + 4 y 2 + 6. {\displaystyle P+Q=x+5xy+4y^{2}+6.} When polynomials are added together, the result is another polynomial. Subtraction of polynomials is similar. === Multiplication === Polynomials can also be multiplied. To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if P = 2 x + 3 y + 5 Q = 2 x + 5 y + x y + 1 {\displaystyle {\begin{aligned}\color {Red}P&\color {Red}{=2x+3y+5}\\\color {Blue}Q&\color {Blue}{=2x+5y+xy+1}\end{aligned}}} then P Q = ( 2 x ⋅ 2 x ) + ( 2 x ⋅ 5 y ) + ( 2 x ⋅ x y ) + ( 2 x ⋅ 1 ) + ( 3 y ⋅ 2 x ) + ( 3 y ⋅ 5 y ) + ( 3 y ⋅ x y ) + ( 3 y ⋅ 1 ) + ( 5 ⋅ 2 x ) + ( 5 ⋅ 5 y ) + ( 5 ⋅ x y ) + ( 5 ⋅ 1 ) {\displaystyle {\begin{array}{rccrcrcrcr}{\color {Red}{P}}{\color {Blue}{Q}}&{=}&&({\color {Red}{2x}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{3y}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{5}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{5}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{5}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{5}}\cdot {\color {Blue}{1}})\end{array}}} Carrying out the multiplication in each term produces P Q = 4 
x 2 + 10 x y + 2 x 2 y + 2 x + 6 x y + 15 y 2 + 3 x y 2 + 3 y + 10 x + 25 y + 5 x y + 5. {\displaystyle {\begin{array}{rccrcrcrcr}PQ&=&&4x^{2}&+&10xy&+&2x^{2}y&+&2x\\&&+&6xy&+&15y^{2}&+&3xy^{2}&+&3y\\&&+&10x&+&25y&+&5xy&+&5.\end{array}}} Combining similar terms yields P Q = 4 x 2 + ( 10 x y + 6 x y + 5 x y ) + 2 x 2 y + ( 2 x + 10 x ) + 15 y 2 + 3 x y 2 + ( 3 y + 25 y ) + 5 {\displaystyle {\begin{array}{rcccrcrcrcr}PQ&=&&4x^{2}&+&(10xy+6xy+5xy)&+&2x^{2}y&+&(2x+10x)\\&&+&15y^{2}&+&3xy^{2}&+&(3y+25y)&+&5\end{array}}} which can be simplified to P Q = 4 x 2 + 21 x y + 2 x 2 y + 12 x + 15 y 2 + 3 x y 2 + 28 y + 5. {\displaystyle PQ=4x^{2}+21xy+2x^{2}y+12x+15y^{2}+3xy^{2}+28y+5.} As in the example, the product of polynomials is always a polynomial. === Composition === Given a polynomial f {\displaystyle f} of a single variable and another polynomial g of any number of variables, the composition f ∘ g {\displaystyle f\circ g} is obtained by substituting each copy of the variable of the first polynomial by the second polynomial. For example, if f ( x ) = x 2 + 2 x {\displaystyle f(x)=x^{2}+2x} and g ( x ) = 3 x + 2 {\displaystyle g(x)=3x+2} then ( f ∘ g ) ( x ) = f ( g ( x ) ) = ( 3 x + 2 ) 2 + 2 ( 3 x + 2 ) . {\displaystyle (f\circ g)(x)=f(g(x))=(3x+2)^{2}+2(3x+2).} A composition may be expanded to a sum of terms using the rules for multiplication and division of polynomials. The composition of two polynomials is another polynomial. === Division === The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called rational fractions, rational expressions, or rational functions, depending on context. This is analogous to the fact that the ratio of two integers is a rational number, not necessarily an integer. For example, the fraction 1/(x2 + 1) is not a polynomial, and it cannot be written as a finite sum of powers of the variable x. 
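For univariate polynomials stored as coefficient lists (index i holding the coefficient of x^i), the addition and multiplication rules illustrated above take only a few lines. A minimal sketch (the function names are illustrative; the worked example in the text is bivariate, so a simpler univariate example is used here):

```python
def poly_add(p, q):
    """Sum of two polynomials: combine like terms coefficient-wise."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Distributive law: every term of p multiplies every term of q,
    and x^i * x^j contributes to the coefficient of x^(i+j)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 1)(x - 1) = x^2 - 1
assert poly_mul([1, 1], [-1, 1]) == [-1, 0, 1]
# (3x^2 + 4) + (2x) = 3x^2 + 2x + 4
assert poly_add([4, 0, 3], [0, 2]) == [4, 2, 3]
```

As the text notes, both operations return another polynomial: the output is again a coefficient list.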
For polynomials in one variable, there is a notion of Euclidean division of polynomials, generalizing the Euclidean division of integers. This notion of the division a(x)/b(x) results in two polynomials, a quotient q(x) and a remainder r(x), such that a = b q + r and degree(r) < degree(b). The quotient and remainder may be computed by any of several algorithms, including polynomial long division and synthetic division. When the denominator b(x) is monic and linear, that is, b(x) = x − c for some constant c, then the polynomial remainder theorem asserts that the remainder of the division of a(x) by b(x) is the evaluation a(c). In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division. === Factoring === All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have the degree either one or two. Over the integers and the rational numbers the irreducible factors may have any degree. For example, the factored form of 5 x 3 − 5 {\displaystyle 5x^{3}-5} is 5 ( x − 1 ) ( x 2 + x + 1 ) {\displaystyle 5(x-1)\left(x^{2}+x+1\right)} over the integers and the reals, and 5 ( x − 1 ) ( x + 1 + i 3 2 ) ( x + 1 − i 3 2 ) {\displaystyle 5(x-1)\left(x+{\frac {1+i{\sqrt {3}}}{2}}\right)\left(x+{\frac {1-i{\sqrt {3}}}{2}}\right)} over the complex numbers. The computation of the factored form, called factorization is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems. 
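The Euclidean division described above (a = bq + r with degree(r) < degree(b)) can be sketched by implementing polynomial long division on coefficient lists; exact rational arithmetic avoids rounding. The function name is illustrative:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Long division of coefficient lists (index i = coefficient of x^i):
    returns (q, r) with a = b*q + r and degree(r) < degree(b)."""
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(r) >= len(b) and any(r):
        shift = len(r) - len(b)
        c = r[-1] / b[-1]                 # ratio of leading coefficients
        q[shift] = c
        for i, bc in enumerate(b):        # subtract c * x^shift * b(x)
            r[shift + i] -= c * bc
        while r and r[-1] == 0:           # the leading term cancels exactly
            r.pop()
    return q, r or [Fraction(0)]

# (x^2 - 3x + 2) / (x - 1) gives quotient x - 2, remainder 0
q, r = poly_divmod([2, -3, 1], [-1, 1])
assert q == [-2, 1] and r == [0]
```

Dividing by a monic linear b(x) = x − c also illustrates the polynomial remainder theorem: the remainder is the single constant a(c).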
=== Calculus === Calculating derivatives and integrals of polynomials is particularly simple, compared to other kinds of functions. The derivative of the polynomial P = a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 = ∑ i = 0 n a i x i {\displaystyle P=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{2}x^{2}+a_{1}x+a_{0}=\sum _{i=0}^{n}a_{i}x^{i}} with respect to x is the polynomial n a n x n − 1 + ( n − 1 ) a n − 1 x n − 2 + ⋯ + 2 a 2 x + a 1 = ∑ i = 1 n i a i x i − 1 . {\displaystyle na_{n}x^{n-1}+(n-1)a_{n-1}x^{n-2}+\dots +2a_{2}x+a_{1}=\sum _{i=1}^{n}ia_{i}x^{i-1}.} Similarly, the general antiderivative (or indefinite integral) of P {\displaystyle P} is a n x n + 1 n + 1 + a n − 1 x n n + ⋯ + a 2 x 3 3 + a 1 x 2 2 + a 0 x + c = c + ∑ i = 0 n a i x i + 1 i + 1 {\displaystyle {\frac {a_{n}x^{n+1}}{n+1}}+{\frac {a_{n-1}x^{n}}{n}}+\dots +{\frac {a_{2}x^{3}}{3}}+{\frac {a_{1}x^{2}}{2}}+a_{0}x+c=c+\sum _{i=0}^{n}{\frac {a_{i}x^{i+1}}{i+1}}} where c is an arbitrary constant. For example, antiderivatives of x2 + 1 have the form 1/3x3 + x + c. For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number p, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient kak understood to mean the sum of k copies of ak. For example, over the integers modulo p, the derivative of the polynomial xp + x is the polynomial 1. == Polynomial functions == A polynomial function is a function that can be defined by evaluating a polynomial. More precisely, a function f of one argument from a given domain is a polynomial function if there exists a polynomial a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{2}x^{2}+a_{1}x+a_{0}} that evaluates to f ( x ) {\displaystyle f(x)} for all x in the domain of f (here, n is a non-negative integer and a0, a1, a2, ..., an are constant coefficients). 
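The coefficient formulas for the derivative and antiderivative given in the Calculus subsection translate directly into code. A sketch using the same coefficient-list convention as before (index i = coefficient of x^i; function names are illustrative):

```python
from fractions import Fraction

def poly_deriv(p):
    """d/dx of sum of a_i x^i is sum of i * a_i * x^(i-1)."""
    return [i * a for i, a in enumerate(p)][1:] or [0]

def poly_antideriv(p, c=0):
    """General antiderivative: constant of integration c, then a_i x^(i+1) / (i+1)."""
    return [c] + [Fraction(a, i + 1) for i, a in enumerate(p)]

# derivative of x^2 + 1 is 2x; an antiderivative is x^3/3 + x (+ c)
assert poly_deriv([1, 0, 1]) == [0, 2]
assert poly_antideriv([1, 0, 1]) == [0, 1, 0, Fraction(1, 3)]
```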
Generally, unless otherwise specified, polynomial functions have complex coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is also restricted to the reals, the resulting function is a real function that maps reals to reals. For example, the function f, defined by f ( x ) = x 3 − x , {\displaystyle f(x)=x^{3}-x,} is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in f ( x , y ) = 2 x 3 + 4 x 2 y + x y 5 + y 2 − 7. {\displaystyle f(x,y)=2x^{3}+4x^{2}y+xy^{5}+y^{2}-7.} According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression ( 1 − x 2 ) 2 , {\displaystyle \left({\sqrt {1-x^{2}}}\right)^{2},} which takes the same values as the polynomial 1 − x 2 {\displaystyle 1-x^{2}} on the interval [ − 1 , 1 ] {\displaystyle [-1,1]} , and thus both expressions define the same polynomial function on this interval. Every polynomial function is continuous, smooth, and entire. The evaluation of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value to each indeterminate and carrying out the indicated multiplications and additions. For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which consists of rewriting the polynomial as ( ( ( ( ( a n x + a n − 1 ) x + a n − 2 ) x + ⋯ + a 3 ) x + a 2 ) x + a 1 ) x + a 0 . {\displaystyle (((((a_{n}x+a_{n-1})x+a_{n-2})x+\dotsb +a_{3})x+a_{2})x+a_{1})x+a_{0}.} === Graphs === A polynomial function in one real variable can be represented by a graph. 
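The Horner rewriting shown at the end of the previous paragraph can be sketched as a short loop; it evaluates a degree-n polynomial with n multiplications and n additions:

```python
def horner(coeffs, x):
    """Evaluate a_n x^n + ... + a_0 at x, given coeffs = [a_n, ..., a_0],
    via the nested form (((a_n x + a_(n-1)) x + ...) x + a_1) x + a_0."""
    acc = 0
    for a in coeffs:
        acc = acc * x + a
    return acc

# f(x) = x^3 - x, the example polynomial function above, at x = 3
assert horner([1, 0, -1, 0], 3) == 24
```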
The graph of the zero polynomial is the x-axis. The graph of a degree 0 polynomial is a horizontal line with y-intercept a0. The graph of a degree 1 polynomial (or linear function) is an oblique line with y-intercept a0 and slope a1. The graph of a degree 2 polynomial is a parabola. The graph of a degree 3 polynomial is a cubic curve. The graph of any polynomial with degree 2 or greater is a continuous non-linear curve. A non-constant polynomial function tends to infinity when the variable increases indefinitely (in absolute value). If the degree is higher than one, the graph does not have any asymptote. It has two parabolic branches with vertical direction (one branch for positive x and one for negative x). Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior. == Equations == A polynomial equation, also called an algebraic equation, is an equation of the form a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 = 0. {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0}=0.} For example, 3 x 2 + 4 x − 5 = 0 {\displaystyle 3x^{2}+4x-5=0} is a polynomial equation. When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x2 − y2, where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality. In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. For higher degrees, the Abel–Ruffini theorem asserts that there cannot exist a general formula in radicals.
However, root-finding algorithms may be used to find numerical approximations of the roots of a polynomial expression of any degree. The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra. === Solving equations === A root of a nonzero univariate polynomial P is a value a of x such that P(a) = 0. In other words, a root of P is a solution of the polynomial equation P(x) = 0 or a zero of the polynomial function defined by P. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered. A number a is a root of a polynomial P if and only if the linear polynomial x − a divides P, that is if there is another polynomial Q such that P = (x − a) Q. It may happen that a power (greater than 1) of x − a divides P; in this case, a is a multiple root of P, and otherwise a is a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)m divides P, which is called the multiplicity of a as a root of P. The number of roots of a nonzero polynomial P, counted with their respective multiplicities, cannot exceed the degree of P, and equals this degree if all complex roots are considered (this is a consequence of the fundamental theorem of algebra). The coefficients of a polynomial and its roots are related by Vieta's formulas. Some polynomials, such as x2 + 1, do not have any roots among the real numbers. If, however, the set of accepted solutions is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. 
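The equivalence between roots and linear factors can be checked computationally: repeatedly dividing by x − a (synthetic division) yields the multiplicity of a. A minimal Python sketch, assuming a nonzero polynomial (function names are illustrative):

```python
def synthetic_div(coeffs, a):
    """Divide a polynomial (coefficients highest degree first) by (x - a).

    Returns (quotient_coeffs, remainder); the remainder equals P(a),
    so it is 0 exactly when a is a root.
    """
    carries = []
    carry = 0
    for c in coeffs:
        carry = carry * a + c
        carries.append(carry)
    return carries[:-1], carries[-1]

def multiplicity(coeffs, a):
    """Largest m such that (x - a)^m divides the (nonzero) polynomial."""
    m = 0
    while True:
        q, r = synthetic_div(coeffs, a)
        if r != 0:
            return m
        coeffs, m = q, m + 1

# x^3 - x = x(x - 1)(x + 1): x = 1 is a simple root
print(multiplicity([1, 0, -1, 0], 1))  # 1
```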
By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial. There may be several meanings of "solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of 2x − 1 = 0 is 1/2. This is, in general, impossible for equations of degree greater than one, and, since ancient times, mathematicians have sought to express the solutions as algebraic expressions; for example, the golden ratio ( 1 + 5 ) / 2 {\displaystyle (1+{\sqrt {5}})/2} is the unique positive solution of x 2 − x − 1 = 0. {\displaystyle x^{2}-x-1=0.} In ancient times, they succeeded only for degrees one and two. For quadratic equations, the quadratic formula provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, have been known for equations of degree three and four (see cubic equation and quartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, Niels Henrik Abel proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula involving only arithmetic operations and radicals (see Abel–Ruffini theorem). In 1830, Évariste Galois proved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern algebra. Galois himself noted that the computations implied by his method were impracticable.
Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation). When there is no algebraic expression for the roots, or when such an expression exists but is too complicated to be useful, the only way of solving the equation is to compute numerical approximations of the solutions. There are many methods for that; some are restricted to polynomials and others may apply to any continuous function. The most efficient algorithms allow solving easily (on a computer) polynomial equations of degree higher than 1,000 (see Root-finding algorithm). For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots". The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, to compute the solutions. See System of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which a different range of solution methods exists, including the classical Gaussian elimination. A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem.
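As a concrete instance of a numerical method that applies to any continuous function, polynomials included, here is a minimal bisection sketch in Python (names are illustrative; production solvers for high-degree equations are far more sophisticated):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Approximate a root of a continuous f on [lo, hi], given f(lo)*f(hi) < 0.

    Bisection halves the bracketing interval until it is shorter than tol;
    the intermediate value theorem guarantees a root stays inside it.
    """
    assert f(lo) * f(hi) < 0  # the interval must bracket a sign change
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The positive root of x^2 - x - 1: the golden ratio (1 + sqrt(5))/2
phi = bisect_root(lambda x: x * x - x - 1, 1, 2)
print(abs(phi - (1 + 5 ** 0.5) / 2) < 1e-9)  # True
```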
== Polynomial expressions == Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name. === Trigonometric polynomials === A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using the multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, with Product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why linear combinations are called polynomials. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform. === Matrix polynomials === A matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial P ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n , {\displaystyle P(x)=\sum _{i=0}^{n}{a_{i}x^{i}}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},} this polynomial evaluated at a matrix A is P ( A ) = ∑ i = 0 n a i A i = a 0 I + a 1 A + a 2 A 2 + ⋯ + a n A n , {\displaystyle P(A)=\sum _{i=0}^{n}{a_{i}A^{i}}=a_{0}I+a_{1}A+a_{2}A^{2}+\cdots +a_{n}A^{n},} where I is the identity matrix. A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R). 
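Evaluating a matrix polynomial follows the scalar scheme, with powers of A and the identity matrix in place of powers of x and 1. A small pure-Python sketch (function names are illustrative); as a check it uses the fact, from the Cayley–Hamilton theorem, that a matrix satisfies its own characteristic polynomial:

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    """Evaluate P(A) = a_n*A^n + ... + a_1*A + a_0*I by Horner's scheme.

    `coeffs` is highest degree first; the constant term multiplies I.
    """
    n = len(A)
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    R = [[0] * n for _ in range(n)]
    for a in coeffs:
        R = mat_mul(R, A)
        R = [[R[i][j] + a * I[i][j] for j in range(n)] for i in range(n)]
    return R

# For A = [[2, 1], [1, 1]] the characteristic polynomial is x^2 - 3x + 1,
# and Cayley-Hamilton says P(A) must be the zero matrix.
A = [[2, 1], [1, 1]]
print(mat_poly([1, -3, 1], A))  # [[0, 0], [0, 0]]
```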
=== Exponential polynomials === A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example P(x, ex), may be called an exponential polynomial. == Related concepts == === Rational functions === A rational fraction is the quotient (algebraic fraction) of two polynomials. Any algebraic expression that can be rewritten as a rational fraction is a rational function. While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero. The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate. === Laurent polynomials === Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur. === Power series === Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just like irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge. == Polynomial ring == A polynomial f over a commutative ring R is a polynomial all of whose coefficients belong to R. It is straightforward to verify that the polynomials in a given set of indeterminates over R form a commutative ring, called the polynomial ring in these indeterminates, denoted R [ x ] {\displaystyle R[x]} in the univariate case and R [ x 1 , … , x n ] {\displaystyle R[x_{1},\ldots ,x_{n}]} in the multivariate case. One has R [ x 1 , … , x n ] = ( R [ x 1 , … , x n − 1 ] ) [ x n ] . 
{\displaystyle R[x_{1},\ldots ,x_{n}]=\left(R[x_{1},\ldots ,x_{n-1}]\right)[x_{n}].} So, most of the theory of the multivariate case can be reduced to an iterated univariate case. The map from R to R[x] sending r to itself considered as a constant polynomial is an injective ring homomorphism, by which R is viewed as a subring of R[x]. In particular, R[x] is an algebra over R. One can think of the ring R[x] as arising from R by adding one new element x to R, and extending in a minimal way to a ring in which x satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is xr = rx). To do this, one must add all powers of x and their linear combinations as well. Formation of the polynomial ring and formation of factor rings by factoring out ideals are important tools for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring R[x] over the real numbers by factoring out the ideal of multiples of the polynomial x2 + 1. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic). If R is commutative, then one can associate with every polynomial P in R[x] a polynomial function f with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value f(r) by substitution of the value r for the symbol x in P. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p). This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis.
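The Fermat's-little-theorem example can be made concrete: over the integers modulo 5, the distinct polynomials x^5 − x and 0 induce the same polynomial function, since a^5 ≡ a (mod 5) for every residue a. A quick Python check:

```python
p = 5  # any prime works here
# Evaluate the nonzero polynomial x^p - x at every element of Z/pZ
values = [(a ** p - a) % p for a in range(p)]
print(values)  # [0, 0, 0, 0, 0]: it acts as the zero function
```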
An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x. === Divisibility === If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g or f is a divisor of g if there exists a polynomial q in R[x] such that f q = g. If a ∈ R , {\displaystyle a\in R,} then a is a root of f if and only if x − a {\displaystyle x-a} divides f. In this case, the quotient can be computed using polynomial long division. If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with f = q g + r {\displaystyle f=q\,g+r} and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree). The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder or polynomial long division and shows that the ring F[x] is a Euclidean domain. Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant and a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit).
When the coefficients belong to integers, rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility. == Applications == === Positional notation === In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the radix or base, in this case, 4 × 10^1 + 5 × 10^0. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 5^2 + 3 × 5^1 + 2 × 5^0 = 42. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form a = r m b m + r m − 1 b m − 1 + ⋯ + r 1 b + r 0 , {\displaystyle a=r_{m}b^{m}+r_{m-1}b^{m-1}+\dotsb +r_{1}b+r_{0},} where m is a nonnegative integer and the r's are integers such that 0 < rm < b and 0 ≤ ri < b for i = 0, 1, . . . , m − 1. === Interpolation and approximation === The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. Important examples in calculus are Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines. === Other applications === Polynomials are frequently used to encode information about some other object.
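The base-b expansion above is exactly what repeated division by b computes; a minimal Python sketch (the function name is illustrative):

```python
def digits(a, b):
    """Base-b digits r_0, ..., r_m of a positive integer a, least significant
    first, so that a == sum(r * b**i for i, r in enumerate(digits(a, b)))."""
    ds = []
    while a > 0:
        a, r = divmod(a, b)  # peel off the lowest digit
        ds.append(r)
    return ds

print(digits(42, 5))  # [2, 3, 1]: 42 = 1*5^2 + 3*5 + 2, i.e. "132" in radix 5
```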
The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph. The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input. == History == Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, c. 200 BCE, begins "Three sheaves of good crop, two sheaves of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29. === History of the notation === The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.
== See also == List of polynomial topics == Notes == == References == == External links == Markushevich, A.I. (2001) [1994], "Polynomial", Encyclopedia of Mathematics, EMS Press "Euler's Investigations on the Roots of Equations". Archived from the original on September 24, 2012. |
Wikipedia:Polynomial decomposition#0 | In mathematics, a polynomial decomposition expresses a polynomial f as the functional composition g ∘ h {\displaystyle g\circ h} of polynomials g and h, where g and h have degree greater than 1; it is an algebraic functional decomposition. Algorithms are known for decomposing univariate polynomials in polynomial time. Polynomials which are decomposable in this way are composite polynomials; those which are not are indecomposable polynomials or sometimes prime polynomials (not to be confused with irreducible polynomials, which cannot be factored into products of polynomials). The degree of a composite polynomial is always a composite number, the product of the degrees of the composed polynomials. The rest of this article discusses only univariate polynomials; algorithms also exist for multivariate polynomials of arbitrary degree. == Examples == In the simplest case, one of the polynomials is a monomial. For example, f = x 6 − 3 x 3 + 1 {\displaystyle f=x^{6}-3x^{3}+1} decomposes into g = x 2 − 3 x + 1 and h = x 3 {\displaystyle g=x^{2}-3x+1{\text{ and }}h=x^{3}} since f ( x ) = ( g ∘ h ) ( x ) = g ( h ( x ) ) = g ( x 3 ) = ( x 3 ) 2 − 3 ( x 3 ) + 1 , {\displaystyle f(x)=(g\circ h)(x)=g(h(x))=g(x^{3})=(x^{3})^{2}-3(x^{3})+1,} using the ring operator symbol ∘ to denote function composition. Less trivially, x 6 − 6 x 5 + 21 x 4 − 44 x 3 + 68 x 2 − 64 x + 41 = ( x 3 + 9 x 2 + 32 x + 41 ) ∘ ( x 2 − 2 x ) . {\displaystyle {\begin{aligned}&x^{6}-6x^{5}+21x^{4}-44x^{3}+68x^{2}-64x+41\\={}&(x^{3}+9x^{2}+32x+41)\circ (x^{2}-2x).\end{aligned}}} == Uniqueness == A polynomial may have distinct decompositions into indecomposable polynomials where f = g 1 ∘ g 2 ∘ ⋯ ∘ g m = h 1 ∘ h 2 ∘ ⋯ ∘ h n {\displaystyle f=g_{1}\circ g_{2}\circ \cdots \circ g_{m}=h_{1}\circ h_{2}\circ \cdots \circ h_{n}} where g i ≠ h i {\displaystyle g_{i}\neq h_{i}} for some i {\displaystyle i} . 
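The first example decomposition can be verified by composing coefficient lists directly; a minimal Python sketch (coefficients are listed lowest degree first; function names are illustrative):

```python
def poly_mul(p, q):
    """Product of two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(g, h):
    """Coefficients of g(h(x)), Horner-style: (...(a_m*h + a_{m-1})*h + ...)."""
    out = [0]
    for a in reversed(g):
        out = poly_mul(out, h)
        out[0] += a
    while len(out) > 1 and out[-1] == 0:  # trim trailing zero coefficients
        out.pop()
    return out

# g = x^2 - 3x + 1 and h = x^3 compose to f = x^6 - 3x^3 + 1
f = poly_compose([1, -3, 1], [0, 0, 0, 1])
print(f == [1, 0, 0, -3, 0, 0, 1])  # True
```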
The restriction in the definition to polynomials of degree greater than one excludes the infinitely many decompositions possible with linear polynomials. Joseph Ritt proved that m = n {\displaystyle m=n} , and the degrees of the components are the same, but possibly in different order; this is Ritt's polynomial decomposition theorem. For example, x 2 ∘ x 3 = x 3 ∘ x 2 {\displaystyle x^{2}\circ x^{3}=x^{3}\circ x^{2}} . == Applications == A polynomial decomposition may enable more efficient evaluation of a polynomial. For example, x 8 + 4 x 7 + 10 x 6 + 16 x 5 + 19 x 4 + 16 x 3 + 10 x 2 + 4 x − 1 = ( x 2 − 2 ) ∘ ( x 2 ) ∘ ( x 2 + x + 1 ) {\displaystyle {\begin{aligned}&x^{8}+4x^{7}+10x^{6}+16x^{5}+19x^{4}+16x^{3}+10x^{2}+4x-1\\={}&\left(x^{2}-2\right)\circ \left(x^{2}\right)\circ \left(x^{2}+x+1\right)\end{aligned}}} can be calculated with 3 multiplications and 3 additions using the decomposition, while Horner's method would require 7 multiplications and 8 additions. A polynomial decomposition enables calculation of symbolic roots using radicals, even for some irreducible polynomials. This technique is used in many computer algebra systems. For example, using the decomposition x 6 − 6 x 5 + 15 x 4 − 20 x 3 + 15 x 2 − 6 x − 1 = ( x 3 − 2 ) ∘ ( x 2 − 2 x + 1 ) , {\displaystyle {\begin{aligned}&x^{6}-6x^{5}+15x^{4}-20x^{3}+15x^{2}-6x-1\\={}&\left(x^{3}-2\right)\circ \left(x^{2}-2x+1\right),\end{aligned}}} the roots of this irreducible polynomial can be calculated as 1 ± 2 1 / 6 , 1 ± − 1 ± 3 i 2 1 / 3 . {\displaystyle 1\pm 2^{1/6},1\pm {\frac {\sqrt {-1\pm {\sqrt {3}}i}}{2^{1/3}}}.} Even in the case of quartic polynomials, where there is an explicit formula for the roots, solving using the decomposition often gives a simpler form. 
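The operation-count claim in the evaluation example above is easy to check in Python (function names are illustrative):

```python
def via_decomposition(x):
    """Evaluate via the chain (x^2 - 2) o (x^2) o (x^2 + x + 1):
    3 multiplications and 3 additions in total."""
    t = x * x + x + 1   # innermost component
    t = t * t           # middle component
    return t * t - 2    # outer component

def via_horner(x):
    """Direct Horner evaluation of the degree-8 polynomial:
    7 multiplications and 8 additions."""
    r = 0
    for a in [1, 4, 10, 16, 19, 16, 10, 4, -1]:
        r = r * x + a
    return r

print(via_decomposition(3) == via_horner(3))  # True
```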
For example, the decomposition x 4 − 8 x 3 + 18 x 2 − 8 x + 2 = ( x 2 + 1 ) ∘ ( x 2 − 4 x + 1 ) {\displaystyle {\begin{aligned}&x^{4}-8x^{3}+18x^{2}-8x+2\\={}&(x^{2}+1)\circ (x^{2}-4x+1)\end{aligned}}} gives the roots 2 ± 3 ± i {\displaystyle 2\pm {\sqrt {3\pm i}}} , whereas straightforward application of the quartic formula gives equivalent results in a form that is difficult to simplify and difficult to understand; one of the four roots is: 2 − 9 ( 8 10 i 3 3 / 2 + 72 ) 2 / 3 + 36 ( 8 10 i 3 3 / 2 + 72 ) 1 / 3 + 156 ( 8 10 i 3 3 / 2 + 72 ) 1 / 3 6 − − ( 8 10 i 3 3 / 2 + 72 ) 1 / 3 − 52 3 ( 8 10 i 3 3 / 2 + 72 ) 1 / 3 + 8 2 . {\displaystyle 2-{\frac {\sqrt {{9\left({\frac {8{\sqrt {10}}i}{3^{3/2}}}+72\right)^{2/3}+36\left({\frac {8{\sqrt {10}}i}{3^{3/2}}}+72\right)^{1/3}+156} \over {\left({\frac {8{\sqrt {10}}i}{3^{3/2}}}+72\right)^{1/3}}}}{6}}-{{\sqrt {-\left({\frac {8{\sqrt {10}}i}{3^{3/2}}}+72\right)^{1/3}-{{52} \over {3\left({\frac {8{\sqrt {10}}i}{3^{3/2}}}+72\right)^{1/3}}}+8}} \over 2}.} == Algorithms == The first algorithm for polynomial decomposition was published in 1985, though it had been discovered in 1976, and implemented in the Macsyma/Maxima computer algebra system. That algorithm takes exponential time in the worst case, but works independently of the characteristic of the underlying field. A 1989 algorithm runs in polynomial time but with restrictions on the characteristic. A 2014 algorithm calculates a decomposition in polynomial time and without restrictions on the characteristic. == Notes == == References == Joel S. Cohen (2003). "Chapter 5. Polynomial Decomposition". Computer Algebra and Symbolic Computation: Mathematical Methods. ISBN 1-56881-159-4. |
Wikipedia:Polynomial differential form#0 | In algebra, the ring of polynomial differential forms on the standard n-simplex is the differential graded algebra: Ω poly ∗ ( [ n ] ) = Q [ t 0 , . . . , t n , d t 0 , . . . , d t n ] / ( ∑ t i − 1 , ∑ d t i ) . {\displaystyle \Omega _{\text{poly}}^{*}([n])=\mathbb {Q} [t_{0},...,t_{n},dt_{0},...,dt_{n}]/(\sum t_{i}-1,\sum dt_{i}).} Varying n, it determines the simplicial commutative dg algebra: Ω poly ∗ {\displaystyle \Omega _{\text{poly}}^{*}} (each u : [ n ] → [ m ] {\displaystyle u:[n]\to [m]} induces the map Ω poly ∗ ( [ m ] ) → Ω poly ∗ ( [ n ] ) , t i ↦ ∑ u ( j ) = i t j {\displaystyle \Omega _{\text{poly}}^{*}([m])\to \Omega _{\text{poly}}^{*}([n]),t_{i}\mapsto \sum _{u(j)=i}t_{j}} ). == References == Aldridge Bousfield and V. K. A. M. Gugenheim, §1 and §2 of: On PL De Rham Theory and Rational Homotopy Type, Memoirs of the A. M. S., vol. 179, 1976. Hinich, Vladimir (1997-02-11). "Homological algebra of homotopy algebras". arXiv:q-alg/9702015. == External links == https://ncatlab.org/nlab/show/differential+forms+on+simplices https://mathoverflow.net/questions/220532/polynomial-differential-forms-on-bg |
Wikipedia:Polynomial greatest common divisor#0 | In algebra, the greatest common divisor (frequently abbreviated as GCD) of two polynomials is a polynomial, of the highest possible degree, that is a factor of both of the original polynomials. This concept is analogous to the greatest common divisor of two integers. In the important case of univariate polynomials over a field the polynomial GCD may be computed, as for the integer GCD, by the Euclidean algorithm using long division. The polynomial GCD is defined only up to the multiplication by an invertible constant. The similarity between the integer GCD and the polynomial GCD allows extending to univariate polynomials all the properties that may be deduced from the Euclidean algorithm and Euclidean division. Moreover, the polynomial GCD has specific properties that make it a fundamental notion in various areas of algebra. In particular, the roots of the GCD of two polynomials are the common roots of the two polynomials, and this provides information on the roots without computing them. For example, the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative, and further GCD computations allow computing the square-free factorization of the polynomial, which provides polynomials whose roots are the roots of a given multiplicity of the original polynomial. The greatest common divisor may be defined and exists, more generally, for multivariate polynomials over a field or the ring of integers, and also over a unique factorization domain. There exist algorithms to compute them as soon as one has a GCD algorithm in the ring of coefficients. These algorithms proceed by a recursion on the number of variables to reduce the problem to a variant of the Euclidean algorithm. They are a fundamental tool in computer algebra, because computer algebra systems use them systematically to simplify fractions.
Conversely, most of the modern theory of polynomial GCD has been developed to satisfy the need for efficiency of computer algebra systems. == General definition == Let p and q be polynomials with coefficients in an integral domain F, typically a field or the integers. A greatest common divisor of p and q is a polynomial d that divides p and q, and such that every common divisor of p and q also divides d. Every pair of polynomials (not both zero) has a GCD if and only if F is a unique factorization domain. If F is a field and p and q are not both zero, a polynomial d is a greatest common divisor if and only if it divides both p and q, and it has the greatest degree among the polynomials having this property. If p = q = 0, the GCD is 0. However, some authors consider that it is not defined in this case. The greatest common divisor of p and q is usually denoted "gcd(p, q)". The greatest common divisor is not unique: if d is a GCD of p and q, then the polynomial f is another GCD if and only if there is an invertible element u of F such that f = u d {\displaystyle f=ud} and d = u − 1 f . {\displaystyle d=u^{-1}f.} In other words, the GCD is unique up to the multiplication by an invertible constant. In the case of the integers, this indetermination has been settled by choosing, as the GCD, the unique one which is positive (there is another one, which is its opposite). With this convention, the GCD of two integers is also the greatest (for the usual ordering) common divisor. However, since there is no natural total order for polynomials over an integral domain, one cannot proceed in the same way here. For univariate polynomials over a field, one can additionally require the GCD to be monic (that is to have 1 as its coefficient of the highest degree), but in more general cases there is no general convention. 
Therefore, equalities like d = gcd(p, q) or gcd(p, q) = gcd(r, s) are common abuses of notation which should be read "d is a GCD of p and q" and "p and q have the same set of GCDs as r and s". In particular, gcd(p, q) = 1 means that the invertible constants are the only common divisors. In this case, by analogy with the integer case, one says that p and q are coprime polynomials. == Properties == As stated above, the GCD of two polynomials exists if the coefficients belong either to a field, the ring of the integers, or more generally to a unique factorization domain. If c is any common divisor of p and q, then c divides their GCD. gcd ( p , q ) = gcd ( q , p ) . {\displaystyle \gcd(p,q)=\gcd(q,p).} gcd ( p , q ) = gcd ( q , p + r q ) {\displaystyle \gcd(p,q)=\gcd(q,p+rq)} for any polynomial r. This property is the basis of the proof of the Euclidean algorithm. For any invertible element k of the ring of the coefficients, gcd ( p , q ) = gcd ( p , k q ) {\displaystyle \gcd(p,q)=\gcd(p,kq)} . Hence gcd ( p , q ) = gcd ( a 1 p + b 1 q , a 2 p + b 2 q ) {\displaystyle \gcd(p,q)=\gcd(a_{1}p+b_{1}q,a_{2}p+b_{2}q)} for any scalars a 1 , b 1 , a 2 , b 2 {\displaystyle a_{1},b_{1},a_{2},b_{2}} such that a 1 b 2 − a 2 b 1 {\displaystyle a_{1}b_{2}-a_{2}b_{1}} is invertible. If gcd ( p , r ) = 1 {\displaystyle \gcd(p,r)=1} , then gcd ( p , q ) = gcd ( p , q r ) {\displaystyle \gcd(p,q)=\gcd(p,qr)} . If gcd ( q , r ) = 1 {\displaystyle \gcd(q,r)=1} , then gcd ( p , q r ) = gcd ( p , q ) gcd ( p , r ) {\displaystyle \gcd(p,qr)=\gcd(p,q)\,\gcd(p,r)} . For two univariate polynomials p and q over a field, there exist polynomials a and b, such that gcd ( p , q ) = a p + b q {\displaystyle \gcd(p,q)=ap+bq} and gcd ( p , q ) {\displaystyle \gcd(p,q)} divides every such linear combination of p and q (Bézout's identity). The greatest common divisor of three or more polynomials may be defined similarly as for two polynomials.
It may be computed recursively from GCDs of two polynomials by the identities: gcd ( p , q , r ) = gcd ( p , gcd ( q , r ) ) , {\displaystyle \gcd(p,q,r)=\gcd(p,\gcd(q,r)),} and gcd ( p 1 , p 2 , … , p n ) = gcd ( p 1 , gcd ( p 2 , … , p n ) ) . {\displaystyle \gcd(p_{1},p_{2},\dots ,p_{n})=\gcd(p_{1},\gcd(p_{2},\dots ,p_{n})).} == GCD by hand computation == There are several ways to find the greatest common divisor of two polynomials. Two of them are: Factorization of polynomials, in which one finds the factors of each expression, then selects the set of common factors held by all from within each set of factors. This method may be useful only in simple cases, as factoring is usually more difficult than computing the greatest common divisor. The Euclidean algorithm, which can be used to find the GCD of two polynomials in the same manner as for two numbers. === Factoring === To find the GCD of two polynomials using factoring, simply factor the two polynomials completely. Then, take the product of all common factors. At this stage, we do not necessarily have a monic polynomial, so finally multiply this by a constant to make it a monic polynomial. This will be the GCD of the two polynomials as it includes all common divisors and is monic. Example one: Find the GCD of x2 + 7x + 6 and x2 − 5x − 6. Factoring gives x2 + 7x + 6 = (x + 1)(x + 6) and x2 − 5x − 6 = (x + 1)(x − 6). Thus, their GCD is x + 1. === Euclidean algorithm === Factoring polynomials can be difficult, especially if the polynomials have a large degree. The Euclidean algorithm is a method that works for any pair of polynomials. It makes repeated use of Euclidean division. When using this algorithm on two numbers, the size of the numbers decreases at each stage. With polynomials, the degree of the polynomials decreases at each stage. The last nonzero remainder, made monic if necessary, is the GCD of the two polynomials. More specifically, for finding the gcd of two polynomials a(x) and b(x), one can suppose b ≠ 0 (otherwise, the GCD is a(x)), and deg ( b ( x ) ) ≤ deg ( a ( x ) ) .
{\displaystyle \deg(b(x))\leq \deg(a(x))\,.} The Euclidean division provides two polynomials q(x), the quotient, and r(x), the remainder, such that a ( x ) = q 0 ( x ) b ( x ) + r 0 ( x ) and deg ( r 0 ( x ) ) < deg ( b ( x ) ) {\displaystyle a(x)=q_{0}(x)b(x)+r_{0}(x)\quad {\text{and}}\quad \deg(r_{0}(x))<\deg(b(x))} A polynomial g(x) divides both a(x) and b(x) if and only if it divides both b(x) and r0(x). Thus gcd ( a ( x ) , b ( x ) ) = gcd ( b ( x ) , r 0 ( x ) ) . {\displaystyle \gcd(a(x),b(x))=\gcd(b(x),r_{0}(x)).} Setting a 1 ( x ) = b ( x ) , b 1 ( x ) = r 0 ( x ) , {\displaystyle a_{1}(x)=b(x),b_{1}(x)=r_{0}(x),} one can repeat the Euclidean division to get new polynomials q1(x), r1(x), a2(x), b2(x) and so on. At each stage we have deg ( a k + 1 ) + deg ( b k + 1 ) < deg ( a k ) + deg ( b k ) , {\displaystyle \deg(a_{k+1})+\deg(b_{k+1})<\deg(a_{k})+\deg(b_{k}),} so the sequence will eventually reach a point at which b N ( x ) = 0 {\displaystyle b_{N}(x)=0} and one has obtained the GCD: gcd ( a , b ) = gcd ( a 1 , b 1 ) = ⋯ = gcd ( a N , 0 ) = a N . {\displaystyle \gcd(a,b)=\gcd(a_{1},b_{1})=\cdots =\gcd(a_{N},0)=a_{N}.} Example: finding the GCD of x2 + 7x + 6 and x2 − 5x − 6: dividing x2 + 7x + 6 by x2 − 5x − 6 gives quotient 1 and remainder 12x + 12; dividing x2 − 5x − 6 by 12x + 12 then gives quotient x/12 − 1/2 and remainder 0. Since 12x + 12 is the last nonzero remainder, it is a GCD of the original polynomials, and the monic GCD is x + 1. In this example, it is not difficult to avoid introducing denominators by factoring out 12 before the second step. This can always be done by using pseudo-remainder sequences, but, without care, this may introduce very large integers during the computation. Therefore, for computer computation, other algorithms are used, which are described below. This method works only if one can test the equality to zero of the coefficients that occur during the computation. So, in practice, the coefficients must be integers, rational numbers, elements of a finite field, or must belong to some finitely generated field extension of one of the preceding fields.
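The hand computation just described can be sketched in Python with exact rational arithmetic; Fraction sidesteps the denominator issue mentioned above, and the function names are illustrative (this is the plain Euclidean algorithm, not the optimized pseudo-remainder approach):

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the Euclidean division of a by b in Q[x].

    Polynomials are coefficient lists, highest degree first; b must be
    nonzero with a nonzero leading coefficient.
    """
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)              # the leading term has been cancelled
    while len(a) > 1 and a[0] == 0:
        a.pop(0)              # strip leading zeros of the remainder
    return a

def poly_gcd(a, b):
    """Monic GCD of a and b (not both zero), by Euclid's algorithm."""
    while any(c != 0 for c in b):
        a, b = b, poly_rem(a, b)
    return [Fraction(c) / Fraction(a[0]) for c in a]

# The worked example: gcd(x^2 + 7x + 6, x^2 - 5x - 6)
print(poly_gcd([1, 7, 6], [1, -5, -6]) == [1, 1])  # True: the monic GCD is x + 1
```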
If the coefficients are floating-point numbers that represent real numbers that are known only approximately, then one must know the degree of the GCD in order to have a well-defined computation result (that is, a numerically stable result); in this case, other techniques may be used, usually based on singular value decomposition. == Univariate polynomials with coefficients in a field == The case of univariate polynomials over a field is especially important for several reasons. Firstly, it is the most elementary case and therefore appears in most first courses in algebra. Secondly, it is very similar to the case of the integers, and this analogy is the source of the notion of Euclidean domain. A third reason is that the theory and the algorithms for the multivariate case and for coefficients in a unique factorization domain are strongly based on this particular case. Last but not least, polynomial GCD algorithms and derived algorithms allow one to get useful information on the roots of a polynomial, without computing them. === Euclidean division === Euclidean division of polynomials, which is used in Euclid's algorithm for computing GCDs, is very similar to Euclidean division of integers. Its existence is based on the following theorem: Given two univariate polynomials a and b ≠ 0 defined over a field, there exist two polynomials q (the quotient) and r (the remainder) which satisfy a = b q + r {\displaystyle a=bq+r} and deg ( r ) < deg ( b ) , {\displaystyle \deg(r)<\deg(b),} where "deg(...)" denotes the degree and the degree of the zero polynomial is defined as being negative. Moreover, q and r are uniquely defined by these relations. The difference from Euclidean division of the integers is that, for the integers, the degree is replaced by the absolute value, and that to have uniqueness one has to suppose that r is non-negative. The rings for which such a theorem exists are called Euclidean domains. 
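The existence part of this theorem is constructive. As a sketch (illustrative helper names, not a standard library routine; coefficient lists are ordered from the constant term up), the division can be written in Python with exact rational arithmetic:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Quotient and remainder of the Euclidean division of a by b != 0
    (coefficient lists, constant term first)."""
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    r = [Fraction(c) for c in a]
    c, d = b[-1], len(b) - 1                # c = lc(b), d = deg(b)
    while r and len(r) - 1 >= d:            # while deg(r) >= deg(b)
        k = len(r) - 1 - d
        s = r[-1] / c                       # cancel the leading term of r
        q[k] += s
        for i, bc in enumerate(b):          # r := r - s * x^k * b
            r[i + k] -= s * bc
        while r and r[-1] == 0:
            r.pop()
    return q, r

quotient, remainder = poly_divmod([6, 7, 1], [1, 1])   # divide x^2 + 7x + 6 by x + 1
print(quotient, remainder)   # [Fraction(6, 1), Fraction(1, 1)] []  i.e. q = x + 6, r = 0
```

Since the degree of r decreases at every pass through the loop, termination is guaranteed, mirroring the uniqueness argument above.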
As for the integers, the Euclidean division of polynomials may be computed by the long division algorithm. This algorithm is usually presented for paper-and-pencil computation, but it works well on computers when formalized as follows (note that the names of the variables correspond exactly to the regions of the paper sheet in a pencil-and-paper computation of long division). In the following computation "deg" stands for the degree of its argument (with the convention deg(0) < 0), and "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.

Euclidean division
Input: a and b ≠ 0, two polynomials in the variable x;
Output: q, the quotient, and r, the remainder;

begin
    q := 0
    r := a
    d := deg(b)
    c := lc(b)
    while deg(r) ≥ d do
        s := (lc(r)/c) ⋅ x^(deg(r)−d)
        q := q + s
        r := r − s⋅b
    end do
    return (q, r)
end

The proof of the validity of this algorithm relies on the fact that during the whole "while" loop, we have a = bq + r and deg(r) is a non-negative integer that decreases at each iteration. Thus the proof of the validity of this algorithm also proves the validity of the Euclidean division. === Euclid's algorithm === As for the integers, the Euclidean division allows us to define Euclid's algorithm for computing GCDs. Starting from two polynomials a and b, Euclid's algorithm consists of recursively replacing the pair (a, b) by (b, rem(a, b)) (where "rem(a, b)" denotes the remainder of the Euclidean division, computed by the algorithm of the preceding section), until b = 0. The GCD is the last nonzero remainder. Euclid's algorithm may be formalized in the recursive programming style as: gcd ( a , b ) := { a if b = 0 gcd ( b , rem ( a , b ) ) otherwise . 
{\displaystyle \gcd(a,b):={\begin{cases}a&{\text{if }}b=0\\\gcd(b,\operatorname {rem} (a,b))&{\text{otherwise}}.\end{cases}}} In the imperative programming style, the same algorithm becomes, giving a name to each intermediate remainder:

r0 := a
r1 := b
for (i := 1; ri ≠ 0; i := i + 1) do
    ri+1 := rem(ri−1, ri)
end do
return ri−1

The sequence of the degrees of the ri is strictly decreasing. Thus, after at most deg(b) steps, one gets a null remainder, say rk. As (a, b) and (b, rem(a, b)) have the same divisors, the set of the common divisors is not changed by Euclid's algorithm, and thus all pairs (ri, ri+1) have the same set of common divisors. The common divisors of a and b are thus the common divisors of rk−1 and 0. Thus rk−1 is a GCD of a and b. This not only proves that Euclid's algorithm computes GCDs but also proves that GCDs exist. === Bézout's identity and extended GCD algorithm === Bézout's identity is a GCD-related theorem, initially proved for the integers, which is valid for every principal ideal domain. In the case of univariate polynomials over a field, it may be stated as follows: if g is a GCD of two nonzero polynomials a and b, then there are two polynomials u and v such that au + bv = g, with u = 0 or deg(u) < deg(b) − deg(g), and v = 0 or deg(v) < deg(a) − deg(g). The interest of this result in the case of the polynomials is that there is an efficient algorithm to compute the polynomials u and v. This algorithm differs from Euclid's algorithm by a few more computations done at each iteration of the loop. It is therefore called the extended GCD algorithm. Another difference from Euclid's algorithm is that it also uses the quotient, denoted "quo", of the Euclidean division, instead of only the remainder. This algorithm works as follows. 
Extended GCD algorithm
Input: a, b, univariate polynomials
Output: g, the GCD of a and b;
        u, v, as in the above statement;
        a1, b1, such that a = g⋅a1 and b = g⋅b1

begin
    (r0, r1) := (a, b)
    (s0, s1) := (1, 0)
    (t0, t1) := (0, 1)
    for (i := 1; ri ≠ 0; i := i + 1) do
        q := quo(ri−1, ri)
        ri+1 := ri−1 − q⋅ri
        si+1 := si−1 − q⋅si
        ti+1 := ti−1 − q⋅ti
    end do
    g := ri−1
    u := si−1
    v := ti−1
    a1 := (−1)^(i−1) ⋅ ti
    b1 := (−1)^i ⋅ si
end

The proof that the algorithm satisfies its output specification relies on the fact that, for every i, we have r i = a s i + b t i {\displaystyle r_{i}=as_{i}+bt_{i}} and s i t i + 1 − t i s i + 1 = s i t i − 1 − t i s i − 1 , {\displaystyle s_{i}t_{i+1}-t_{i}s_{i+1}=s_{i}t_{i-1}-t_{i}s_{i-1},} the latter equality implying s i t i + 1 − t i s i + 1 = ( − 1 ) i . {\displaystyle s_{i}t_{i+1}-t_{i}s_{i+1}=(-1)^{i}.} The assertion on the degrees follows from the fact that, at every iteration, the degrees of si and ti increase at most as the degree of ri decreases. An interesting feature of this algorithm is that, when the coefficients of Bézout's identity are needed, one gets for free the quotient of the input polynomials by their GCD. ==== Arithmetic of algebraic extensions ==== An important application of the extended GCD algorithm is that it allows one to compute division in algebraic field extensions. Let L be an algebraic extension of a field K, generated by an element whose minimal polynomial f has degree n. The elements of L are usually represented by univariate polynomials over K of degree less than n. The addition in L is simply the addition of polynomials: a + L b = a + K [ X ] b . {\displaystyle a+_{L}b=a+_{K[X]}b.} The multiplication in L is the multiplication of polynomials followed by the division by f: a ⋅ L b = rem ( a ⋅ K [ X ] b , f ) . {\displaystyle a\cdot _{L}b=\operatorname {rem} (a\cdot _{K[X]}b,f).} The inverse of a nonzero element a of L is the coefficient u in Bézout's identity au + fv = 1, which may be computed by the extended GCD algorithm. 
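As an illustration, here is a minimal Python sketch of the extended GCD algorithm and the inversion it enables (all helper names and the example extension, K = Q with f = X^2 + 1, are illustrative choices; coefficient lists are ordered from the constant term up):

```python
from fractions import Fraction

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1) if a and b else []
    for i, ac in enumerate(a):
        for j, bc in enumerate(b):
            out[i + j] += ac * bc
    return trim(out)

def poly_sub(a, b):
    out = [Fraction(0)] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] -= c
    return trim(out)

def poly_divmod(a, b):
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    r = [Fraction(c) for c in a]
    while r and len(r) >= len(b):
        d = len(r) - len(b)
        s = r[-1] / b[-1]
        q[d] += s
        for i, bc in enumerate(b):
            r[i + d] -= s * bc
        trim(r)
    return trim(q), r

def ext_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g, following the loop above."""
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        q, r = poly_divmod(r0, r1)           # q = quo(r_{i-1}, r_i), r = rem(...)
        r0, r1 = r1, r
        s0, s1 = s1, poly_sub(s0, poly_mul(q, s1))
        t0, t1 = t1, poly_sub(t0, poly_mul(q, t1))
    return r0, s0, t0

# Inverse of a = X + 1 in Q[X]/(X^2 + 1), i.e. of 1 + i in Q(i):
g, u, v = ext_gcd([1, 1], [1, 0, 1])   # g is a nonzero constant, since f is irreducible
inverse = [c / g[0] for c in u]
print(inverse)   # [Fraction(1, 2), Fraction(-1, 2)], i.e. (1 - X)/2
```

One can verify that (X + 1)(1 − X)/2 = (1 − X²)/2 reduces to 1 modulo X² + 1, as expected.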
(the GCD is 1 because the minimal polynomial f is irreducible). The degrees inequality in the specification of extended GCD algorithm shows that a further division by f is not needed to get deg(u) < deg(f). === Subresultants === In the case of univariate polynomials, there is a strong relationship between the greatest common divisors and resultants. More precisely, the resultant of two polynomials P, Q is a polynomial function of the coefficients of P and Q which has the value zero if and only if the GCD of P and Q is not constant. The subresultants theory is a generalization of this property that allows characterizing generically the GCD of two polynomials, and the resultant is the 0-th subresultant polynomial. The i-th subresultant polynomial Si(P ,Q) of two polynomials P and Q is a polynomial of degree at most i whose coefficients are polynomial functions of the coefficients of P and Q, and the i-th principal subresultant coefficient si(P ,Q) is the coefficient of degree i of Si(P, Q). They have the property that the GCD of P and Q has a degree d if and only if s 0 ( P , Q ) = ⋯ = s d − 1 ( P , Q ) = 0 , s d ( P , Q ) ≠ 0. {\displaystyle s_{0}(P,Q)=\cdots =s_{d-1}(P,Q)=0\ ,s_{d}(P,Q)\neq 0.} In this case, Sd(P ,Q) is a GCD of P and Q and S 0 ( P , Q ) = ⋯ = S d − 1 ( P , Q ) = 0. {\displaystyle S_{0}(P,Q)=\cdots =S_{d-1}(P,Q)=0.} Every coefficient of the subresultant polynomials is defined as the determinant of a submatrix of the Sylvester matrix of P and Q. This implies that subresultants "specialize" well. More precisely, subresultants are defined for polynomials over any commutative ring R, and have the following property. Let φ be a ring homomorphism of R into another commutative ring S. It extends to another homomorphism, denoted also φ between the polynomials rings over R and S. 
Then, if P and Q are univariate polynomials with coefficients in R such that deg ( P ) = deg ( φ ( P ) ) {\displaystyle \deg(P)=\deg(\varphi (P))} and deg ( Q ) = deg ( φ ( Q ) ) , {\displaystyle \deg(Q)=\deg(\varphi (Q)),} then the subresultant polynomials and the principal subresultant coefficients of φ(P) and φ(Q) are the image by φ of those of P and Q. The subresultants have two important properties which make them fundamental for the computation on computers of the GCD of two polynomials with integer coefficients. Firstly, their definition through determinants allows bounding, through Hadamard inequality, the size of the coefficients of the GCD. Secondly, this bound and the property of good specialization allow computing the GCD of two polynomials with integer coefficients through modular computation and Chinese remainder theorem (see below). ==== Technical definition ==== Let P = p 0 + p 1 X + ⋯ + p m X m , Q = q 0 + q 1 X + ⋯ + q n X n . {\displaystyle P=p_{0}+p_{1}X+\cdots +p_{m}X^{m},\quad Q=q_{0}+q_{1}X+\cdots +q_{n}X^{n}.} be two univariate polynomials with coefficients in a field K. Let us denote by P i {\displaystyle {\mathcal {P}}_{i}} the K vector space of dimension i of polynomials of degree less than i. For non-negative integer i such that i ≤ m and i ≤ n, let φ i : P n − i × P m − i → P m + n − i {\displaystyle \varphi _{i}:{\mathcal {P}}_{n-i}\times {\mathcal {P}}_{m-i}\rightarrow {\mathcal {P}}_{m+n-i}} be the linear map such that φ i ( A , B ) = A P + B Q . {\displaystyle \varphi _{i}(A,B)=AP+BQ.} The resultant of P and Q is the determinant of the Sylvester matrix, which is the (square) matrix of φ 0 {\displaystyle \varphi _{0}} on the bases of the powers of X. Similarly, the i-subresultant polynomial is defined in term of determinants of submatrices of the matrix of φ i . {\displaystyle \varphi _{i}.} Let us describe these matrices more precisely; Let pi = 0 for i < 0 or i > m, and qi = 0 for i < 0 or i > n. 
The Sylvester matrix is the (m + n) × (m + n)-matrix such that the coefficient of the i-th row and the j-th column is pm+j−i for j ≤ n and qj−i for j > n: S = ( p m 0 ⋯ 0 q n 0 ⋯ 0 p m − 1 p m ⋯ 0 q n − 1 q n ⋯ 0 p m − 2 p m − 1 ⋱ 0 q n − 2 q n − 1 ⋱ 0 ⋮ ⋮ ⋱ p m ⋮ ⋮ ⋱ q n ⋮ ⋮ ⋯ p m − 1 ⋮ ⋮ ⋯ q n − 1 p 0 p 1 ⋯ ⋮ q 0 q 1 ⋯ ⋮ 0 p 0 ⋱ ⋮ 0 q 0 ⋱ ⋮ ⋮ ⋮ ⋱ p 1 ⋮ ⋮ ⋱ q 1 0 0 ⋯ p 0 0 0 ⋯ q 0 ) . {\displaystyle S={\begin{pmatrix}p_{m}&0&\cdots &0&q_{n}&0&\cdots &0\\p_{m-1}&p_{m}&\cdots &0&q_{n-1}&q_{n}&\cdots &0\\p_{m-2}&p_{m-1}&\ddots &0&q_{n-2}&q_{n-1}&\ddots &0\\\vdots &\vdots &\ddots &p_{m}&\vdots &\vdots &\ddots &q_{n}\\\vdots &\vdots &\cdots &p_{m-1}&\vdots &\vdots &\cdots &q_{n-1}\\p_{0}&p_{1}&\cdots &\vdots &q_{0}&q_{1}&\cdots &\vdots \\0&p_{0}&\ddots &\vdots &0&q_{0}&\ddots &\vdots \\\vdots &\vdots &\ddots &p_{1}&\vdots &\vdots &\ddots &q_{1}\\0&0&\cdots &p_{0}&0&0&\cdots &q_{0}\end{pmatrix}}.} The matrix Ti of φ i {\displaystyle \varphi _{i}} is the (m + n − i) × (m + n − 2i)-submatrix of S which is obtained by removing the last i rows of zeros in the submatrix of the columns 1 to n − i and n + 1 to m + n − i of S (that is removing i columns in each block and the i last rows of zeros). The principal subresultant coefficient si is the determinant of the m + n − 2i first rows of Ti. Let Vi be the (m + n − 2i) × (m + n − i) matrix defined as follows. First we add (i + 1) columns of zeros to the right of the (m + n − 2i − 1) × (m + n − 2i − 1) identity matrix. Then we border the bottom of the resulting matrix by a row consisting in (m + n − i − 1) zeros followed by Xi, Xi−1, ..., X, 1: V i = ( 1 0 ⋯ 0 0 0 ⋯ 0 0 1 ⋯ 0 0 0 ⋯ 0 ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ 0 0 0 ⋯ 1 0 0 ⋯ 0 0 0 ⋯ 0 X i X i − 1 ⋯ 1 ) . 
{\displaystyle V_{i}={\begin{pmatrix}1&0&\cdots &0&0&0&\cdots &0\\0&1&\cdots &0&0&0&\cdots &0\\\vdots &\vdots &\ddots &\vdots &\vdots &\ddots &\vdots &0\\0&0&\cdots &1&0&0&\cdots &0\\0&0&\cdots &0&X^{i}&X^{i-1}&\cdots &1\end{pmatrix}}.} With this notation, the i-th subresultant polynomial is the determinant of the matrix product ViTi. Its coefficient of degree j is the determinant of the square submatrix of Ti consisting in its m + n − 2i − 1 first rows and the (m + n − i − j)-th row. ==== Sketch of the proof ==== It is not obvious that, as defined, the subresultants have the desired properties. Nevertheless, the proof is rather simple if the properties of linear algebra and those of polynomials are put together. As defined, the columns of the matrix Ti are the vectors of the coefficients of some polynomials belonging to the image of φ i {\displaystyle \varphi _{i}} . The definition of the i-th subresultant polynomial Si shows that the vector of its coefficients is a linear combination of these column vectors, and thus that Si belongs to the image of φ i . {\displaystyle \varphi _{i}.} If the degree of the GCD is greater than i, then Bézout's identity shows that every non zero polynomial in the image of φ i {\displaystyle \varphi _{i}} has a degree larger than i. This implies that Si = 0. If, on the other hand, the degree of the GCD is i, then Bézout's identity again allows proving that the multiples of the GCD that have a degree lower than m + n − i are in the image of φ i {\displaystyle \varphi _{i}} . The vector space of these multiples has the dimension m + n − 2i and has a base of polynomials of pairwise different degrees, not smaller than i. This implies that the submatrix of the m + n − 2i first rows of the column echelon form of Ti is the identity matrix and thus that si is not 0. Thus Si is a polynomial in the image of φ i {\displaystyle \varphi _{i}} , which is a multiple of the GCD and has the same degree. It is thus a greatest common divisor. 
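For a concrete check of this theory, the subresultant-sequence polynomials of a pair of integer polynomials can be computed with the subresultant pseudo-remainder recurrence presented later in this article. The following Python sketch (illustrative names; rational arithmetic internally) does this for the classical pair X^8 + X^6 − 3X^4 − 3X^3 + 8X^2 + 2X − 5 and 3X^6 + 5X^4 − 4X^2 − 9X + 21; the last computed polynomial being a nonzero constant shows that the GCD is constant, i.e. the two polynomials are coprime:

```python
from fractions import Fraction

def poly_rem(a, b):
    r = list(a)
    while r and len(r) >= len(b):
        d = len(r) - len(b)
        s = r[-1] / b[-1]
        for i, bc in enumerate(b):
            r[i + d] -= s * bc
        while r and r[-1] == 0:
            r.pop()
    return r

def subresultant_prs(a, b):
    """Subresultant pseudo-remainder sequence (coefficient lists, constant term first)."""
    rs = [[Fraction(c) for c in a], [Fraction(c) for c in b]]
    prev_gamma = prev_d = psi = None
    i = 1
    while True:
        d = (len(rs[i - 1]) - 1) - (len(rs[i]) - 1)   # degree gap d_i
        gamma = rs[i][-1]                             # leading coefficient
        if i == 1:
            beta, psi = Fraction((-1) ** (d + 1)), Fraction(-1)
        else:
            psi = (-prev_gamma) ** prev_d / psi ** (prev_d - 1)
            beta = -prev_gamma * psi ** d
        scaled = [gamma ** (d + 1) * c for c in rs[i - 1]]
        r_next = [c / beta for c in poly_rem(scaled, rs[i])]
        prev_gamma, prev_d = gamma, d
        if not r_next:
            return rs
        rs.append(r_next)
        if len(r_next) == 1:      # reached a nonzero constant
            return rs
        i += 1

A = [-5, 2, 8, -3, -3, 0, 1, 0, 1]   # X^8 + X^6 - 3X^4 - 3X^3 + 8X^2 + 2X - 5
B = [21, -9, -4, 0, 5, 0, 3]         # 3X^6 + 5X^4 - 4X^2 - 9X + 21
for r in subresultant_prs(A, B)[2:]:
    print(r)
# Successive degrees 4, 2, 1, 0; all entries come out with integer coefficients.
```

Although the recurrence works over the rationals internally here, all computed remainders have integer coefficients, which is precisely the "good specialization" property of subresultants.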
=== GCD and root finding === ==== Square-free factorization ==== Most root-finding algorithms behave badly with polynomials that have multiple roots. It is therefore useful to detect and remove them before calling a root-finding algorithm. A GCD computation allows detection of the existence of multiple roots, since the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative. After computing the GCD of the polynomial and its derivative, further GCD computations provide the complete square-free factorization of the polynomial, which is a factorization f = ∏ i = 1 deg ( f ) f i i {\displaystyle f=\prod _{i=1}^{\deg(f)}f_{i}^{i}} where, for each i, the polynomial fi either is 1 if f does not have any root of multiplicity i or is a square-free polynomial (that is a polynomial without multiple root) whose roots are exactly the roots of multiplicity i of f (see Yun's algorithm). Thus the square-free factorization reduces root-finding of a polynomial with multiple roots to root-finding of several square-free polynomials of lower degree. The square-free factorization is also the first step in most polynomial factorization algorithms. ==== Sturm sequence ==== The Sturm sequence of a polynomial with real coefficients is the sequence of the remainders provided by a variant of Euclid's algorithm applied to the polynomial and its derivative. For getting the Sturm sequence, one simply replaces the instruction r i + 1 := rem ( r i − 1 , r i ) {\displaystyle r_{i+1}:=\operatorname {rem} (r_{i-1},r_{i})} of Euclid's algorithm by r i + 1 := − rem ( r i − 1 , r i ) . {\displaystyle r_{i+1}:=-\operatorname {rem} (r_{i-1},r_{i}).} Let V(a) be the number of changes of signs in the sequence, when evaluated at a point a. Sturm's theorem asserts that V(a) − V(b) is the number of real roots of the polynomial in the interval [a, b]. Thus the Sturm sequence allows computing the number of real roots in a given interval. 
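The Sturm-sequence computation just described can be sketched in a few lines of Python (illustrative helper names; exact rational arithmetic; coefficient lists ordered from the constant term up, and the input is assumed to have degree at least 1):

```python
from fractions import Fraction

def poly_rem(a, b):
    r = [Fraction(c) for c in a]
    while r and len(r) >= len(b):
        d = len(r) - len(b)
        s = r[-1] / b[-1]
        for i, bc in enumerate(b):
            r[i + d] -= s * bc
        while r and r[-1] == 0:
            r.pop()
    return r

def sturm_sequence(p):
    """p, then p', then the successive negated Euclidean remainders."""
    seq = [[Fraction(c) for c in p],
           [i * Fraction(c) for i, c in enumerate(p)][1:]]   # derivative of p
    while True:
        nxt = [-c for c in poly_rem(seq[-2], seq[-1])]
        if not nxt:
            return seq
        seq.append(nxt)

def evaluate(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def sign_changes(seq, x):
    signs = [v for v in (evaluate(p, x) for p in seq) if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if (u > 0) != (v > 0))

f = [-2, 0, 1]   # x^2 - 2, with real roots at +sqrt(2) and -sqrt(2)
seq = sturm_sequence(f)
print(sign_changes(seq, -2) - sign_changes(seq, 2))   # 2 real roots in [-2, 2]
print(sign_changes(seq, 0) - sign_changes(seq, 2))    # 1 real root in [0, 2]
```

By Sturm's theorem, V(a) − V(b) counts the distinct real roots in [a, b], which is exactly what `sign_changes` differences compute here.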
By subdividing the interval until every subinterval contains at most one root, this provides an algorithm that locates the real roots in intervals of arbitrary small length. == GCD over a ring and its field of fractions == In this section, we consider polynomials over a unique factorization domain R, typically the ring of the integers, and over its field of fractions F, typically the field of the rational numbers, and we denote R[X] and F[X] the rings of polynomials in a set of variables over these rings. === Primitive part–content factorization === The content of a polynomial p ∈ R[X], denoted "cont(p)", is the GCD of its coefficients. A polynomial q ∈ F[X] may be written q = p c {\displaystyle q={\frac {p}{c}}} where p ∈ R[X] and c ∈ R: it suffices to take for c a multiple of all denominators of the coefficients of q (for example their product) and p = cq. The content of q is defined as: cont ( q ) = cont ( p ) c . {\displaystyle \operatorname {cont} (q)={\frac {\operatorname {cont} (p)}{c}}.} In both cases, the content is defined up to the multiplication by a unit of R. The primitive part of a polynomial in R[X] or F[X] is defined by primpart ( p ) = p cont ( p ) . {\displaystyle \operatorname {primpart} (p)={\frac {p}{\operatorname {cont} (p)}}.} In both cases, it is a polynomial in R[X] that is primitive, which means that 1 is a GCD of its coefficients. Thus every polynomial in R[X] or F[X] may be factorized as p = cont ( p ) primpart ( p ) , {\displaystyle p=\operatorname {cont} (p)\,\operatorname {primpart} (p),} and this factorization is unique up to the multiplication of the content by a unit of R and of the primitive part by the inverse of this unit. Gauss's lemma implies that the product of two primitive polynomials is primitive. It follows that primpart ( p q ) = primpart ( p ) primpart ( q ) {\displaystyle \operatorname {primpart} (pq)=\operatorname {primpart} (p)\operatorname {primpart} (q)} and cont ( p q ) = cont ( p ) cont ( q ) . 
{\displaystyle \operatorname {cont} (pq)=\operatorname {cont} (p)\operatorname {cont} (q).} === Relation between the GCD over R and over F === The relations of the preceding section imply a strong relation between the GCD's in R[X] and in F[X]. To avoid ambiguities, the notation "gcd" will be indexed, in the following, by the ring in which the GCD is computed. If q1 and q2 belong to F[X], then primpart ( gcd F [ X ] ( q 1 , q 2 ) ) = gcd R [ X ] ( primpart ( q 1 ) , primpart ( q 2 ) ) . {\displaystyle \operatorname {primpart} (\gcd _{F[X]}(q_{1},q_{2}))=\gcd _{R[X]}(\operatorname {primpart} (q_{1}),\operatorname {primpart} (q_{2})).} If p1 and p2 belong to R[X], then gcd R [ X ] ( p 1 , p 2 ) = gcd R ( cont ( p 1 ) , cont ( p 2 ) ) gcd R [ X ] ( primpart ( p 1 ) , primpart ( p 2 ) ) , {\displaystyle \gcd _{R[X]}(p_{1},p_{2})=\gcd _{R}(\operatorname {cont} (p_{1}),\operatorname {cont} (p_{2}))\gcd _{R[X]}(\operatorname {primpart} (p_{1}),\operatorname {primpart} (p_{2})),} and gcd R [ X ] ( primpart ( p 1 ) , primpart ( p 2 ) ) = primpart ( gcd F [ X ] ( p 1 , p 2 ) ) . {\displaystyle \gcd _{R[X]}(\operatorname {primpart} (p_{1}),\operatorname {primpart} (p_{2}))=\operatorname {primpart} (\gcd _{F[X]}(p_{1},p_{2})).} Thus the computation of polynomial GCD's is essentially the same problem over F[X] and over R[X]. For univariate polynomials over the rational numbers, one may think that Euclid's algorithm is a convenient method for computing the GCD. However, it involves simplifying a large number of fractions of integers, and the resulting algorithm is not efficient. For this reason, methods have been designed to modify Euclid's algorithm for working only with polynomials over the integers. They consist of replacing the Euclidean division, which introduces fractions, by a so-called pseudo-division, and replacing the remainder sequence of the Euclid's algorithm by so-called pseudo-remainder sequences (see below). 
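For integer coefficients, the content and the primitive part of the preceding sections are straightforward to compute; a short Python sketch (illustrative names; the content is normalized to be positive, fixing the unit ambiguity):

```python
from math import gcd
from functools import reduce

def content(p):
    """GCD of the integer coefficients (normalized to be positive)."""
    return reduce(gcd, (abs(c) for c in p))

def primitive_part(p):
    c = content(p)
    return [coef // c for coef in p]

p = [12, 30, 18]             # 18x^2 + 30x + 12 = 6 * (3x^2 + 5x + 2)
print(content(p))            # 6
print(primitive_part(p))     # [2, 5, 3], i.e. 3x^2 + 5x + 2
```

The factorization p = cont(p)·primpart(p) is then immediate, as in the formula above.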
=== Proof that GCD exists for multivariate polynomials === In the previous section we have seen that the GCD of polynomials in R[X] may be deduced from GCDs in R and in F[X]. A closer look on the proof shows that this allows us to prove the existence of GCDs in R[X], if they exist in R and in F[X]. In particular, if GCDs exist in R, and if X is reduced to one variable, this proves that GCDs exist in R[X] (Euclid's algorithm proves the existence of GCDs in F[X]). A polynomial in n variables may be considered as a univariate polynomial over the ring of polynomials in (n − 1) variables. Thus a recursion on the number of variables shows that if GCDs exist and may be computed in R, then they exist and may be computed in every multivariate polynomial ring over R. In particular, if R is either the ring of the integers or a field, then GCDs exist in R[x1, ..., xn], and what precedes provides an algorithm to compute them. The proof that a polynomial ring over a unique factorization domain is also a unique factorization domain is similar, but it does not provide an algorithm, because there is no general algorithm to factor univariate polynomials over a field (there are examples of fields for which there does not exist any factorization algorithm for the univariate polynomials). == Pseudo-remainder sequences == In this section, we consider an integral domain Z (typically the ring Z of the integers) and its field of fractions Q (typically the field Q of the rational numbers). Given two polynomials A and B in the univariate polynomial ring Z[X], the Euclidean division (over Q) of A by B provides a quotient and a remainder which may not belong to Z[X]. 
For, if one applies Euclid's algorithm to the following polynomials X 8 + X 6 − 3 X 4 − 3 X 3 + 8 X 2 + 2 X − 5 {\displaystyle X^{8}+X^{6}-3X^{4}-3X^{3}+8X^{2}+2X-5} and 3 X 6 + 5 X 4 − 4 X 2 − 9 X + 21 , {\displaystyle 3X^{6}+5X^{4}-4X^{2}-9X+21,} the successive remainders of Euclid's algorithm are − 5 9 X 4 + 1 9 X 2 − 1 3 , − 117 25 X 2 − 9 X + 441 25 , 233150 19773 X − 102500 6591 , − 1288744821 543589225 . {\displaystyle {\begin{aligned}&-{\tfrac {5}{9}}X^{4}+{\tfrac {1}{9}}X^{2}-{\tfrac {1}{3}},\\&-{\tfrac {117}{25}}X^{2}-9X+{\tfrac {441}{25}},\\&{\tfrac {233150}{19773}}X-{\tfrac {102500}{6591}},\\&-{\tfrac {1288744821}{543589225}}.\end{aligned}}} One sees that, despite the small degree and the small size of the coefficients of the input polynomials, one has to manipulate and simplify integer fractions of rather large size. The pseudo-division has been introduced to allow a variant of Euclid's algorithm for which all remainders belong to Z[X]. If deg ( A ) = a {\displaystyle \deg(A)=a} and deg ( B ) = b {\displaystyle \deg(B)=b} and a ≥ b, the pseudo-remainder of the pseudo-division of A by B, denoted by prem(A,B) is prem ( A , B ) = rem ( lc ( B ) a − b + 1 A , B ) , {\displaystyle \operatorname {prem} (A,B)=\operatorname {rem} (\operatorname {lc} (B)^{a-b+1}A,B),} where lc(B) is the leading coefficient of B (the coefficient of Xb). The pseudo-remainder of the pseudo-division of two polynomials in Z[X] belongs always to Z[X]. A pseudo-remainder sequence is the sequence of the (pseudo) remainders ri obtained by replacing the instruction r i + 1 := rem ( r i − 1 , r i ) {\displaystyle r_{i+1}:=\operatorname {rem} (r_{i-1},r_{i})} of Euclid's algorithm by r i + 1 := prem ( r i − 1 , r i ) α , {\displaystyle r_{i+1}:={\frac {\operatorname {prem} (r_{i-1},r_{i})}{\alpha }},} where α is an element of Z that divides exactly every coefficient of the numerator. 
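The pseudo-remainder keeps all coefficients in Z, so it is easy to experiment with in Python (illustrative names; coefficient lists ordered from the constant term up). Iterating it with α = 1 on the pair above reproduces the rapidly growing remainders discussed in the next subsection:

```python
def prem(A, B):
    """Pseudo-remainder rem(lc(B)^(deg A - deg B + 1) * A, B); everything stays in Z."""
    a, b = len(A) - 1, len(B) - 1
    R = [c * B[-1] ** (a - b + 1) for c in A]
    while R and len(R) - 1 >= b:
        d = len(R) - 1 - b
        s = R[-1] // B[-1]                 # exact division, by construction
        for i, bc in enumerate(B):
            R[i + d] -= s * bc
        while R and R[-1] == 0:
            R.pop()
    return R

A = [-5, 2, 8, -3, -3, 0, 1, 0, 1]   # X^8 + X^6 - 3X^4 - 3X^3 + 8X^2 + 2X - 5
B = [21, -9, -4, 0, 5, 0, 3]         # 3X^6 + 5X^4 - 4X^2 - 9X + 21
prs = [A, B]
while len(prs[-1]) > 1:              # stop at a constant (or zero) remainder
    prs.append(prem(prs[-2], prs[-1]))
print(prs[2])    # [-9, 0, 3, 0, -15], i.e. -15X^4 + 3X^2 - 9
print(prs[-1])   # [12593338795500743100931141992187500]
```

No fractions appear at any point, at the price of coefficient growth when α = 1 is used.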
Different choices of α give different pseudo-remainder sequences, which are described in the next subsections. As the common divisors of two polynomials are not changed if the polynomials are multiplied by invertible constants (in Q), the last nonzero term in a pseudo-remainder sequence is a GCD (in Q[X]) of the input polynomials. Therefore, pseudo-remainder sequences allow computing GCDs in Q[X] without introducing fractions in Q. In some contexts, it is essential to control the sign of the leading coefficient of the pseudo-remainder. This is typically the case when computing resultants and subresultants, or for using Sturm's theorem. This control can be done either by replacing lc(B) by its absolute value in the definition of the pseudo-remainder, or by controlling the sign of α (if α divides all coefficients of a remainder, the same is true for −α). === Trivial pseudo-remainder sequence === The simplest remainder sequence to define consists of always taking α = 1. In practice, it is not interesting, as the size of the coefficients grows exponentially with the degree of the input polynomials. This appears clearly in the example of the preceding section, for which the successive pseudo-remainders are − 15 X 4 + 3 X 2 − 9 , {\displaystyle -15\,X^{4}+3\,X^{2}-9,} 15795 X 2 + 30375 X − 59535 , {\displaystyle 15795\,X^{2}+30375\,X-59535,} 1254542875143750 X − 1654608338437500 , {\displaystyle 1254542875143750\,X-1654608338437500,} 12593338795500743100931141992187500. {\displaystyle 12593338795500743100931141992187500.} The number of digits of the coefficients of the successive remainders more than doubles at each iteration of the algorithm. This is typical behavior of the trivial pseudo-remainder sequences. === Primitive pseudo-remainder sequence === The primitive pseudo-remainder sequence consists in taking for α the content of the numerator. Thus all the ri are primitive polynomials. 
The primitive pseudo-remainder sequence is the pseudo-remainder sequence that generates the smallest coefficients. However, it requires computing a number of GCDs in Z, and is therefore not sufficiently efficient to be used in practice, especially when Z is itself a polynomial ring. With the same input as in the preceding sections, the successive remainders, after division by their content, are − 5 X 4 + X 2 − 3 , {\displaystyle -5\,X^{4}+X^{2}-3,} 13 X 2 + 25 X − 49 , {\displaystyle 13\,X^{2}+25\,X-49,} 4663 X − 6150 , {\displaystyle 4663\,X-6150,} 1. {\displaystyle 1.} The small size of the coefficients hides the fact that a number of integer GCDs and divisions by the GCD have been computed. === Subresultant pseudo-remainder sequence === A subresultant sequence can also be computed with pseudo-remainders. The process consists of choosing α in such a way that every ri is a subresultant polynomial. Surprisingly, the computation of α is very easy (see below). On the other hand, the proof of correctness of the algorithm is difficult, because it must take into account all the possibilities for the difference of degrees of two consecutive remainders. The coefficients in the subresultant sequence are rarely much larger than those of the primitive pseudo-remainder sequence. As GCD computations in Z are not needed, the subresultant sequence with pseudo-remainders gives the most efficient computation. With the same input as in the preceding sections, the successive remainders are 15 X 4 − 3 X 2 + 9 , {\displaystyle 15\,X^{4}-3\,X^{2}+9,} 65 X 2 + 125 X − 245 , {\displaystyle 65\,X^{2}+125\,X-245,} 9326 X − 12300 , {\displaystyle 9326\,X-12300,} 260708. {\displaystyle 260708.} The coefficients have a reasonable size. They are obtained without any GCD computation, only exact divisions. This makes this algorithm more efficient than that of primitive pseudo-remainder sequences. The algorithm computing the subresultant sequence with pseudo-remainders is given below. 
In this algorithm, the input (a, b) is a pair of polynomials in Z[X]. The ri are the successive pseudo-remainders in Z[X], the variables i and di are non-negative integers, and the Greek letters denote elements in Z. The functions deg() and rem() denote the degree of a polynomial and the remainder of the Euclidean division. In the algorithm, this remainder is always in Z[X]. Finally, the divisions denoted / are always exact and have their result either in Z[X] or in Z.

r0 := a
r1 := b
for (i := 1; ri ≠ 0; i := i + 1) do
    di := deg(ri−1) − deg(ri)
    γi := lc(ri)
    if i = 1 then
        β1 := (−1)^(d1+1)
        ψ1 := −1
    else
        ψi := (−γi−1)^(di−1) / ψi−1^(di−1 − 1)
        βi := −γi−1 ⋅ ψi^(di)
    end if
    ri+1 := rem(γi^(di+1) ⋅ ri−1, ri) / βi
end for

Note: "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable. This algorithm computes not only the greatest common divisor (the last nonzero ri), but also all the subresultant polynomials: the remainder ri is the (deg(ri−1) − 1)-th subresultant polynomial. If deg(ri) < deg(ri−1) − 1, the deg(ri)-th subresultant polynomial is lc(ri)^(deg(ri−1)−deg(ri)−1) ⋅ ri. All the other subresultant polynomials are zero. === Sturm sequence with pseudo-remainders === One may use pseudo-remainders for constructing sequences having the same properties as Sturm sequences. This requires controlling the signs of the successive pseudo-remainders, in order to have the same signs as in the Sturm sequence. This may be done by defining a modified pseudo-remainder as follows. If deg ( A ) = a {\displaystyle \deg(A)=a} and deg ( B ) = b {\displaystyle \deg(B)=b} and a ≥ b, the modified pseudo-remainder prem2(A, B) of the pseudo-division of A by B is prem2 ( A , B ) = − rem ( | lc ( B ) | a − b + 1 A , B ) , {\displaystyle \operatorname {prem2} (A,B)=-\operatorname {rem} (\left|\operatorname {lc} (B)\right|^{a-b+1}A,B),} where |lc(B)| is the absolute value of the leading coefficient of B (the coefficient of Xb). 
For input polynomials with integer coefficients, this allows retrieval of Sturm sequences consisting of polynomials with integer coefficients. The subresultant pseudo-remainder sequence may be modified similarly, in which case the signs of the remainders coincide with those computed over the rationals. Note that the algorithm for computing the subresultant pseudo-remainder sequence given above will compute wrong subresultant polynomials if one uses − p r e m 2 ( A , B ) {\displaystyle -\mathrm {prem2} (A,B)} instead of prem ( A , B ) {\displaystyle \operatorname {prem} (A,B)} . == Modular GCD algorithm == If f and g are polynomials in F[x] for some finitely generated field F, the Euclidean Algorithm is the most natural way to compute their GCD. However, modern computer algebra systems only use it if F is finite because of a phenomenon called intermediate expression swell. Although degrees keep decreasing during the Euclidean algorithm, if F is not finite then the bit size of the polynomials can increase (sometimes dramatically) during the computations because repeated arithmetic operations in F tends to lead to larger expressions. For example, the addition of two rational numbers whose denominators are bounded by b leads to a rational number whose denominator is bounded by b2, so in the worst case, the bit size could nearly double with just one operation. To expedite the computation, take a ring D for which f and g are in D[x], and take an ideal I such that D/I is a finite ring. Then compute the GCD over this finite ring with the Euclidean Algorithm. Using reconstruction techniques (Chinese remainder theorem, rational reconstruction, etc.) one can recover the GCD of f and g from its image modulo a number of ideals I. One can prove that this works provided that one discards modular images with non-minimal degrees, and avoids ideals I modulo which a leading coefficient vanishes. 
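A minimal Python sketch of the idea, using the ideal I = (p) for a prime p (illustrative names; Python 3.8+ for the modular inverse via `pow`), computes the monic GCD of the reductions modulo p:

```python
def poly_gcd_mod_p(a, b, p):
    """Monic GCD of two integer polynomials reduced modulo a prime p
    (coefficient lists, constant term first)."""
    def trim(r):
        while r and r[-1] % p == 0:
            r.pop()
        return r
    a, b = trim([c % p for c in a]), trim([c % p for c in b])
    while b:
        r = a[:]
        while r and len(r) >= len(b):        # Euclidean division mod p
            d = len(r) - len(b)
            s = r[-1] * pow(b[-1], -1, p) % p
            for i, bc in enumerate(b):
                r[i + d] = (r[i + d] - s * bc) % p
            trim(r)
        a, b = b, r
    inv = pow(a[-1], -1, p)                  # make the result monic
    return [c * inv % p for c in a]

# The GCD of x^2 + 7x + 6 and x^2 - 5x - 6 is x + 1; modulo 7 its image is again x + 1.
print(poly_gcd_mod_p([6, 7, 1], [-6, -5, 1], 7))   # [1, 1]
```

In a full modular algorithm, such images for several primes would then be combined by the Chinese remainder theorem and rational reconstruction; this sketch shows only the per-prime step.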
Suppose F = Q ( 3 ) {\displaystyle F=\mathbb {Q} ({\sqrt {3}})} , D = Z [ 3 ] {\displaystyle D=\mathbb {Z} [{\sqrt {3}}]} , f = 3 x 3 − 5 x 2 + 4 x + 9 {\displaystyle f={\sqrt {3}}x^{3}-5x^{2}+4x+9} and g = x 4 + 4 x 2 + 3 3 x − 6 {\displaystyle g=x^{4}+4x^{2}+3{\sqrt {3}}x-6} . If we take I = ( 2 ) {\displaystyle I=(2)} then D / I {\displaystyle D/I} is a finite ring (not a field since I {\displaystyle I} is not maximal in D {\displaystyle D} ). The Euclidean algorithm applied to the images of f , g {\displaystyle f,g} in ( D / I ) [ x ] {\displaystyle (D/I)[x]} succeeds and returns 1. This implies that the GCD of f , g {\displaystyle f,g} in F [ x ] {\displaystyle F[x]} must be 1 as well. Note that this example could easily be handled by any method because the degrees were too small for expression swell to occur, but it illustrates that if two polynomials have GCD 1, then the modular algorithm is likely to terminate after a single ideal I {\displaystyle I} . == See also == List of polynomial topics Multivariate division algorithm == Notes == == References == === Citations === === Bibliography === |
Wikipedia:Polynomial identity testing#0 | In mathematics, polynomial identity testing (PIT) is the problem of efficiently determining whether two multivariate polynomials are identical. More formally, a PIT algorithm is given an arithmetic circuit that computes a polynomial p in a field, and decides whether p is the zero polynomial. Determining the computational complexity required for polynomial identity testing, in particular finding deterministic algorithms for PIT, is one of the most important open problems in algebraic complexity theory. == Description == The question "Does ( x + y ) ( x − y ) {\displaystyle (x+y)(x-y)} equal x 2 − y 2 ? {\displaystyle x^{2}-y^{2}\,?} " is a question about whether two polynomials are identical. As with any polynomial identity testing question, it can be trivially transformed into the question "Is a certain polynomial equal to 0?"; in this case we can ask "Does ( x + y ) ( x − y ) − ( x 2 − y 2 ) = 0 {\displaystyle (x+y)(x-y)-(x^{2}-y^{2})=0} "? If we are given the polynomial as an algebraic expression (rather than as a black-box), we can confirm that the equality holds through brute-force multiplication and addition, but the time complexity of the brute-force approach grows as ( n + d d ) {\displaystyle {\tbinom {n+d}{d}}} , where n {\displaystyle n} is the number of variables (here, n = 2 {\displaystyle n=2} : x {\displaystyle x} is the first and y {\displaystyle y} is the second), and d {\displaystyle d} is the degree of the polynomial (here, d = 2 {\displaystyle d=2} ). If n {\displaystyle n} and d {\displaystyle d} are both large, ( n + d d ) {\displaystyle {\tbinom {n+d}{d}}} grows exponentially. PIT concerns whether a polynomial is identical to the zero polynomial, rather than whether the function implemented by the polynomial always evaluates to zero in the given domain. For example, the field with two elements, GF(2), contains only the elements 0 and 1. 
In GF(2), x 2 − x {\displaystyle x^{2}-x} always evaluates to zero; despite this, PIT does not consider x 2 − x {\displaystyle x^{2}-x} to be equal to the zero polynomial. Determining the computational complexity required for polynomial identity testing is one of the most important open problems in the mathematical subfield of algebraic complexity theory. The study of PIT is a building block for many other areas of computational complexity, such as the proof that IP=PSPACE. In addition, PIT has applications to Tutte matrices and also to primality testing, where PIT techniques led to the AKS primality test, the first deterministic (though impractical) polynomial time algorithm for primality testing. == Formal problem statement == Given an arithmetic circuit that computes a polynomial in a field, determine whether the polynomial is equal to the zero polynomial (that is, the polynomial with no nonzero terms). == Solutions == In some cases, the specification of the arithmetic circuit is not given to the PIT solver, and the PIT solver can only input values into a "black box" that implements the circuit, and then analyze the output. Note that the solutions below assume that any operation (such as multiplication) in the given field takes constant time; further, all black-box algorithms below assume the size of the field is larger than the degree of the polynomial. The Schwartz–Zippel algorithm provides a practical probabilistic solution, by simply randomly testing inputs and checking whether the output is zero. It was the first randomized polynomial time PIT algorithm to be proven correct. The larger the domain the inputs are drawn from, the less likely Schwartz–Zippel is to fail. If random bits are in short supply, the Chen–Kao algorithm (over the rationals) or the Lewin–Vadhan algorithm (over any field) use fewer random bits at the cost of increased runtime. A sparse PIT instance is one whose polynomial has at most m {\displaystyle m} nonzero monomial terms.
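The Schwartz–Zippel test described above can be sketched as follows, with the black-box polynomial given as a Python callable; the prime modulus, trial count, and helper name are illustrative choices, not part of the algorithm's specification.

```python
import random

# Schwartz-Zippel sketch: if p is a nonzero polynomial of total degree d,
# a uniformly random point from GF(q)^n is a nonzero witness with
# probability at least 1 - d/q.
def probably_zero(poly, n_vars, trials=20, q=2**61 - 1):
    for _ in range(trials):
        point = [random.randrange(q) for _ in range(n_vars)]
        if poly(*point) % q != 0:
            return False      # a nonzero evaluation: definitely nonzero
    return True               # zero at all sampled points: zero w.h.p.

# (x + y)(x - y) - (x^2 - y^2) is identically zero ...
assert probably_zero(lambda x, y: (x + y) * (x - y) - (x*x - y*y), 2)
# ... while (x + y)^2 - (x^2 + y^2) = 2xy is not.
assert not probably_zero(lambda x, y: (x + y)**2 - (x*x + y*y), 2)
```

A "True" answer is only probabilistic, but a single nonzero evaluation certifies that the polynomial is nonzero, which is why the test is one-sided.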
A sparse PIT can be solved deterministically in time polynomial in the size of the circuit and the number m {\displaystyle m} of monomials. A low degree PIT has an upper bound on the degree of the polynomial. Any low degree PIT problem can be reduced, in subexponential time in the size of the circuit, to a PIT problem for depth-four circuits; therefore, PIT for circuits of depth four (and below) is intensely studied. == See also == Applications of Schwartz–Zippel lemma == External links == Lecture notes on "Polynomial Identity Testing by the Schwartz-Zippel Lemma" Polynomial Identity Testing by Michael Forbes - MIT on YouTube Prize winner for Polynomial Identity Testing == References == |
Wikipedia:Polynomial long division#0 | In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method). Polynomial long division is an algorithm that implements the Euclidean division of polynomials, which starting from two polynomials A (the dividend) and B (the divisor) produces, if B is not zero, a quotient Q and a remainder R such that A = BQ + R, and either R = 0 or the degree of R is lower than the degree of B. These conditions uniquely define Q and R, which means that Q and R do not depend on the method used to compute them. The result R = 0 occurs if and only if the polynomial A has B as a factor. Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by (x – r). == Example == === Polynomial long division === Find the quotient and the remainder of the division of ( x 3 − 2 x 2 − 4 ) {\displaystyle (x^{3}-2x^{2}-4)} , the dividend, by ( x − 3 ) {\displaystyle (x-3)} , the divisor. The dividend is first rewritten like this: x 3 − 2 x 2 + 0 x − 4. {\displaystyle x^{3}-2x^{2}+0x-4.} The quotient and remainder can then be determined as follows: Divide the first term of the dividend by the highest term of the divisor (meaning the one with the highest power of x, which in this case is x). Place the result above the bar (x3 ÷ x = x2). 
x − 3 ) x 3 − 2 x 2 x − 3 ) x 3 − 2 x 2 + 0 x − 4 ¯ {\displaystyle {\begin{array}{l}{\color {White}x-3\ )\ x^{3}-2}x^{2}\\x-3\ {\overline {)\ x^{3}-2x^{2}+0x-4}}\end{array}}} Multiply the divisor by the result just obtained (the first term of the eventual quotient). Write the result under the first two terms of the dividend (x2 · (x − 3) = x3 − 3x2). x − 3 ) x 3 − 2 x 2 x − 3 ) x 3 − 2 x 2 + 0 x − 4 ¯ x − 3 ) x 3 − 3 x 2 {\displaystyle {\begin{array}{l}{\color {White}x-3\ )\ x^{3}-2}x^{2}\\x-3\ {\overline {)\ x^{3}-2x^{2}+0x-4}}\\{\color {White}x-3\ )\ }x^{3}-3x^{2}\end{array}}} Subtract the product just obtained from the appropriate terms of the original dividend (being careful that subtracting something having a minus sign is equivalent to adding something having a plus sign), and write the result underneath (x3 − 2x2) − (x3 − 3x2) = −2x2 + 3x2 = x2 Then, "bring down" the next term from the dividend. x − 3 ) x 3 − 2 x 2 x − 3 ) x 3 − 2 x 2 + 0 x − 4 ¯ x − 3 ) x 3 − 3 x 2 _ x − 3 ) 0 x 3 + x 2 + 0 x {\displaystyle {\begin{array}{l}{\color {White}x-3\ )\ x^{3}-2}x^{2}\\x-3\ {\overline {)\ x^{3}-2x^{2}+0x-4}}\\{\color {White}x-3\ )\ }{\underline {x^{3}-3x^{2}}}\\{\color {White}x-3\ )\ 0x^{3}}+{\color {White}}x^{2}+0x\end{array}}} Repeat the previous three steps, except this time use the two terms that have just been written as the dividend. x 2 + 1 x + 3 x − 3 ) x 3 − 2 x 2 + 0 x − 4 ¯ x 3 − 3 x 2 + 0 x − 4 _ + x 2 + 0 x − 4 + x 2 − 3 x − 4 _ + 3 x − 4 {\displaystyle {\begin{array}{r}x^{2}+{\color {White}1}x{\color {White}{}+3}\\x-3\ {\overline {)\ x^{3}-2x^{2}+0x-4}}\\{\underline {x^{3}-3x^{2}{\color {White}{}+0x-4}}}\\+x^{2}+0x{\color {White}{}-4}\\{\underline {+x^{2}-3x{\color {White}{}-4}}}\\+3x-4\\\end{array}}} Repeat step 4. This time, there is nothing to "bring down". 
x 2 + 1 x + 3 x − 3 ) x 3 − 2 x 2 + 0 x − 4 ¯ x 3 − 3 x 2 + 0 x − 4 _ + x 2 + 0 x − 4 + x 2 − 3 x − 4 _ + 3 x − 4 + 3 x − 9 _ + 5 {\displaystyle {\begin{array}{r}x^{2}+{\color {White}1}x+3\\x-3\ {\overline {)\ x^{3}-2x^{2}+0x-4}}\\{\underline {x^{3}-3x^{2}{\color {White}{}+0x-4}}}\\+x^{2}+0x{\color {White}{}-4}\\{\underline {+x^{2}-3x{\color {White}{}-4}}}\\+3x-4\\{\underline {+3x-9}}\\+5\end{array}}} The polynomial above the bar is the quotient q(x), and the number left over (5) is the remainder r(x). x 3 − 2 x 2 − 4 = ( x − 3 ) ( x 2 + x + 3 ) ⏟ q ( x ) + 5 ⏟ r ( x ) {\displaystyle {x^{3}-2x^{2}-4}=(x-3)\,\underbrace {(x^{2}+x+3)} _{q(x)}+\underbrace {5} _{r(x)}} The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10. === Polynomial short division === Blomqvist's method is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered. The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The quotient is to be written below the bar from left to right. x 3 − 2 x 2 + 0 x − 4 ÷ x − 3 _ {\displaystyle {\begin{matrix}\qquad \qquad x^{3}-2x^{2}+{0x}-4\\{\underline {\div \quad \qquad \qquad \qquad \qquad x-3}}\end{matrix}}} Divide the first term of the dividend by the highest term of the divisor (x3 ÷ x = x2). Place the result below the bar. x3 has been divided leaving no remainder, and can therefore be marked as used by crossing it out. The result x2 is then multiplied by the second term in the divisor −3 = −3x2. Determine the partial remainder by subtracting −2x2 − (−3x2) = x2. Mark −2x2 as used and place the new remainder x2 above it. 
x 2 x 3 + − 2 x 2 + 0 x − 4 ÷ x − 3 _ x 2 {\displaystyle {\begin{matrix}\qquad x^{2}\\\qquad \quad {\bcancel {x^{3}}}+{\bcancel {-2x^{2}}}+{0x}-4\\{\underline {\div \qquad \qquad \qquad \qquad \qquad x-3}}\\x^{2}\qquad \qquad \end{matrix}}} Divide the highest term of the remainder by the highest term of the divisor (x2 ÷ x = x). Place the result (+x) below the bar. x2 has been divided leaving no remainder, and can therefore be marked as used. The result x is then multiplied by the second term in the divisor −3 = −3x. Determine the partial remainder by subtracting 0x − (−3x) = 3x. Mark 0x as used and place the new remainder 3x above it. x 2 3 x x 3 + − 2 x 2 + 0 x − 4 ÷ x − 3 _ x 2 + x {\displaystyle {\begin{matrix}\qquad \qquad \quad {\bcancel {x^{2}}}\quad 3x\\\qquad \quad {\bcancel {x^{3}}}+{\bcancel {-2x^{2}}}+{\bcancel {0x}}-4\\{\underline {\div \qquad \qquad \qquad \qquad \qquad x-3}}\\x^{2}+x\qquad \end{matrix}}} Divide the highest term of the remainder by the highest term of the divisor (3x ÷ x = 3). Place the result (+3) below the bar. 3x has been divided leaving no remainder, and can therefore be marked as used. The result 3 is then multiplied by the second term in the divisor −3 = −9. Determine the partial remainder by subtracting −4 − (−9) = 5. Mark −4 as used and place the new remainder 5 above it. x 2 3 x 5 x 3 + − 2 x 2 + 0 x − 4 ÷ x − 3 _ x 2 + x + 3 {\displaystyle {\begin{matrix}\quad \qquad \qquad \qquad {\bcancel {x^{2}}}\quad {\bcancel {3x}}\quad 5\\\qquad \quad {\bcancel {x^{3}}}+{\bcancel {-2x^{2}}}+{\bcancel {0x}}{\bcancel {-4}}\\{\underline {\div \qquad \qquad \qquad \qquad \qquad x-3}}\\x^{2}+x+3\qquad \end{matrix}}} The polynomial below the bar is the quotient q(x), and the number left over (5) is the remainder r(x). 
== Pseudocode == The algorithm can be represented in pseudocode as follows, where +, −, and × represent polynomial arithmetic, and lead(r) / lead(d) represents the polynomial obtained by dividing the two leading terms:

function n / d is
    require d ≠ 0
    q ← 0
    r ← n                         // At each step n = d × q + r
    while r ≠ 0 and degree(r) ≥ degree(d) do
        t ← lead(r) / lead(d)     // Divide the leading terms
        q ← q + t
        r ← r − t × d
    return (q, r)

This works equally well when degree(n) < degree(d); in that case the result is just the trivial (0, n). This algorithm describes exactly the above paper and pencil method: d is written on the left of the ")"; q is written, term after term, above the horizontal line, the last term being the value of t; the region under the horizontal line is used to compute and write down the successive values of r. == Euclidean division == For every pair of polynomials (A, B) such that B ≠ 0, polynomial division provides a quotient Q and a remainder R such that A = B Q + R , {\displaystyle A=BQ+R,} and either R=0 or degree(R) < degree(B). Moreover (Q, R) is the unique pair of polynomials having this property. The process of getting the uniquely defined polynomials Q and R from A and B is called Euclidean division (sometimes division transformation). Polynomial long division is thus an algorithm for Euclidean division. == Applications == === Factoring polynomials === Sometimes one or more roots of a polynomial are known, perhaps having been found using the rational root theorem. If one root r of a polynomial P(x) of degree n is known then polynomial long division can be used to factor P(x) into the form (x − r)Q(x) where Q(x) is a polynomial of degree n − 1. Q(x) is simply the quotient obtained from the division process; since r is known to be a root of P(x), it is known that the remainder must be zero. Likewise, if several roots r, s, . . . of P(x) are known, a linear factor (x − r) can be divided out to obtain Q(x), and then (x − s) can be divided out of Q(x), etc.
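The pseudocode can be transcribed into runnable code; here is a sketch in Python over the rationals (coefficient lists from highest degree first; the helper names are illustrative, not standard library functions):

```python
from fractions import Fraction

# Direct transcription of the long-division pseudocode over Q.
def degree(p):
    return len(p) - 1

def poly_sub(a, b):
    # Pad on the left to equal length, subtract, then drop leading zeros.
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + [Fraction(c) for c in a]
    b = [Fraction(0)] * (n - len(b)) + [Fraction(c) for c in b]
    diff = [x - y for x, y in zip(a, b)]
    while len(diff) > 1 and diff[0] == 0:
        diff.pop(0)
    return diff

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * y
    return out

def divmod_poly(n, d):
    assert any(c != 0 for c in d), "require d != 0"
    q, r = [Fraction(0)], [Fraction(c) for c in n]
    while any(c != 0 for c in r) and degree(r) >= degree(d):
        # t <- lead(r) / lead(d): a monomial of degree deg(r) - deg(d)
        t = [r[0] / d[0]] + [Fraction(0)] * (degree(r) - degree(d))
        q = poly_sub(q, [-c for c in t])     # q <- q + t
        r = poly_sub(r, poly_mul(t, d))      # r <- r - t * d
    return q, r

# The worked example above: (x^3 - 2x^2 - 4) / (x - 3).
q, r = divmod_poly([1, -2, 0, -4], [1, -3])
print([int(c) for c in q], [int(c) for c in r])   # -> [1, 1, 3] [5]
```

The output reproduces the worked example: quotient x² + x + 3 and remainder 5.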
Alternatively, the quadratic factor ( x − r ) ( x − s ) = x 2 − ( r + s ) x + r s {\displaystyle (x-r)(x-s)=x^{2}-(r{+}s)x+rs} can be divided out of P(x) to obtain a quotient of degree n − 2. This method is especially useful for cubic polynomials, and sometimes all the roots of a higher-degree polynomial can be obtained. For example, if the rational root theorem produces a single (rational) root of a quintic polynomial, it can be factored out to obtain a quartic (fourth degree) quotient; the explicit formula for the roots of a quartic polynomial can then be used to find the other four roots of the quintic. There is, however, no general way to solve a quintic by purely algebraic methods; see the Abel–Ruffini theorem. === Finding tangents to polynomial functions === Polynomial long division can be used to find the equation of the line that is tangent to the graph of the function defined by the polynomial P(x) at a particular point x = r. If R(x) is the remainder of the division of P(x) by (x − r)², then the equation of the tangent line at x = r to the graph of the function y = P(x) is y = R(x), regardless of whether or not r is a root of the polynomial.
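This tangent-line method can be checked numerically; a sketch using NumPy's polydiv (assuming NumPy is available; coefficients are listed from highest degree first):

```python
import numpy as np

# Tangent to y = x^3 - 12x^2 - 42 at x = 1: the remainder of dividing
# P(x) by (x - 1)^2 = x^2 - 2x + 1 is the tangent line's polynomial.
_, R = np.polydiv([1, -12, 0, -42], [1, -2, 1])
assert np.allclose(R, [-21, -32])     # tangent line: y = -21x - 32
```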
==== Example ==== Find the equation of the line that is tangent to the following curve y = ( x 3 − 12 x 2 − 42 ) {\displaystyle y=(x^{3}-12x^{2}-42)} at: x = 1 {\displaystyle x=1} Begin by dividing the polynomial by: ( x − 1 ) 2 = ( x 2 − 2 x + 1 ) {\displaystyle (x-1)^{2}=(x^{2}-2x+1)} x − 10 x 2 − 2 x + 1 ) x 3 − 12 x 2 + 0 x − 42 ¯ x 3 − 0 2 x 2 + 1 x _ − 42 − 10 x 2 − 01 x − 42 − 10 x 2 + 20 x − 10 _ − 21 x − 32 {\displaystyle {\begin{array}{r}x-10\\x^{2}-2x+1\ {\overline {)\ x^{3}-12x^{2}+0x-42}}\\{\underline {x^{3}-{\color {White}0}2x^{2}+{\color {White}1}x}}{\color {White}{}-42}\\-10x^{2}-{\color {White}01}x-42\\{\underline {-10x^{2}+20x-10}}\\-21x-32\end{array}}} The tangent line is y = ( − 21 x − 32 ) {\displaystyle y=(-21x-32)} === Cyclic redundancy check === A cyclic redundancy check uses the remainder of polynomial division to detect errors in transmitted messages. == See also == Polynomial remainder theorem Synthetic division, a more concise method of performing Euclidean polynomial division Ruffini's rule Euclidean domain Gröbner basis Greatest common divisor of two polynomials == References == |
Wikipedia:Polynomial mapping#0 | In algebra, a polynomial map or polynomial mapping P : V → W {\displaystyle P:V\to W} between vector spaces over an infinite field k is a polynomial in linear functionals with coefficients in k; i.e., it can be written as P ( v ) = ∑ i 1 , … , i n λ i 1 ( v ) ⋯ λ i n ( v ) w i 1 , … , i n {\displaystyle P(v)=\sum _{i_{1},\dots ,i_{n}}\lambda _{i_{1}}(v)\cdots \lambda _{i_{n}}(v)w_{i_{1},\dots ,i_{n}}} where the λ i j : V → k {\displaystyle \lambda _{i_{j}}:V\to k} are linear functionals and the w i 1 , … , i n {\displaystyle w_{i_{1},\dots ,i_{n}}} are vectors in W. For example, if W = k m {\displaystyle W=k^{m}} , then a polynomial mapping can be expressed as P ( v ) = ( P 1 ( v ) , … , P m ( v ) ) {\displaystyle P(v)=(P_{1}(v),\dots ,P_{m}(v))} where the P i {\displaystyle P_{i}} are (scalar-valued) polynomial functions on V. (The abstract definition has the advantage that the map is manifestly free of a choice of basis.) When V, W are finite-dimensional vector spaces and are viewed as algebraic varieties, then a polynomial mapping is precisely a morphism of algebraic varieties. One fundamental outstanding question regarding polynomial mappings is the Jacobian conjecture, which asks whether a polynomial mapping whose Jacobian determinant is a nonzero constant is necessarily invertible, with a polynomial inverse. == See also == Polynomial functor == References == Claudio Procesi (2007) Lie Groups: An Approach through Invariants and Representations, Springer, ISBN 9780387260402. |
Wikipedia:Polynomial transformation#0 | In mathematics, a polynomial transformation consists of computing the polynomial whose roots are a given function of the roots of a polynomial. Polynomial transformations such as Tschirnhaus transformations are often used to simplify the solution of algebraic equations. == Simple examples == === Translating the roots === Let P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}} be a polynomial, and α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} be its complex roots (not necessarily distinct). For any constant c, the polynomial whose roots are α 1 + c , … , α n + c {\displaystyle \alpha _{1}+c,\ldots ,\alpha _{n}+c} is Q ( y ) = P ( y − c ) = a 0 ( y − c ) n + a 1 ( y − c ) n − 1 + ⋯ + a n . {\displaystyle Q(y)=P(y-c)=a_{0}(y-c)^{n}+a_{1}(y-c)^{n-1}+\cdots +a_{n}.} If the coefficients of P are integers and the constant c = p q {\displaystyle c={\frac {p}{q}}} is a rational number, the coefficients of Q may not be integers, but the polynomial q n Q {\displaystyle q^{n}Q} has integer coefficients and has the same roots as Q. A special case is when c = a 1 n a 0 . {\displaystyle c={\frac {a_{1}}{na_{0}}}.} The resulting polynomial Q does not have any term in yn − 1. === Reciprocals of the roots === Let P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}} be a polynomial. The polynomial whose roots are the reciprocals of the roots of P is its reciprocal polynomial Q ( y ) = y n P ( 1 y ) = a n y n + a n − 1 y n − 1 + ⋯ + a 0 . {\displaystyle Q(y)=y^{n}P\left({\frac {1}{y}}\right)=a_{n}y^{n}+a_{n-1}y^{n-1}+\cdots +a_{0}.} === Scaling the roots === Let P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}} be a polynomial, and c be a non-zero constant. A polynomial whose roots are the product by c of the roots of P is Q ( y ) = c n P ( y c ) = a 0 y n + a 1 c y n − 1 + ⋯ + a n c n .
{\displaystyle Q(y)=c^{n}P\left({\frac {y}{c}}\right)=a_{0}y^{n}+a_{1}cy^{n-1}+\cdots +a_{n}c^{n}.} The factor cn appears here because, if c and the coefficients of P are integers or belong to some integral domain, the same is true for the coefficients of Q. In the special case where c = a 0 {\displaystyle c=a_{0}} , all coefficients of Q are multiples of c, and Q c {\displaystyle {\frac {Q}{c}}} is a monic polynomial whose coefficients belong to any integral domain containing c and the coefficients of P. This polynomial transformation is often used to reduce questions on algebraic numbers to questions on algebraic integers. Combining this with a translation of the roots by a 1 n a 0 {\displaystyle {\frac {a_{1}}{na_{0}}}} allows one to reduce any question on the roots of a polynomial, such as root-finding, to a similar question on a simpler polynomial, which is monic and does not have a term of degree n − 1. For examples of this, see Cubic function § Reduction to a depressed cubic or Quartic function § Converting to a depressed quartic. == Transformation by a rational function == All preceding examples are polynomial transformations by a rational function, also called Tschirnhaus transformations. Let f ( x ) = g ( x ) h ( x ) {\displaystyle f(x)={\frac {g(x)}{h(x)}}} be a rational function, where g and h are coprime polynomials. The polynomial transformation of a polynomial P by f is the polynomial Q (defined up to the product by a non-zero constant) whose roots are the images by f of the roots of P. Such a polynomial transformation may be computed as a resultant. In fact, the roots of the desired polynomial Q are exactly the complex numbers y such that there is a complex number x such that one has simultaneously (if the coefficients of P, g and h are not real or complex numbers, "complex number" has to be replaced by "element of an algebraically closed field containing the coefficients of the input polynomials") P ( x ) = 0 y h ( x ) − g ( x ) = 0 .
{\displaystyle {\begin{aligned}P(x)&=0\\y\,h(x)-g(x)&=0\,.\end{aligned}}} This is exactly the defining property of the resultant Res x ( y h ( x ) − g ( x ) , P ( x ) ) . {\displaystyle \operatorname {Res} _{x}(y\,h(x)-g(x),P(x)).} This is generally difficult to compute by hand. However, as most computer algebra systems have a built-in function to compute resultants, it is straightforward to compute it with a computer. === Properties === If the polynomial P is irreducible, then either the resulting polynomial Q is irreducible, or it is a power of an irreducible polynomial. Let α {\displaystyle \alpha } be a root of P and consider L, the field extension generated by α {\displaystyle \alpha } . The former case means that f ( α ) {\displaystyle f(\alpha )} is a primitive element of L, which has Q as minimal polynomial. In the latter case, f ( α ) {\displaystyle f(\alpha )} belongs to a subfield of L and its minimal polynomial is the irreducible polynomial of which Q is a power. == Transformation for equation-solving == Polynomial transformations have been applied to the simplification of polynomial equations for solution, where possible, by radicals. Descartes introduced the transformation of a polynomial of degree d which eliminates the term of degree d − 1 by a translation of the roots. Such a polynomial is termed depressed. This already suffices to solve the quadratic by square roots. In the case of the cubic, Tschirnhaus transformations replace the variable by a quadratic function, thereby making it possible to eliminate two terms, and so can be used to eliminate the linear term in a depressed cubic to achieve the solution of the cubic by a combination of square and cube roots. The Bring–Jerrard transformation, which is quartic in the variable, brings a quintic into Bring–Jerrard normal form with terms of degrees 5, 1, and 0. == See also == Tschirnhaus transformation Adamchik transformation == References == Adamchik, Victor S.; Jeffrey, David J. (2003).
"Polynomial transformations of Tschirnhaus, Bring and Jerrard" (PDF). SIGSAM Bull. 37 (3): 90–94. Zbl 1055.65063. Archived from the original (PDF) on 2009-02-26. |
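The resultant-based transformation described in § Transformation by a rational function is indeed straightforward with a computer algebra system. A sketch using SymPy (assuming SymPy is available; the example polynomials P = x² − 2 and f(x) = x² are illustrative assumptions):

```python
from sympy import symbols, resultant, expand

x, y = symbols('x y')

# Transform P(x) = x^2 - 2 (roots +-sqrt(2)) by f(x) = x^2, i.e. g = x^2,
# h = 1. The transformed polynomial has roots f(+-sqrt(2)) = 2, 2, so it
# should be (y - 2)^2 up to a constant factor.
Q = resultant(y - x**2, x**2 - 2, x)
assert expand(Q - (y - 2)**2) == 0
```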
Wikipedia:Pompeiu derivative#0 | In mathematical analysis, a Pompeiu derivative is a real-valued function of one real variable that is the derivative of an everywhere differentiable function and that vanishes on a dense set. In particular, a Pompeiu derivative is discontinuous at every point where it is not 0. Whether such functions can exist without being identically zero was a problem that arose in the context of early-1900s research on functional differentiability and integrability. The question was affirmatively answered by Dimitrie Pompeiu by constructing an explicit example; these functions are therefore named after him. == Pompeiu's construction == Pompeiu's construction is as follows. Let x 3 {\displaystyle {\sqrt[{3}]{x}}} denote the real cube root of the real number x. Let { q j } j ∈ N {\displaystyle \{q_{j}\}_{j\in \mathbb {N} }} be an enumeration of the rational numbers in the unit interval [0, 1]. Let { a j } j ∈ N {\displaystyle \{a_{j}\}_{j\in \mathbb {N} }} be positive real numbers with ∑ j a j < ∞ {\displaystyle \sum _{j}a_{j}<\infty } . Define g : [ 0 , 1 ] → R {\displaystyle g\colon [0,1]\rightarrow \mathbb {R} } by g ( x ) := a 0 + ∑ j = 1 ∞ a j x − q j 3 . {\displaystyle g(x):=a_{0}+\sum _{j=1}^{\infty }\,a_{j}{\sqrt[{3}]{x-q_{j}}}.} For each x in [0, 1], each term of the series is less than or equal to aj in absolute value, so the series converges uniformly to a continuous, strictly increasing function g(x), by the Weierstrass M-test. Moreover, it turns out that the function g is differentiable, with g ′ ( x ) := 1 3 ∑ j = 1 ∞ a j ( x − q j ) 2 3 > 0 , {\displaystyle g'(x):={\frac {1}{3}}\sum _{j=1}^{\infty }{\frac {a_{j}}{\sqrt[{3}]{(x-q_{j})^{2}}}}>0,} at every point where the sum is finite; also, at all other points, in particular, at each of the qj, one has g′(x) = +∞.
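The construction can be illustrated numerically with a truncated series. The choices below (a_j = 2^(-j), a naive enumeration of the rationals, a_0 = 0, 200 terms) are illustrative assumptions; the sketch only checks the strict monotonicity claimed above, since the pathological behaviour of g′ is invisible at finite truncation.

```python
import math
from fractions import Fraction

# Truncated version of Pompeiu's g(x) = sum_j a_j * cbrt(x - q_j).
def rationals(n):
    # Naive enumeration of the rationals in [0, 1], without repetitions.
    out, seen, q = [], set(), 1
    while len(out) < n:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(float(r))
        q += 1
    return out[:n]

QS = rationals(200)

def cbrt(t):                     # real cube root
    return math.copysign(abs(t) ** (1 / 3), t)

def g(x):
    return sum(2.0 ** -(j + 1) * cbrt(x - QS[j]) for j in range(len(QS)))

xs = [i / 10 for i in range(11)]
vals = [g(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))   # strictly increasing
```

Each summand is strictly increasing in x, so every truncation is strictly increasing, in line with the Weierstrass M-test argument for the full series.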
Since the image of g is a closed bounded interval with left endpoint g ( 0 ) = a 0 − ∑ j = 1 ∞ a j q j 3 , {\displaystyle g(0)=a_{0}-\sum _{j=1}^{\infty }\,a_{j}{\sqrt[{3}]{q_{j}}},} up to the choice of a 0 {\displaystyle a_{0}} , we can assume g ( 0 ) = 0 {\displaystyle g(0)=0} and up to the choice of a multiplicative factor we can assume that g maps the interval [0, 1] onto itself. Since g is strictly increasing it is injective, and hence a homeomorphism; and by the theorem of differentiation of the inverse function, its inverse f := g−1 has a finite derivative at every point, which vanishes at least at the points { g ( q j ) } j ∈ N . {\displaystyle \{g(q_{j})\}_{j\in \mathbb {N} }.} These form a dense subset of [0, 1] (in fact, the derivative vanishes at many other points; see below). == Properties == It is known that the zero-set of a derivative of any everywhere differentiable function (and more generally, of any Baire class one function) is a Gδ subset of the real line. By definition, for any Pompeiu function, this set is a dense Gδ set; therefore it is a residual set. In particular, it possesses uncountably many points. A linear combination af(x) + bg(x) of Pompeiu functions is a derivative, and vanishes on the set { f = 0} ∩ {g = 0}, which is a dense G δ {\displaystyle G_{\delta }} set by the Baire category theorem. Thus, Pompeiu functions form a vector space of functions. A limit function of a uniformly convergent sequence of Pompeiu derivatives is a Pompeiu derivative. Indeed, it is a derivative, due to the theorem of limit under the sign of derivative. Moreover, it vanishes on the intersection of the zero sets of the functions of the sequence: since these are dense Gδ sets, the zero set of the limit function is also dense. As a consequence, the class E of all bounded Pompeiu derivatives on an interval [a, b] is a closed linear subspace of the Banach space of all bounded functions under the uniform distance (hence, it is a Banach space).
Pompeiu's above construction of a positive function is a rather peculiar example of a Pompeiu derivative: a theorem of Weil states that generically a Pompeiu derivative assumes both positive and negative values in dense sets, in the precise sense that such functions constitute a residual set of the Banach space E. == References == Pompeiu, Dimitrie (1907). "Sur les fonctions dérivées". Mathematische Annalen (in French). 63 (3): 326–332. doi:10.1007/BF01449201. MR 1511410. Bruckner, Andrew M. (1994). Differentiation of Real Functions. CRM Monograph Series. Montreal. |
Wikipedia:Pompeiu problem#0 | In mathematics, the Pompeiu problem is a conjecture in integral geometry, named for Dimitrie Pompeiu, who posed the problem in 1929, as follows. Suppose f is a nonzero continuous function defined on a Euclidean space, and K is a simply connected Lipschitz domain, so that the integral of f vanishes on every congruent copy of K. Then the domain is a ball. A special case is Schiffer's conjecture. == References == Pompeiu, Dimitrie (1929), "Sur certains systèmes d'équations linéaires et sur une propriété intégrale des fonctions de plusieurs variables", Comptes Rendus de l'Académie des Sciences, Série I, 188: 1138–1139 Ciatti, Paolo (2008), Topics in mathematical analysis, Series on analysis, applications and computation, vol. 3, World Scientific, ISBN 978-981-281-105-9 == External links == Pompeiu problem at Department of Geometry, Bolyai Institute, University of Szeged, Hungary Pompeiu problem at SpringerLink encyclopaedia of mathematics The Pompeiu problem, Schiffer's conjecture, |
Wikipedia:Pontryagin duality#0 | In mathematics, Pontryagin duality is a duality between locally compact abelian groups that allows generalizing the Fourier transform to all such groups, which include the circle group (the multiplicative group of complex numbers of modulus one), the finite abelian groups (with the discrete topology), the additive group of the integers (also with the discrete topology), the real numbers, and every finite-dimensional vector space over the reals or a p-adic field. The Pontryagin dual of a locally compact abelian group is the locally compact abelian topological group consisting of the continuous group homomorphisms from the group to the circle group, with the operation of pointwise multiplication and the topology of uniform convergence on compact sets. The Pontryagin duality theorem establishes Pontryagin duality by stating that any locally compact abelian group is naturally isomorphic with its bidual (the dual of its dual). The Fourier inversion theorem is a special case of this theorem. The subject is named after Lev Pontryagin, who laid down the foundations for the theory of locally compact abelian groups and their duality during his early mathematical works in 1934. Pontryagin's treatment relied on the groups being second-countable and either compact or discrete. This was improved to cover the general locally compact abelian groups by Egbert van Kampen in 1935 and André Weil in 1940.
== Introduction == Pontryagin duality places in a unified context a number of observations about functions on the real line or on finite abelian groups: Suitably regular complex-valued periodic functions on the real line have Fourier series and these functions can be recovered from their Fourier series; Suitably regular complex-valued functions on the real line have Fourier transforms that are also functions on the real line and, just as for periodic functions, these functions can be recovered from their Fourier transforms; and Complex-valued functions on a finite abelian group have discrete Fourier transforms, which are functions on the dual group, which is a (non-canonically) isomorphic group. Moreover, any function on a finite abelian group can be recovered from its discrete Fourier transform. The theory, introduced by Lev Pontryagin and combined with the Haar measure introduced by John von Neumann, André Weil and others depends on the theory of the dual group of a locally compact abelian group. It is analogous to the dual vector space of a vector space: a finite-dimensional vector space V {\displaystyle V} and its dual vector space V ∗ {\displaystyle V^{*}} are not naturally isomorphic, but the endomorphism algebra (matrix algebra) of one is isomorphic to the opposite of the endomorphism algebra of the other: End ( V ) ≅ End ( V ∗ ) op , {\displaystyle {\text{End}}(V)\cong {{\text{End}}(V^{*})}^{\text{op}},} via the transpose. Similarly, a group G {\displaystyle G} and its dual group G ^ {\displaystyle {\widehat {G}}} are not in general isomorphic, but their endomorphism rings are opposite to each other: End ( G ) ≅ End ( G ^ ) op {\displaystyle {\text{End}}(G)\cong {\text{End}}({\widehat {G}})^{\text{op}}} . More categorically, this is not just an isomorphism of endomorphism algebras, but a contravariant equivalence of categories – see § Categorical considerations. 
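The finite-group case described above is completely concrete: for G = Z/nZ the characters are χ_k(x) = e^(2πikx/n), the discrete Fourier transform expresses a function in terms of characters, and the inversion formula recovers it. A minimal numeric sketch:

```python
import cmath

# Discrete Fourier transform on Z/nZ and its inversion formula, a finite
# instance of Pontryagin duality: f is recovered from its transform.
def dft(f):
    n = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / n) for x in range(n))
            for k in range(n)]

def inverse_dft(F):
    n = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * x / n) for k in range(n)) / n
            for x in range(n)]

f = [1.0, 2.5, -3.0, 0.25]        # an arbitrary function on Z/4Z
g = inverse_dft(dft(f))
assert all(abs(a - b) < 1e-9 for a, b in zip(f, g))
```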
== Definition == A topological group is a locally compact group if the underlying topological space is locally compact and Hausdorff; a topological group is abelian if the underlying group is abelian. Examples of locally compact abelian groups include finite abelian groups, the integers (both for the discrete topology, which is also induced by the usual metric), the real numbers, the circle group T (both with their usual metric topology), and also the p-adic numbers (with their usual p-adic topology). For a locally compact abelian group G {\displaystyle G} , the Pontryagin dual is the group G ^ {\displaystyle {\widehat {G}}} of continuous group homomorphisms from G {\displaystyle G} to the circle group T {\displaystyle T} . That is, G ^ := Hom ( G , T ) . {\displaystyle {\widehat {G}}:=\operatorname {Hom} (G,T).} The Pontryagin dual G ^ {\displaystyle {\widehat {G}}} is usually endowed with the topology given by uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G {\displaystyle G} to T {\displaystyle T} ). For example, Z / n Z ^ = Z / n Z , Z ^ = T , R ^ = R , T ^ = Z . {\displaystyle {\widehat {\mathbb {Z} /n\mathbb {Z} }}=\mathbb {Z} /n\mathbb {Z} ,\ {\widehat {\mathbb {Z} }}=T,\ {\widehat {\mathbb {R} }}=\mathbb {R} ,\ {\widehat {T}}=\mathbb {Z} .} == Pontryagin duality theorem == The theorem states that every locally compact abelian group G {\displaystyle G} is canonically isomorphic to its bidual G ^ ^ {\displaystyle {\widehat {\widehat {G}}}} . Canonical means that there is a naturally defined map ev G : G → G ^ ^ {\displaystyle \operatorname {ev} _{G}\colon G\to {\widehat {\widehat {G}}}} ; more importantly, the map should be functorial in G {\displaystyle G} . For the multiplicative character χ {\displaystyle \chi } of the group G {\displaystyle G} , the canonical isomorphism ev G {\displaystyle \operatorname {ev} _{G}} is defined on x ∈ G {\displaystyle x\in G} as follows: ev G ( x ) ( χ ) = χ ( x ) ∈ T . {\displaystyle \operatorname {ev} _{G}(x)(\chi )=\chi (x)\in \mathbb {T} .} That is, ev G ( x ) : ( χ ↦ χ ( x ) ) . 
{\displaystyle \operatorname {ev} _{G}(x):(\chi \mapsto \chi (x)).} In other words, each group element x {\displaystyle x} is identified with the evaluation character on the dual. This is strongly analogous to the canonical isomorphism between a finite-dimensional vector space and its double dual, V ≅ V ∗ ∗ {\displaystyle V\cong V^{**}} , and it is worth mentioning that any vector space V {\displaystyle V} is an abelian group. If G {\displaystyle G} is a finite abelian group, then G ≅ G ^ {\displaystyle G\cong {\widehat {G}}} but this isomorphism is not canonical. Making this statement precise (in general) requires thinking about dualizing not only on groups, but also on maps between the groups, in order to treat dualization as a functor and prove the identity functor and the dualization functor are not naturally equivalent. Also the duality theorem implies that for any group (not necessarily finite) the dualization functor is an exact functor. == Pontryagin duality and the Fourier transform == === Haar measure === One of the most remarkable facts about a locally compact group G {\displaystyle G} is that it carries an essentially unique natural measure, the Haar measure, which allows one to consistently measure the "size" of sufficiently regular subsets of G {\displaystyle G} . "Sufficiently regular subset" here means a Borel set; that is, an element of the σ-algebra generated by the compact sets. More precisely, a right Haar measure on a locally compact group G {\displaystyle G} is a countably additive measure μ defined on the Borel sets of G {\displaystyle G} which is right invariant in the sense that μ ( A x ) = μ ( A ) {\displaystyle \mu (Ax)=\mu (A)} for x {\displaystyle x} an element of G {\displaystyle G} and A {\displaystyle A} a Borel subset of G {\displaystyle G} and also satisfies some regularity conditions (spelled out in detail in the article on Haar measure). Up to positive scaling factors, a Haar measure on G {\displaystyle G} is unique. 
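For a finite cyclic group the duality theorem can be checked by hand: the characters of G = Z/nZ are χ_k(x) = e^{2πikx/n}, pointwise multiplication of characters corresponds to addition of the indices k (so the dual group is again cyclic of order n), and evaluation separates the points of G. A minimal sketch in Python; the helper `chi` and the choice n = 6 are illustrative, not from the article:

```python
import cmath

n = 6  # working with G = Z/nZ for a small n

def chi(k):
    """The character chi_k of Z/nZ, chi_k(x) = exp(2*pi*i*k*x/n)."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

# Each chi_k is a homomorphism from G into the circle group:
for k in range(n):
    for x in range(n):
        for y in range(n):
            assert abs(chi(k)((x + y) % n) - chi(k)(x) * chi(k)(y)) < 1e-9

# Pointwise multiplication of characters mirrors addition of indices,
# so the dual group is again cyclic of order n:  (Z/nZ)^ ≅ Z/nZ.
j, k = 2, 5
for x in range(n):
    assert abs(chi(j)(x) * chi(k)(x) - chi((j + k) % n)(x)) < 1e-9

# The evaluation map ev(x): chi ↦ chi(x) sends distinct group elements
# to distinct characters of the dual (it separates points), which is the
# finite case of G ≅ double dual of G.
columns = {tuple(round(chi(k)(x).real, 6) for k in range(n)) +
           tuple(round(chi(k)(x).imag, 6) for k in range(n))
           for x in range(n)}
assert len(columns) == n
```

The same computation with any other n would work; nothing here depends on 6.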
The Haar measure on G {\displaystyle G} allows us to define the notion of integral for (complex-valued) Borel functions defined on the group. In particular, one may consider various Lp spaces associated to the Haar measure μ {\displaystyle \mu } . Specifically, L μ p ( G ) = { ( f : G → C ) | ∫ G | f ( x ) | p d μ ( x ) < ∞ } . {\displaystyle {\mathcal {L}}_{\mu }^{p}(G)=\left\{(f:G\to \mathbb {C} )\ {\Big |}\ \int _{G}|f(x)|^{p}\ d\mu (x)<\infty \right\}.} Note that, since any two Haar measures on G {\displaystyle G} are equal up to a scaling factor, this L p {\displaystyle L^{p}} -space is independent of the choice of Haar measure and thus perhaps could be written as L p ( G ) {\displaystyle L^{p}(G)} . However, the L p {\displaystyle L^{p}} -norm on this space depends on the choice of Haar measure, so if one wants to talk about isometries it is important to keep track of the Haar measure being used. === Fourier transform and Fourier inversion formula for L1-functions === The dual group of a locally compact abelian group is used as the underlying space for an abstract version of the Fourier transform. If f ∈ L 1 ( G ) {\displaystyle f\in L^{1}(G)} , then the Fourier transform is the function f ^ {\displaystyle {\widehat {f}}} on G ^ {\displaystyle {\widehat {G}}} defined by f ^ ( χ ) = ∫ G f ( x ) χ ( x ) ¯ d μ ( x ) , {\displaystyle {\widehat {f}}(\chi )=\int _{G}f(x){\overline {\chi (x)}}\ d\mu (x),} where the integral is relative to Haar measure μ {\displaystyle \mu } on G {\displaystyle G} . This is also denoted ( F f ) ( χ ) {\displaystyle ({\mathcal {F}}f)(\chi )} . Note the Fourier transform depends on the choice of Haar measure. It is not too difficult to show that the Fourier transform of an L 1 {\displaystyle L^{1}} function on G {\displaystyle G} is a bounded continuous function on G ^ {\displaystyle {\widehat {G}}} which vanishes at infinity. 
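On a finite group these definitions reduce to the discrete Fourier transform: taking Haar measure to be counting measure on G = Z/nZ, the integral defining f̂ becomes a finite sum, and (anticipating the inversion formula below) the dual measure is counting measure scaled by 1/n. A small sketch, with arbitrary test data:

```python
import cmath

n = 8
f = [complex(v) for v in (3, 1, 4, 1, 5, 9, 2, 6)]  # arbitrary test data

def chi(k, x):
    """Character chi_k of Z/nZ evaluated at x."""
    return cmath.exp(2j * cmath.pi * k * x / n)

# Fourier transform: integrate f against the conjugate character,
# with Haar measure = counting measure on G.
fhat = [sum(f[x] * chi(k, x).conjugate() for x in range(n)) for k in range(n)]

# Inversion: integrate fhat against the character, using the dual
# measure, which here is counting measure on the dual scaled by 1/n.
frec = [sum(fhat[k] * chi(k, x) for k in range(n)) / n for x in range(n)]

assert all(abs(frec[x] - f[x]) < 1e-9 for x in range(n))
```

This is exactly the classical DFT/inverse-DFT pair, which is why the abstract theory specializes to it.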
The inverse Fourier transform of an integrable function on G ^ {\displaystyle {\widehat {G}}} is given by g ˇ ( x ) = ∫ G ^ g ( χ ) χ ( x ) d ν ( χ ) , {\displaystyle {\check {g}}(x)=\int _{\widehat {G}}g(\chi )\chi (x)\ d\nu (\chi ),} where the integral is relative to the Haar measure ν {\displaystyle \nu } on the dual group G ^ {\displaystyle {\widehat {G}}} . The measure ν {\displaystyle \nu } on G ^ {\displaystyle {\widehat {G}}} that appears in the Fourier inversion formula is called the dual measure to μ {\displaystyle \mu } and may be denoted μ ^ {\displaystyle {\widehat {\mu }}} . The various Fourier transforms can be classified in terms of their domain and transform domain (the group and dual group) as follows (note that T {\displaystyle \mathbb {T} } is the circle group): the classical Fourier transform has domain R {\displaystyle \mathbb {R} } and transform domain R {\displaystyle \mathbb {R} } ; the Fourier series has domain T {\displaystyle \mathbb {T} } and transform domain Z {\displaystyle \mathbb {Z} } ; the discrete-time Fourier transform (DTFT) has domain Z {\displaystyle \mathbb {Z} } and transform domain T {\displaystyle \mathbb {T} } ; and the discrete Fourier transform (DFT) has domain Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } and transform domain Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } . As an example, suppose G = R n {\displaystyle G=\mathbb {R} ^{n}} , so we can think about G ^ {\displaystyle {\widehat {G}}} as R n {\displaystyle \mathbb {R} ^{n}} by the pairing ( v , w ) ↦ e i v ⋅ w . {\displaystyle (\mathbf {v} ,\mathbf {w} )\mapsto e^{i\mathbf {v} \cdot \mathbf {w} }.} If μ {\displaystyle \mu } is the Lebesgue measure on Euclidean space, we obtain the ordinary Fourier transform on R n {\displaystyle \mathbb {R} ^{n}} and the dual measure needed for the Fourier inversion formula is μ ^ = ( 2 π ) − n μ {\displaystyle {\widehat {\mu }}=(2\pi )^{-n}\mu } . 
If we want to get a Fourier inversion formula with the same measure on both sides (that is, since we can think about R n {\displaystyle \mathbb {R} ^{n}} as its own dual space we can ask for μ ^ {\displaystyle {\widehat {\mu }}} to equal μ {\displaystyle \mu } ) then we need to use μ = ( 2 π ) − n 2 × Lebesgue measure μ ^ = ( 2 π ) − n 2 × Lebesgue measure {\displaystyle {\begin{aligned}\mu &=(2\pi )^{-{\frac {n}{2}}}\times {\text{Lebesgue measure}}\\{\widehat {\mu }}&=(2\pi )^{-{\frac {n}{2}}}\times {\text{Lebesgue measure}}\end{aligned}}} However, if we change the way we identify R n {\displaystyle \mathbb {R} ^{n}} with its dual group, by using the pairing ( v , w ) ↦ e 2 π i v ⋅ w , {\displaystyle (\mathbf {v} ,\mathbf {w} )\mapsto e^{2\pi i\mathbf {v} \cdot \mathbf {w} },} then Lebesgue measure on R n {\displaystyle \mathbb {R} ^{n}} is equal to its own dual measure. This convention minimizes the number of factors of 2 π {\displaystyle 2\pi } that show up in various places when computing Fourier transforms or inverse Fourier transforms on Euclidean space. (In effect it limits the 2 π {\displaystyle 2\pi } only to the exponent rather than as a pre-factor outside the integral sign.) Note that the choice of how to identify R n {\displaystyle \mathbb {R} ^{n}} with its dual group affects the meaning of the term "self-dual function", which is a function on R n {\displaystyle \mathbb {R} ^{n}} equal to its own Fourier transform: using the classical pairing ( v , w ) ↦ e i v ⋅ w {\displaystyle (\mathbf {v} ,\mathbf {w} )\mapsto e^{i\mathbf {v} \cdot \mathbf {w} }} the function e − 1 2 x 2 {\displaystyle e^{-{\frac {1}{2}}x^{2}}} is self-dual. But using the pairing, which keeps the pre-factor as unity, ( v , w ) ↦ e 2 π i v ⋅ w {\displaystyle (\mathbf {v} ,\mathbf {w} )\mapsto e^{2\pi i\mathbf {v} \cdot \mathbf {w} }} makes e − π x 2 {\displaystyle e^{-\pi x^{2}}} self-dual instead. 
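The self-duality of e^{-πx²} under the pairing (v, w) ↦ e^{2πiv·w} (with plain Lebesgue measure on both sides) can be illustrated numerically by approximating the Fourier integral with a midpoint Riemann sum; this is a sanity check under the stated grid assumptions, not a proof:

```python
import cmath, math

def fourier_2pi(f, xi, lo=-12.0, hi=12.0, n=40000):
    """Midpoint-rule approximation of the integral of f(x) e^{-2*pi*i*x*xi} dx."""
    h = (hi - lo) / n
    total = 0j
    for j in range(n):
        x = lo + (j + 0.5) * h
        total += f(x) * cmath.exp(-2j * math.pi * x * xi)
    return total * h

gauss = lambda x: math.exp(-math.pi * x * x)

# Under this pairing the Gaussian e^{-pi x^2} is (numerically) self-dual:
for xi in (0.0, 0.5, 1.0, 2.0):
    assert abs(fourier_2pi(gauss, xi) - gauss(xi)) < 1e-4
```

With the classical pairing e^{ivw} instead, the same sum would return a rescaled Gaussian, which is precisely the bookkeeping of 2π factors discussed above.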
This second definition for the Fourier transform has the advantage that it maps the multiplicative identity to the convolution identity, which is useful as L 1 {\displaystyle L^{1}} is a convolution algebra. See the next section on the group algebra. In addition, this form is also necessarily isometric on L 2 {\displaystyle L^{2}} spaces. See below at Plancherel and L2 Fourier inversion theorems. === Group algebra === The space of integrable functions on a locally compact abelian group G {\displaystyle G} is an algebra, where multiplication is convolution: the convolution of two integrable functions f {\displaystyle f} and g {\displaystyle g} is defined as ( f ∗ g ) ( x ) = ∫ G f ( x − y ) g ( y ) d μ ( y ) . {\displaystyle (f*g)(x)=\int _{G}f(x-y)g(y)\ d\mu (y).} This algebra is referred to as the Group Algebra of G {\displaystyle G} . By the Fubini–Tonelli theorem, the convolution is submultiplicative with respect to the L 1 {\displaystyle L^{1}} norm, making L 1 ( G ) {\displaystyle L^{1}(G)} a Banach algebra. The Banach algebra L 1 ( G ) {\displaystyle L^{1}(G)} has a multiplicative identity element if and only if G {\displaystyle G} is a discrete group, namely the function that is 1 at the identity and zero elsewhere. In general, however, it has an approximate identity which is a net (or generalized sequence) { e i } i ∈ I {\displaystyle \{e_{i}\}_{i\in I}} indexed on a directed set I {\displaystyle I} such that f ∗ e i → f . {\displaystyle f*e_{i}\to f.} The Fourier transform takes convolution to multiplication, i.e. it is a homomorphism of abelian Banach algebras L 1 ( G ) → C 0 ( G ^ ) {\displaystyle L^{1}(G)\to C_{0}\left({\widehat {G}}\right)} (of norm ≤ 1): F ( f ∗ g ) ( χ ) = F ( f ) ( χ ) ⋅ F ( g ) ( χ ) . 
{\displaystyle {\mathcal {F}}(f*g)(\chi )={\mathcal {F}}(f)(\chi )\cdot {\mathcal {F}}(g)(\chi ).} In particular, to every group character on G {\displaystyle G} corresponds a unique multiplicative linear functional on the group algebra defined by f ↦ f ^ ( χ ) . {\displaystyle f\mapsto {\widehat {f}}(\chi ).} It is an important property of the group algebra that these exhaust the set of non-trivial (that is, not identically zero) multiplicative linear functionals on the group algebra; see section 34 of (Loomis 1953). This means the Fourier transform is a special case of the Gelfand transform. === Plancherel and L2 Fourier inversion theorems === As we have stated, the dual group of a locally compact abelian group is a locally compact abelian group in its own right and thus has a Haar measure, or more precisely a whole family of scale-related Haar measures. Since the complex-valued continuous functions of compact support on G {\displaystyle G} are L 2 {\displaystyle L^{2}} -dense, there is a unique extension of the Fourier transform from that space to a unitary operator F : L μ 2 ( G ) → L ν 2 ( G ^ ) . {\displaystyle {\mathcal {F}}:L_{\mu }^{2}(G)\to L_{\nu }^{2}\left({\widehat {G}}\right).} and we have the formula ∀ f ∈ L 2 ( G ) : ∫ G | f ( x ) | 2 d μ ( x ) = ∫ G ^ | f ^ ( χ ) | 2 d ν ( χ ) . {\displaystyle \forall f\in L^{2}(G):\quad \int _{G}|f(x)|^{2}\ d\mu (x)=\int _{\widehat {G}}\left|{\widehat {f}}(\chi )\right|^{2}\ d\nu (\chi ).} Note that for non-compact locally compact groups G {\displaystyle G} the space L 1 ( G ) {\displaystyle L^{1}(G)} does not contain L 2 ( G ) {\displaystyle L^{2}(G)} , so the Fourier transform of general L 2 {\displaystyle L^{2}} -functions on G {\displaystyle G} is "not" given by any kind of integration formula (or really any explicit formula). 
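On a finite group, where every integral is a finite sum and none of these subtleties arise, both the convolution identity and the Plancherel formula can be verified directly. A sketch for G = Z/nZ with counting measure (the test vectors are arbitrary):

```python
import cmath

n = 8
f = [complex(v) for v in (1, 2, 0, -1, 3, 0, 1, 4)]
g = [complex(v) for v in (2, 0, 1, 1, -2, 3, 0, 1)]

def transform(h):
    """Fourier transform on Z/nZ against the characters chi_k."""
    return [sum(h[x] * cmath.exp(-2j * cmath.pi * k * x / n) for x in range(n))
            for k in range(n)]

# Convolution on Z/nZ (the group operation is addition mod n).
conv = [sum(f[(x - y) % n] * g[y] for y in range(n)) for x in range(n)]

# Convolution theorem: the transform turns convolution into
# pointwise multiplication.
lhs, Ff, Fg = transform(conv), transform(f), transform(g)
assert all(abs(lhs[k] - Ff[k] * Fg[k]) < 1e-8 for k in range(n))

# Plancherel: the L2 norm is preserved when the dual measure
# (1/n times counting measure) is used on the dual side.
assert abs(sum(abs(v) ** 2 for v in f)
           - sum(abs(v) ** 2 for v in Ff) / n) < 1e-8
```

The 1/n factor is the finite-group instance of the dual-measure normalization discussed earlier.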
To define the L 2 {\displaystyle L^{2}} Fourier transform one has to resort to some technical trick such as starting on a dense subspace like the continuous functions with compact support and then extending the isometry by continuity to the whole space. This unitary extension of the Fourier transform is what we mean by the Fourier transform on the space of square integrable functions. The dual group also has an inverse Fourier transform in its own right; it can be characterized as the inverse (or adjoint, since it is unitary) of the L 2 {\displaystyle L^{2}} Fourier transform. This is the content of the L 2 {\displaystyle L^{2}} Fourier inversion formula which follows. In the case G = T {\displaystyle G=\mathbb {T} } the dual group G ^ {\displaystyle {\widehat {G}}} is naturally isomorphic to the group of integers Z {\displaystyle \mathbb {Z} } and the Fourier transform specializes to the computation of coefficients of Fourier series of periodic functions. If G {\displaystyle G} is a finite group, we recover the discrete Fourier transform. Note that this case is very easy to prove directly. == Bohr compactification and almost-periodicity == One important application of Pontryagin duality is the following characterization of compact abelian topological groups: a locally compact abelian group G {\displaystyle G} is compact if and only if the dual group G ^ {\displaystyle {\widehat {G}}} is discrete, and G {\displaystyle G} is discrete if and only if G ^ {\displaystyle {\widehat {G}}} is compact. That G {\displaystyle G} being compact implies G ^ {\displaystyle {\widehat {G}}} is discrete or that G {\displaystyle G} being discrete implies that G ^ {\displaystyle {\widehat {G}}} is compact is an elementary consequence of the definition of the compact-open topology on G ^ {\displaystyle {\widehat {G}}} and does not need Pontryagin duality. One uses Pontryagin duality to prove the converses. The Bohr compactification is defined for any topological group G {\displaystyle G} , regardless of whether G {\displaystyle G} is locally compact or abelian. 
One use made of Pontryagin duality between compact abelian groups and discrete abelian groups is to characterize the Bohr compactification of an arbitrary abelian locally compact topological group. The Bohr compactification B ( G ) {\displaystyle B(G)} of G {\displaystyle G} is H ^ {\displaystyle {\widehat {H}}} , where H has the group structure G ^ {\displaystyle {\widehat {G}}} , but given the discrete topology. Since the inclusion map ι : H → G ^ {\displaystyle \iota :H\to {\widehat {G}}} is continuous and a homomorphism, the dual morphism G ∼ G ^ ^ → H ^ {\displaystyle G\sim {\widehat {\widehat {G}}}\to {\widehat {H}}} is a morphism into a compact group which is easily shown to satisfy the requisite universal property. == Categorical considerations == Pontryagin duality can also profitably be considered functorially. In what follows, LCA is the category of locally compact abelian groups and continuous group homomorphisms. The dual group construction of G ^ {\displaystyle {\widehat {G}}} is a contravariant functor LCA → LCA, represented (in the sense of representable functors) by the circle group T {\displaystyle \mathbb {T} } as G ^ = Hom ( G , T ) . {\displaystyle {\widehat {G}}={\text{Hom}}(G,\mathbb {T} ).} In particular, the double dual functor G → G ^ ^ {\displaystyle G\to {\widehat {\widehat {G}}}} is covariant. A categorical formulation of Pontryagin duality then states that the natural transformation between the identity functor on LCA and the double dual functor is an isomorphism. Unwinding the notion of a natural transformation, this means that the maps G → Hom ( Hom ( G , T ) , T ) {\displaystyle G\to \operatorname {Hom} (\operatorname {Hom} (G,T),T)} are isomorphisms for any locally compact abelian group G {\displaystyle G} , and these isomorphisms are functorial in G {\displaystyle G} . This isomorphism is analogous to the double dual of finite-dimensional vector spaces (a special case, for real and complex vector spaces). 
An immediate consequence of this formulation is another common categorical formulation of Pontryagin duality: the dual group functor is an equivalence of categories from LCA to LCAop. The duality interchanges the subcategories of discrete groups and compact groups. If R {\displaystyle R} is a ring and G {\displaystyle G} is a left R {\displaystyle R} –module, the dual group G ^ {\displaystyle {\widehat {G}}} will become a right R {\displaystyle R} –module; in this way we can also see that discrete left R {\displaystyle R} –modules will be Pontryagin dual to compact right R {\displaystyle R} –modules. The ring End ( G ) {\displaystyle {\text{End}}(G)} of endomorphisms in LCA is changed by duality into its opposite ring (change the multiplication to the other order). For example, if G {\displaystyle G} is an infinite cyclic discrete group, G ^ {\displaystyle {\widehat {G}}} is a circle group: the former has End ( G ) = Z {\displaystyle {\text{End}}(G)=\mathbb {Z} } so this is true also of the latter. == Generalizations == Generalizations of Pontryagin duality are constructed in two main directions: for commutative topological groups that are not locally compact, and for noncommutative topological groups. The theories in these two cases are very different. === Dualities for commutative topological groups === When G {\displaystyle G} is a Hausdorff abelian topological group, the group G ^ {\displaystyle {\widehat {G}}} with the compact-open topology is a Hausdorff abelian topological group and the natural mapping from G {\displaystyle G} to its double-dual G ^ ^ {\displaystyle {\widehat {\widehat {G}}}} makes sense. If this mapping is an isomorphism, it is said that G {\displaystyle G} satisfies Pontryagin duality (or that G {\displaystyle G} is a reflexive group, or a reflective group). This has been extended in a number of directions beyond the case that G {\displaystyle G} is locally compact. 
In particular, Samuel Kaplan showed in 1948 and 1950 that arbitrary products and countable inverse limits of locally compact (Hausdorff) abelian groups satisfy Pontryagin duality. Note that an infinite product of locally compact non-compact spaces is not locally compact. Later, in 1975, Rangachari Venkataraman showed, among other facts, that every open subgroup of an abelian topological group which satisfies Pontryagin duality itself satisfies Pontryagin duality. More recently, Sergio Ardanza-Trevijano and María Jesús Chasco have extended the results of Kaplan mentioned above. They showed that direct and inverse limits of sequences of abelian groups satisfying Pontryagin duality also satisfy Pontryagin duality if the groups are metrizable or k ω {\displaystyle k_{\omega }} -spaces but not necessarily locally compact, provided some extra conditions are satisfied by the sequences. However, there is a fundamental aspect that changes if we want to consider Pontryagin duality beyond the locally compact case. Elena Martín-Peinador proved in 1995 that if G {\displaystyle G} is a Hausdorff abelian topological group that satisfies Pontryagin duality, and the natural evaluation pairing { G × G ^ → T ( x , χ ) ↦ χ ( x ) {\displaystyle {\begin{cases}G\times {\widehat {G}}\to \mathbb {T} \\(x,\chi )\mapsto \chi (x)\end{cases}}} is (jointly) continuous, then G {\displaystyle G} is locally compact. As a corollary, all non-locally compact examples of Pontryagin duality are groups where the pairing G × G ^ → T {\displaystyle G\times {\widehat {G}}\to \mathbb {T} } is not (jointly) continuous. Another way to generalize Pontryagin duality to wider classes of commutative topological groups is to endow the dual group G ^ {\displaystyle {\widehat {G}}} with a slightly different topology, namely the topology of uniform convergence on totally bounded sets. 
The groups satisfying the identity G ≅ G ^ ^ {\displaystyle G\cong {\widehat {\widehat {G}}}} under this assumption are called stereotype groups. This class is also very wide (and it contains locally compact abelian groups), but it is narrower than the class of reflective groups. === Pontryagin duality for topological vector spaces === In 1952 Marianne F. Smith noticed that Banach spaces and reflexive spaces, being considered as topological groups (with the additive group operation), satisfy Pontryagin duality. Later B. S. Brudovskiĭ, William C. Waterhouse and K. Brauner showed that this result can be extended to the class of all quasi-complete barreled spaces (in particular, to all Fréchet spaces). In the 1990s Sergei Akbarov gave a description of the class of the topological vector spaces that satisfy a stronger property than the classical Pontryagin reflexivity, namely, the identity ( X ⋆ ) ⋆ ≅ X {\displaystyle (X^{\star })^{\star }\cong X} where X ⋆ {\displaystyle X^{\star }} means the space of all linear continuous functionals f : X → C {\displaystyle f\colon X\to \mathbb {C} } endowed with the topology of uniform convergence on totally bounded sets in X {\displaystyle X} (and ( X ⋆ ) ⋆ {\displaystyle (X^{\star })^{\star }} means the dual to X ⋆ {\displaystyle X^{\star }} in the same sense). The spaces of this class are called stereotype spaces, and the corresponding theory found a series of applications in Functional analysis and Geometry, including the generalization of Pontryagin duality for non-commutative topological groups. === Dualities for non-commutative topological groups === For non-commutative locally compact groups G {\displaystyle G} the classical Pontryagin construction stops working for various reasons, in particular, because the characters don't always separate the points of G {\displaystyle G} , and the irreducible representations of G {\displaystyle G} are not always one-dimensional. 
At the same time it is not clear how to introduce multiplication on the set of irreducible unitary representations of G {\displaystyle G} , and it is not even clear whether this set is a good choice for the role of the dual object for G {\displaystyle G} . So the problem of constructing duality in this situation requires complete rethinking. Theories built to date are divided into two main groups: the theories where the dual object has the same nature as the source one (as in Pontryagin duality itself), and the theories where the source object and its dual differ from each other so radically that they cannot be regarded as objects of one class. Theories of the second type were historically the first: soon after Pontryagin's work Tadao Tannaka (1938) and Mark Krein (1949) constructed a duality theory for arbitrary compact groups now known as the Tannaka–Krein duality. In this theory the dual object for a group G {\displaystyle G} is not a group but a category of its representations Π ( G ) {\displaystyle \Pi (G)} . Theories of the first type appeared later and the key example for them was the duality theory for finite groups. In this theory the category of finite groups is embedded by the operation G ↦ C G {\displaystyle G\mapsto \mathbb {C} _{G}} of taking group algebra C G {\displaystyle \mathbb {C} _{G}} (over C {\displaystyle \mathbb {C} } ) into the category of finite dimensional Hopf algebras, so that the Pontryagin duality functor G ↦ G ^ {\displaystyle G\mapsto {\widehat {G}}} turns into the operation H ↦ H ∗ {\displaystyle H\mapsto H^{*}} of taking the dual vector space (which is a duality functor in the category of finite dimensional Hopf algebras). In 1973 Leonid I. Vainerman, George I. Kac, Michel Enock, and Jean-Marie Schwartz built a general theory of this type for all locally compact groups. 
From the 1980s the research in this area was resumed after the discovery of quantum groups, to which the constructed theories began to be actively transferred. These theories are formulated in the language of C*-algebras, or Von Neumann algebras, and one of its variants is the recent theory of locally compact quantum groups. One of the drawbacks of these general theories, however, is that in them the objects generalizing the concept of a group are not Hopf algebras in the usual algebraic sense. This deficiency can be corrected (for some classes of groups) within the framework of duality theories constructed on the basis of the notion of envelope of topological algebra. == See also == Peter–Weyl theorem Cartier duality Stereotype space Bochner's theorem == Notes == == Citations == == References == Akbarov, S.S. (2003). "Pontryagin duality in the theory of topological vector spaces and in topological algebra". Journal of Mathematical Sciences. 113 (2): 179–349. doi:10.1023/A:1020929201133. S2CID 115297067. Akbarov, Sergei S.; Shavgulidze, Evgeniy T. (2003). "On two classes of spaces reflexive in the sense of Pontryagin". Matematicheskii Sbornik. 194 (10): 3–26. Akbarov, Sergei S. (2009). "Holomorphic functions of exponential type and duality for Stein groups with algebraic connected component of identity". Journal of Mathematical Sciences. 162 (4): 459–586. arXiv:0806.3205. doi:10.1007/s10958-009-9646-1. S2CID 115153766. Akbarov, Sergei S. (2017a). "Continuous and smooth envelopes of topological algebras. Part 1". Journal of Mathematical Sciences. 227 (5): 531–668. arXiv:1303.2424. doi:10.1007/s10958-017-3599-6. MR 3790317. S2CID 126018582. Akbarov, Sergei S. (2017b). "Continuous and smooth envelopes of topological algebras. Part 2". Journal of Mathematical Sciences. 227 (6): 669–789. arXiv:1303.2424. doi:10.1007/s10958-017-3600-4. MR 3796205. S2CID 128246373. Brauner, Kalman (1973). "Duals of Fréchet spaces and a generalization of the Banach–Dieudonné theorem". 
Duke Mathematical Journal. 40 (4): 845–855. doi:10.1215/S0012-7094-73-04078-7. Brudovskiĭ, B. S. (1967). "On k- and c-reflexivity of locally convex vector spaces". Lithuanian Mathematical Journal. 7 (1): 17–21. doi:10.15388/LMJ.1967.19927. Dixmier, Jacques (1969). Les C*-algèbres et leurs Représentations. Gauthier-Villars. ISBN 978-2-87647-013-2. Enock, Michel; Schwartz, Jean-Marie (1992). Kac Algebras and Duality of Locally Compact Groups. With a preface by Alain Connes. With a postface by Adrian Ocneanu. Berlin: Springer-Verlag. doi:10.1007/978-3-662-02813-1. ISBN 978-3-540-54745-7. MR 1215933. Hewitt, Edwin; Ross, Kenneth A. (1963). Abstract Harmonic Analysis. Vol. I: Structure of topological groups. Integration theory, group representations. Die Grundlehren der mathematischen Wissenschaften. Vol. 115. Berlin-Göttingen-Heidelberg: Springer-Verlag. ISBN 978-0-387-94190-5. MR 0156915. {{cite book}}: ISBN / Date incompatibility (help) Hewitt, Edwin; Ross, Kenneth A. (1970). Abstract Harmonic Analysis. Vol. 2. Springer. ISBN 978-3-662-24595-8. MR 0262773. Kirillov, Alexandre A. (1976) [1972]. Elements of the theory of representations. Grundlehren der Mathematischen Wissenschaften. Vol. 220. Berlin, New York: Springer-Verlag. ISBN 978-0-387-07476-4. MR 0412321. Loomis, Lynn H. (1953). An Introduction to Abstract Harmonic Analysis. D. van Nostrand Co. ISBN 978-0486481234. {{cite book}}: ISBN / Date incompatibility (help) Morris, S.A. (1977). Pontryagin duality and the structure of locally compact Abelian groups. Cambridge University Press. ISBN 978-0521215435. Onishchik, A.L. (1984), "Pontrjagin duality", Encyclopedia of Mathematics, 4: 481–482, ISBN 978-1402006098 Reiter, Hans (1968). Classical Harmonic Analysis and Locally Compact Groups. Clarendon Press. ISBN 978-0198511892. Rudin, Walter (1962). Fourier Analysis on Groups. D. van Nostrand Co. ISBN 978-0471523642. {{cite book}}: ISBN / Date incompatibility (help) Timmermann, T. (2008). 
An Invitation to Quantum Groups and Duality - From Hopf Algebras to Multiplicative Unitaries and Beyond. EMS Textbooks in Mathematics, European Mathematical Society. ISBN 978-3-03719-043-2. Kustermans, J.; Vaes, S. (2000). "Locally Compact Quantum Groups". Annales Scientifiques de l'École Normale Supérieure. 33 (6): 837–934. doi:10.1016/s0012-9593(00)01055-7. Ardanza-Trevijano, Sergio; Chasco, María Jesús (2005). "The Pontryagin duality of sequential limits of topological Abelian groups". Journal of Pure and Applied Algebra. 202 (1–3): 11–21. doi:10.1016/j.jpaa.2005.02.006. hdl:10171/1586. MR 2163398. Chasco, María Jesús; Dikranjan, Dikran; Martín-Peinador, Elena (2012). "A survey on reflexivity of abelian topological groups". Topology and Its Applications. 159 (9): 2290–2309. doi:10.1016/j.topol.2012.04.012. MR 2921819. Kaplan, Samuel (1948). "Extensions of the Pontrjagin duality. Part I: infinite products". Duke Mathematical Journal. 15: 649–658. doi:10.1215/S0012-7094-48-01557-9. MR 0026999. Kaplan, Samuel (1950). "Extensions of the Pontrjagin duality. Part II: direct and inverse limits". Duke Mathematical Journal. 17: 419–435. doi:10.1215/S0012-7094-50-01737-6. MR 0049906. Venkataraman, Rangachari (1975). "Extensions of Pontryagin Duality". Mathematische Zeitschrift. 143 (2): 105–112. doi:10.1007/BF01187051. S2CID 123627326. Martín-Peinador, Elena (1995). "A reflexible admissible topological group must be locally compact". Proceedings of the American Mathematical Society. 123 (11): 3563–3566. doi:10.2307/2161108. hdl:10338.dmlcz/127641. JSTOR 2161108. Roeder, David W. (1974). "Category theory applied to Pontryagin duality". Pacific Journal of Mathematics. 52 (2): 519–527. doi:10.2140/pjm.1974.52.519. Smith, Marianne F. (1952). "The Pontrjagin duality theorem in linear spaces". Annals of Mathematics. 56 (2): 248–253. doi:10.2307/1969798. JSTOR 1969798. MR 0049479. Waterhouse, William C. (1968). "Dual groups of vector spaces". Pacific Journal of Mathematics. 
26 (1): 193–196. doi:10.2140/pjm.1968.26.193. |
Wikipedia:Portmanteau theorem#0 | In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence of measures, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength. Three of the most common notions of convergence are described below. == Informal descriptions == This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if μn is a sequence of probability measures on a Polish space. The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function should converge: ∫ f d μ n → ∫ f d μ {\displaystyle \int f\,d\mu _{n}\to \int f\,d\mu } To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be. 
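A standard example separating these notions: the point masses μ_n = δ_{1/n} converge weakly to δ_0, since ∫ f dμ_n = f(1/n) → f(0) for every bounded continuous f, yet μ_n({0}) = 0 for all n while δ_0({0}) = 1, so setwise convergence fails. A numerical sketch (the test function is arbitrary):

```python
import math

# mu_n = point mass at 1/n; integrating f against it just evaluates f(1/n).
def integral(f, n):
    return f(1.0 / n)

f = lambda x: math.cos(x) / (1.0 + x * x)   # a bounded continuous test function

# Weak convergence: the integrals approach f(0), which is the integral
# of f against the limit measure delta_0.
errors = [abs(integral(f, n) - f(0.0)) for n in (1, 10, 100, 1000)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-3

# But not setwise: mu_n({0}) = 0 for every n, while delta_0({0}) = 1,
# so the measure of the set {0} does not converge.
mu_n_at_zero = 0.0
assert mu_n_at_zero != 1.0
```

Any other bounded continuous f would exhibit the same behaviour; discontinuous indicator functions are exactly where weak convergence gives no guarantee.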
The notion of weak convergence requires this convergence to take place for every continuous bounded function f. This notion treats convergence for different functions f independently of one another, i.e., different functions f may require different values of N to be approximated equally well (thus, convergence is non-uniform in f). The notion of setwise convergence formalizes the assertion that the measure of each measurable set should converge: μn(A) → μ(A). Again, no uniformity over the set A is required. Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. As a matter of fact, when considering sequences of measures with uniformly bounded variation on a Polish space, setwise convergence implies the convergence ∫ f dμn → ∫ f dμ for any bounded measurable function f. As before, this convergence is non-uniform in f. The notion of total variation convergence formalizes the assertion that the measure of all measurable sets should converge uniformly, i.e. for every ε > 0 there exists N such that |μn(A) − μ(A)| < ε for every n > N and for every measurable set A. As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant. == Total variation convergence of measures == This is the strongest notion of convergence shown on this page and is defined as follows. Let (X, ℱ) be a measurable space. The total variation distance between two (positive) measures μ and ν is then given by ‖μ − ν‖TV = sup_f { ∫X f dμ − ∫X f dν }.
Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1]. This is in contrast, for example, to the Wasserstein metric, where the definition is of the same form, but the supremum is taken over f ranging over the set of those measurable functions from X to [−1, 1] which have Lipschitz constant at most 1; and also in contrast to the Radon metric, where the supremum is taken over f ranging over the set of continuous functions from X to [−1, 1]. In the case where X is a Polish space, the total variation metric coincides with the Radon metric. If μ and ν are both probability measures, then the total variation distance is also given by ‖μ − ν‖TV = 2 · sup_{A ∈ ℱ} |μ(A) − ν(A)|. The equivalence between these two definitions can be seen as a particular case of the Monge–Kantorovich duality. From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2. To illustrate the meaning of the total variation distance, consider the following thought experiment. Assume that we are given two probability measures μ and ν, as well as a random variable X. We know that X has law either μ or ν, but we do not know which of the two. Assume that these two measures have prior probabilities 0.5 each of being the true law of X. Assume now that we are given one single sample distributed according to the law of X and that we are then asked to guess which one of the two distributions describes that law. The quantity (2 + ‖μ − ν‖TV)/4 then provides a sharp upper bound on the prior probability that our guess will be correct.
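For discrete measures, the two equivalent forms of the total variation distance above are easy to compare numerically. The following Python sketch uses illustrative data (the two measures are invented for the example, not taken from the article):

```python
import numpy as np

# Two discrete probability measures on the four points {0, 1, 2, 3}.
mu = np.array([0.4, 0.3, 0.2, 0.1])
nu = np.array([0.1, 0.2, 0.3, 0.4])

# Functional definition: sup over f: X -> [-1, 1] of (∫ f dμ − ∫ f dν).
# For discrete measures the supremum is attained at f = sign(mu − nu),
# which makes the expression equal to Σ_i |mu_i − nu_i|.
f_opt = np.sign(mu - nu)
tv_functional = float(np.dot(f_opt, mu - nu))

# Set definition: 2 · sup_A |μ(A) − ν(A)|, attained at A = {i : mu_i > nu_i}.
A = mu > nu
tv_sets = 2 * abs(mu[A].sum() - nu[A].sum())

assert np.isclose(tv_functional, tv_sets)
print(round(tv_functional, 12))  # 0.8 — between 0 and 2, as expected
```

The equality of the two computed values is exactly the equivalence stated above, specialized to the discrete case.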
Given the above definition of total variation distance, a sequence μn of measures defined on the same measure space is said to converge to a measure μ in total variation distance if for every ε > 0 there exists an N such that for all n > N, one has that ‖μn − μ‖TV < ε. == Setwise convergence of measures == For (X, ℱ) a measurable space, a sequence μn is said to converge setwise to a limit μ if lim_{n→∞} μn(A) = μ(A) for every set A ∈ ℱ. Typical arrow notations are μn →sw μ and μn →s μ. For example, as a consequence of the Riemann–Lebesgue lemma, the sequence μn of measures on the interval [−1, 1] given by μn(dx) = (1 + sin(nx)) dx converges setwise to Lebesgue measure, but it does not converge in total variation. In a measure-theoretic or probabilistic context setwise convergence is often referred to as strong convergence (as opposed to weak convergence). This can lead to some ambiguity because in functional analysis, strong convergence usually refers to convergence with respect to a norm. == Weak convergence of measures == In mathematics and statistics, weak convergence is one of many types of convergence relating to the convergence of measures. It depends on a topology on the underlying space and thus is not a purely measure-theoretic notion. There are several equivalent definitions of weak convergence of a sequence of measures, some of which are (apparently) more general than others. The equivalence of these conditions is sometimes known as the Portmanteau theorem. Definition. Let S be a metric space with its Borel σ-algebra Σ.
A sequence of probability measures Pn (n = 1, 2, …) on (S, Σ) is said to converge weakly to a probability measure P (denoted Pn ⇒ P) if any of the following equivalent conditions is true (here En denotes expectation or the integral with respect to Pn, while E denotes expectation or the integral with respect to P): En[f] → E[f] for all bounded, continuous functions f; En[f] → E[f] for all bounded and Lipschitz functions f; lim sup En[f] ≤ E[f] for every upper semi-continuous function f bounded from above; lim inf En[f] ≥ E[f] for every lower semi-continuous function f bounded from below; lim sup Pn(C) ≤ P(C) for all closed sets C of the space S; lim inf Pn(U) ≥ P(U) for all open sets U of the space S; lim Pn(A) = P(A) for all continuity sets A of the measure P.
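The setwise-but-not-TV example above, μn(dx) = (1 + sin(nx)) dx on [−1, 1], can be checked numerically. This is a grid-based illustrative sketch (the test interval A and the grid size are arbitrary choices for the demonstration):

```python
import numpy as np

# Measures mu_n(dx) = (1 + sin(n x)) dx on [-1, 1], compared with
# Lebesgue measure λ (density identically 1).

def mu_n_of_interval(n, a, b):
    # mu_n([a, b]) = ∫_a^b (1 + sin(n t)) dt, in closed form.
    return (b - a) + (np.cos(n * a) - np.cos(n * b)) / n

# Setwise convergence: mu_n(A) -> λ(A) for the fixed interval A = [0, 0.5].
for n in (10, 100, 1000):
    print(n, mu_n_of_interval(n, 0.0, 0.5))  # tends to λ(A) = 0.5

# Total variation does NOT converge: the densities differ by |sin(n x)|,
# and ∫_{-1}^{1} |sin(n x)| dx tends to 4/π ≈ 1.273 rather than to 0.
x = np.linspace(-1.0, 1.0, 200_001)
for n in (10, 100, 1000):
    tv = 2 * np.mean(np.abs(np.sin(n * x)))  # Riemann estimate of the integral
    print(n, tv)
```

The first loop shows the measures of a fixed set converging, while the second shows the total variation distance stabilizing near 4/π instead of vanishing, matching the claim in the text.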
In the case S = R (with its usual topology), if Fn and F denote the cumulative distribution functions of the measures Pn and P, respectively, then Pn converges weakly to P if and only if lim_{n→∞} Fn(x) = F(x) for all points x ∈ R at which F is continuous. For example, the sequence where Pn is the Dirac measure located at 1/n converges weakly to the Dirac measure located at 0 (if we view these as measures on R with the usual topology), but it does not converge setwise. This is intuitively clear: we only know that 1/n is "close" to 0 because of the topology of R. This definition of weak convergence can be extended for S any metrizable topological space. It also defines a weak topology on P(S), the set of all probability measures defined on (S, Σ). The weak topology is generated by the following basis of open sets: { U_{φ,x,δ} | φ : S → R is bounded and continuous, x ∈ R and δ > 0 }, where U_{φ,x,δ} := { μ ∈ P(S) | |∫S φ dμ − x| < δ }.
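The Dirac example lends itself to a small numerical illustration of the first portmanteau condition and of the failure of setwise convergence. The helper names below are invented for this sketch:

```python
import numpy as np

# P_n = Dirac measure at 1/n; P = Dirac measure at 0 (as measures on R).

def E_n(f, n):
    return f(1.0 / n)   # ∫ f dP_n: a Dirac mass just evaluates f at its atom

def E(f):
    return f(0.0)       # ∫ f dP

# Weak convergence: E_n[f] -> E[f] for any bounded continuous f, e.g. cos.
print([E_n(np.cos, n) for n in (1, 10, 1000)], "->", E(np.cos))

# Setwise convergence fails: for the Borel set A = {0},
# P_n(A) = 0 for every n (1/n is never 0), yet P(A) = 1.
P_n_of_A = 0.0
P_of_A = 1.0
```

The set A = {0} is exactly a set that is *not* a continuity set of P, which is why the last portmanteau condition does not apply to it.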
If S is also separable, then P(S) is metrizable and separable, for example by the Lévy–Prokhorov metric. If S is also compact or Polish, so is P(S). If S is separable, it naturally embeds into P(S) as the (closed) set of Dirac measures, and its convex hull is dense. There are many "arrow notations" for this kind of convergence: the most frequently used are Pn ⇒ P, Pn ⇀ P, Pn →w P and Pn →D P. === Weak convergence of random variables === Let (Ω, ℱ, P) be a probability space and X be a metric space. If Xn : Ω → X is a sequence of random variables, then Xn is said to converge weakly (or in distribution or in law) to the random variable X : Ω → X as n → ∞ if the sequence of pushforward measures (Xn)∗(P) converges weakly to X∗(P) in the sense of weak convergence of measures on X, as defined above. === Comparison with vague convergence === Let X be a metric space (for example R or [0, 1]). The following spaces of test functions are commonly used in the convergence of probability measures. Cc(X): the class of continuous functions f each vanishing outside a compact set.
C0(X): the class of continuous functions f such that lim_{|x|→∞} f(x) = 0. CB(X): the class of continuous bounded functions. We have Cc ⊂ C0 ⊂ CB ⊂ C. Moreover, C0 is the closure of Cc with respect to uniform convergence. ==== Vague Convergence ==== A sequence of measures (μn)_{n∈N} converges vaguely to a measure μ if ∫X f dμn → ∫X f dμ for all f ∈ Cc(X). ==== Weak Convergence ==== A sequence of measures (μn)_{n∈N} converges weakly to a measure μ if ∫X f dμn → ∫X f dμ for all f ∈ CB(X). In general, these two convergence notions are not equivalent. In a probability setting, vague convergence and weak convergence of probability measures are equivalent assuming tightness. That is, a tight sequence of probability measures (μn)_{n∈N} converges vaguely to a probability measure μ if and only if (μn)_{n∈N} converges weakly to μ. The weak limit of a sequence of probability measures, provided it exists, is a probability measure. In general, if tightness is not assumed, a sequence of probability (or sub-probability) measures may not converge vaguely to a probability measure, but rather to a sub-probability measure (a measure such that μ(X) ≤ 1).
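A standard illustration of vague-but-not-weak convergence, sketched here in Python with invented helper names, is a sequence of Dirac masses whose atom escapes to infinity:

```python
# mu_n = Dirac mass at the point n on X = R: the mass "escapes to infinity".

def integral(f, n):
    return f(float(n))   # ∫ f dmu_n for a Dirac mass at n

# Against a compactly supported test function (f in C_c), the integrals
# tend to 0 = ∫ f d(zero measure): vague convergence to the zero measure,
# which is a sub-probability measure (total mass 0 ≤ 1).
def f_compact(x):
    return max(0.0, 1.0 - abs(x))   # tent function supported on [-1, 1]

print([integral(f_compact, n) for n in (1, 2, 5)])        # -> zeros

# Against the bounded continuous function f ≡ 1 (f in C_B), the integrals
# stay at 1, not 0: the sequence does not converge weakly. It is not tight,
# which is exactly the hypothesis that fails here.
print([integral(lambda x: 1.0, n) for n in (1, 2, 5)])    # -> all ones
```

Tightness would forbid exactly this behavior, by pinning a uniform fraction of the mass inside a compact set.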
Thus, a sequence of probability measures (μn)_{n∈N} such that μn →v μ, where μ is not specified to be a probability measure, is not guaranteed to converge weakly. === Weak convergence of measures as an example of weak-* convergence === Despite having the same name as weak convergence in the context of functional analysis, weak convergence of measures is actually an example of weak-* convergence. The definitions of weak and weak-* convergence used in functional analysis are as follows: Let V be a topological vector space or Banach space. A sequence xn in V converges weakly to x if φ(xn) → φ(x) as n → ∞ for all φ ∈ V∗. One writes xn →w x as n → ∞. A sequence φn ∈ V∗ converges in the weak-* topology to φ provided that φn(x) → φ(x) for all x ∈ V. That is, convergence occurs in the pointwise sense. In this case, one writes φn →w* φ as n → ∞. To illustrate how weak convergence of measures is an example of weak-* convergence, we give an example in terms of vague convergence (see above). Let X be a locally compact Hausdorff space. By the Riesz representation theorem, the space M(X) of Radon measures is isomorphic to a subspace of the space of continuous linear functionals on C0(X).
Therefore, for each Radon measure μn ∈ M(X), there is a linear functional φn ∈ C0(X)∗ such that φn(f) = ∫X f dμn for all f ∈ C0(X). Applying the definition of weak-* convergence in terms of linear functionals, the characterization of vague convergence of measures is obtained. For compact X, C0(X) = CB(X), so in this case weak convergence of measures is a special case of weak-* convergence. == See also == Convergence of random variables Lévy–Prokhorov metric Prokhorov's theorem Tightness of measures == Notes and references == == Further reading == Ambrosio, L.; Gigli, N.; Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. Basel: ETH Zürich, Birkhäuser Verlag. ISBN 3-7643-2428-7. Billingsley, Patrick (1995). Probability and Measure. New York, NY: John Wiley & Sons, Inc. ISBN 0-471-00710-2. Billingsley, Patrick (1999). Convergence of Probability Measures. New York, NY: John Wiley & Sons, Inc. ISBN 0-471-19745-9. |
Wikipedia:Posidonius#0 | Posidonius (Ancient Greek: Ποσειδώνιος Poseidṓnios, "of Poseidon") "of Apameia" (ὁ Ἀπαμεύς) or "of Rhodes" (ὁ Ῥόδιος) (c. 135 – c. 51 BC) was a Greek politician, astronomer, astrologer, geographer, historian, mathematician, and teacher native to Apamea, Syria. He was considered the most learned man of his time and, possibly, of the entire Stoic school. After a period learning Stoic philosophy from Panaetius in Athens, he spent many years in travel and scientific research in Spain, Africa, Italy, Gaul, Liguria, Sicily and on the eastern shores of the Adriatic. He settled as a teacher at Rhodes, where his fame attracted numerous scholars. Next to Panaetius he did most, by writings and personal lectures, to spread Stoicism to the Roman world, and he became well known to many leading men, including Pompey and Cicero. His works are now lost, but they proved a mine of information to later writers. The titles and subjects of more than twenty of them are known. In common with other Stoics of the middle period, he displayed syncretic tendencies, following not just the earlier Stoics, but making use of the works of Plato and Aristotle. A polymath as well as a philosopher, he took genuine interest in natural science, geography, natural history, mathematics and astronomy. He sought to determine the distance and magnitude of the Sun, to calculate the diameter of the Earth and the influence of the Moon on the tides. == Life == === Early life and education === Posidonius, nicknamed "the Athlete" (Ἀθλητής), was born around 135 BC into a Greek family in Apamea, a Hellenistic city on the river Orontes in northern Syria. As historian Philip Freeman puts it: "Posidonius was Greek to the core". Posidonius expressed no love for his native city, Apamea, in his writings, and he mocked its inhabitants.
As a young man he moved to Athens and studied under Panaetius, the leading Stoic philosopher of the age, and the last undisputed head (scholarch) of the Stoic school in Athens. When Panaetius died in 110 BC, Posidonius would have been around 25 years old. Rather than remain in Athens, he instead settled in Rhodes, and gained citizenship. In Rhodes, Posidonius maintained his own school which would become the leading institution of the time. === Travels === Around the 90s BC Posidonius embarked on a series of voyages around the Mediterranean gathering scientific data and observing the customs and people of the places he visited. He traveled in Greece, Hispania, Italy, Sicily, Dalmatia, Gaul, Liguria, North Africa, and on the eastern shores of the Adriatic. In Hispania, on the Atlantic coast at Gades (the modern Cadiz), Posidonius could observe tides much higher than in his native Mediterranean. He wrote that daily tides are related to the Moon's orbit, while tidal heights vary with the cycles of the Moon, and he hypothesized about yearly tidal cycles synchronized with the equinoxes and solstices. In Gaul, he studied the Celts. He left descriptions of customs such as nailing skulls to doorways as trophies, which he witnessed, and vivid legends told to him by the Celts, such as a story that in the past, men were paid to allow their throats to be slit for public amusement. But he noted that the Celts honored the Druids, whom Posidonius saw as philosophers, and concluded that, even among the barbaric, "pride and passion give way to wisdom, and Ares stands in awe of the Muses." Posidonius wrote a geographic treatise on the lands of the Celts which has since been lost, but which is referred to extensively (both directly and otherwise) in the works of Diodorus of Sicily, Strabo, Caesar and Tacitus' Germania. === Political offices === In Rhodes, Posidonius actively took part in political life, and he attained high office when he was appointed as one of the Prytaneis. 
This was the most important political office in Rhodes, combining presidential and executive functions; five (or possibly six) men held it jointly for a six-month term. He was chosen for at least one embassy to Rome, in 87/86 BC, during the Marian and Sullan era. Although the purpose of the embassy is unknown, this was at the time of the First Mithridatic War, when Roman rule over the Greek cities was being challenged by Mithridates VI of Pontus and the political situation was delicate. === The Stoic school on Rhodes === Under Posidonius, Rhodes eclipsed Athens to become the new centre for Stoic philosophy in the 1st century BC. This process may already have begun under Panaetius, who was a native of Rhodes and may have fostered a school there. Ian Kidd remarks that Rhodes "was attractive, not only as an independent city, commercially prosperous, go-ahead and with easy links of movement in all directions, but because it was welcoming to intellectuals, for it already had a strong reputation particularly for scientific research from men like Hipparchus." Although little is known of the organization of his school, it is clear that Posidonius had a steady stream of Greek and Roman students, as demonstrated by the eminent Romans who visited it. Pompey sat in on a lecture in 66 BC and did so again in 62 BC on return from campaigning in the East. On this latter occasion the subject of the lecture was "There is no good but moral good". Posidonius was probably in his seventies at this time and was suffering from gout. He illustrated the theme of his lecture by pointing to his painful leg and declaring "It is no good, pain; bothersome you may be, but you will never persuade me that you are an evil." When Cicero was in his late twenties, he attended a course of Posidonius' lectures, and later invited Posidonius to write a monograph on Cicero's own consulship (Posidonius politely refused).
In his later writings Cicero repeatedly refers to Posidonius as "my teacher" and "my dear friend". Posidonius died in his eighties in 51 BC; his grandson, Jason of Nysa, succeeded him as head of the school on Rhodes. == Partial scope of writings == Posidonius was celebrated as a polymath throughout the Graeco-Roman world because he came near to mastering all the knowledge of his time, similar to Aristotle and Eratosthenes. He attempted to create a unified system for understanding the human intellect and the universe which would provide an explanation of and a guide for human behavior. Posidonius wrote on physics (including meteorology and physical geography), astronomy, astrology and divination, seismology, geology and mineralogy, hydrology, botany, ethics, logic, mathematics, history, natural history, anthropology, and tactics. His studies were major investigations into their subjects, although not without errors. None of his works survives intact. All that have been found are fragments, although the titles and subjects of many of his books are known. Writers such as Strabo and Seneca provide most of the information about his life and works. == Philosophy == For Posidonius, philosophy was the dominant master art and all the individual sciences were subordinate to philosophy, which alone could explain the cosmos. All his works, from scientific to historical, were inseparably philosophical. He accepted the Stoic categorization of philosophy into physics (natural philosophy, including metaphysics and theology), logic (including dialectic), and ethics. These three categories for him were, in Stoic fashion, inseparable and interdependent parts of an organic, natural whole. He compared them to a living being, with physics the flesh and blood, logic the bones and tendons holding the organism together, and finally ethics—the most important part—corresponding to the soul. Although a firm Stoic, Posidonius was syncretic like Panaetius and other Stoics of the middle period. 
He followed not only the earlier Stoics, but made use of the writings of Plato and Aristotle. Posidonius studied Plato's Timaeus, and seems to have written a commentary on it emphasizing its Pythagorean features. As a creative philosopher, Posidonius would however be expected to create innovations within the tradition of the philosophical school to which he belonged. David Sedley remarks: On the vast majority of philosophical issues, what we know of both Panaetius and Posidonius places them firmly within the main current of Stoic debate. Their innovatively hospitable attitude to Plato and Aristotle enables them to enrich and, to a limited extent, reorientate their inherited Stoicism, but, for all that, they remain palpably Stoics, working within the established tradition. === Ethics === Ethics, Posidonius taught, is about practice not just theory. It involves knowledge of both the human and the divine, and a knowledge of the universe to which human reason is related. It was once the general view that Posidonius departed from the monistic psychology of the earlier Stoics. Chrysippus had written a work called On Passions in which he affirmed that reason and emotion were not separate and distinct faculties, and that destructive passions were instead rational impulses which were out-of-control. According to the testimony of Galen (an adherent of Plato), Posidonius wrote his own On Passions in which he instead adopted Plato's tripartition of the soul which taught that in addition to the rational faculties, the human soul had faculties that were spirited (anger, desires for power, possessions, etc.) and desiderative (desires for sex and food). Although Galen's testimony is still accepted by some, more recent scholarship argues that Galen may have exaggerated Posidonius' views for polemical effect, and that Posidonius may have been trying to clarify and expand on Chrysippus rather than oppose him. 
Other writers who knew the ethical works of Posidonius, including Cicero and Seneca, grouped Chrysippus and Posidonius together and saw no opposition between them. === Physics === The philosophical grand vision of Posidonius was that the universe itself was interconnected as an organic whole, providential and organised in all respects, from the development of the physical world to the behaviour of living creatures. Panaetius had doubted both the reality of divination and the Stoic doctrine of the future conflagration (ekpyrosis), but Posidonius wrote in favour of these ideas. As a Stoic, Posidonius was an advocate of cosmic "sympathy" (συμπάθεια, sympatheia)—the organic interrelation of all appearances in the world, from the sky to the Earth, as part of a rational design uniting humanity and all things in the universe. He believed valid predictions could be made from signs in nature—whether through astrology or prophetic dreams—as a kind of scientific prediction. === Mathematics === Posidonius was one of the first to attempt to prove Euclid's fifth postulate of geometry. He suggested changing the definition of parallel straight lines to an equivalent statement that would allow him to prove the fifth postulate. From there, Euclidean geometry could be restructured, placing the fifth postulate among the theorems instead. In addition to his writings on geometry, Posidonius was credited for creating some mathematical definitions, or for articulating views on technical terms, for example 'theorem' and 'problem'. === Astronomy and meteorology === Some fragments of his writings on astronomy survive through the treatise by Cleomedes, On the Circular Motions of the Celestial Bodies, the first chapter of the second book appearing to have been mostly copied from Posidonius. Posidonius advanced the theory that the Sun emanated a vital force that permeated the world. He attempted to measure the distance and size of the Sun. 
In about 90 BC, Posidonius estimated the distance from the Earth to the Sun (see astronomical unit) to be 9,893 times the Earth's radius. This was still too small by half. In measuring the size of the Sun, however, he reached a figure larger and more accurate than those proposed by other Greek astronomers, including Aristarchus of Samos. Posidonius also calculated the size and distance of the Moon. Posidonius constructed an orrery, possibly similar to the Antikythera mechanism. Posidonius's orrery, according to Cicero, exhibited the diurnal motions of the Sun, Moon, and the five known planets. Posidonius in his writings on meteorology followed Aristotle. He theorized on the causes of clouds, mist, wind, and rain, as well as frost, hail, lightning, and rainbows. He also estimated that the boundary between the clouds and the heavens lies about 40 stadia above the Earth. === Geography, ethnology, and geology === Posidonius's fame beyond specialized philosophical circles had begun, at the latest, in the 80s BC with the publication of the work "About the ocean and the adjacent areas". This work was not only an overall treatment of geographical questions according to current scientific knowledge, but it also served to popularize his theories about the internal connections of the world, showing how all the forces had an effect on each other and how this interconnectedness applied also to human life, to the political just as to the personal spheres. In this work, Posidonius detailed his theory of the effect of climate on a people's character, which included his "geography of the races". This theory was not solely scientific, but also had political implications: his Roman readers were informed that the climatic central position of Italy was an essential condition of the Roman destiny to dominate the world. As a Stoic, he did not, however, make a fundamental distinction between the civilized Romans as masters of the world and the less civilized peoples.
Posidonius's writings on the Jews were probably the source of Diodorus Siculus's account of the siege of Jerusalem and possibly also for Strabo's. Some of Posidonius's arguments are contested by Josephus in Against Apion. Like Pytheas, Posidonius believed the tide is caused by the Moon. Posidonius was, however, wrong about the cause. Thinking that the Moon was a mixture of air and fire, he attributed the cause of the tides to the heat of the Moon, hot enough to cause the water to swell but not hot enough to evaporate it. He recorded observations on both earthquakes and volcanoes, including accounts of the eruptions of the volcanoes in the Aeolian Islands, north of Sicily. ==== Earth's circumference ==== Posidonius calculated the Earth's circumference by the arc measurement method, by reference to the position of the star Canopus. As explained by Cleomedes, Posidonius observed Canopus on but never above the horizon at Rhodes, while at Alexandria he saw it ascend as far as 7½ degrees above the horizon (the meridian arc between the latitude of the two locales is actually 5 degrees 14 minutes). Since he thought Rhodes was 5,000 stadia due north of Alexandria, and the difference in the star's elevation indicated the distance between the two locales was 1/48 of the circle, he multiplied 5,000 stadia by 48 to arrive at a figure of 240,000 stadia for the circumference of the Earth. His estimate of the latitude difference of these two points, 360 degrees/48=7.5 degrees, is rather erroneous. (The modern value is approximately 5 degrees.) In addition, they are not quite on the same meridian as they were assumed to be. The longitude difference of the points, slightly less than 2 degrees, is not negligible compared with the latitude difference. Translating stadia into modern units of distance can be problematic, but it is generally thought that the stadion used by Posidonius was almost exactly 1/10 of a modern statute mile. 
Thus Posidonius's measure of 240,000 stadia translates to 24,000 mi (39,000 km) compared to the actual circumference of 24,901 mi (40,074 km). Posidonius was informed in his approach to finding the Earth's circumference by Eratosthenes, who a century earlier arrived at a figure of 252,000 stadia; both men's figures for the Earth's circumference were uncannily accurate. Strabo noted that the distance between Rhodes and Alexandria is 3,750 stadia, and reported Posidonius's estimate of the Earth's circumference to be 180,000 stadia or 18,000 mi (29,000 km). Pliny the Elder mentions Posidonius among his sources and without naming him reported his method for estimating the Earth's circumference. He noted, however, that Hipparchus had added some 26,000 stadia to Eratosthenes's estimate. The smaller value offered by Strabo and the different lengths of Greek and Roman stadia have created a persistent confusion around Posidonius's result. Ptolemy used Posidonius's lower value of 180,000 stades (about 33% too low) for the Earth's circumference in his Geography. This was the number used by Christopher Columbus to underestimate the distance to India as 70,000 stades. === History and tactics === In his Histories, Posidonius continued the World History of Polybius. His history of the period 146–88 BC is said to have filled 52 volumes. His Histories continue the account of the rise and expansion of Roman dominance, which he appears to have supported. Posidonius did not follow Polybius's more detached and factual style, for Posidonius saw events as caused by human psychology; while he understood human passions and follies, he did not pardon or excuse them in his historical writing, using his narrative skill in fact to enlist the readers' approval or condemnation. For Posidonius "history" extended beyond the earth into the sky; humanity was not isolated each in its own political history, but was a part of the cosmos. 
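Posidonius's arc-measurement arithmetic as reported above can be replayed directly. The figures are taken from the text, and the 1/10-statute-mile stadion is the conventional assumption already noted:

```python
# Posidonius's arc-measurement of the Earth's circumference.
elevation_deg = 7.5                       # Canopus's elevation at Alexandria
fraction_of_circle = elevation_deg / 360  # = 1/48 of a full circle

rhodes_alexandria_stadia = 5_000          # assumed north-south distance
circumference_stadia = rhodes_alexandria_stadia * 360 / elevation_deg
print(circumference_stadia)               # 240000.0 stadia (= 5,000 × 48)

# With the commonly assumed stadion of about 1/10 statute mile:
miles = circumference_stadia / 10
print(miles)                              # 24000.0 mi, vs the actual ~24,901 mi
```

Note that the accuracy is a coincidence of compensating errors: the latitude difference (7.5° vs the true 5°14′) and the Rhodes–Alexandria distance were both wrong, in opposite directions.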
His Histories were not, therefore, concerned with isolated political history of peoples and individuals, but they included discussions of all forces and factors (geographical factors, mineral resources, climate, nutrition) that enable humans to act and form part of their environment. For example, Posidonius considered the climate of Arabia and the life-giving strength of the sun, tides (taken from his book on the oceans), and climatic theory to explain people's ethnic or national characters. Of Posidonius's work on tactics, The Art of War, the Greek historian Arrian complained that it was written 'for experts', which suggests that Posidonius may have had first-hand experience of military leadership or, perhaps, used knowledge he gained from his acquaintance with Pompey. == Reputation and influence == In his own era, his writings on almost all the principal divisions of philosophy made Posidonius a renowned international figure throughout the Graeco-Roman world and he was widely cited by writers of his era, including Cicero, Livy, Plutarch, Strabo (who called Posidonius "the most learned of all philosophers of my time"), Cleomedes, Seneca the Younger, Diodorus Siculus (who used Posidonius as a source for his Bibliotheca historica ["Historical Library"]), and others. Although his ornate and rhetorical style of writing passed out of fashion soon after his death, Posidonius was acclaimed during his life for his literary ability and as a stylist. Posidonius was the major source of materials on the Celts of Gaul and was profusely quoted by Timagenes, Julius Caesar, the Sicilian Greek Diodorus Siculus, and the Greek geographer Strabo. Posidonius appears to have moved with ease among the upper echelons of Roman society as an ambassador from Rhodes. He associated with some of the leading figures of late republican Rome, including Cicero and Pompey, both of whom visited him in Rhodes. In his twenties, Cicero attended his lectures (77 BC) and they continued to correspond.
Cicero in his De Finibus closely followed Posidonius's presentation of Panaetius's ethical teachings. Posidonius met Pompey when he was Rhodes's ambassador in Rome and Pompey visited him in Rhodes twice, once in 66 BC during his campaign against the pirates and again in 62 BC during his eastern campaigns, and asked Posidonius to write his biography. As a gesture of respect and great honor, Pompey lowered his fasces before Posidonius's door. Other Romans who visited Posidonius in Rhodes were Velleius, Cotta, and Lucilius. Ptolemy was impressed by the sophistication of Posidonius's methods, which included correcting for the refraction of light passing through denser air near the horizon. Ptolemy's approval of Posidonius's result, rather than Eratosthenes's earlier and more correct figure, caused it to become the accepted value for the Earth's circumference for the next 1,500 years. Posidonius fortified the Stoicism of the middle period with contemporary learning. Next to his teacher Panaetius, he did most, by writings and personal contacts, to spread Stoicism in the Roman world. A century later, Seneca referred to Posidonius as one of those who had made the largest contribution to philosophy. His influence on Greek philosophical thinking lasted until the Middle Ages, as is demonstrated by the large number of times he is cited as a source in the Suda (a 10th-century Byzantine encyclopedia). Wilhelm Capelle traced most of the doctrines of the popular philosophic treatise De Mundo to Posidonius. Today, Posidonius seems to be recognized as having had an inquiring and wide-ranging mind, not entirely original, but with a breadth of view that connected, in accordance with his underlying Stoic philosophy, all things and their causes and all knowledge into an overarching, unified world view. The crater Posidonius on the Moon is named after him. == See also == Historic recurrence Twin study == References == == Editions and translations == Kidd, I. G.; Edelstein, Ludwig (1972). 
Posidonius. I. The Fragments. Cambridge University Press. ISBN 0521080460. Kidd, I. G. (1988). Posidonius. II. The Commentary. Cambridge University Press. ISBN 0521200628. Kidd, I. G. (1999). Posidonius. III. The Translation of the Fragments. Cambridge University Press. ISBN 0521622581. == Sources == Chisholm, Hugh, ed. (1911). "Posidonius". Encyclopædia Britannica. Vol. 22 (11th ed.). Cambridge University Press. p. 172. Bevan, Edwyn. Stoics and Skeptics, 1913. ISBN 0890053642. Graver, Margaret (2002). "Appendix D: Posidonius". Cicero on the Emotions. Tusculan Disputations 3 and 4. University of Chicago Press. ISBN 0226305783. Harley, J. B. & Woodward, David. The History of Cartography, Volume 1: Cartography in Prehistoric, Ancient, and Medieval Europe and the Mediterranean, 1987, pp. 168–170. ISBN 0226316335 (v. 1). Malitz, Juergen. "Poseidonios". In Kai Brodersen (ed.), Grosse Gestalten der griechischen Antike. 58 historische Portraits von Homer bis Kleopatra. Munich: Verlag C.H. Beck. pp. 426–432. Sedley, David (2003). "The School, from Zeno to Arius Didymus". In Inwood, Brad (ed.). The Cambridge Companion to the Stoics. Cambridge University Press. ISBN 0521779855. Sellars, John (2006). Stoicism. Acumen. ISBN 978-1844650521. Magill, Frank Northen; Aves, Alison (1998). Dictionary of World Biography. Taylor & Francis. pp. 904–910. ISBN 978-1579580407. Retrieved 28 May 2013. == Further reading == Freeman, Phillip, The Philosopher and the Druids: A Journey Among The Ancient Celts, Simon and Schuster, 2006. Hall, J.J. (2023). The meteorology of Posidonius. London; New York: Routledge. ISBN 9780367023720. Holiday, Ryan; Hanselman, Stephen (2020). "Posidonius the Genius". Lives of the Stoics. New York: Portfolio/Penguin. pp. 98–107. ISBN 978-0525541875. Irvine, William B. (2008) A Guide to the Good Life: The Ancient Art of Stoic Joy, Oxford University Press.
ISBN 978-0195374612 – Discussion of his work and influence == External links == Posidonius of Rhodes (MacTutor History of Mathematics Archive) Poseidonius: English translations of fragments about history and geography |
Wikipedia:Positive element#0 | In mathematics, an element of a *-algebra is called positive if it is a finite sum of elements of the form a ∗ a {\displaystyle a^{*}a} . == Definition == Let A {\displaystyle {\mathcal {A}}} be a *-algebra. An element a ∈ A {\displaystyle a\in {\mathcal {A}}} is called positive if there are finitely many elements a k ∈ A ( k = 1 , 2 , … , n ) {\displaystyle a_{k}\in {\mathcal {A}}\;(k=1,2,\ldots ,n)} such that a = ∑ k = 1 n a k ∗ a k {\textstyle a=\sum _{k=1}^{n}a_{k}^{*}a_{k}} holds. This is also denoted by a ≥ 0 {\displaystyle a\geq 0} . The set of positive elements is denoted by A + {\displaystyle {\mathcal {A}}_{+}} . A special case of particular importance is the case where A {\displaystyle {\mathcal {A}}} is a complete normed *-algebra that satisfies the C*-identity ( ‖ a ∗ a ‖ = ‖ a ‖ 2 ∀ a ∈ A {\displaystyle \left\|a^{*}a\right\|=\left\|a\right\|^{2}\ \forall a\in {\mathcal {A}}} ), which is called a C*-algebra. == Examples == The unit element e {\displaystyle e} of a unital *-algebra is positive. For each element a ∈ A {\displaystyle a\in {\mathcal {A}}} , the elements a ∗ a {\displaystyle a^{*}a} and a a ∗ {\displaystyle aa^{*}} are positive by definition. In case A {\displaystyle {\mathcal {A}}} is a C*-algebra, the following holds: If a ∈ A N {\displaystyle a\in {\mathcal {A}}_{N}} is a normal element, then for every function f ≥ 0 {\displaystyle f\geq 0} that is continuous on the spectrum of a {\displaystyle a} , the continuous functional calculus defines a positive element f ( a ) {\displaystyle f(a)} . Every projection, i.e. every element a ∈ A {\displaystyle a\in {\mathcal {A}}} for which a = a ∗ = a 2 {\displaystyle a=a^{*}=a^{2}} holds, is positive. For the spectrum σ ( a ) {\displaystyle \sigma (a)} of such an idempotent element, σ ( a ) ⊆ { 0 , 1 } {\displaystyle \sigma (a)\subseteq \{0,1\}} holds, as can be seen from the continuous functional calculus.
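These examples can be made concrete in the C*-algebra of complex n × n matrices, where the positive elements are exactly the positive-semidefinite matrices. A small NumPy sketch (the specific matrices are arbitrary choices for illustration, not from the article):

```python
import numpy as np

# In the C*-algebra of complex matrices, a*a corresponds to A.conj().T @ A,
# which is always positive semidefinite (spectrum contained in [0, inf)).
A = np.array([[1.0, 2.0], [3.0, -1.0]])
pos = A.conj().T @ A
eigvals = np.linalg.eigvalsh(pos)
assert np.all(eigvals >= -1e-12)  # spectrum lies in [0, infinity)

# A projection (a = a* = a^2) is positive, with spectrum inside {0, 1}.
P = np.array([[0.5, 0.5], [0.5, 0.5]])  # orthogonal projection onto span{(1,1)}
assert np.allclose(P, P.conj().T) and np.allclose(P, P @ P)
spec = np.linalg.eigvalsh(P)
assert np.allclose(np.sort(spec), [0.0, 1.0])
```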
== Criteria == Let A {\displaystyle {\mathcal {A}}} be a C*-algebra and a ∈ A {\displaystyle a\in {\mathcal {A}}} . Then the following are equivalent: The element a {\displaystyle a} is normal and its spectrum satisfies σ ( a ) ⊆ [ 0 , ∞ ) {\displaystyle \sigma (a)\subseteq [0,\infty )} . There exists an element b ∈ A {\displaystyle b\in {\mathcal {A}}} such that a = b b ∗ {\displaystyle a=bb^{*}} . There exists a (unique) self-adjoint element c ∈ A s a {\displaystyle c\in {\mathcal {A}}_{sa}} such that a = c 2 {\displaystyle a=c^{2}} . If A {\displaystyle {\mathcal {A}}} is a unital *-algebra with unit element e {\displaystyle e} , then in addition the following statements are equivalent: ‖ t e − a ‖ ≤ t {\displaystyle \left\|te-a\right\|\leq t} for every t ≥ ‖ a ‖ {\displaystyle t\geq \left\|a\right\|} and a {\displaystyle a} is a self-adjoint element. ‖ t e − a ‖ ≤ t {\displaystyle \left\|te-a\right\|\leq t} for some t ≥ ‖ a ‖ {\displaystyle t\geq \left\|a\right\|} and a {\displaystyle a} is a self-adjoint element. == Properties == === In *-algebras === Let A {\displaystyle {\mathcal {A}}} be a *-algebra. Then: If a ∈ A + {\displaystyle a\in {\mathcal {A}}_{+}} is a positive element, then a {\displaystyle a} is self-adjoint. The set of positive elements A + {\displaystyle {\mathcal {A}}_{+}} is a convex cone in the real vector space of the self-adjoint elements A s a {\displaystyle {\mathcal {A}}_{sa}} . This means that α a , a + b ∈ A + {\displaystyle \alpha a,a+b\in {\mathcal {A}}_{+}} holds for all a , b ∈ A + {\displaystyle a,b\in {\mathcal {A}}_{+}} and α ∈ [ 0 , ∞ ) {\displaystyle \alpha \in [0,\infty )} . If a ∈ A + {\displaystyle a\in {\mathcal {A}}_{+}} is a positive element, then b ∗ a b {\displaystyle b^{*}ab} is also positive for every element b ∈ A {\displaystyle b\in {\mathcal {A}}} .
For the linear span of A + {\displaystyle {\mathcal {A}}_{+}} the following holds: ⟨ A + ⟩ = A 2 {\displaystyle \langle {\mathcal {A}}_{+}\rangle ={\mathcal {A}}^{2}} and A + − A + = A s a ∩ A 2 {\displaystyle {\mathcal {A}}_{+}-{\mathcal {A}}_{+}={\mathcal {A}}_{sa}\cap {\mathcal {A}}^{2}} . === In C*-algebras === Let A {\displaystyle {\mathcal {A}}} be a C*-algebra. Then: Using the continuous functional calculus, for every a ∈ A + {\displaystyle a\in {\mathcal {A}}_{+}} and n ∈ N {\displaystyle n\in \mathbb {N} } there is a uniquely determined b ∈ A + {\displaystyle b\in {\mathcal {A}}_{+}} that satisfies b n = a {\displaystyle b^{n}=a} , i.e. a unique n {\displaystyle n} -th root. In particular, a square root exists for every positive element. Since for every b ∈ A {\displaystyle b\in {\mathcal {A}}} the element b ∗ b {\displaystyle b^{*}b} is positive, this allows the definition of a unique absolute value: | b | = ( b ∗ b ) 1 2 {\textstyle |b|=(b^{*}b)^{\frac {1}{2}}} . For every a ∈ A + {\displaystyle a\in {\mathcal {A}}_{+}} and every real number α ≥ 0 {\displaystyle \alpha \geq 0} there is a positive element a α ∈ A + {\displaystyle a^{\alpha }\in {\mathcal {A}}_{+}} for which a α a β = a α + β {\displaystyle a^{\alpha }a^{\beta }=a^{\alpha +\beta }} holds for all β ∈ [ 0 , ∞ ) {\displaystyle \beta \in [0,\infty )} . The mapping α ↦ a α {\displaystyle \alpha \mapsto a^{\alpha }} is continuous. Negative values for α {\displaystyle \alpha } are also possible for invertible elements a {\displaystyle a} . Products of commuting positive elements are also positive. So if a b = b a {\displaystyle ab=ba} holds for positive a , b ∈ A + {\displaystyle a,b\in {\mathcal {A}}_{+}} , then a b ∈ A + {\displaystyle ab\in {\mathcal {A}}_{+}} . Each element a ∈ A {\displaystyle a\in {\mathcal {A}}} can be uniquely represented as a linear combination of four positive elements.
To do this, a {\displaystyle a} is first decomposed into the self-adjoint real and imaginary parts and these are then decomposed into positive and negative parts using the continuous functional calculus. Indeed, A s a = A + − A + {\displaystyle {\mathcal {A}}_{sa}={\mathcal {A}}_{+}-{\mathcal {A}}_{+}} holds, since A 2 = A {\displaystyle {\mathcal {A}}^{2}={\mathcal {A}}} . If both a {\displaystyle a} and − a {\displaystyle -a} are positive, then a = 0 {\displaystyle a=0} holds. If B {\displaystyle {\mathcal {B}}} is a C*-subalgebra of A {\displaystyle {\mathcal {A}}} , then B + = B ∩ A + {\displaystyle {\mathcal {B}}_{+}={\mathcal {B}}\cap {\mathcal {A}}_{+}} . If B {\displaystyle {\mathcal {B}}} is another C*-algebra and Φ {\displaystyle \Phi } is a *-homomorphism from A {\displaystyle {\mathcal {A}}} to B {\displaystyle {\mathcal {B}}} , then Φ ( A + ) = Φ ( A ) ∩ B + {\displaystyle \Phi ({\mathcal {A}}_{+})=\Phi ({\mathcal {A}})\cap {\mathcal {B}}_{+}} holds. If a , b ∈ A + {\displaystyle a,b\in {\mathcal {A}}_{+}} are positive elements for which a b = 0 {\displaystyle ab=0} , they commute and ‖ a + b ‖ = max ( ‖ a ‖ , ‖ b ‖ ) {\displaystyle \left\|a+b\right\|=\max(\left\|a\right\|,\left\|b\right\|)} holds. Such elements are called orthogonal and one writes a ⊥ b {\displaystyle a\bot b} . == Partial order == Let A {\displaystyle {\mathcal {A}}} be a *-algebra. The property of being a positive element defines a translation-invariant partial order on the set of self-adjoint elements A s a {\displaystyle {\mathcal {A}}_{sa}} . If b − a ∈ A + {\displaystyle b-a\in {\mathcal {A}}_{+}} holds for a , b ∈ A {\displaystyle a,b\in {\mathcal {A}}} , one writes a ≤ b {\displaystyle a\leq b} or b ≥ a {\displaystyle b\geq a} .
This partial order fulfills the properties t a ≤ t b {\displaystyle ta\leq tb} and a + c ≤ b + c {\displaystyle a+c\leq b+c} for all a , b , c ∈ A s a {\displaystyle a,b,c\in {\mathcal {A}}_{sa}} with a ≤ b {\displaystyle a\leq b} and t ∈ [ 0 , ∞ ) {\displaystyle t\in [0,\infty )} . If A {\displaystyle {\mathcal {A}}} is a C*-algebra, the partial order also has the following properties for a , b ∈ A {\displaystyle a,b\in {\mathcal {A}}} : If a ≤ b {\displaystyle a\leq b} holds, then c ∗ a c ≤ c ∗ b c {\displaystyle c^{*}ac\leq c^{*}bc} is true for every c ∈ A {\displaystyle c\in {\mathcal {A}}} . If c ∈ A + {\displaystyle c\in {\mathcal {A}}_{+}} commutes with a {\displaystyle a} and b {\displaystyle b} , then even a c ≤ b c {\displaystyle ac\leq bc} holds. If − b ≤ a ≤ b {\displaystyle -b\leq a\leq b} holds, then ‖ a ‖ ≤ ‖ b ‖ {\displaystyle \left\|a\right\|\leq \left\|b\right\|} . If 0 ≤ a ≤ b {\displaystyle 0\leq a\leq b} holds, then a α ≤ b α {\textstyle a^{\alpha }\leq b^{\alpha }} holds for all real numbers 0 < α ≤ 1 {\displaystyle 0<\alpha \leq 1} . If a {\displaystyle a} is invertible and 0 ≤ a ≤ b {\displaystyle 0\leq a\leq b} holds, then b {\displaystyle b} is invertible and the inverses satisfy b − 1 ≤ a − 1 {\displaystyle b^{-1}\leq a^{-1}} . == See also == Nonnegative matrix Positive operator (Hilbert space) == Citations == === References === === Bibliography ===
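For the matrix C*-algebra, the square root, absolute value, and partial order described above can all be checked numerically. The sketch below (with arbitrarily chosen matrices, as an illustration only) computes the unique positive square root via the spectral decomposition, which is the matrix form of the continuous functional calculus:

```python
import numpy as np

def positive_sqrt(a):
    """Unique positive square root of a positive-semidefinite matrix,
    computed via the spectral theorem (matrix functional calculus)."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)          # guard against tiny negative round-off
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

b = np.array([[1.0, 2.0], [0.0, 1.0]])
a = b.conj().T @ b                      # a = b*b is positive
c = positive_sqrt(a)
assert np.allclose(c @ c, a)            # c^2 = a
assert np.allclose(c, c.conj().T)       # c is self-adjoint

# |b| = (b*b)^(1/2), the absolute value of b
abs_b = positive_sqrt(b.conj().T @ b)
assert np.allclose(abs_b @ abs_b, b.conj().T @ b)

# Partial order: a <= b means b - a is positive semidefinite.
def leq(x, y):
    return bool(np.all(np.linalg.eigvalsh(y - x) >= -1e-12))

assert leq(np.zeros((2, 2)), a)         # 0 <= a
```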
Wikipedia:Positively invariant set#0 | In mathematical analysis, a positively (or positive) invariant set is a set with the following properties: Suppose x ˙ = f ( x ) {\displaystyle {\dot {x}}=f(x)} is a dynamical system, x ( t , x 0 ) {\displaystyle x(t,x_{0})} is a trajectory, and x 0 {\displaystyle x_{0}} is the initial point. Let O := { x ∈ R n ∣ φ ( x ) = 0 } {\displaystyle {\mathcal {O}}:=\left\lbrace x\in \mathbb {R} ^{n}\mid \varphi (x)=0\right\rbrace } where φ {\displaystyle \varphi } is a real-valued function. The set O {\displaystyle {\mathcal {O}}} is said to be positively invariant if x 0 ∈ O {\displaystyle x_{0}\in {\mathcal {O}}} implies that x ( t , x 0 ) ∈ O ∀ t ≥ 0 {\displaystyle x(t,x_{0})\in {\mathcal {O}}\ \forall \ t\geq 0} . In other words, once a trajectory of the system enters O {\displaystyle {\mathcal {O}}} , it will never leave it again. == References == Francesco Borrelli. A. Benzaouia, Saturated Switching Systems, Chapter I, Definition I. Springer, 2012. ISBN 978-1-4471-2900-4.
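As a concrete illustration (a hand-picked example, not from the article, and stated for an invariant interval rather than the zero level set used in the definition above): for the logistic system ẋ = x(1 − x), the interval [0, 1] is positively invariant, which a crude forward-Euler simulation can suggest:

```python
import numpy as np

# Forward-Euler simulation of xdot = x*(1 - x).  Claim: the interval [0, 1]
# is positively invariant -- a trajectory starting inside never leaves.
# (The example system, step size, and horizon are assumptions of this sketch.)

def simulate(x0, dt=1e-3, steps=20_000):
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x + dt * x * (1.0 - x)
        traj.append(x)
    return np.array(traj)

for x0 in (0.0, 0.2, 0.9, 1.0):
    traj = simulate(x0)
    assert np.all((traj >= 0.0) & (traj <= 1.0)), f"left [0,1] from x0={x0}"

# The boundary points are equilibria: f(0) = f(1) = 0, so the vector field
# never pushes a trajectory across the boundary of [0, 1].
```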
Wikipedia:Posner's theorem#0 | In algebra, Posner's theorem states that given a prime polynomial identity algebra A with center Z, the ring A ⊗ Z Z ( 0 ) {\displaystyle A\otimes _{Z}Z_{(0)}} is a central simple algebra over Z ( 0 ) {\displaystyle Z_{(0)}} , the field of fractions of Z. It is named after Ed Posner. == Notes == == References == Artin, Michael (1999). "Noncommutative Rings" (PDF). Chapter V. Formanek, Edward (1991). The polynomial identities and invariants of n×n matrices. Regional Conference Series in Mathematics. Vol. 78. Providence, RI: American Mathematical Society. ISBN 0-8218-0730-7. Zbl 0714.16001. Edward C. Posner, Prime rings satisfying a polynomial identity, Proc. Amer. Math. Soc. 11 (1960), pp. 180–183. doi:10.2307/2032951 |
Wikipedia:Posynomial#0 | A posynomial, also known as a posinomial in some literature, is a function of the form f ( x 1 , x 2 , … , x n ) = ∑ k = 1 K c k x 1 a 1 k ⋯ x n a n k {\displaystyle f(x_{1},x_{2},\dots ,x_{n})=\sum _{k=1}^{K}c_{k}x_{1}^{a_{1k}}\cdots x_{n}^{a_{nk}}} where all the coordinates x i {\displaystyle x_{i}} and coefficients c k {\displaystyle c_{k}} are positive real numbers, and the exponents a i k {\displaystyle a_{ik}} are real numbers. Posynomials are closed under addition, multiplication, and nonnegative scaling. For example, f ( x 1 , x 2 , x 3 ) = 2.7 x 1 2 x 2 − 1 / 3 x 3 0.7 + 2 x 1 − 4 x 3 2 / 5 {\displaystyle f(x_{1},x_{2},x_{3})=2.7x_{1}^{2}x_{2}^{-1/3}x_{3}^{0.7}+2x_{1}^{-4}x_{3}^{2/5}} is a posynomial. Posynomials are not the same as polynomials in several independent variables. A polynomial's exponents must be non-negative integers, but its independent variables and coefficients can be arbitrary real numbers; on the other hand, a posynomial's exponents can be arbitrary real numbers, but its independent variables and coefficients must be positive real numbers. This terminology was introduced by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener in their seminal book on geometric programming. Posynomials are a special case of signomials, the latter not having the restriction that the c k {\displaystyle c_{k}} be positive. == References == Richard J. Duffin; Elmor L. Peterson; Clarence Zener (1967). Geometric Programming. John Wiley and Sons. p. 278. ISBN 0-471-22370-0. Stephen P Boyd; Lieven Vandenberghe (2004). Convex optimization. Cambridge University Press. ISBN 0-521-83378-7. Harvir Singh Kasana; Krishna Dev Kumar (2004). Introductory Operations Research: Theory and Applications. Springer. ISBN 3-540-40138-5. Weinstock, D.; Appelbaum, J. (2004). "Optimal solar field design of stationary collectors". Journal of Solar Energy Engineering. 126 (3): 898–905. doi:10.1115/1.1756137. == External links == S. Boyd, S. J. Kim, L. 
Vandenberghe, and A. Hassibi, A Tutorial on Geometric Programming |
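The definition and the closure properties above can be checked numerically. The sketch below (an illustrative representation, not taken from the cited texts) encodes a posynomial as a list of (coefficient, exponent-vector) terms and verifies closure under multiplication for the example given in the article:

```python
import itertools

# A posynomial as a list of (c_k, (a_1k, ..., a_nk)) terms with c_k > 0.
def evaluate(posy, x):
    assert all(xi > 0 for xi in x), "posynomial variables must be positive"
    total = 0.0
    for c, exps in posy:
        assert c > 0, "posynomial coefficients must be positive"
        term = c
        for xi, a in zip(x, exps):
            term *= xi ** a          # exponents may be arbitrary reals
        total += term
    return total

# f(x1,x2,x3) = 2.7 x1^2 x2^(-1/3) x3^0.7 + 2 x1^(-4) x3^(2/5)
f = [(2.7, (2, -1/3, 0.7)), (2.0, (-4, 0, 2/5))]
print(evaluate(f, (1.0, 1.0, 1.0)))   # 4.7  (every power of 1 is 1)

# Closure under multiplication: the product of two posynomials expands,
# by distributivity, into a sum of positive-coefficient monomials.
def multiply(p, q):
    return [(cp * cq, tuple(a + b for a, b in zip(ep, eq)))
            for (cp, ep), (cq, eq) in itertools.product(p, q)]

g = multiply(f, f)
assert all(c > 0 for c, _ in g)       # still a posynomial
```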
Wikipedia:Poul Heegaard#0 | Poul Heegaard (Danish: [ˈhe̝ˀˌkɒˀ] ; November 2, 1871, Copenhagen – February 7, 1948, Oslo) was a Danish mathematician active in the field of topology. His 1898 thesis introduced a concept now called the Heegaard splitting of a 3-manifold. Heegaard's ideas allowed him to make a careful critique of the work of Henri Poincaré: Poincaré had overlooked the possibility of the appearance of torsion in the homology groups of a space. He later co-authored, with Max Dehn, a foundational article on combinatorial topology, in the form of an encyclopedia entry. Heegaard studied mathematics at the University of Copenhagen from 1889 to 1893. Following years of travelling and teaching mathematics, he was appointed professor at the University of Copenhagen in 1910. An English translation of his 1898 thesis, which laid a rigorous topological foundation for modern knot theory, may be found at https://www.maths.ed.ac.uk/~v1ranick/papers/heegaardenglish.pdf. The section on "a visually transparent representation of the complex points of an algebraic surface" is especially important. Following a dispute with the faculty over, among other things, the hiring of Harald Bohr as professor at the University (which Heegaard opposed), Heegaard accepted a professorship at Oslo in Norway, where he worked until his retirement in 1941. == Notes == == External links == Works by or about Poul Heegaard at the Internet Archive Heegaard, Poul (1898), Forstudier til en topologisk Teori for de algebraiske Fladers Sammenhang (PDF), Thesis (in Danish), JFM 29.0417.02 O'Connor, John J.; Robertson, Edmund F., "Poul Heegaard", MacTutor History of Mathematics Archive, University of St Andrews "Heegaard home page"
Wikipedia:Power rule#0 | In calculus, the power rule is used to differentiate functions of the form f ( x ) = x r {\displaystyle f(x)=x^{r}} , whenever r {\displaystyle r} is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series to a function's derivatives. == Statement of the power rule == Let f {\displaystyle f} be a function satisfying f ( x ) = x r {\displaystyle f(x)=x^{r}} for all x {\displaystyle x} , where r ∈ R {\displaystyle r\in \mathbb {R} } . Then, f ′ ( x ) = r x r − 1 . {\displaystyle f'(x)=rx^{r-1}\,.} The power rule for integration states that ∫ x r d x = x r + 1 r + 1 + C {\displaystyle \int \!x^{r}\,dx={\frac {x^{r+1}}{r+1}}+C} for any real number r ≠ − 1 {\displaystyle r\neq -1} . It can be derived by inverting the power rule for differentiation. In this equation C is any constant. == Proofs == === Proof for real exponents === Let f ( x ) = x r {\displaystyle f(x)=x^{r}} , where r {\displaystyle r} is any real number. First note that if f ( x ) = e x {\displaystyle f(x)=e^{x}} , then ln ( f ( x ) ) = x {\displaystyle \ln(f(x))=x} , where ln {\displaystyle \ln } is the natural logarithm function; differentiating both sides gives f ′ ( x ) / f ( x ) = 1 {\displaystyle f'(x)/f(x)=1} , so f ′ ( x ) = f ( x ) = e x {\displaystyle f'(x)=f(x)=e^{x}} , as was required. Therefore, applying the chain rule to f ( x ) = e r ln x {\displaystyle f(x)=e^{r\ln x}} , we see that f ′ ( x ) = r x e r ln x = r x x r {\displaystyle f'(x)={\frac {r}{x}}e^{r\ln x}={\frac {r}{x}}x^{r}} which simplifies to r x r − 1 {\displaystyle rx^{r-1}} . When x < 0 {\displaystyle x<0} , we may use the same definition with x r = ( ( − 1 ) ( − x ) ) r = ( − 1 ) r ( − x ) r {\displaystyle x^{r}=((-1)(-x))^{r}=(-1)^{r}(-x)^{r}} , where we now have − x > 0 {\displaystyle -x>0} . This necessarily leads to the same result.
Note that because ( − 1 ) r {\displaystyle (-1)^{r}} does not have a conventional definition when r {\displaystyle r} is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms). Finally, whenever the function is differentiable at x = 0 {\displaystyle x=0} , the defining limit for the derivative is: lim h → 0 h r − 0 r h {\displaystyle \lim _{h\to 0}{\frac {h^{r}-0^{r}}{h}}} which yields 0 only when r {\displaystyle r} is a rational number with odd denominator (in lowest terms) and r > 1 {\displaystyle r>1} , and 1 when r = 1 {\displaystyle r=1} . For all other values of r {\displaystyle r} , the expression h r {\displaystyle h^{r}} is not well-defined for h < 0 {\displaystyle h<0} , as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made. The exclusion of the expression 0 0 {\displaystyle 0^{0}} (the case x = 0 {\displaystyle x=0} ) from our scheme of exponentiation is due to the fact that the function f ( x , y ) = x y {\displaystyle f(x,y)=x^{y}} has no limit at (0,0), since x 0 {\displaystyle x^{0}} approaches 1 as x approaches 0, while 0 y {\displaystyle 0^{y}} approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally left undefined. === Proofs for integer exponents === ==== Proof by induction (natural numbers) ==== Let n ∈ N {\displaystyle n\in \mathbb {N} } . It is required to prove that d d x x n = n x n − 1 . 
{\displaystyle {\frac {d}{dx}}x^{n}=nx^{n-1}.} The base case may be when n = 0 {\displaystyle n=0} or 1 {\displaystyle 1} , depending on how the set of natural numbers is defined. When n = 0 {\displaystyle n=0} , d d x x 0 = d d x ( 1 ) = lim h → 0 1 − 1 h = lim h → 0 0 h = 0 = 0 x 0 − 1 . {\displaystyle {\frac {d}{dx}}x^{0}={\frac {d}{dx}}(1)=\lim _{h\to 0}{\frac {1-1}{h}}=\lim _{h\to 0}{\frac {0}{h}}=0=0x^{0-1}.} When n = 1 {\displaystyle n=1} , d d x x 1 = lim h → 0 ( x + h ) − x h = lim h → 0 h h = 1 = 1 x 1 − 1 . {\displaystyle {\frac {d}{dx}}x^{1}=\lim _{h\to 0}{\frac {(x+h)-x}{h}}=\lim _{h\to 0}{\frac {h}{h}}=1=1x^{1-1}.} Therefore, the base case holds either way. Suppose the statement holds for some natural number k, i.e. d d x x k = k x k − 1 . {\displaystyle {\frac {d}{dx}}x^{k}=kx^{k-1}.} When n = k + 1 {\displaystyle n=k+1} , d d x x k + 1 = d d x ( x k ⋅ x ) = x k ⋅ d d x x + x ⋅ d d x x k = x k + x ⋅ k x k − 1 = x k + k x k = ( k + 1 ) x k = ( k + 1 ) x ( k + 1 ) − 1 {\displaystyle {\frac {d}{dx}}x^{k+1}={\frac {d}{dx}}(x^{k}\cdot x)=x^{k}\cdot {\frac {d}{dx}}x+x\cdot {\frac {d}{dx}}x^{k}=x^{k}+x\cdot kx^{k-1}=x^{k}+kx^{k}=(k+1)x^{k}=(k+1)x^{(k+1)-1}} By the principle of mathematical induction, the statement is true for all natural numbers n. ==== Proof by binomial theorem (natural number) ==== Let y = x n {\displaystyle y=x^{n}} , where n ∈ N {\displaystyle n\in \mathbb {N} } . 
Then, d y d x = lim h → 0 ( x + h ) n − x n h = lim h → 0 1 h [ x n + ( n 1 ) x n − 1 h + ( n 2 ) x n − 2 h 2 + ⋯ + ( n n ) h n − x n ] = lim h → 0 [ ( n 1 ) x n − 1 + ( n 2 ) x n − 2 h + ⋯ + ( n n ) h n − 1 ] = n x n − 1 {\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=\lim _{h\to 0}{\frac {(x+h)^{n}-x^{n}}{h}}\\[4pt]&=\lim _{h\to 0}{\frac {1}{h}}\left[x^{n}+{\binom {n}{1}}x^{n-1}h+{\binom {n}{2}}x^{n-2}h^{2}+\dots +{\binom {n}{n}}h^{n}-x^{n}\right]\\[4pt]&=\lim _{h\to 0}\left[{\binom {n}{1}}x^{n-1}+{\binom {n}{2}}x^{n-2}h+\dots +{\binom {n}{n}}h^{n-1}\right]\\[4pt]&=nx^{n-1}\end{aligned}}} Since n choose 1 is equal to n, and every remaining term contains a factor of h, those terms vanish in the limit h → 0, leaving n x n − 1 {\displaystyle nx^{n-1}} . This proof works only for natural numbers, as the binomial theorem in this form applies only to natural-number exponents. ==== Generalization to negative integer exponents ==== For a negative integer n, let n = − m {\displaystyle n=-m} so that m is a positive integer. Using the reciprocal rule, d d x x n = d d x ( 1 x m ) = − d d x x m ( x m ) 2 = − m x m − 1 x 2 m = − m x − m − 1 = n x n − 1 . {\displaystyle {\frac {d}{dx}}x^{n}={\frac {d}{dx}}\left({\frac {1}{x^{m}}}\right)={\frac {-{\frac {d}{dx}}x^{m}}{(x^{m})^{2}}}=-{\frac {mx^{m-1}}{x^{2m}}}=-mx^{-m-1}=nx^{n-1}.} In conclusion, for any integer n {\displaystyle n} , d d x x n = n x n − 1 . {\displaystyle {\frac {d}{dx}}x^{n}=nx^{n-1}.} === Generalization to rational exponents === Upon proving that the power rule holds for integer exponents, the rule can be extended to rational exponents. ==== Proof by chain rule ==== This proof is composed of two steps that involve the use of the chain rule for differentiation. Let y = x r = x 1 n {\displaystyle y=x^{r}=x^{\frac {1}{n}}} , where n ∈ N + {\displaystyle n\in \mathbb {N} ^{+}} . Then y n = x {\displaystyle y^{n}=x} . By the chain rule, n y n − 1 ⋅ d y d x = 1 {\displaystyle ny^{n-1}\cdot {\frac {dy}{dx}}=1} .
Solving for d y d x {\displaystyle {\frac {dy}{dx}}} , d y d x = 1 n y n − 1 = 1 n ( x 1 n ) n − 1 = 1 n x 1 − 1 n = 1 n x 1 n − 1 = r x r − 1 {\displaystyle {\frac {dy}{dx}}={\frac {1}{ny^{n-1}}}={\frac {1}{n\left(x^{\frac {1}{n}}\right)^{n-1}}}={\frac {1}{nx^{1-{\frac {1}{n}}}}}={\frac {1}{n}}x^{{\frac {1}{n}}-1}=rx^{r-1}} Thus, the power rule applies for rational exponents of the form 1 / n {\displaystyle 1/n} , where n {\displaystyle n} is a nonzero natural number. This can be generalized to rational exponents of the form p / q {\displaystyle p/q} by applying the power rule for integer exponents using the chain rule, as shown in the next step. Let y = x r = x p / q {\displaystyle y=x^{r}=x^{p/q}} , where p ∈ Z , q ∈ N + , {\displaystyle p\in \mathbb {Z} ,q\in \mathbb {N} ^{+},} so that r ∈ Q {\displaystyle r\in \mathbb {Q} } . By the chain rule, d y d x = d d x ( x 1 q ) p = p ( x 1 q ) p − 1 ⋅ 1 q x 1 q − 1 = p q x p / q − 1 = r x r − 1 {\displaystyle {\frac {dy}{dx}}={\frac {d}{dx}}\left(x^{\frac {1}{q}}\right)^{p}=p\left(x^{\frac {1}{q}}\right)^{p-1}\cdot {\frac {1}{q}}x^{{\frac {1}{q}}-1}={\frac {p}{q}}x^{p/q-1}=rx^{r-1}} From the above results, we can conclude that when r {\displaystyle r} is a rational number, d d x x r = r x r − 1 . {\displaystyle {\frac {d}{dx}}x^{r}=rx^{r-1}.} ==== Proof by implicit differentiation ==== A more straightforward generalization of the power rule to rational exponents makes use of implicit differentiation. Let y = x r = x p / q {\displaystyle y=x^{r}=x^{p/q}} , where p , q ∈ Z {\displaystyle p,q\in \mathbb {Z} } so that r ∈ Q {\displaystyle r\in \mathbb {Q} } . Then, y q = x p {\displaystyle y^{q}=x^{p}} Differentiating both sides of the equation with respect to x {\displaystyle x} , q y q − 1 ⋅ d y d x = p x p − 1 {\displaystyle qy^{q-1}\cdot {\frac {dy}{dx}}=px^{p-1}} Solving for d y d x {\displaystyle {\frac {dy}{dx}}} , d y d x = p x p − 1 q y q − 1 . 
{\displaystyle {\frac {dy}{dx}}={\frac {px^{p-1}}{qy^{q-1}}}.} Since y = x p / q {\displaystyle y=x^{p/q}} , d d x x p / q = p x p − 1 q x p − p / q . {\displaystyle {\frac {d}{dx}}x^{p/q}={\frac {px^{p-1}}{qx^{p-p/q}}}.} Applying laws of exponents, d d x x p / q = p q x p − 1 x − p + p / q = p q x p / q − 1 . {\displaystyle {\frac {d}{dx}}x^{p/q}={\frac {p}{q}}x^{p-1}x^{-p+p/q}={\frac {p}{q}}x^{p/q-1}.} Thus, letting r = p q {\displaystyle r={\frac {p}{q}}} , we can conclude that d d x x r = r x r − 1 {\displaystyle {\frac {d}{dx}}x^{r}=rx^{r-1}} when r {\displaystyle r} is a rational number. == History == The power rule for integrals was first demonstrated in a geometric form by Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer values of n {\displaystyle {\displaystyle n}} , and during the mid 17th century for all rational powers by the mathematicians Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. At the time, they were treatises on determining the area between the graph of a rational power function and the horizontal axis. With hindsight, however, it is considered the first general theorem of calculus to be discovered. The power rule for differentiation was derived by Isaac Newton and Gottfried Wilhelm Leibniz, each independently, for rational power functions in the mid 17th century, who both then used it to derive the power rule for integrals as the inverse operation. This mirrors the conventional way the related theorems are presented in modern basic calculus textbooks, where differentiation rules usually precede integration rules. Although both men stated that their rules, demonstrated only for rational quantities, worked for all real powers, neither sought a proof of such, as at the time the applications of the theory were not concerned with such exotic power functions, and questions of convergence of infinite series were still ambiguous. 
The unique case of r = − 1 {\displaystyle r=-1} was resolved by Flemish Jesuit and mathematician Grégoire de Saint-Vincent and his student Alphonse Antonio de Sarasa in the mid 17th century, who demonstrated that the associated definite integral, ∫ 1 x 1 t d t {\displaystyle \int _{1}^{x}{\frac {1}{t}}\,dt} representing the area between the rectangular hyperbola x y = 1 {\displaystyle xy=1} and the x-axis, was a logarithmic function, whose base was eventually discovered to be the transcendental number e. The modern notation for the value of this definite integral is ln ( x ) {\displaystyle \ln(x)} , the natural logarithm. == Generalizations == === Complex power functions === If we consider functions of the form f ( z ) = z c {\displaystyle f(z)=z^{c}} where c {\displaystyle c} is any complex number and z {\displaystyle z} is a complex number in a slit complex plane that excludes the branch point of 0 and any branch cut connected to it, and we use the conventional multivalued definition z c := exp ( c ln z ) {\displaystyle z^{c}:=\exp(c\ln z)} , then it is straightforward to show that, on each branch of the complex logarithm, the same argument used above yields a similar result: f ′ ( z ) = c z exp ( c ln z ) {\displaystyle f'(z)={\frac {c}{z}}\exp(c\ln z)} . In addition, if c {\displaystyle c} is a positive integer, then there is no need for a branch cut: one may define f ( 0 ) = 0 {\displaystyle f(0)=0} , or define positive integral complex powers through complex multiplication, and show that f ′ ( z ) = c z c − 1 {\displaystyle f'(z)=cz^{c-1}} for all complex z {\displaystyle z} , from the definition of the derivative and the binomial theorem. However, due to the multivalued nature of complex power functions for non-integer exponents, one must be careful to specify the branch of the complex logarithm being used. In addition, no matter which branch is used, if c {\displaystyle c} is not a positive integer, then the function is not differentiable at 0. 
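The rule for real exponents can be sanity-checked numerically with a central difference quotient; a short sketch (the sample point, exponents, step size, and tolerance are arbitrary choices of this illustration):

```python
# Numerical check of d/dx x^r = r*x^(r-1) for several real exponents r,
# using a central difference quotient.  (Parameters are assumptions of
# this sketch, not part of the article.)

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for r in (3, -2, 0.5, 1.5, 3.14159):
    x = 2.0                              # x > 0, so x**r is well defined
    approx = numeric_derivative(lambda t: t ** r, x)
    exact = r * x ** (r - 1)
    assert abs(approx - exact) < 1e-4 * max(1.0, abs(exact))
```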
== See also == Differentiation rules General Leibniz rule Inverse functions and differentiation Linearity of differentiation Product rule Quotient rule Table of derivatives Vector calculus identities == References == === Notes === === Citations === == Further reading == Larson, Ron; Hostetler, Robert P.; and Edwards, Bruce H. (2003). Calculus of a Single Variable: Early Transcendental Functions (3rd edition). Houghton Mifflin Company. ISBN 0-618-22307-X. |
Wikipedia:Power sum symmetric polynomial#0 | In mathematics, specifically in commutative algebra, the power sum symmetric polynomials are a type of basic building block for symmetric polynomials, in the sense that every symmetric polynomial with rational coefficients can be expressed as a sum and difference of products of power sum symmetric polynomials with rational coefficients. However, not every symmetric polynomial with integral coefficients is generated by integral combinations of products of power-sum polynomials: they are a generating set over the rationals, but not over the integers. == Definition == The power sum symmetric polynomial of degree k in n {\displaystyle n} variables x1, ..., xn, written pk for k = 0, 1, 2, ..., is the sum of all kth powers of the variables. Formally, p k ( x 1 , x 2 , … , x n ) = ∑ i = 1 n x i k . {\displaystyle p_{k}(x_{1},x_{2},\dots ,x_{n})=\sum _{i=1}^{n}x_{i}^{k}\,.} The first few of these polynomials are p 0 ( x 1 , x 2 , … , x n ) = 1 + 1 + ⋯ + 1 = n , {\displaystyle p_{0}(x_{1},x_{2},\dots ,x_{n})=1+1+\cdots +1=n\,,} p 1 ( x 1 , x 2 , … , x n ) = x 1 + x 2 + ⋯ + x n , {\displaystyle p_{1}(x_{1},x_{2},\dots ,x_{n})=x_{1}+x_{2}+\cdots +x_{n}\,,} p 2 ( x 1 , x 2 , … , x n ) = x 1 2 + x 2 2 + ⋯ + x n 2 , {\displaystyle p_{2}(x_{1},x_{2},\dots ,x_{n})=x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}\,,} p 3 ( x 1 , x 2 , … , x n ) = x 1 3 + x 2 3 + ⋯ + x n 3 . {\displaystyle p_{3}(x_{1},x_{2},\dots ,x_{n})=x_{1}^{3}+x_{2}^{3}+\cdots +x_{n}^{3}\,.} Thus, for each nonnegative integer k {\displaystyle k} , there exists exactly one power sum symmetric polynomial of degree k {\displaystyle k} in n {\displaystyle n} variables. The polynomial ring formed by taking all integral linear combinations of products of the power sum symmetric polynomials is a commutative ring. == Examples == The following lists the n {\displaystyle n} power sum symmetric polynomials of positive degrees up to n for the first three positive values of n . 
{\displaystyle n.} In every case, p 0 = n {\displaystyle p_{0}=n} is one of the polynomials. The list goes up to degree n because the power sum symmetric polynomials of degrees 1 to n are basic in the sense of the theorem stated below. For n = 1: p 1 = x 1 . {\displaystyle p_{1}=x_{1}\,.} For n = 2: p 1 = x 1 + x 2 , {\displaystyle p_{1}=x_{1}+x_{2}\,,} p 2 = x 1 2 + x 2 2 . {\displaystyle p_{2}=x_{1}^{2}+x_{2}^{2}\,.} For n = 3: p 1 = x 1 + x 2 + x 3 , {\displaystyle p_{1}=x_{1}+x_{2}+x_{3}\,,} p 2 = x 1 2 + x 2 2 + x 3 2 , {\displaystyle p_{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\,,} p 3 = x 1 3 + x 2 3 + x 3 3 . {\displaystyle p_{3}=x_{1}^{3}+x_{2}^{3}+x_{3}^{3}\,.} == Properties == The set of power sum symmetric polynomials of degrees 1, 2, ..., n in n variables generates the ring of symmetric polynomials in n variables. More specifically: Theorem. The ring of symmetric polynomials with rational coefficients equals the rational polynomial ring Q [ p 1 , … , p n ] . {\displaystyle \mathbb {Q} [p_{1},\ldots ,p_{n}].} The same is true if the coefficients are taken in any field of characteristic 0. However, this is not true if the coefficients must be integers. For example, for n = 2, the symmetric polynomial P ( x 1 , x 2 ) = x 1 2 x 2 + x 1 x 2 2 + x 1 x 2 {\displaystyle P(x_{1},x_{2})=x_{1}^{2}x_{2}+x_{1}x_{2}^{2}+x_{1}x_{2}} has the expression P ( x 1 , x 2 ) = p 1 3 − p 1 p 2 2 + p 1 2 − p 2 2 , {\displaystyle P(x_{1},x_{2})={\frac {p_{1}^{3}-p_{1}p_{2}}{2}}+{\frac {p_{1}^{2}-p_{2}}{2}}\,,} which involves fractions. According to the theorem, this is the only way to represent P ( x 1 , x 2 ) {\displaystyle P(x_{1},x_{2})} in terms of p1 and p2. Therefore, P does not belong to the integral polynomial ring Z [ p 1 , … , p n ] . {\displaystyle \mathbb {Z} [p_{1},\ldots ,p_{n}].} For another example, the elementary symmetric polynomials ek, expressed as polynomials in the power sum polynomials, do not all have integral coefficients. 
For instance, e 2 := ∑ 1 ≤ i < j ≤ n x i x j = p 1 2 − p 2 2 . {\displaystyle e_{2}:=\sum _{1\leq i<j\leq n}x_{i}x_{j}={\frac {p_{1}^{2}-p_{2}}{2}}\,.} The theorem is also untrue if the field has characteristic different from 0. For example, if the field F has characteristic 2, then p 2 = p 1 2 {\displaystyle p_{2}=p_{1}^{2}} , so p1 and p2 cannot generate e2 = x1x2. Sketch of a partial proof of the theorem: By Newton's identities the power sums are functions of the elementary symmetric polynomials; this is implied by the following recurrence relation, though the explicit function that gives the power sums in terms of the ej is complicated: p n = ( − 1 ) n − 1 n e n + ∑ j = 1 n − 1 ( − 1 ) j − 1 e j p n − j . {\displaystyle p_{n}=(-1)^{n-1}ne_{n}+\sum _{j=1}^{n-1}(-1)^{j-1}e_{j}p_{n-j}\,.} Rewriting the same recurrence, one has the elementary symmetric polynomials in terms of the power sums (also implicitly, the explicit formula being complicated): e n = 1 n ∑ j = 1 n ( − 1 ) j − 1 e n − j p j . {\displaystyle e_{n}={\frac {1}{n}}\sum _{j=1}^{n}(-1)^{j-1}e_{n-j}p_{j}\,.} This implies that the elementary polynomials are rational, though not integral, linear combinations of the power sum polynomials of degrees 1, ..., n. Since the elementary symmetric polynomials are an algebraic basis for all symmetric polynomials with coefficients in a field, it follows that every symmetric polynomial in n variables is a polynomial function f ( p 1 , … , p n ) {\displaystyle f(p_{1},\ldots ,p_{n})} of the power sum symmetric polynomials p1, ..., pn. That is, the ring of symmetric polynomials is contained in the ring generated by the power sums, Q [ p 1 , … , p n ] . {\displaystyle \mathbb {Q} [p_{1},\ldots ,p_{n}].} Because every power sum polynomial is symmetric, the two rings are equal. (This does not show how to prove the polynomial f is unique.) For another system of symmetric polynomials with similar properties see complete homogeneous symmetric polynomials. 
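The recurrences above can be verified numerically. The following sketch (the function names `p` and `e` are ours) evaluates both sides of the rewritten recurrence e_n = (1/n) Σ_{j=1}^{n} (−1)^{j−1} e_{n−j} p_j at arbitrary rational values of the variables, using exact `Fraction` arithmetic so that equality checks are exact:

```python
# Exact check of the rewritten Newton recurrence
#   e_n = (1/n) * sum_{j=1}^{n} (-1)^(j-1) * e_{n-j} * p_j
# with e_k and p_k computed straight from their definitions at arbitrary
# rational test values. Function names p and e are ours, for this sketch.
from fractions import Fraction
from itertools import combinations

xs = [Fraction(2), Fraction(-3), Fraction(5), Fraction(7, 2)]  # arbitrary rational values

def p(k):
    """Power sum symmetric polynomial p_k evaluated at xs."""
    return sum(x**k for x in xs)

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    if k == 0:
        return Fraction(1)
    total = Fraction(0)
    for combo in combinations(xs, k):
        prod = Fraction(1)
        for x in combo:
            prod *= x
        total += prod
    return total

for m in range(1, len(xs) + 1):
    rhs = Fraction(1, m) * sum((-1)**(j - 1) * e(m - j) * p(j) for j in range(1, m + 1))
    assert rhs == e(m), m

# The fractional expression for e_2 given in the text:
assert e(2) == (p(1)**2 - p(2)) / 2
print("recurrence verified exactly for", len(xs), "variables")
```

Because the identity holds for all values of the variables and both sides are polynomials, checking it at generic points like these is a reasonable sanity check, though of course not a proof.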
== See also == Representation theory Newton's identities == References == Ian G. Macdonald (1979), Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs. Oxford: Clarendon Press. Ian G. Macdonald (1995), Symmetric Functions and Hall Polynomials, second ed. Oxford: Clarendon Press. ISBN 0-19-850450-0 (paperback, 1998). Richard P. Stanley (1999), Enumerative Combinatorics, Vol. 2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1 |
Wikipedia:Practical number#0 | In number theory, a practical number or panarithmic number is a positive integer n {\displaystyle n} such that all smaller positive integers can be represented as sums of distinct divisors of n {\displaystyle n} . For example, 12 is a practical number because all the numbers from 1 to 11 can be expressed as sums of its divisors 1, 2, 3, 4, and 6: as well as these divisors themselves, we have 5 = 3 + 2, 7 = 6 + 1, 8 = 6 + 2, 9 = 6 + 3, 10 = 6 + 3 + 1, and 11 = 6 + 3 + 2. The sequence of practical numbers (sequence A005153 in the OEIS) begins Practical numbers were used by Fibonacci in his Liber Abaci (1202) in connection with the problem of representing rational numbers as Egyptian fractions. Fibonacci does not formally define practical numbers, but he gives a table of Egyptian fraction expansions for fractions with practical denominators. The name "practical number" is due to Srinivasan (1948). He noted that "the subdivisions of money, weights, and measures involve numbers like 4, 12, 16, 20 and 28 which are usually supposed to be so inconvenient as to deserve replacement by powers of 10." His partial classification of these numbers was completed by Stewart (1954) and Sierpiński (1955). This characterization makes it possible to determine whether a number is practical by examining its prime factorization. Every even perfect number and every power of two is also a practical number. Practical numbers have also been shown to be analogous to prime numbers in many of their properties. == Characterization of practical numbers == The original characterisation by Srinivasan (1948) stated that a practical number cannot be a deficient number, that is, one whose sum of all divisors (including 1 and itself) is less than twice the number, unless the deficiency is one. If the ordered set of all divisors of the practical number n {\displaystyle n} is d 1 , d 2 , . . . 
, d j {\displaystyle {d_{1},d_{2},...,d_{j}}} with d 1 = 1 {\displaystyle d_{1}=1} and d j = n {\displaystyle d_{j}=n} , then Srinivasan's statement can be expressed by the inequality 2 n ≤ 1 + ∑ i = 1 j d i . {\displaystyle 2n\leq 1+\sum _{i=1}^{j}d_{i}.} In other words, the ordered sequence of all divisors d 1 < d 2 < . . . < d j {\displaystyle {d_{1}<d_{2}<...<d_{j}}} of a practical number has to be a complete sequence. This partial characterization was extended and completed by Stewart (1954) and Sierpiński (1955) who showed that it is straightforward to determine whether a number is practical from its prime factorization. A positive integer greater than one with prime factorization n = p 1 α 1 . . . p k α k {\displaystyle n=p_{1}^{\alpha _{1}}...p_{k}^{\alpha _{k}}} (with the primes in sorted order p 1 < p 2 < ⋯ < p k {\displaystyle p_{1}<p_{2}<\dots <p_{k}} ) is practical if and only if each of its prime factors p i {\displaystyle p_{i}} is small enough for p i − 1 {\displaystyle p_{i}-1} to have a representation as a sum of smaller divisors. For this to be true, the first prime p 1 {\displaystyle p_{1}} must equal 2 and, for every i from 2 to k, each successive prime p i {\displaystyle p_{i}} must obey the inequality p i ≤ 1 + σ ( p 1 α 1 p 2 α 2 … p i − 1 α i − 1 ) = 1 + σ ( p 1 α 1 ) σ ( p 2 α 2 ) … σ ( p i − 1 α i − 1 ) = 1 + ∏ j = 1 i − 1 p j α j + 1 − 1 p j − 1 , {\displaystyle p_{i}\leq 1+\sigma (p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\dots p_{i-1}^{\alpha _{i-1}})=1+\sigma (p_{1}^{\alpha _{1}})\sigma (p_{2}^{\alpha _{2}})\dots \sigma (p_{i-1}^{\alpha _{i-1}})=1+\prod _{j=1}^{i-1}{\frac {p_{j}^{\alpha _{j}+1}-1}{p_{j}-1}},} where σ ( x ) {\displaystyle \sigma (x)} denotes the sum of the divisors of x. For example, 2 × 3^2 × 29 × 823 = 429606 is practical, because the inequality above holds for each of its prime factors: 3 ≤ σ(2) + 1 = 4, 29 ≤ σ(2 × 3^2) + 1 = 40, and 823 ≤ σ(2 × 3^2 × 29) + 1 = 1171. 
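The Stewart–Sierpiński characterization translates directly into a fast test. The sketch below (all function names are ours) checks the inequality prime by prime, and cross-checks the result against a brute-force subset-sum test of the definition for small n:

```python
# The Stewart-Sierpinski test as code: n > 1 with primes p_1 < ... < p_k is
# practical iff p_1 = 2 and each p_i <= 1 + sigma(p_1^a_1 ... p_{i-1}^a_{i-1}).
# Cross-checked against a brute-force subset-sum test of the definition.
# All function names are ours, for this sketch.

def prime_factorization(n):
    """Sorted list of (prime, exponent) pairs for n > 1."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma(pe_pairs):
    """Sum of divisors of the number with factorization [(p, a), ...]."""
    s = 1
    for prime, a in pe_pairs:
        s *= (prime**(a + 1) - 1) // (prime - 1)
    return s

def is_practical(n):
    if n == 1:
        return True
    fac = prime_factorization(n)
    if fac[0][0] != 2:
        return False
    return all(fac[i][0] <= 1 + sigma(fac[:i]) for i in range(1, len(fac)))

def is_practical_bruteforce(n):
    """Directly test: every m < n is a sum of distinct divisors of n."""
    reachable = {0}
    for d in (d for d in range(1, n + 1) if n % d == 0):
        reachable |= {r + d for r in reachable}
    return all(m in reachable for m in range(1, n))

assert all(is_practical(n) == is_practical_bruteforce(n) for n in range(1, 300))
assert is_practical(429606)  # = 2 * 3^2 * 29 * 823, the example above
print([n for n in range(1, 40) if is_practical(n)])
```

The characterization-based test runs in roughly the time needed to factor n, while the brute-force test examines up to 2^d divisor subsets; the agreement on small n is what makes the factorization test practical to use.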
The condition stated above is necessary and sufficient for a number to be practical. In one direction, this condition is necessary in order to be able to represent p i − 1 {\displaystyle p_{i}-1} as a sum of divisors of n {\displaystyle n} , because if the inequality failed to be true then even adding together all the smaller divisors would give a sum too small to reach p i − 1 {\displaystyle p_{i}-1} . In the other direction, the condition is sufficient, as can be shown by induction. More strongly, if the factorization of n {\displaystyle n} satisfies the condition above, then any m ≤ σ ( n ) {\displaystyle m\leq \sigma (n)} can be represented as a sum of divisors of n {\displaystyle n} , by the following sequence of steps: By induction on j ∈ [ 1 , α k ] {\displaystyle j\in [1,\alpha _{k}]} , it can be shown that p k j ≤ 1 + σ ( n / p k α k − ( j − 1 ) ) {\displaystyle p_{k}^{j}\leq 1+\sigma (n/p_{k}^{\alpha _{k}-(j-1)})} . Hence p k α k ≤ 1 + σ ( n / p k ) {\displaystyle p_{k}^{\alpha _{k}}\leq 1+\sigma (n/p_{k})} . Since the intervals [ q p k α k , q p k α k + σ ( n / p k ) ] {\displaystyle [qp_{k}^{\alpha _{k}},qp_{k}^{\alpha _{k}}+\sigma (n/p_{k})]} cover [ 1 , σ ( n ) ] {\displaystyle [1,\sigma (n)]} for 1 ≤ q ≤ σ ( n / p k α k ) {\displaystyle 1\leq q\leq \sigma (n/p_{k}^{\alpha _{k}})} , there exist such a q {\displaystyle q} and some r ∈ [ 0 , σ ( n / p k ) ] {\displaystyle r\in [0,\sigma (n/p_{k})]} such that m = q p k α k + r {\displaystyle m=qp_{k}^{\alpha _{k}}+r} . Since q ≤ σ ( n / p k α k ) {\displaystyle q\leq \sigma (n/p_{k}^{\alpha _{k}})} and n / p k α k {\displaystyle n/p_{k}^{\alpha _{k}}} can be shown by induction to be practical, we can find a representation of q as a sum of divisors of n / p k α k {\displaystyle n/p_{k}^{\alpha _{k}}} . 
Since r ≤ σ ( n / p k ) {\displaystyle r\leq \sigma (n/p_{k})} , and since n / p k {\displaystyle n/p_{k}} can be shown by induction to be practical, we can find a representation of r as a sum of divisors of n / p k {\displaystyle n/p_{k}} . The divisors representing r, together with p k α k {\displaystyle p_{k}^{\alpha _{k}}} times each of the divisors representing q, form a representation of m as a sum of divisors of n {\displaystyle n} . == Properties == The only odd practical number is 1, because if n {\displaystyle n} is an odd number greater than 2, then 2 cannot be expressed as the sum of distinct divisors of n {\displaystyle n} . More strongly, Srinivasan (1948) observes that other than 1 and 2, every practical number is divisible by 4 or 6 (or both). The product of two practical numbers is also a practical number. Equivalently, the set of all practical numbers is closed under multiplication. More strongly, the least common multiple of any two practical numbers is also a practical number. From the above characterization by Stewart and Sierpiński it can be seen that if n {\displaystyle n} is a practical number and d {\displaystyle d} is one of its divisors then n ⋅ d {\displaystyle n\cdot d} must also be a practical number. Furthermore, a practical number multiplied by power combinations of any of its divisors is also practical. In the set of all practical numbers there is a primitive set of practical numbers. A primitive practical number is either practical and squarefree, or practical but no longer practical when divided by any of its prime factors whose exponent in its factorization is greater than 1. The sequence of primitive practical numbers (sequence A267124 in the OEIS) begins Every positive integer has a practical multiple. For instance, for every integer n {\displaystyle n} , its multiple 2 ⌊ log 2 n ⌋ n {\displaystyle 2^{\lfloor \log _{2}n\rfloor }n} is practical. Every odd prime has a primitive practical multiple. 
For instance, for every odd prime p {\displaystyle p} , its multiple 2 ⌊ log 2 p ⌋ p {\displaystyle 2^{\lfloor \log _{2}p\rfloor }p} is primitive practical. This is because 2 ⌊ log 2 p ⌋ p {\displaystyle 2^{\lfloor \log _{2}p\rfloor }p} is practical but when divided by 2 is no longer practical. A good example is a Mersenne prime of the form 2 p − 1 {\displaystyle 2^{p}-1} . Its primitive practical multiple is 2 p − 1 ( 2 p − 1 ) {\displaystyle 2^{p-1}(2^{p}-1)} which is an even perfect number. == Relation to other classes of numbers == Several other notable sets of integers consist only of practical numbers: From the above properties, with n {\displaystyle n} a practical number and d {\displaystyle d} one of its divisors (that is, d | n {\displaystyle d|n} ), n ⋅ d {\displaystyle n\cdot d} must also be a practical number; therefore six times every power of 3 is a practical number, as is six times every power of 2. Every power of two is a practical number. Powers of two trivially satisfy the characterization of practical numbers in terms of their prime factorizations: the only prime in their factorizations, p1, equals two as required. Every even perfect number is also a practical number. This follows from Leonhard Euler's result that an even perfect number must have the form 2 k − 1 ( 2 k − 1 ) {\displaystyle 2^{k-1}(2^{k}-1)} . The odd part of this factorization equals the sum of the divisors of the even part, so every odd prime factor of such a number must be at most the sum of the divisors of the even part of the number. Therefore, this number must satisfy the characterization of practical numbers. A similar argument can be used to show that an even perfect number when divided by 2 is no longer practical. Therefore, every even perfect number is also a primitive practical number. Every primorial (the product of the first i {\displaystyle i} primes, for some i {\displaystyle i} ) is practical. For the first two primorials, two and six, this is clear. 
Each successive primorial is formed by multiplying a prime number p i {\displaystyle p_{i}} by a smaller primorial that is divisible by both two and the next smaller prime, p i − 1 {\displaystyle p_{i-1}} . By Bertrand's postulate, p i < 2 p i − 1 {\displaystyle p_{i}<2p_{i-1}} , so each successive prime factor in the primorial is less than one of the divisors of the previous primorial. By induction, it follows that every primorial satisfies the characterization of practical numbers. Because a primorial is, by definition, squarefree, it is also a primitive practical number. Generalizing the primorials, any number that is the product of nonzero powers of the first k {\displaystyle k} primes must also be practical. This includes Ramanujan's highly composite numbers (numbers with more divisors than any smaller positive integer) as well as the factorial numbers. == Practical numbers and Egyptian fractions == If n {\displaystyle n} is practical, then any rational number of the form m / n {\displaystyle m/n} with m < n {\displaystyle m<n} may be represented as a sum ∑ d i / n {\textstyle \sum d_{i}/n} where each d i {\displaystyle d_{i}} is a distinct divisor of n {\displaystyle n} . Each term in this sum simplifies to a unit fraction, so such a sum provides a representation of m / n {\displaystyle m/n} as an Egyptian fraction. For instance, 13 20 = 10 20 + 2 20 + 1 20 = 1 2 + 1 10 + 1 20 . {\displaystyle {\frac {13}{20}}={\frac {10}{20}}+{\frac {2}{20}}+{\frac {1}{20}}={\frac {1}{2}}+{\frac {1}{10}}+{\frac {1}{20}}.} Fibonacci, in his 1202 book Liber Abaci, lists several methods for finding Egyptian fraction representations of a rational number. Of these, the first is to test whether the number is itself already a unit fraction, but the second is to search for a representation of the numerator as a sum of divisors of the denominator, as described above. This method is only guaranteed to succeed for denominators that are practical. 
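The divisor-sum method can be sketched in a few lines: for a practical denominator n, find distinct divisors of n summing to the numerator, and turn each divisor d into the unit fraction 1/(n/d). The backtracking subset-sum search below is our own simple illustration, not Fibonacci's procedure.

```python
# For a practical denominator n, write m/n (0 < m < n) as a sum of distinct
# unit fractions by expressing m as a sum of distinct divisors of n; each
# divisor d contributes d/n = 1/(n/d). The backtracking search is our own
# simple illustration, not Fibonacci's procedure.
from fractions import Fraction

def egyptian_fraction(m, n):
    """Unit fractions summing to m/n; assumes n practical and 0 < m < n."""
    divisors = sorted((d for d in range(1, n + 1) if n % d == 0), reverse=True)

    def pick(target, i):
        # find distinct divisors (from index i onward) summing to target
        if target == 0:
            return []
        for j in range(i, len(divisors)):
            if divisors[j] <= target:
                rest = pick(target - divisors[j], j + 1)
                if rest is not None:
                    return [divisors[j]] + rest
        return None

    # each picked divisor d satisfies d | n, so Fraction(d, n) reduces to 1/(n/d)
    return [Fraction(d, n) for d in pick(m, 0)]

parts = egyptian_fraction(13, 20)
assert sum(parts) == Fraction(13, 20)
assert all(f.numerator == 1 for f in parts)
print(parts)  # the article's example: 1/2 + 1/10 + 1/20
```

Because the search backtracks over all divisor subsets, it always finds a representation when one exists; for practical n and m < n the text guarantees that it does.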
Fibonacci provides tables of these representations for fractions having as denominators the practical numbers 6, 8, 12, 20, 24, 60, and 100. Vose (1985) showed that every rational number x / y {\displaystyle x/y} has an Egyptian fraction representation with O ( log y ) {\displaystyle O({\sqrt {\log y}})} terms. The proof involves finding a sequence of practical numbers n i {\displaystyle n_{i}} with the property that every number less than n i {\displaystyle n_{i}} may be written as a sum of O ( log n i − 1 ) {\displaystyle O({\sqrt {\log n_{i-1}}})} distinct divisors of n i {\displaystyle n_{i}} . Then, i {\displaystyle i} is chosen so that n i − 1 < y < n i {\displaystyle n_{i-1}<y<n_{i}} , and x n i {\displaystyle xn_{i}} is divided by y {\displaystyle y} giving quotient q {\displaystyle q} and remainder r {\displaystyle r} . It follows from these choices that x y = q n i + r y n i {\displaystyle {\frac {x}{y}}={\frac {q}{n_{i}}}+{\frac {r}{yn_{i}}}} . Expanding both numerators on the right hand side of this formula into sums of divisors of n i {\displaystyle n_{i}} results in the desired Egyptian fraction representation. Tenenbaum & Yokota (1990) use a similar technique involving a different sequence of practical numbers to show that every rational number x / y {\displaystyle x/y} has an Egyptian fraction representation in which the largest denominator is O ( y log 2 y / log log y ) {\displaystyle O(y\log ^{2}y/\log \log y)} . According to a September 2015 conjecture by Zhi-Wei Sun, every positive rational number has an Egyptian fraction representation in which every denominator is a practical number. The conjecture was proved by David Eppstein (2021). == Analogies with prime numbers == One reason for interest in practical numbers is that many of their properties are similar to properties of the prime numbers. 
Indeed, theorems analogous to Goldbach's conjecture and the twin prime conjecture are known for practical numbers: every positive even integer is the sum of two practical numbers, and there exist infinitely many triples of practical numbers ( x − 2 , x , x + 2 ) {\displaystyle (x-2,x,x+2)} . Melfi also showed that there are infinitely many practical Fibonacci numbers (sequence A124105 in the OEIS); and Sanna proved that at least C n / log n {\displaystyle Cn/\log n} of the first n {\displaystyle n} terms of every Lucas sequence are practical numbers, where C > 0 {\displaystyle C>0} is a constant and n {\displaystyle n} is sufficiently large. The analogous questions of the existence of infinitely many Fibonacci primes, or primes in a Lucas sequence, are open. Hausman & Shapiro (1984) showed that there always exists a practical number in the interval [ x 2 , ( x + 1 ) 2 ) {\displaystyle [x^{2},(x+1)^{2})} for any positive real x {\displaystyle x} , a result analogous to Legendre's conjecture for primes. Moreover, for all sufficiently large x {\displaystyle x} , the interval [ x − x 0.4872 , x ] {\displaystyle [x-x^{0.4872},x]} contains many practical numbers. Let p ( x ) {\displaystyle p(x)} count how many practical numbers are at most x {\displaystyle x} . Margenstern (1991) conjectured that p ( x ) {\displaystyle p(x)} is asymptotic to c x / log x {\displaystyle cx/\log x} for some constant c {\displaystyle c} , a formula which resembles the prime number theorem, strengthening the earlier claim of Erdős & Loxton (1979) that the practical numbers have density zero in the integers. Improving on an estimate of Tenenbaum (1986), Saias (1997) found that p ( x ) {\displaystyle p(x)} has order of magnitude x / log x {\displaystyle x/\log x} . Weingartner (2015) proved Margenstern's conjecture. We have p ( x ) = c x log x ( 1 + O ( 1 log x ) ) , {\displaystyle p(x)={\frac {cx}{\log x}}\left(1+O\!\left({\frac {1}{\log x}}\right)\right),} where c = 1.33607... 
{\displaystyle c=1.33607...} Thus the practical numbers are about 33.6% more numerous than the prime numbers. The exact value of the constant factor c {\displaystyle c} is given by c = 1 1 − e − γ ∑ n practical 1 n ( ∑ p ≤ σ ( n ) + 1 log p p − 1 − log n ) ∏ p ≤ σ ( n ) + 1 ( 1 − 1 p ) , {\displaystyle c={\frac {1}{1-e^{-\gamma }}}\sum _{n\ {\text{practical}}}{\frac {1}{n}}{\Biggl (}\sum _{p\leq \sigma (n)+1}{\frac {\log p}{p-1}}-\log n{\Biggr )}\prod _{p\leq \sigma (n)+1}\left(1-{\frac {1}{p}}\right),} where γ {\displaystyle \gamma } is the Euler–Mascheroni constant and p {\displaystyle p} runs over primes. As with prime numbers in an arithmetic progression, given two natural numbers a {\displaystyle a} and q {\displaystyle q} , we have | { n ≤ x : n practical and n ≡ a mod q } | = c q , a x log x + O q ( x ( log x ) 2 ) . {\displaystyle |\{n\leq x:n{\text{ practical and }}n\equiv a{\bmod {q}}\}|={\frac {c_{q,a}x}{\log x}}+O_{q}\left({\frac {x}{(\log x)^{2}}}\right).} The constant factor c q , a {\displaystyle c_{q,a}} is positive if, and only if, there is more than one practical number congruent to a mod q {\displaystyle a{\bmod {q}}} . If gcd ( q , a ) = gcd ( q , b ) {\displaystyle \gcd(q,a)=\gcd(q,b)} , then c q , a = c q , b {\displaystyle c_{q,a}=c_{q,b}} . For example, about 38.26% of practical numbers have a last decimal digit of 0, while the last digits of 2, 4, 6, 8 each occur with the same relative frequency of 15.43%. == The number of prime factors, the number of divisors, and the sum of divisors == The Erdős–Kac theorem implies that for a large random integer n {\displaystyle n} , the number of prime factors of n {\displaystyle n} (counted with or without multiplicity) follows an approximate normal distribution with mean log log n {\displaystyle \log \log n} and variance log log n {\displaystyle \log \log n} . 
The corresponding result for practical numbers implies that for a large random practical number n {\displaystyle n} , the number of prime factors is approximately normal with mean C log log n {\displaystyle C\log \log n} and variance V log log n {\displaystyle V\log \log n} , where C = 1 / ( 1 − e − γ ) = 2.280 … {\displaystyle C=1/(1-e^{-\gamma })=2.280\ldots } and V = 0.414 … {\displaystyle V=0.414\ldots } . That is, most large integers n {\displaystyle n} have about log log n {\displaystyle \log \log n} prime factors, while most large practical numbers n {\displaystyle n} have about C log log n ≈ 2.28 log log n {\displaystyle C\log \log n\approx 2.28\log \log n} prime factors. As a consequence, most large integers n {\displaystyle n} have 2 ( 1 + o ( 1 ) ) log log n = ( log n ) 0.693 … {\displaystyle 2^{(1+o(1))\log \log n}=(\log n)^{0.693\ldots }} divisors, while most large practical numbers n {\displaystyle n} have 2 ( C + o ( 1 ) ) log log n = ( log n ) 1.580 … {\displaystyle 2^{(C+o(1))\log \log n}=(\log n)^{1.580\ldots }} divisors. In both cases, the average number of divisors is much larger than the typical number of divisors: for integers n ≤ x {\displaystyle n\leq x} , the average number of divisors is about log x {\displaystyle \log x} , while for practical numbers n ≤ x {\displaystyle n\leq x} , it is about ( log x ) 1.713 … {\displaystyle (\log x)^{1.713\ldots }} . The average value of the sum-of-divisors function σ ( n ) {\displaystyle \sigma (n)} , for integers n ≤ x {\displaystyle n\leq x} , as well as for practical numbers n ≤ x {\displaystyle n\leq x} , has order of magnitude x {\displaystyle x} . == Notes == == References == == External links == Tables of practical numbers Archived 2017-12-26 at the Wayback Machine compiled by Giuseppe Melfi. Practical Number at PlanetMath. Weisstein, Eric W., "Practical Number", MathWorld |
Wikipedia:Prasad Raghavendra#0 | Prasad Raghavendra is an Indian-American theoretical computer scientist and mathematician, working in optimization, complexity theory, approximation algorithms, hardness of approximation and statistics. He is a professor of computer science at the University of California at Berkeley. == Education == After completing a BTech at IIT Madras in 2005, he obtained an MSc (2007) and PhD (2009) at the University of Washington under the supervision of Venkatesan Guruswami. After a postdoctoral position at Microsoft Research New England, he joined the faculty at the University of California at Berkeley. == Career == Raghavendra showed that, assuming the unique games conjecture, semidefinite programming is the optimal algorithm for solving constraint satisfaction problems. Together with David Steurer, he developed the small set expansion hypothesis, for which they won the Michael and Sheila Held Prize in 2018. He developed sum of squares as a versatile algorithmic technique. Together with David Steurer, he gave an invited talk on the topic at the 2018 ICM. == References == |
Wikipedia:Pre-STEM#0 | A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. == Overview == The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school if they are to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. == Example programs == The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included summer research experiences for community college students, enhanced support for community college students once they transfer to a four-year school, or simply understanding the challenges STEM transfer students face. This meeting was made possible by funding from the National Science Foundation. All participants were funded by an NSF STEP grant at the time of the meeting. == See also == STEM pipeline STEM Academy STEAM fields == References == == External links == Missouri Pre-STEM Pathways Program Scientists Prepared, Enriched and Challenged Through Research-based Activities (SPECTRA), an NSF PRISM-funded program at Truman State University |
Wikipedia:Predual#0 | In mathematics, the predual of an object D is an object P whose dual space is D. For example, the predual of the space of bounded operators is the space of trace class operators, and the predual of the space L∞(R) of essentially bounded functions on R is the Banach space L1(R) of integrable functions. |
Wikipedia:Prem Kumar Bhatia#0 | Akshay Hari Om Bhatia (born Rajiv Hari Om Bhatia; 9 September 1967), known professionally as Akshay Kumar (pronounced [əkˈʂəj kʊˈmaːɾ]), is an Indian actor and film producer working in Hindi cinema. Referred to in the media as "Khiladi Kumar", through his career spanning over 30 years, Kumar has appeared in over 150 films and has won several awards, including two National Film Awards and two Filmfare Awards. He received the Padma Shri, India's fourth-highest civilian honour, from the Government of India in 2009. Kumar is one of the most prolific actors in Indian cinema. Forbes included Kumar in their lists of both highest-paid celebrities and highest-paid actors in the world from 2015 to 2020. Between 2019 and 2020, he was the only Indian on both lists. Kumar began his career in 1991 with Saugandh and had his first commercial success a year later with the action thriller Khiladi. The film established him as an action star in the 1990s and led to several films in the Khiladi film series, in addition to other action films such as Mohra (1994) and Jaanwar (1999). Although his early tryst with romance in Yeh Dillagi (1994) was positively received, it was in the next decade that Kumar expanded his range of roles. He gained recognition for the romantic films Dhadkan (2000), Andaaz (2003), Namastey London (2007), and for his slapstick comic performances in several films including Hera Pheri (2000), Mujhse Shaadi Karogi (2004), Phir Hera Pheri (2006), Bhool Bhulaiyaa (2007), and Singh Is Kinng (2008). Kumar won Filmfare Awards for his negative role in Ajnabee (2001) and his comic performance in Garam Masala (2005). While his career had fluctuated commercially, his mainstream success soared in 2007 with four consecutive box-office hits; it was consistent until a short period of decline between 2009 and 2011, after which he reinforced his status with several films, including Rowdy Rathore (2012) and Holiday (2014). 
Moreover, around this time critical response to several of his films improved; his work in Special 26 (2013), Baby (2015), Airlift (2016) and Jolly LLB 2 (2017) was acclaimed, and he won the National Film Award for Best Actor for the crime thriller Rustom (2016). He earned further notice for his self-produced social films Toilet: Ek Prem Katha (2017) and Pad Man (2018), as well as the war film Kesari (2019), and set box-office records in 2019 with Mission Mangal, Housefull 4, Good Newwz, and the 2021 action film Sooryavanshi. All of Kumar's subsequent theatrical releases failed commercially, with the exception of the comedy-drama OMG 2 (2023). In addition to acting, Kumar has worked as a stunt actor. In 2008, he started hosting Fear Factor: Khatron Ke Khiladi, which he did for four seasons. He also launched the TV reality show Dare 2 Dance in 2014, and his off-screen work includes ownership of the team Khalsa Warriors in the World Kabaddi League. He has also set up martial arts training schools for women's safety in the country. Kumar is one of India's most philanthropic actors and supports various charities. He is one of India's leading celebrity brand endorsers. From 2011 to 2023, he was a citizen of Canada. == Early life and background == Kumar was born in Old Delhi in Delhi, India, to Hari Om Bhatia (later Brijmohan Bhatia) and Aruna Bhatia in a Punjabi Hindu family. His father was an army officer. From a young age, Kumar was very interested in sports; his father, too, enjoyed wrestling. He grew up in Delhi's Chandni Chowk and later moved to Bombay (present-day Mumbai) when his father left the Army to become an accountant with UNICEF. Soon after, his sister was born, and the family lived in Koliwada, a Punjabi-dominated area of Central Bombay. He received his school education at Don Bosco High School, Matunga, while simultaneously learning karate.
He enrolled in Guru Nanak Khalsa College for higher education, but dropped out as he was not much interested in studies. He told his father that he wanted to study martial arts further, and his father saved money to send him to Thailand. Having obtained a black belt in Taekwondo in India, Kumar lived in Bangkok for five years, learning Muay Thai and working as a chef and waiter. Kumar also has a sister, Alka Bhatia. When Kumar was a teenager, his father asked him what he aspired to be, and Kumar expressed his desire to become an actor. After Thailand, Kumar worked at a travel agency in Calcutta (present-day Kolkata), as a chef at a hotel in Dhaka, and in Delhi, where he sold Kundan jewellery. Upon his return to Bombay, he began teaching martial arts. During this time, the father of one of his students, himself a model co-ordinator, recommended Kumar for modelling, which led to an assignment for a furniture showroom. Kumar made more money in his first two days of shooting than he had earned in an entire month's salary, and therefore chose modelling as a career. He worked as an assistant to photographer Jayesh Sheth for 18 months without payment to shoot his first portfolio. He also worked as a background dancer in various films. Kumar made his first screen appearance in the film Aaj under his birth name, Rajiv Hari Om Bhatia, in a bit role as a karate instructor. Kumar later stated that he changed his name to Akshay Hari Om Bhatia and chose the stage name Akshay Kumar after Kumar Gaurav's character Akshay in the film. One morning, he missed his flight for an ad shoot in Bangalore. Disappointed with himself, he visited a film studio with his portfolio, and that evening Kumar was signed for a lead role by producer Pramod Chakravarthy for the film Deedar.
== Film career == === 1991–1999: Debut, breakthrough and action films === Kumar made his first appearance as the lead actor opposite Raakhee and Shantipriya in Saugandh (1991). In the same year, he acted in the Kishore Vyas-directed Dancer, which received poor reviews. The following year he starred in the Abbas–Mustan-directed suspense thriller Khiladi, widely considered his breakthrough role. A review in The Indian Express called the film "an engrossing thriller" and described Kumar as impressive in the lead part, noting his physical appearance and strong screen presence, and commending him for being "perfectly at ease". His next release was the Raj Sippy-directed detective film Mr. Bond, based on James Bond. His last release of 1992 was Deedar, which failed to perform well at the box office. In 1993, he acted in the Keshu Ramsay-directed bilingual film Ashaant alongside Vishnuvardhan and Ashwini Bhave. Almost all of his films released during 1993, including Dil Ki Baazi, Kayda Kanoon and Sainik, did not perform well commercially. In 1994, he appeared in 11 feature films. He played a police inspector in two films: Sameer Malkan's Main Khiladi Tu Anari and Rajiv Rai's Mohra, both among the highest-grossing films of the year. Further success came later that year when he starred in the Yash Chopra-produced romance Yeh Dillagi, opposite Kajol. One of the year's biggest mainstream successes, both the film and his performance were well received by critics, with The Indian Express describing him as "always dependable" and singling out his performance. His work in the film earned him his first nomination for Best Actor at the Filmfare Awards and the Screen Awards. During the same year, Kumar also had success with films like Suhaag and the low-budget action film Elaan. All these achievements established Kumar as one of the most successful actors of the year, according to Box Office India. Kumar went on to have further success with what later became known as the Khiladi series.
He starred in the fourth and fifth action thriller films with Khiladi in the title: Sabse Bada Khiladi (1995) and Khiladiyon Ka Khiladi (1996), both directed by Umesh Mehra and released to commercial success. He played a dual role in the former. Khiladiyon Ka Khiladi co-starred Rekha and Raveena Tandon. Kumar was injured during the film's shooting and received treatment in the United States. Shubhra Gupta of The Indian Express wrote in a year-end review, "It was Akshay Kumar in Khiladiyon Ka Khiladi who packed the aisles, no doubt about it... He shoved his hair back in a slick little ponytail, much like Steven Seagal, wore ankle-length great coats, wrestled with the fearsome Undertaker, and walked away with the film." Kumar played a supporting role in the Yash Chopra-directed musical romantic drama Dil To Pagal Hai (1997), co-starring Shah Rukh Khan, Madhuri Dixit and Karisma Kapoor, for which he received his first nomination for the Filmfare Award for Best Supporting Actor. In the same year, he starred opposite Juhi Chawla in the David Dhawan-directed comedy Mr. and Mrs. Khiladi, another instalment of the Khiladi series. Unlike his previous films of the series, it failed commercially. Considerable success, however, came with another dual role in the romantic action film Aflatoon. Khalid Mohamed of Filmfare, while critical of the film, approved of Kumar's effort: "Akshay Kumar comes to life. Given something even slightly different to do, he does rise to the occasion." His following releases failed commercially, causing a setback to his film career. In 1999, Kumar played opposite Twinkle Khanna in International Khiladi, which did not do well at the box office. He received critical acclaim for his roles in the films Sangharsh and Jaanwar; while the former did not make a profit at the box office, the latter turned out to be a commercial success and marked his comeback.
=== 2000–2006: Hera Pheri and expansion to comedy films === In 2000, Kumar starred in the Priyadarshan-directed comedy Hera Pheri alongside Paresh Rawal and Suneil Shetty. The film, a remake of the Malayalam film Ramji Rao Speaking, became a commercial success and proved to be a turning point in Kumar's career. Hindustan Times noted the film's "intense portrayal of the surreality of the human condition". He also starred in the Dharmesh Darshan-directed romantic drama Dhadkan later that same year. The film performed moderately at the box office, but Kumar was praised for his acting; Rediff.com's review stated that he had proved he was a "director's actor" and that "he has worked hard on his role is apparent." That same year, he performed some of his most dangerous stunts in the Neeraj Vora-directed action thriller Khiladi 420, in which he climbed a moving plane, stood on top of the plane flying a thousand feet in the air, and jumped from the plane onto a hot-air balloon. In a later scene, he is also seen being chased by a car, dodging bullets, jumping off buildings, and climbing walls. His character in the film had two names, and his role received mixed reviews. Sukanya Verma wrote, "Negative roles and Akshay Kumar don't go hand-in-hand. [...] Akshay is ridiculously over the top and irritating to the core. However, he manages a decent performance as the sober and suave Anand." Padmaraj Nair of Screen, however, believed it was "the best performance of his career". His first release in 2001 was the Suneel Darshan-directed drama Ek Rishtaa: The Bond of Love, in which Kumar was praised for his performance. Next, he played a negative role in the Abbas–Mustan-directed film Ajnabee. While reviewing the film for Rediff.com, Sarita Tanwar termed Kumar the "surprise package" of the film, adding that he was "in total control as the bad guy." The film won him his first Filmfare Award for Best Villain and the 2002 IIFA Award for Performance in a Negative Role.
His first release in 2002 was the Dharmesh Darshan-directed romantic drama Haan Maine Bhi Pyaar Kiya. He played the role of a blind man in the Vipul Amrutlal Shah and Shaarang Dev Pandit-directed heist film Aankhen, co-starring Amitabh Bachchan, Arjun Rampal, Aditya Pancholi, Sushmita Sen and Paresh Rawal; his performance in the film was critically acclaimed. Next, he starred in the Vikram Bhatt-directed comedy Awara Paagal Deewana. Rediff.com's review of the film remarked that the sincerity and intensity seen in Hera Pheri, Ek Rishtaa: The Bond of Love and Aankhen "seems missing". His last film of the year was the Rajkumar Kohli-directed supernatural horror film Jaani Dushman: Ek Anokhi Kahani, alongside Manisha Koirala, Sunil Shetty, Sunny Deol, Aftab Shivdasani, Arshad Warsi, Aditya Pancholi and Armaan Kohli. The film was a remake of Kohli's earlier film Naagin and received mostly negative reviews from critics; Taran Adarsh wrote that "only Munish [Armaan] Kohli and Akshay Kumar leave an impact." In 2003, he starred in Suneel Darshan's action film Talaash: The Hunt Begins... opposite Kareena Kapoor. While reviewing the film, Taran Adarsh wrote, "Akshay Kumar is plain mediocre. The role hardly offers him scope to try out anything different." Next, he starred in the Raj Kanwar-directed romantic drama Andaaz alongside Lara Dutta and Priyanka Chopra. The film received mixed reviews from critics but turned out to be a commercial success at the box office and the first universal hit of 2003. In 2004, Kumar starred in Rajkumar Santoshi's action drama thriller Khakee alongside Amitabh Bachchan, Ajay Devgn and Aishwarya Rai. Kumar played Inspector Shekhar Verma, a corrupt, morally bankrupt cop who reforms during a mission to transfer an accused Pakistani spy, Dr. Iqbal Ansari (played by Atul Kulkarni), from a remote town in Maharashtra to Mumbai. The film and Kumar's acting were positively reviewed by critics.
He was nominated for the Filmfare Best Supporting Actor Award for his role in the film. His other releases included Dileep Shukla's crime film Police Force: An Inside Story, in which he starred alongside Raveena Tandon, Amrish Puri and Raj Babbar. The film's production was delayed following the break-up of the lead actors Tandon and Kumar, and upon release it received negative reviews from critics. Next, Kumar played Hari Om Patnaik, an IPS officer, in the Madhur Bhandarkar-directed Aan: Men at Work. He starred in the David Dhawan-directed romantic comedy Mujhse Shaadi Karogi alongside Salman Khan and Priyanka Chopra, playing Sunny, the roommate of Sameer (played by Khan) who pursues Rani (played by Chopra), Sameer's love interest. The film received positive reviews. Taran Adarsh praised Kumar and wrote, "Akshay Kumar is a revelation [...] he surpasses his previous work. His timing is fantastic and the conviction with which he carries off the evil streak in his personality is bound to be talked-about in days to come." His performance in the film earned him his third nomination for supporting actor at the Filmfare Awards as well as a nomination for best comic role. His other films included the Abbas–Mustan-directed Aitraaz and S. M. Iqbal's Meri Biwi Ka Jawaab Nahin. In the former, Kumar played against type as a worker wrongly accused of sexual harassment by his female boss, played by Chopra. According to the directors, Aitraaz was inspired by National Basketball Association player Kobe Bryant (who was accused of rape by a fan), and the film's development began when they read about his sexual-assault case in the newspapers. Talking about the character, Kumar said that it was realistic and could be described as a "new-age metrosexual" man, adding that Aitraaz was the boldest film he had done. In the latter, he starred opposite Sridevi; the film was shot in 1994 but released only in 2004, after a delay of 10 years.
The next year Kumar starred in the Dharmesh Darshan-directed romantic musical drama Bewafaa (2005) opposite Kareena Kapoor, playing Raja, an aspiring musician who pursues his love interest Anjali (played by Kareena Kapoor) even after she is married to Aditya Sahai (played by Anil Kapoor). The film received mixed reviews from critics, but Kumar was praised for his acting: Anupama Chopra of India Today wrote that "Kareena Kapoor and Kumar stand out", and Taran Adarsh wrote, "Akshay Kumar does well in a role that fits him like a glove." Later that year he acted in Vipul Amrutlal Shah's family drama Waqt: The Race Against Time alongside Amitabh Bachchan, and in another Priyadarshan-directed comedy, Garam Masala, alongside John Abraham. Waqt and Kumar's acting in it received mixed reviews; Vishal D'Souza wrote, "Akshay shoulders an author-backed role, carrying more of the film's emotional baggage though he is distinctly uncomfortable in the soppy-weepy scenes." Both films succeeded at the box office, and his performance in the latter earned him his second Filmfare Award, for Best Comedian. His other films included the Vikram Bhatt-directed action comedy romance Deewane Huye Paagal and the Suneel Darshan-directed romantic drama Dosti: Friends Forever. In the former he starred alongside Shahid Kapoor, Sunil Shetty and Rimi Sen, while in the latter he starred alongside Kareena Kapoor and Bobby Deol; both films received positive reviews. Kumar's first release of 2006 was the Rajkumar Santoshi-directed drama Family – Ties of Blood, followed by Suneel Darshan's Mere Jeevan Saathi and Raj Kanwar's Humko Deewana Kar Gaye. Next, he starred in a sequel to Hera Pheri titled Phir Hera Pheri; like its predecessor, the sequel became a huge success at the box office. Later that year he starred alongside Salman Khan and Preity Zinta in the Shirish Kunder-directed romantic musical film Jaan-E-Mann.
The film was a much-anticipated release and, despite receiving generally positive reviews from critics, did not do as well as expected at the box office; Vidya Pradhan of Rediff.com, however, called it a "bizarre movie." Though the film under-performed, Kumar's role as a shy, lovable nerd was praised. He ended the year with Priyadarshan's comedy murder mystery Bhagam Bhag, in which he starred alongside Lara Dutta, Govinda and Paresh Rawal and played a theatre actor. The film received mixed reviews, with Rediff.com calling Kumar the real hero of the film, and was commercially successful. The same year, he led the Heat 2006 world tour along with fellow stars Saif Ali Khan, Preity Zinta, Sushmita Sen and Celina Jaitley. === 2007–2011: Commercial success and professional setbacks === 2007 proved to be the most successful year of Kumar's career, described by box office analysts as "probably the best ever recorded by an actor, with four outright hits and no flops." His first release, the Vipul Amrutlal Shah-directed Namastey London, was critically and commercially successful, and his performance earned him a Best Actor nomination at the Filmfare Awards. Critic Taran Adarsh wrote of his performance, "he's sure to win the hearts of millions of moviegoers with a terrific portrayal in this film." Kumar's chemistry with lead actress Katrina Kaif also generated immense appreciation, with Nikhat Kazmi of The Times of India describing their pairing as "refreshing." His next two releases, the Sajid Khan-directed Heyy Babyy and Priyadarshan's Bhool Bhulaiyaa, were box office successes as well; in both he starred opposite Vidya Balan. Kumar's last release of the year, the Anees Bazmee-directed Welcome, did extremely well at the box office, attaining blockbuster status and becoming his fifth successive hit. All of Kumar's films released that year also did well in the overseas market.
Kumar appeared in a cameo role in the Farah Khan-directed Om Shanti Om; his role was listed at no. 3 on MensXP.com's list of the Top 10 Cameos in Bollywood. Kumar's first film of 2008, the Vijay Krishna Acharya-directed action thriller Tashan, marked his return to the Yash Raj Films banner after 11 years. Although a poll conducted by Bollywood Hungama named it the most anticipated release of the year, the film under-performed at the box office, grossing ₹279 million (US$5.76 million) in India. His second film, the Bazmee-directed Singh Is Kinng, in which he starred opposite Kaif, was a huge success at the box office and broke the first-week worldwide record of Om Shanti Om, the previous highest figure. His next film was the animated film Jumbo, directed by Kompin Kemgumnird. The year also saw Kumar making his small-screen debut as the host of the successful show Fear Factor – Khatron Ke Khiladi; he returned to host the show's second season in 2009. In 2009, Kumar featured opposite Deepika Padukone in the Warner Bros. and Rohan Sippy production Chandni Chowk to China. Directed by Nikhil Advani, the film was a critical and commercial failure. Kumar's next release was 8 x 10 Tasveer, an action-thriller directed by Nagesh Kukunoor that also failed commercially. His next release was Sabbir Khan's battle-of-the-sexes comedy Kambakkht Ishq. Set in Los Angeles, it was the first Indian film to be shot at Universal Studios and featured cameo appearances by Hollywood actors. The film was poorly received by critics but became a commercial success, earning over ₹840 million (US$17.35 million) worldwide. Kumar's film Blue was released on 16 October 2009; it received negative reviews and collected about ₹420 million at the box office. His last release of 2009 was Priyadarshan's De Dana Dan, in which he starred alongside Katrina Kaif, Suniel Shetty and Paresh Rawal, playing a servant who plans to kidnap his employer's dog. The film received mixed reviews.
He then appeared in the 2010 comedy Housefull, directed by Sajid Khan, which garnered the second-highest opening-weekend collection of all time. Kumar's next release was Khatta Meetha, directed by Priyadarshan, which was an average grosser and received negative reviews; Rajeev Masand of CNN-IBN called it a schizophrenic film. He also appeared in Vipul Shah's Action Replayy, which was a box office failure and received mostly negative reviews. His last film of 2010 was Tees Maar Khan. Directed by Farah Khan, the film received poor critical reviews but became moderately successful. In 2011 he starred in Patiala House and Thank You. His last film of 2011 was the Rohit Dhawan-directed Desi Boyz, which co-starred John Abraham, Chitrangada Singh and Deepika Padukone. He also co-produced a film with Russell Peters titled Breakaway (dubbed into Hindi as Speedy Singhs), which is reminiscent of his own Patiala House; Breakaway became the highest-grossing cross-cultural movie of 2011 in Canada. Kumar dubbed the role of Optimus Prime in the Hindi version of the Hollywood action blockbuster Transformers: Dark of the Moon; he took the dubbing role for his son, Aarav, and did so for free. === 2012–2021: Widespread success === His first release of 2012 was Housefull 2, a sequel to his earlier comedy Housefull, which became a huge hit. Kumar's next film was the Prabhudeva-directed action drama Rowdy Rathore, in which he played a double role opposite Sonakshi Sinha; the film earned more than ₹1.3 billion (US$24.33 million) in India. Both of these films grossed over ₹1 billion (US$18.71 million) at the box office. In 2012, he founded another production company, Grazing Goat Pictures Pvt Ltd. Joker was reportedly promoted as Kumar's 100th film, but Kumar later clarified that the 100th-film landmark had been crossed long before he even signed up for Joker: "It was a miscalculation on Shirish's part. OMG is my 116th film," he said.
Kumar kept away from the film's promotion due to differences with Kunder. Reacting to Kumar's backing out of the film's promotion, Kunder tweeted, "A true leader takes responsibility for his team and leads them through thick and thin. Never abandons them and runs away." He later deleted the tweet. Kumar next produced and starred in OMG – Oh My God! alongside Paresh Rawal; it had a slow opening but, on positive word of mouth, picked up and emerged a superhit at the box office. His last release of 2012 was Khiladi 786, the eighth instalment in his famous Khiladi series and the series' comeback after 12 years. Although the film was panned by critics, it grossed ₹970 million worldwide. His first release of 2013 was Special 26, which earned a positive critical reception and was a semi-hit at the box office. Although the movie earned him positive reviews and commercial success, trade analysts noted that it could have done much better business given its good content and Kumar's high profile. Milan Luthria chose Kumar to play Shoaib Khan (a character based on Dawood Ibrahim) in the gangster film Once Upon ay Time in Mumbai Dobaara!, the sequel to Once Upon a Time in Mumbaai. It proved to be a below-average performer at the box office and was declared a "flop" by Box Office India. It received mixed reviews; however, Kumar's acting was praised by a majority of critics. In a review for Hindustan Times, Anupama Chopra wrote that Kumar "makes a stellar killer". Madhureeta Mukherjee of The Times of India praised Kumar's performance and said that "Bhai act with flamboyance and mojo ... He gets a chance to do what he does best – herogiri (albeit less menacing, more entertaining), with charisma and clap-trap dialoguebaazi." Al Pacino saw the film's trailer and promos and admired Kumar's portrayal of the gangster Shoaib Khan, saying that the promos and posters reminded him of his own The Godfather.
Kumar said of Pacino's response: "A touch of appreciation is always held dearly in an actor's arms, even if it's from the simplest of people like our beloved spot boys. But to have your work spoken of so kindly by the world's most admired gangster Al Pacino himself – I had goose-bumps thinking about him watching the promo! I was so humbled, not only as an actor but as a fan of his legendary work." Rajeev Masand of CNN-IBN criticised Kumar for his "in-your-face flamboyance". After the film's mainly negative reviews, Kumar lashed out at critics, accusing them of lacking an understanding of the audience and of the basic ideas of filmmaking. Built on an approximate budget of ₹1 billion (US$18.71 million), it was the first major Hindi-language film to be shot in Oman. Kumar received a nomination for Best Actor in a Negative Role at the Zee Cine Awards. His next release was Anthony D'Souza's Boss, alongside Shiv Panditt and Aditi Rao Hydari. The movie received mixed reviews and performed poorly at the box office, netting ₹540 million (equivalent to ₹860 million or US$10 million in 2023) domestically. Kumar came back strongly with Holiday: A Soldier Is Never Off Duty, the Hindi remake of the 2012 Tamil film Thuppakki. The action thriller earned both critical and commercial success, entering the elite ₹1 billion (US$16.39 million) club and emerging as one of the highest grossers of 2014. He then starred in Entertainment, for which he also sang a song; the making of the song was uploaded to YouTube. His last film of 2014 was The Shaukeens, in which he appeared as himself and which he also produced. He then played the lead role in the thrillers Baby and Gabbar Is Back. Kumar's first collaboration with Karan Johar, Brothers, was released on 14 August 2015. His next release, Singh Is Bling, a quasi-sequel to 2008's Singh Is Kinng, was released on 2 October 2015 and was produced by Grazing Goat Pictures.
His first release of 2016, Airlift, released on 22 January, was critically and commercially successful; his second was Housefull 3, released on 3 June 2016. Rustom, produced by Neeraj Pandey, marked his third release of 2016. Akshay was praised for his performance in Rustom, which garnered him numerous award nominations, grossed more than ₹2 billion at the box office, and earned him the National Film Award for Best Actor. His second release of 2017 was Toilet: Ek Prem Katha, which depicted the serious social issue of the lack of toilets in certain regions of the country; Akshay's performance was praised. To promote the film, whose trailer was released on 11 June 2017, Kumar dug a toilet pit in Madhya Pradesh, and Indian Prime Minister Narendra Modi called the film a good effort to further the message of cleanliness, in line with the Swachh Bharat Abhiyan. In 2018, Akshay starred in another social drama film, Pad Man, alongside Sonam Kapoor and Radhika Apte. He later made his Tamil cinema debut in the science fiction thriller 2.0, a standalone sequel to the 2010 film Enthiran, co-starring Rajinikanth, in which he played an evil ornithologist named Pakshirajan. In 2019, Kumar appeared in the Karan Johar-produced Kesari opposite Parineeti Chopra, based on the story of the Battle of Saragarhi; the film grossed over ₹2 billion (US$28.4 million) worldwide. He next featured in Mission Mangal with an ensemble cast of Vidya Balan, Taapsee Pannu, Nithya Menen, Sharman Joshi and Sonakshi Sinha. The film tells the story of scientists at the Indian Space Research Organisation who contributed to the Mars Orbiter Mission, India's first interplanetary expedition. Housefull 4, directed by Farhad Samji, was released in October 2019. His next release, in December 2019, was Good Newwz, a romantic comedy about surrogacy opposite Kareena Kapoor Khan, produced by Karan Johar and his own banner.
All four of his films were commercially successful that year, with Mission Mangal, Housefull 4 and Good Newwz giving him three consecutive entries in the domestic 200 Crore Club (net). His only release of 2020 was the horror comedy Laxmii, directed by Raghava Lawrence, an official remake of the Tamil film Kanchana, opposite Kiara Advani. It was released on 9 November on Disney+ Hotstar, and was not released theatrically in India due to the COVID-19 pandemic. The film revolves around a man who is possessed by the ghost of a transgender woman. Despite receiving mixed-to-negative reviews from critics, it drew a huge response on both OTT and television, eventually emerging as the only genuine hit to premiere digitally. Kumar's first release of 2021, Bell Bottom, did not perform well commercially, but his second release, Rohit Shetty's actioner Sooryavanshi, proved to be a box office hit and was credited with reviving the exhibition sector for Hindi cinema after the COVID-19 pandemic in India. Towards the end of the year, he co-starred alongside Dhanush and Sara Ali Khan in Aanand L. Rai's direct-to-digital romantic comedy Atrangi Re. At release, Atrangi Re garnered the highest opening-day viewership on its streaming service, breaking the viewership records of Laxmii (2020), Hungama 2 (2021) and Shiddat (2021). === 2022–present: Career decline === In 2022, Kumar's first release was Bachchhan Paandey, a remake of Jigarthanda, in which he played the titular gangster, a name derived from Kumar's character in the 2008 film Tashan. The film paired him with Kriti Sanon and also featured Jacqueline Fernandez and Arshad Warsi. Despite an ensemble cast and hype among fans, Bachchhan Paandey gathered negative critical reception and bombed at the box office. His next release was the historical film Samrat Prithviraj (2022), based on the life of the Hindu warrior Prithviraj Chauhan; it also starred Sonu Sood, Sanjay Dutt and debutant Manushi Chhillar.
Released theatrically on 3 June 2022, the film opened to mixed reviews. Anuj Kumar of The Hindu wrote, "In order to tone down his body language and accent, Kumar has lost much of his trademark energy and could not develop the gravitas required to play the celebrated ruler. He growls like a lion who has lost his bite and despite all the air-brushing, doesn't look like the boy who became a Samrat in his 20s". Made on a budget of ₹200 crore, the film failed to recoup its massive investment and proved to be a disaster at the box office. His next film, Raksha Bandhan, released over an extended five-day weekend from 11 August, received mixed reviews. The Hindu wrote, "The film's engaging powerful anti-dowry sentiments, along with Akshay's brilliant comic timing, ensures that there is enough to keep the audience tied for two hours." The Indian Express rated the film 1.5 out of 5 stars and asked, "Do the filmmakers truly believe that such low-rent family dramas, with their uneasy mix of humour and crassness". The film failed to impress audiences and scored poorly at Indian ticket windows, earning a mere $4.2 million over the extended weekend. Of Ram Setu, directed by Abhishek Sharma and released on 25 October 2022, The Hindustan Times wrote, "Akshay Kumar got a golden opportunity ... as his character is unlike anything he has done in the recent past. Ram Setu embraces the best of the Indiana Jones and National Treasure schools of storytelling with desi action". With the poor performance of Ram Setu, 2022 proved to be one of the worst years for Kumar in recent times. His first release of 2023 was Selfiee, an official remake of Driving Licence, which also starred Emraan Hashmi, Diana Penty and Nushrratt Bharuccha; this film too bombed at the box office. Kumar next appeared in OMG 2, a spiritual successor to OMG – Oh My God!, in which he played a messenger of Lord Shiva.
Ganesh Aaglave of Firstpost stated, "Akshay's character as the messenger of Lord Shiva seems to be an extended cameo. However, the actor impresses with his expressions and dialogues and delivers one of his best performances in recent times." The film became a box office hit. Kumar next appeared opposite Parineeti Chopra as Jaswant Singh Gill, a brave and diligent mining engineer, in the disaster thriller Mission Raniganj. It received a mixed-to-positive response from critics but flopped at the box office. His first release of 2024 was Ali Abbas Zafar's actioner Bade Miyan Chote Miyan, co-starring Tiger Shroff. Made on a budget of ₹350 crore, the film opened to largely negative reviews from critics and did a lifetime business of ₹102.16 crore at the worldwide box office, proving to be a disaster and continuing Kumar's string of flops. He was next seen in Sarfira alongside Radhika Madan. The movie was a remake of the Tamil hit Soorarai Pottru, itself an adaptation of Air Deccan founder G. R. Gopinath's memoir Simply Fly: A Deccan Odyssey. The remake performed poorly at the box office, collecting just ₹20 crore against a budget of ₹80 crore. Following this, Kumar played a cosmetic surgeon in Khel Khel Mein alongside Vaani Kapoor. Sukanya Verma noted, "The chemistry between the motley bunch of actors works in fits and starts but Akshay Kumar's gift of the gab come out tops." Despite positive reviews, it emerged as another commercial failure for him. Kumar then reprised his character from Sooryavanshi in Singham Again, which emerged a commercial success and one of the highest-grossing films of the year. His first release of 2025 was Sky Force, based on India's first airstrike, on the Sargodha airbase, during the 1965 Indo-Pakistani war. This marked his second collaboration with Maddock Films, following Stree 2. Pragati Awasthi of WION wrote, "the aura and energy he brings to his character always manages to capture viewer attention.
While his performance isn't extraordinary, he still manages to make the audience feel emotional with his poignant scenes." The film, however, was a box office failure. Following this, Kumar played advocate C. Sankaran Nair in Kesari Chapter 2, set against the backdrop of the 1919 Jallianwala Bagh massacre. Titas Chowdhury stated, "Akshay Kumar sheds off his aura as a superstar and chooses to lean on his acting prowess, taking it a notch higher than Sarfira." The film emerged as a moderate commercial success, his first since OMG 2. == Other work == === Television === In 2004, Kumar presented the seven-part miniseries Seven Deadly Arts with Akshay Kumar, playing both master and learner as he introduced viewers to seven martial arts: kalaripayattu, Shaolin kung fu, karate, taekwondo, aikido, Muay Thai and capoeira. Episodes aired on successive Sundays. The following year Kumar was awarded the highest Japanese honour of "Katana" and a sixth-degree black belt in Kuyukai Gōjū-ryū karate. In 2008, Kumar began hosting India's stunt-based reality game show Fear Factor: Khatron Ke Khiladi, presenting seasons 1, 2 and 4. The show was widely appreciated, became hugely successful in popular culture, and is now hosted by Rohit Shetty. In 2011, Kumar hosted India's first MasterChef television show on Star Plus; it was viewed by 18.2 million viewers and showed that Indian audiences are open to experimentation and look forward to innovation in television entertainment. In 2014, he mentored the reality show Dare 2 Dance, which aired on Life OK from 6 September. It broke the norms of the regular dance format: trained and famed dancers were judged not only on their dance performances but also on the stunts they had to perform to survive on the dance floor.
In 2014, Kumar also produced the successful television serial Jamai Raja, starring Ravi Dubey and Nia Sharma, which established them as leading actors in the Indian television industry. In 2017, he judged The Great Indian Laughter Challenge with Mallika Dua, Hussain Dalal and Zakir Khan, who were later replaced by Sajid Khan and Shreyas Talpade. The show launched several popular names, including Vishwash Chauhan and Shyam Rangeela. Kumar joined Bear Grylls for an episode of Into The Wild, which aired on the Discovery Channel on 14 September 2020. The episode was the second highest-rated show in the infotainment genre in terms of TRP, with 1.1 crore people watching the premiere on Discovery Network channels. === Fitness work and stage performances === Kumar promotes health, fitness and exercise, and stays in shape with a combination of kickboxing, basketball, swimming and parkour as well as working out. He began practising karate while in the eighth standard. He intended to open a martial arts school, and the state government of Maharashtra allotted land for it in Bhayandar. He helped his wife Twinkle Khanna edit the drafts of her debut book Mrs Funnybones. He is a teetotaller but has endorsed a liquor brand in the past; half of the fee was given as daan (charity work), something he has been doing more of in recent times. In 2013, one of his fans travelled from Haryana to Mumbai to meet him, a journey that took 42 days. When he reached Kumar's building, he was informed that Kumar was in Casablanca, and he waited outside the building for a week before Kumar met him. Kumar supported and lauded Sports Minister Rajyavardhan Singh Rathore for his stand against corruption, after the minister said the government had entrusted the CBI with investigating corrupt officials in the sports department. On 9 August 2014, Kumar performed at his 500th live show.
The show was held at the O2 Arena in London as part of the inaugural function of the World Kabaddi League; his first live show had been held in 1991 in Delhi. Kumar owns the Bengal Warriors, a team in the Indian kabaddi league. At the launch of his upcoming Prime Video series The End, Kumar set himself on fire for a stunt, saying he is a stuntman first and an actor later. === Production === == Personal life == === Relationships and family === During the late 1990s, Kumar dated actress Raveena Tandon; although they were engaged, they later parted ways. Kumar married Twinkle Khanna, the daughter of actors Rajesh Khanna and Dimple Kapadia, on 17 January 2001. Together they have a son and a daughter. He is known as a protective father who keeps his children away from the media, stating that he wants to "give them a normal childhood." In 2009, while performing at a show for Levi's at Lakme Fashion Week, Kumar asked Twinkle to unbutton his jeans; the incident sparked a controversy that led to a police case being filed against them. === Religion === Kumar was initially religious: until 2017 he was a practising Shaiva Hindu who regularly visited shrines and temples across the country, including the famed Vaishno Devi Mandir. In 2018 he said "there is only one God" and spoke against bringing religion into politics, and in March 2020 he stated, "I don't believe in any religion. I only believe in being Indian." === Citizenship === Sometime during or after the 2011 Canadian federal election, the Conservative government there granted Canadian citizenship to Kumar by invoking a little-known law that allowed circumventing the usual residency requirement for Canadian immigrants. According to former Conservative Party minister Tony Clement, the citizenship was awarded in return for Kumar's offer to put his "star power to use to advance Canada-Indian relations," including Canada's "trade relations, commercial relations, in the movie sector, in the tourism sector."
Although Kumar had earlier appeared at a campaign event for Conservative Prime Minister Stephen Harper in Brampton, Ontario, a city with a large Indo-Canadian population, and praised Harper, Clement denied that the citizenship was a reward for partisan support. Kumar had received an honorary doctorate from the University of Windsor, and in a 2010 interview with The Economist claimed he had "dual citizenship." He was one of 15 international celebrities invited to the Olympic torch-bearer rally in Canada in 2009. In December 2019, Kumar stated that he had applied for an Indian passport and planned to give up his Canadian citizenship. On 15 August 2023, on the occasion of Independence Day, Kumar confirmed that he had regained Indian citizenship and renounced his Canadian citizenship, as required by Indian law. == In the media == In the Indian media, Kumar is referred to as "Khiladi" or "Khiladi Kumar" for performing many dangerous stunts himself, and for his Khiladi film series. In 2009, Madame Tussauds proposed making his wax figure, citing his international fan following, but he declined, saying he did not consider it of great importance. From 2015, he was continuously featured in Forbes's list of the world's ten highest-paid actors. In 2019, he was the fourth highest-paid actor in the world, behind Dwayne Johnson, Chris Hemsworth and Robert Downey Jr., ranking 33rd on the Forbes US list of the world's highest earners with $65 million. In 2020, he ranked sixth, the only Indian actor in the top 10 of the highest-paid actors list, with earnings of $48.5 million. Kumar is the first Indian film actor whose films' domestic net lifetime collections crossed ₹20 billion (US$341 million), by 2013, and ₹30 billion (US$446 million), by 2016. Kumar was named "Sexiest Man Alive" by People magazine in 2008. He received the NDTV Imagine Best Entertainer of the Year award for 2007 from the Apsara Film & Television Producers Guild Awards (FTPGI).
In 2009, he was awarded the highest Japanese honour of "Katana" and a sixth-degree black belt in Kuyukai Gōjū-ryū karate. Kumar received the Ultimate Man of the Year award at the GQ Awards in 2015 and the HT Hottest Trendsetter (Male) award at HT India's Most Stylish in 2019. Kumar has strong brand value and credibility in the advertising world. In 2020, he topped the celebrity brand-value list at $118 million, a jump of 13% over the previous year; according to Duff and Phelps, he was the third most valued celebrity in 2021, at $139 million. Kumar has endorsed brands including Thums Up, Honda, Tata Motors, Dollar, Harpic, Sparx, Livguard Battery and Kajaria Tiles, and has been a brand ambassador for Canada Tourism. He ranked number 1 on TAM's list of the most visible stars in TV advertisements from 2019 to 2022, with an average visibility of 37 hours per day summed across all channels. Kumar has a significant fan following in the Indian diaspora as well as in European and African countries. As of September 2022, he was the most-followed Indian actor on social media, including Facebook, Twitter and Instagram; memes from his comedy films, especially his facial expressions, are hugely popular online. In 2019, he interviewed Prime Minister Narendra Modi on television; the interview drew controversy on social media, and he later clarified that it was a personal, non-political conversation conducted as a common man. Starting in 2013, Kumar was the Hindi film industry's highest advance taxpayer for six consecutive years; he paid ₹190 million (US$3.24 million) in advance tax that year. In August 2022, he received a certificate from the Income Tax Department for being the highest taxpayer. Kumar has criticised award functions and does not believe in them, saying, "Organizers have asked me to perform at award nights. They said that they would pay me half the price and they would also give me an award.
I replied saying, 'You pay me the whole amount and keep your award.'" He has, however, called the National Film Award the most prestigious award in the country. Kumar holds the Guinness World Record for the most selfies taken in three minutes (184), set at a promotional event for his film Selfiee in Mumbai; he told the media it was a "way of paying tribute" to his fans. In March 2023, Kumar performed in various cities in the United States on "The Entertainers" tour, alongside Disha Patani, Mouni Roy, Nora Fatehi, Sonam Bajwa, Aparshakti Khurana, Stebin Ben and Zahrah S Khan. == Philanthropy and social service == Kumar is one of India's most philanthropic celebrities. He and his co-star Tamannaah Bhatia donated all the clothes from their 2014 film Entertainment to Youth Organisation in Defence of Animals (YODA), a charity that works for the welfare of street animals. He has donated ₹5 million (US$94,000) to Salman Khan's Being Human Foundation, and in 2015 donated ₹9 million to drought-hit farmers in Maharashtra, which Khan himself announced on Twitter. Kumar also helped a contestant of the TV reality show Khatron Ke Khiladi by giving him Rs. 25 lakh after learning that the contestant needed the show's prize money for his father's cancer treatment. He has also donated ₹5 million to aid drought-affected people through the Maharashtra government's Jalyukt Shivar Abhiyan. In March 2013, he started a 30-bed cancer shelter for policemen in Naigaon; in December 2013, Vishwas Nangre Patil, Additional Commissioner of Police, West Mumbai, visited his gym along with several trainee officers. When acid attack survivor and activist Laxmi Agarwal, on whose life Deepika Padukone's Chhapaak was based, was struggling to make ends meet, Kumar came to her rescue and transferred Rs.
5 lakh into her account so that she could support herself until she found a job. He launched an insurance scheme for registered stunt directors in 2017; the family of the late stunt director Abdul Sattar Munna received compensation of Rs. 20 lakh under the scheme, and he has since been asked to extend help to stunt choreographers above 55 years of age, who are not eligible for insurance in their own name. During the promotions of Rustom, Kumar expressed his wish to have served the nation; he has played a soldier in Holiday and Kesari, a special agent in Baby and a naval officer in Rustom. He was applauded by many for his concern for the families of 12 slain jawans of the Central Reserve Police Force (CRPF) who were killed in Chhattisgarh's Sukma district on 11 March 2017, donating Rs. 1.08 crore to the families of the martyred jawans. In August 2016, Kumar had donated Rs. 80 lakh to the families of army men, giving Rs. 5 lakh to each family and saying that soldiers need money along with "samman" (respect). In October 2016, he donated Rs. 9 lakh to the family of a martyred BSF jawan. He also donated Rs. 1.5 crore towards a shelter for transgender persons in Chennai and is supporting its construction; Laxmii director Raghava Lawrence announced the initiative, with pictures, on Facebook. Kumar endorses the Swachh Bharat Mission and has built toilets in Madhya Pradesh; he had earlier posted a video on social media about the importance of individual household toilets, the theme of his film Toilet, and he was named Uttarakhand's brand ambassador for the Swachhta Abhiyaan. In 2017, Kumar launched the Bharat Ke Veer app with the help of the home ministry.
The platform lets people send money directly to the bank accounts of family members of soldiers martyred in the line of duty. On The Kapil Sharma Show, Kumar said that he was only carrying out his responsibility as a citizen with the app: "We would say that we are fulfilling our duty. The loss of those who are martyred can never be made up, that is for sure. The government gives what it gives, but as civilians we too have a duty. This is the kind of app where there is no NGO and no government in between," he said. In the aftermath of the Pulwama terror attack on CRPF jawans, Kumar came forward to help the families of those who sacrificed their lives in the attack. He donated Rs. 15 lakh to a martyred jawan's family and urged his fans to do the same, and further pledged to donate Rs. 5 crore through the Bharat Ke Veer app. He additionally donated Rs. 9 lakh each to the families of 12 CRPF jawans killed in Chhattisgarh. In mid-2020, when floods devastated Kerala, Assam and Chennai, destroying the houses, land and crops of many families and killing many people, Kumar donated Rs. 1 crore each to Kerala, Assam and Chennai; Assam Chief Minister Sarbananda Sonowal thanked him. Kumar was one of the first personalities from Bollywood to contribute to the PM CARES Fund at the start of the first wave of the COVID-19 pandemic in March 2020, donating Rs. 25 crore to the Prime Minister's fund and another Rs. 1 crore to former cricketer Gautam Gambhir's charity for the same cause. Gambhir revealed on Twitter that Kumar had donated Rs. 1 crore to his foundation during the deadly second wave to help people affected by the coronavirus, writing, "Every help in this gloom comes as a ray of hope. Thanks a lot Akshay Kumar for committing Rs 1 crore to #GGF for food, meds and oxygen for the needy! God bless."
In 2021, Kumar donated ₹1 crore for the construction of a school building in Neeru village in the Bandipora district of Jammu and Kashmir. In August 2024, he donated ₹1.21 crore to the Haji Ali Dargah for its maintenance. == Awards and nominations == Kumar has received two Filmfare Awards from 13 nominations: Best Villain for Ajnabee (2002) and Best Comedian for Garam Masala (2006), as well as the National Film Award for Best Actor for Rustom and Airlift (both 2016). In 2008, the University of Windsor conferred an honorary Doctorate of Law on Kumar in recognition of his contribution to Indian cinema, and the following year he was awarded the Padma Shri by the Government of India. In 2011, The Asian Awards honoured Kumar for his outstanding achievement in cinema. == Notes == == References == == External links == Akshay Kumar at IMDb Akshay Kumar at Bollywood Hungama Akshay Kumar at DNA India Collected news and commentary at The Times of India |
Wikipedia:Price of stability#0 | In game theory, the price of stability (PoS) of a game is the ratio between the best objective function value of one of its equilibria and that of an optimal outcome. The PoS is relevant for games in which there is some objective authority that can influence the players a bit, and perhaps help them converge to a good Nash equilibrium. When measuring how efficient a Nash equilibrium is in a specific game, one often also considers the price of anarchy (PoA), the ratio between the worst objective function value of one of its equilibria and that of an optimal outcome. == Examples == Another way of expressing PoS is: PoS = (value of best Nash equilibrium) / (value of optimal solution), with PoS ≥ 0. In particular, if the optimal solution is a Nash equilibrium, then the PoS is 1. In the following prisoner's dilemma game, since there is a single equilibrium (B, R), we have PoS = PoA = 1/2. In the following example, a version of the battle of the sexes game, there are two equilibrium points, (T, L) and (B, R), with values 3 and 15, respectively. The optimal value is 15. Thus, PoS = 1 while PoA = 1/5. == Background and milestones == The price of stability was first studied by A. Schulz and N. Stier-Moses, while the term was coined by E. Anshelevich et al. Schulz and Stier-Moses focused on equilibria in a selfish routing game in which edges have capacities. Anshelevich et al. studied network design games and showed that a pure-strategy Nash equilibrium always exists, with the price of stability in this game being at most the nth harmonic number in directed graphs. For undirected graphs, Anshelevich et al. presented a tight bound of 4/3 on the price of stability for the case of a single source and two players.
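For small games in normal form, both ratios can be computed directly by enumerating the pure Nash equilibria. The sketch below is an illustrative implementation, not from the article; the payoff matrix is a hypothetical battle-of-the-sexes variant chosen so that the equilibrium welfare values are the 3 and 15 quoted above, with total payoff as the objective function.

```python
from fractions import Fraction

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a bimatrix game.

    payoffs[r][c] = (row player's payoff, column player's payoff).
    """
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            u_row, u_col = payoffs[r][c]
            # No profitable unilateral deviation for either player.
            if all(payoffs[r2][c][0] <= u_row for r2 in range(rows)) and \
               all(payoffs[r][c2][1] <= u_col for c2 in range(cols)):
                equilibria.append((r, c))
    return equilibria

def pos_poa(payoffs, welfare=lambda u: u[0] + u[1]):
    """Price of stability and price of anarchy w.r.t. a welfare function."""
    eq = pure_nash_equilibria(payoffs)
    opt = max(welfare(payoffs[r][c]) for r in range(len(payoffs))
              for c in range(len(payoffs[0])))
    eq_values = [welfare(payoffs[r][c]) for r, c in eq]
    return Fraction(max(eq_values), opt), Fraction(min(eq_values), opt)

# Hypothetical battle-of-the-sexes variant: equilibria (T, L) and (B, R)
# with welfare 3 and 15, optimum 15, so PoS = 1 and PoA = 1/5.
game = [[(2, 1), (0, 0)],
        [(0, 0), (5, 10)]]
print(pos_poa(game))  # (Fraction(1, 1), Fraction(1, 5))
```

Using exact `Fraction` arithmetic avoids floating-point noise in the ratios.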
Jian Li has proved that for undirected graphs with a distinguished destination to which all players must connect, the price of stability of the Shapley network design game is O(log n / log log n), where n is the number of players. On the other hand, the price of anarchy is about n in this game. == Network design games == === Setup === Network design games provide a very natural motivation for the price of stability, since in these games the price of anarchy can be much worse than the price of stability. Consider the following game: there are n players; each player i aims to connect s_i to t_i on a directed graph G = (V, E); the strategies P_i for player i are all paths from s_i to t_i in G; each edge e has a cost c_e; under "fair cost allocation", when n_e players choose edge e, the cost is split equally among them, so each pays d_e(n_e) = c_e / n_e; the cost to player i is C_i(S) = Σ_{e ∈ P_i} c_e / n_e; and the social cost is the sum of the player costs, SC(S) = Σ_i C_i(S) = Σ_{e ∈ S} n_e · (c_e / n_e) = Σ_{e ∈ S} c_e. === Price of anarchy === The price of anarchy can be Ω(n). Consider the following network design game, and two different equilibria in it. If everyone shares the edge of cost 1 + ε, the social cost is 1 + ε. This equilibrium is indeed optimal.
Note, however, that everyone sharing the edge of cost n is a Nash equilibrium as well: each agent pays 1 at equilibrium, and switching alone to the other edge would raise his cost to 1 + ε. === Lower bound on price of stability === Here is a pathological game in the same spirit for the price of stability. Consider n players, each originating at s_i and trying to connect to t; the cost of unlabeled edges is taken to be 0. The optimal strategy is for everyone to share an edge of cost 1 + ε, yielding total social cost 1 + ε. However, this game has a unique Nash equilibrium. Note that at the optimum each player pays (1 + ε)/n, so player 1 can decrease his cost by switching to his own edge of cost 1/n. Once this has happened, it is in player 2's interest to switch to the edge of cost 1/(n − 1), and so on. Eventually, the agents reach the Nash equilibrium in which each pays for his own edge. This allocation has social cost 1 + 1/2 + ⋯ + 1/n = H_n, the nth harmonic number, which is Θ(log n). Even though H_n is unbounded, the price of stability, Θ(log n), is exponentially better than the price of anarchy, Θ(n), in this game. === Upper bound on price of stability === Note that by design, network design games are congestion games; therefore, they admit a potential function Φ = Σ_e Σ_{i=1}^{n_e} c_e / i. Theorem.
[Theorem 19.13 from Reference 1] Suppose there exist constants A and B such that for every strategy profile S, A · SC(S) ≤ Φ(S) ≤ B · SC(S). Then the price of stability is at most B/A. Proof. The global minimum NE of Φ is a Nash equilibrium, so SC(NE) ≤ (1/A) · Φ(NE) ≤ (1/A) · Φ(OPT) ≤ (B/A) · SC(OPT). Now recall that the social cost was defined as the sum of edge costs, so Φ(S) = Σ_{e ∈ S} Σ_{i=1}^{n_e} c_e / i = Σ_{e ∈ S} c_e · H_{n_e} ≤ Σ_{e ∈ S} c_e · H_n = H_n · SC(S). Since H_{n_e} ≥ 1 for every used edge, we also have Φ(S) ≥ Σ_{e ∈ S} c_e = SC(S), so A = 1, and the computation above gives B = H_n; invoking the theorem shows that the price of stability is at most H_n. == See also == Price of anarchy Competitive facility location game, a game with no price of stability. == References == A. S. Schulz, N. E. Stier-Moses. On the performance of user equilibria in traffic networks. Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2003. E. Anshelevich, A. Dasgupta, J. Kleinberg, É. Tardos, T. Wexler, T. Roughgarden. The Price of Stability for Network Design with Fair Cost Allocation. SIAM Journal on Computing, 38:4, 1602-1623, 2008. Conference version appeared in FOCS 2004. Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). Algorithmic Game Theory (PDF). Cambridge, UK: Cambridge University Press. ISBN 0-521-87282-0. L. Agussurja and H. C. Lau. The Price of Stability in Selfish Scheduling Games. Web Intelligence and Agent Systems: An International Journal, 9:4, 2009. Jian Li.
An O(log n / log log n) upper bound on the price of stability for undirected Shapley network design games. Information Processing Letters 109 (15), 876-878, 2009. |
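The lower-bound dynamics described in the network design section can be simulated directly. The sketch below is illustrative code, not part of the article: it uses the example's edge costs (a shared edge of cost 1 + ε, plus a private edge of cost 1/i for player i), takes ε = 1/100 with exact rational arithmetic, starts all players at the optimum on the shared edge, and applies best responses until the unique equilibrium of social cost H_n is reached.

```python
from fractions import Fraction

def best_response_dynamics(n, eps=Fraction(1, 100)):
    """Run best-response dynamics in the PoS lower-bound game.

    Player i (1-indexed) chooses between the shared edge (cost 1 + eps,
    split equally among its users) and a private edge of cost 1/i.
    Returns the social cost of the resulting Nash equilibrium.
    """
    shared = [True] * n  # True = player uses the shared edge
    changed = True
    while changed:
        changed = False
        for i in range(n):
            users = sum(shared)
            if shared[i] and users > 0:
                cost_shared = (1 + eps) / users
                cost_private = Fraction(1, i + 1)
                if cost_private < cost_shared:
                    shared[i] = False  # player i defects to its own edge
                    changed = True
    # Social cost: shared edge counted once, plus each private edge used.
    return (1 + eps if any(shared) else 0) + \
        sum(Fraction(1, i + 1) for i in range(n) if not shared[i])

n = 5
H_n = sum(Fraction(1, k) for k in range(1, n + 1))
print(best_response_dynamics(n), H_n)  # both equal H_5 = 137/60
```

One defection per pass suffices here: each departure raises the per-user share on the shared edge, triggering the next player's defection, exactly the cascade described in the article.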
Wikipedia:Primary pseudoperfect number#0 | In mathematics, and particularly in number theory, N is a primary pseudoperfect number if it satisfies the Egyptian fraction equation 1/N + Σ_{p | N} 1/p = 1, where the sum is taken over only the prime divisors of N. == Properties == Equivalently, N is a primary pseudoperfect number if it satisfies 1 + Σ_{p | N} N/p = N. Except for the primary pseudoperfect number N = 2, this expression gives a representation of N as the sum of distinct divisors of N; therefore, each primary pseudoperfect number N (except N = 2) is also pseudoperfect. The eight known primary pseudoperfect numbers are 2, 6, 42, 1806, 47058, 2214502422, 52495396602, 8490421583559688410706771261086 (sequence A054377 in the OEIS). The first four of these numbers are one less than the corresponding numbers in Sylvester's sequence, but then the two sequences diverge. It is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers. The prime factors of primary pseudoperfect numbers sometimes provide solutions to Znám's problem in which all elements of the solution set are prime. For instance, the prime factors of the primary pseudoperfect number 47058 form the solution set {2, 3, 11, 23, 31} to Znám's problem. However, the smaller primary pseudoperfect numbers 2, 6, 42, and 1806 do not correspond to solutions to Znám's problem in this way, as their sets of prime factors violate the requirement that no number in the set can equal one plus the product of the other numbers. Anne (1998) observes that there is exactly one solution set of this type with k primes in it for each k ≤ 8, and conjectures that the same is true for larger k. If a primary pseudoperfect number N is one less than a prime number, then N × (N + 1) is also primary pseudoperfect.
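The defining equation is easy to test by machine. The following sketch (illustrative code of mine, not from the article) checks the equivalent condition 1 + Σ_{p | N} N/p = N using the distinct prime divisors found by trial division:

```python
def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_primary_pseudoperfect(n):
    """True iff 1 + sum(n/p over distinct prime divisors p of n) equals n."""
    if n < 2:
        return False
    return 1 + sum(n // p for p in prime_factors(n)) == n

# The five smallest known primary pseudoperfect numbers pass the test,
# and a scan confirms there are no others below 2000.
assert all(is_primary_pseudoperfect(n) for n in [2, 6, 42, 1806, 47058])
print([n for n in range(2, 2000) if is_primary_pseudoperfect(n)])
# [2, 6, 42, 1806]
```

Since 47058 is primary pseudoperfect and 47059 is prime, the rule stated above predicts that 47058 × 47059 = 2214502422 is primary pseudoperfect as well, which the same test confirms.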
For instance, 47058 is primary pseudoperfect and 47059 is prime, so 47058 × 47059 = 2214502422 is also primary pseudoperfect. == History == Primary pseudoperfect numbers were first investigated and named by Butske, Jaje, and Mayernik (2000). Using computational search techniques, they proved the remarkable result that for each positive integer r up to 8, there exists exactly one primary pseudoperfect number with precisely r (distinct) prime factors, namely, the rth known primary pseudoperfect number. Those with 2 ≤ r ≤ 8, when reduced modulo 288, form the arithmetic progression 6, 42, 78, 114, 150, 186, 222, as was observed by Sondow and MacMillan (2017). == See also == Giuga number == References == Anne, Premchand (1998), "Egyptian fractions and the inheritance problem", The College Mathematics Journal, 29 (4), Mathematical Association of America: 296–300, doi:10.2307/2687685, JSTOR 2687685. Butske, William; Jaje, Lynda M.; Mayernik, Daniel R. (2000), "On the equation ∑_{p|N} 1/p + 1/N = 1, pseudoperfect numbers, and perfectly weighted graphs", Mathematics of Computation, 69: 407–420, doi:10.1090/S0025-5718-99-01088-1. Sondow, Jonathan; MacMillan, Kieren (2017), "Primary pseudoperfect numbers, arithmetic progressions, and the Erdős–Moser equation", The American Mathematical Monthly, 124 (3): 232–240, arXiv:1812.06566, doi:10.4169/amer.math.monthly.124.3.232, S2CID 119618783. == External links == Primary Pseudoperfect Number at PlanetMath. Weisstein, Eric W. "Primary Pseudoperfect Number". MathWorld. |
Wikipedia:Prime (order theory)#0 | In mathematics, an element p of a partial order (P, ≤) is a meet-prime element when the principal ideal generated by p (the set of all elements ≤ p) is a prime ideal. Equivalently, if P is a lattice, p is meet-prime when p is not the top element and, for all a, b in P, a ∧ b ≤ p implies a ≤ p or b ≤ p. == See also == Join and meet == References == Roman, Steven (2008), Lattices and ordered sets, New York: Springer, p. 50, ISBN 978-0-387-78900-2, MR 2446182. |
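As a concrete illustration (an example of mine, not from the article), the lattice condition can be checked by brute force in the divisor lattice of 12, where the order is divisibility and the meet of two divisors is their gcd:

```python
from math import gcd

# Divisor lattice of 12 under divisibility: meet = gcd, top element = 12.
P = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    """a <= b in the divisibility order iff a divides b."""
    return b % a == 0

def is_meet_prime(p, top=12):
    """p is meet-prime: p != top, and a ∧ b <= p implies a <= p or b <= p."""
    if p == top:
        return False
    return all(leq(a, p) or leq(b, p)
               for a in P for b in P
               if leq(gcd(a, b), p))

print([p for p in P if is_meet_prime(p)])  # [3, 4, 6]
```

Note that 2 fails the test: gcd(3, 4) = 1 divides 2, yet neither 3 nor 4 divides 2.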
Wikipedia:Prime avoidance lemma#0 | In algebra, the prime avoidance lemma says that if an ideal I in a commutative ring R is contained in a union of finitely many prime ideals P_i, then it is contained in P_i for some i. There are many variations of the lemma (cf. Hochster); for example, if the ring R contains an infinite field, or a finite field of sufficiently large cardinality, then the statement follows from the fact in linear algebra that a vector space over an infinite field, or over a finite field of large cardinality, is not a finite union of its proper vector subspaces. == Statement and proof == The following statement and argument are perhaps the most standard. Statement: Let E be a subset of R that is an additive subgroup of R and is multiplicatively closed. Let I_1, I_2, …, I_n (n ≥ 1) be ideals such that I_i is a prime ideal for i ≥ 3. If E is not contained in any of the I_i, then E is not contained in the union ∪ I_i. Proof by induction on n: The idea is to find an element that is in E but not in any of the I_i. The base case n = 1 is trivial. Next suppose n ≥ 2. For each i, choose z_i ∈ E − ∪_{j ≠ i} I_j, where the set on the right is nonempty by the inductive hypothesis. We can assume z_i ∈ I_i for all i; otherwise, some z_i avoids all the I_i and we are done. Put z = z_1 … z_{n−1} + z_n. Then z is in E but not in any of the I_i. Indeed, if z is in I_i for some i ≤ n − 1, then z_n is in I_i, a contradiction. Suppose z is in I_n.
Then z_1 … z_{n−1} is in I_n. If n = 2, we are done. If n > 2, then, since I_n is a prime ideal, some z_i with i < n is in I_n, a contradiction. == E. Davis' prime avoidance == There is the following variant of prime avoidance due to E. Davis: if x is an element of R, J an ideal of R, and p_1, …, p_r prime ideals such that (x) + J is not contained in any p_i, then there exists y in J such that x + y is not in any p_i. Proof: We argue by induction on r. Without loss of generality, we can assume there is no inclusion relation between the p_i; otherwise we can use the inductive hypothesis. Also, if x ∉ p_i for each i, then we are done; thus, without loss of generality, we can assume x ∈ p_r. By the inductive hypothesis, we find a y in J such that x + y ∉ ∪_{i=1}^{r−1} p_i. If x + y is not in p_r, we are done. Otherwise, note that J ⊄ p_r (since x ∈ p_r while (x) + J ⊄ p_r) and, since p_r is a prime ideal, we have p_r ⊅ J p_1 ⋯ p_{r−1}. Hence, we can choose y′ in J p_1 ⋯ p_{r−1} that is not in p_r. Then, since x + y ∈ p_r, the element x + y + y′ has the required property: it avoids p_r, and it avoids each p_i with i < r because y′ lies in p_i. ∎ === Application === Let A be a Noetherian ring, I an ideal generated by n elements, and M a finite A-module such that IM ≠ M.
Also, let d = depth A ( I , M ) {\displaystyle d=\operatorname {depth} _{A}(I,M)} = the maximal length of M-regular sequences in I = the length of every maximal M-regular sequence in I. Then d ≤ n {\displaystyle d\leq n} ; this estimate can be shown using the above prime avoidance as follows. We argue by induction on n. Let { p 1 , … , p r } {\displaystyle \{{\mathfrak {p}}_{1},\dots ,{\mathfrak {p}}_{r}\}} be the set of associated primes of M. If d > 0 {\displaystyle d>0} , then I ⊄ p i {\displaystyle I\not \subset {\mathfrak {p}}_{i}} for each i. If I = ( y 1 , … , y n ) {\displaystyle I=(y_{1},\dots ,y_{n})} , then, by prime avoidance, we can choose x 1 = y 1 + ∑ i = 2 n a i y i {\displaystyle x_{1}=y_{1}+\sum _{i=2}^{n}a_{i}y_{i}} for some a i {\displaystyle a_{i}} in A {\displaystyle A} such that x 1 ∉ ∪ 1 r p i {\displaystyle x_{1}\not \in \cup _{1}^{r}{\mathfrak {p}}_{i}} = the set of zero divisors on M. Now, I / ( x 1 ) {\displaystyle I/(x_{1})} is an ideal of A / ( x 1 ) {\displaystyle A/(x_{1})} generated by n − 1 {\displaystyle n-1} elements and so, by inductive hypothesis, depth A / ( x 1 ) ( I / ( x 1 ) , M / x 1 M ) ≤ n − 1 {\displaystyle \operatorname {depth} _{A/(x_{1})}(I/(x_{1}),M/x_{1}M)\leq n-1} . The claim now follows. == Notes == == References == Mel Hochster, Dimension theory and systems of parameters, a supplementary note Matsumura, Hideyuki (1986). Commutative ring theory. Cambridge Studies in Advanced Mathematics. Vol. 8. Cambridge University Press. ISBN 0-521-36764-6. MR 0879273. Zbl 0603.13001. |
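The article's opening remark about fields of large cardinality can be made concrete with a toy computation (illustrative code, not part of the article): over the two-element field F_2, a vector space can be a finite union of proper subspaces, which is exactly why the lemma must require most of the ideals to be prime. The plane F_2^2 is covered by its three one-dimensional subspaces, mirroring the standard example of the ideal (x, y) in F_2[x, y]/(x, y)^2 being covered by the three non-prime ideals (x), (y) and (x + y).

```python
# Over F_2 a vector space CAN be a finite union of proper subspaces:
# the plane F_2^2 is covered by its three one-dimensional subspaces.
# This mirrors the ideal (x, y) in F_2[x, y]/(x, y)^2 being covered
# by the three non-prime ideals (x), (y) and (x + y).
plane = {(a, b) for a in (0, 1) for b in (0, 1)}
lines = [{(0, 0), (1, 0)}, {(0, 0), (0, 1)}, {(0, 0), (1, 1)}]

union = set().union(*lines)
assert union == plane                        # the three subspaces cover the plane
assert all(line != plane for line in lines)  # but no single one does
```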
Wikipedia:Prime factor exponent notation#0 | In his 1557 work The Whetstone of Witte, Welsh mathematician Robert Recorde proposed an exponent notation by prime factorisation, which remained in use until the eighteenth century and acquired the name Arabic exponent notation. The principle of Arabic exponents was quite similar to Egyptian fractions; large exponents were broken down into smaller prime numbers. Squares and cubes were so called; prime numbers from five onwards were called sursolids. Although the terms used for defining exponents differed between authors and times, the general system was the primary exponent notation until René Descartes devised the Cartesian exponent notation, which is still used today. This is a list of Recorde's terms. By comparison, here is a table of prime factors: == See also == Surd == External links (references) == Mathematical dictionary, Chas Hutton, pg 224
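The core idea of the notation, writing a large power as a tower of smaller prime powers (so that, for instance, the sixth power is the square of the cube), can be sketched with a short factorisation routine (illustrative code, not from the source):

```python
# Sketch: decompose an exponent into its prime factors, mirroring how
# Arabic exponent notation named large powers via smaller primes,
# e.g. the 6th power as the "square of the cube".
def prime_factors(n):
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

assert prime_factors(6) == [2, 3]        # square of cube
assert prime_factors(12) == [2, 2, 3]
assert prime_factors(7) == [7]           # a prime exponent, a "sursolid"
```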
Wikipedia:Primitive element (co-algebra)#0 | In algebra, a primitive element of a co-algebra C (over an element g) is an element x that satisfies μ ( x ) = x ⊗ g + g ⊗ x {\displaystyle \mu (x)=x\otimes g+g\otimes x} where μ {\displaystyle \mu } is the co-multiplication and g is an element of C that maps to the multiplicative identity 1 of the base field under the co-unit (g is called group-like). If C is a bi-algebra, i.e., a co-algebra that is also an algebra (with certain compatibility conditions satisfied), then one usually takes g to be 1, the multiplicative identity of C. The bi-algebra C is said to be primitively generated if it is generated by primitive elements (as an algebra). If C is a bi-algebra, then the set of primitive elements forms a Lie algebra with the usual commutator bracket [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} (graded commutator if C is graded). If A is a connected graded cocommutative Hopf algebra over a field of characteristic zero, then the Milnor–Moore theorem states that the universal enveloping algebra of the graded Lie algebra of primitive elements of A is isomorphic to A. (This also holds under slightly weaker requirements.) == References == http://www.encyclopediaofmath.org/index.php/Primitive_element_in_a_co-algebra
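A standard example, not spelled out above: in the polynomial bi-algebra k[x] with co-multiplication determined by making x primitive (so g = 1), the element x² fails to be primitive in characteristic zero, since μ is an algebra map:

```latex
\mu(x^2) = \mu(x)^2 = (x \otimes 1 + 1 \otimes x)^2
         = x^2 \otimes 1 + 2\,(x \otimes x) + 1 \otimes x^2
         \neq x^2 \otimes 1 + 1 \otimes x^2 .
```

In characteristic 2 the cross term 2(x ⊗ x) vanishes and x² becomes primitive, which is one reason the Milnor–Moore theorem assumes characteristic zero.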
Wikipedia:Primitive part and content#0 | In algebra, the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit). A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial. Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) is also primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts. As the computation of greatest common divisors is generally much easier than polynomial factorization, the first step of a polynomial factorization algorithm is generally the computation of its primitive part–content factorization (see Factorization of polynomials § Primitive part–content factorization). Then the factorization problem is reduced to factoring the content and the primitive part separately. Content and primitive part may be generalized to polynomials over the rational numbers, and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes the problems of computing greatest common divisors and factorizations of polynomials over the integers and over the rational numbers essentially equivalent. == Over the integers == For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse.
The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive. For example, the content of − 12 x 3 + 30 x − 20 {\displaystyle -12x^{3}+30x-20} may be either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is − 6 x 3 + 15 x − 10 = − 12 x 3 + 30 x − 20 2 , {\displaystyle -6x^{3}+15x-10={\frac {-12x^{3}+30x-20}{2}},} and thus the primitive-part-content factorization is − 12 x 3 + 30 x − 20 = 2 ( − 6 x 3 + 15 x − 10 ) . {\displaystyle -12x^{3}+30x-20=2(-6x^{3}+15x-10).} For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization − 12 x 3 + 30 x − 20 = − 2 ( 6 x 3 − 15 x + 10 ) . {\displaystyle -12x^{3}+30x-20=-2(6x^{3}-15x+10).} == Properties == In the remainder of this article, we consider polynomials over a unique factorization domain R, which can typically be the ring of integers, or a polynomial ring over a field. In R, greatest common divisors are well defined, and are unique up to multiplication by a unit of R. The content c(P) of a polynomial P with coefficients in R is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part pp(P) of P is the quotient P/c(P) of P by its content; it is a polynomial with coefficients in R, which is unique up to multiplication by a unit. If the content is changed by multiplication by a unit u, then the primitive part must be changed by dividing it by the same unit, in order to keep the equality P = c ( P ) pp ( P ) , {\displaystyle P=c(P)\operatorname {pp} (P),} which is called the primitive-part-content factorization of P. 
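For integer polynomials the definitions above are a one-liner each. The following sketch (illustrative code, using the convention that the primitive part gets a positive leading coefficient) reproduces the worked example −12x³ + 30x − 20 = −2(6x³ − 15x + 10):

```python
from math import gcd
from functools import reduce

# Content and primitive part of an integer polynomial, given as a list of
# coefficients (highest degree first).  Sign convention: the primitive
# part gets a positive leading coefficient, so the content carries the
# sign of the leading coefficient.
def content(coeffs):
    c = reduce(gcd, (abs(a) for a in coeffs))
    return -c if coeffs[0] < 0 else c

def primitive_part(coeffs):
    return [a // content(coeffs) for a in coeffs]

p = [-12, 0, 30, -20]                         # -12x^3 + 30x - 20
assert content(p) == -2
assert primitive_part(p) == [6, 0, -15, 10]   # 6x^3 - 15x + 10
```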
The main properties of the content and the primitive part are results of Gauss's lemma, which asserts that the product of two primitive polynomials is primitive, where a polynomial is primitive if 1 is the greatest common divisor of its coefficients. This implies: The content of a product of polynomials is the product of their contents: c ( P 1 P 2 ) = c ( P 1 ) c ( P 2 ) . {\displaystyle c(P_{1}P_{2})=c(P_{1})c(P_{2}).} The primitive part of a product of polynomials is the product of their primitive parts: pp ( P 1 P 2 ) = pp ( P 1 ) pp ( P 2 ) . {\displaystyle \operatorname {pp} (P_{1}P_{2})=\operatorname {pp} (P_{1})\operatorname {pp} (P_{2}).} The content of a greatest common divisor of polynomials is the greatest common divisor (in R) of their contents: c ( gcd ( P 1 , P 2 ) ) = gcd ( c ( P 1 ) , c ( P 2 ) ) . {\displaystyle c(\operatorname {gcd} (P_{1},P_{2}))=\operatorname {gcd} (c(P_{1}),c(P_{2})).} The primitive part of a greatest common divisor of polynomials is the greatest common divisor (in R) of their primitive parts: pp ( gcd ( P 1 , P 2 ) ) = gcd ( pp ( P 1 ) , pp ( P 2 ) ) . {\displaystyle \operatorname {pp} (\operatorname {gcd} (P_{1},P_{2}))=\operatorname {gcd} (\operatorname {pp} (P_{1}),\operatorname {pp} (P_{2})).} The complete factorization of a polynomial over R is the product of the factorization (in R) of the content and of the factorization (in the polynomial ring) of the primitive part. The last property implies that the computation of the primitive-part-content factorization of a polynomial reduces the computation of its complete factorization to the separate factorization of the content and the primitive part. This is generally interesting, because the computation of the prime-part-content factorization involves only greatest common divisor computation in R, which is usually much easier than factorization. 
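The multiplicativity of the content, the first consequence of Gauss's lemma listed above, can be spot-checked exhaustively on small integer polynomials (an illustrative script; contents are normalized to be positive, i.e. taken up to units):

```python
from math import gcd
from functools import reduce
from itertools import product

def content(coeffs):                     # content, normalized positive
    return reduce(gcd, (abs(a) for a in coeffs))

def mul(p, q):                           # product of coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# exhaustive spot-check on small nonzero integer polynomials of degree <= 2
polys = [list(t) for t in product(range(-3, 4), repeat=3) if any(t)]
for p in polys[:150]:
    for q in polys[:150]:
        assert content(mul(p, q)) == content(p) * content(q)
```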
== Over the rationals == The primitive-part-content factorization may be extended to polynomials with rational coefficients as follows. Given a polynomial P with rational coefficients, by rewriting its coefficients with the same common denominator d, one may rewrite P as P = Q d , {\displaystyle P={\frac {Q}{d}},} where Q is a polynomial with integer coefficients. The content of P is the quotient by d of the content of Q, that is c ( P ) = c ( Q ) d , {\displaystyle c(P)={\frac {c(Q)}{d}},} and the primitive part of P is the primitive part of Q: pp ( P ) = pp ( Q ) . {\displaystyle \operatorname {pp} (P)=\operatorname {pp} (Q).} It is easy to show that this definition does not depend on the choice of the common denominator, and that the primitive-part-content factorization remains valid: P = c ( P ) pp ( P ) . {\displaystyle P=c(P)\operatorname {pp} (P).} This shows that every polynomial over the rationals is associated with a unique primitive polynomial over the integers, and that the Euclidean algorithm allows the computation of this primitive polynomial. A consequence is that factoring polynomials over the rationals is equivalent to factoring primitive polynomials over the integers. As polynomials with coefficients in a field are more common than polynomials with integer coefficients, it may seem that this equivalence may be used for factoring polynomials with integer coefficients. In fact, the truth is exactly the opposite: every known efficient algorithm for factoring polynomials with rational coefficients uses this equivalence for reducing the problem modulo some prime number p (see Factorization of polynomials). This equivalence is also used for computing greatest common divisors of polynomials, although the Euclidean algorithm is defined for polynomials with rational coefficients. 
In fact, in this case, the Euclidean algorithm requires one to compute the reduced form of many fractions, and this makes the Euclidean algorithm less efficient than algorithms which work only with polynomials over the integers (see Polynomial greatest common divisor). == Over a field of fractions == The results of the preceding section remain valid if the ring of integers and the field of rationals are respectively replaced by any unique factorization domain R and its field of fractions K. This is typically used for factoring multivariate polynomials, and for proving that a polynomial ring over a unique factorization domain is also a unique factorization domain. === Unique factorization property of polynomial rings === A polynomial ring over a field is a unique factorization domain. The same is true for a polynomial ring over a unique factorization domain. To prove this, it suffices to consider the univariate case, as the general case may be deduced by induction on the number of indeterminates. The unique factorization property is a direct consequence of Euclid's lemma: If an irreducible element divides a product, then it divides one of the factors. For univariate polynomials over a field, this results from Bézout's identity, which itself results from the Euclidean algorithm. So, let R be a unique factorization domain, which is not a field, and R[X] the univariate polynomial ring over R. An irreducible element r in R[X] is either an irreducible element in R or an irreducible primitive polynomial. If r is in R and divides a product P 1 P 2 {\displaystyle P_{1}P_{2}} of two polynomials, then it divides the content c ( P 1 P 2 ) = c ( P 1 ) c ( P 2 ) . {\displaystyle c(P_{1}P_{2})=c(P_{1})c(P_{2}).} Thus, by Euclid's lemma in R, it divides one of the contents, and therefore one of the polynomials. If r is not in R, it is a primitive polynomial (because it is irreducible).
Then Euclid's lemma in R[X] results immediately from Euclid's lemma in K[X], where K is the field of fractions of R. === Factorization of multivariate polynomials === For factoring a multivariate polynomial over a field or over the integers, one may consider it as a univariate polynomial with coefficients in a polynomial ring with one less indeterminate. Then the factorization is reduced to factoring the primitive part and the content separately. As the content has one less indeterminate, it may be factorized by applying the method recursively. For factorizing the primitive part, the standard method consists of substituting integers for the indeterminates of the coefficients in a way that does not change the degree in the remaining variable, factorizing the resulting univariate polynomial, and lifting the result to a factorization of the primitive part. == See also == Rational root theorem == References == B. Hartley; T.O. Hawkes (1970). Rings, modules and linear algebra. Chapman and Hall. ISBN 0-412-09810-5. Page 181 of Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl 0848.13001 David Sharpe (1987). Rings and factorization. Cambridge University Press. pp. 68–69. ISBN 0-521-33718-6.
Wikipedia:Primitive recursive function#0 | In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops (that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strict subset of those general recursive functions that are also total functions. The importance of primitive recursive functions lies in the fact that most computable functions that are studied in number theory (and more generally in mathematics) are primitive recursive. For example, addition and division, the factorial and exponential function, and the function which returns the nth prime are all primitive recursive. In fact, for showing that a computable function is primitive recursive, it suffices to show that its time complexity is bounded above by a primitive recursive function of the input size. It is hence not particularly easy to devise a computable function that is not primitive recursive; some examples are shown in section § Limitations below. The set of primitive recursive functions is known as PR in computational complexity theory. == Definition == A primitive recursive function takes a fixed number of arguments, each a natural number (nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takes n arguments it is called n-ary. The basic primitive recursive functions are given by these axioms: More complex primitive recursive functions can be obtained by applying the operations given by these axioms: The primitive recursive functions are the basic functions and those obtained from the basic functions by applying these operations a finite number of times. == Examples == === Addition === A definition of the 2-ary function A d d {\displaystyle Add} , to compute the sum of its arguments, can be obtained using the primitive recursion operator ρ {\displaystyle \rho } . 
To this end, the well-known equations 0 + y = y and S ( x ) + y = S ( x + y ) . {\displaystyle {\begin{array}{rcll}0+y&=&y&{\text{ and}}\\S(x)+y&=&S(x+y)&.\\\end{array}}} are "rephrased in primitive recursive function terminology": In the definition of ρ ( g , h ) {\displaystyle \rho (g,h)} , the first equation suggests to choose g = P 1 1 {\displaystyle g=P_{1}^{1}} to obtain A d d ( 0 , y ) = g ( y ) = y {\displaystyle Add(0,y)=g(y)=y} ; the second equation suggests to choose h = S ∘ P 2 3 {\displaystyle h=S\circ P_{2}^{3}} to obtain A d d ( S ( x ) , y ) = h ( x , A d d ( x , y ) , y ) = ( S ∘ P 2 3 ) ( x , A d d ( x , y ) , y ) = S ( A d d ( x , y ) ) {\displaystyle Add(S(x),y)=h(x,Add(x,y),y)=(S\circ P_{2}^{3})(x,Add(x,y),y)=S(Add(x,y))} . Therefore, the addition function can be defined as A d d = ρ ( P 1 1 , S ∘ P 2 3 ) {\displaystyle Add=\rho (P_{1}^{1},S\circ P_{2}^{3})} . As a computation example, A d d ( 1 , 7 ) = ρ ( P 1 1 , S ∘ P 2 3 ) ( S ( 0 ) , 7 ) by Def. A d d , S = ( S ∘ P 2 3 ) ( 0 , A d d ( 0 , 7 ) , 7 ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = S ( A d d ( 0 , 7 ) ) by Def. ∘ , P 2 3 = S ( ρ ( P 1 1 , S ∘ P 2 3 ) ( 0 , 7 ) ) by Def. A d d = S ( P 1 1 ( 7 ) ) by case ρ ( g , h ) ( 0 , . . . ) = S ( 7 ) by Def. P 1 1 = 8 by Def. S . {\displaystyle {\begin{array}{lll}&Add(1,7)\\=&\rho (P_{1}^{1},S\circ P_{2}^{3})\;(S(0),7)&{\text{ by Def. }}Add,S\\=&(S\circ P_{2}^{3})(0,Add(0,7),7)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&S(Add(0,7))&{\text{ by Def. }}\circ ,P_{2}^{3}\\=&S(\;\rho (P_{1}^{1},S\circ P_{2}^{3})\;(0,7)\;)&{\text{ by Def. }}Add\\=&S(P_{1}^{1}(7))&{\text{ by case }}\rho (g,h)\;(0,...)\\=&S(7)&{\text{ by Def. }}P_{1}^{1}\\=&8&{\text{ by Def. 
}}S.\\\end{array}}} === Doubling === Given A d d {\displaystyle Add} , the 1-ary function A d d ∘ ( P 1 1 , P 1 1 ) {\displaystyle Add\circ (P_{1}^{1},P_{1}^{1})} doubles its argument, ( A d d ∘ ( P 1 1 , P 1 1 ) ) ( x ) = A d d ( x , x ) = x + x {\displaystyle (Add\circ (P_{1}^{1},P_{1}^{1}))(x)=Add(x,x)=x+x} . === Multiplication === In a similar way as addition, multiplication can be defined by M u l = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) {\displaystyle Mul=\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))} . This reproduces the well-known multiplication equations: M u l ( 0 , y ) = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) ( 0 , y ) by Def. M u l = C 0 1 ( y ) by case ρ ( g , h ) ( 0 , . . . ) = 0 by Def. C 0 1 . {\displaystyle {\begin{array}{lll}&Mul(0,y)\\=&\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))\;(0,y)&{\text{ by Def. }}Mul\\=&C_{0}^{1}(y)&{\text{ by case }}\rho (g,h)\;(0,...)\\=&0&{\text{ by Def. }}C_{0}^{1}.\\\end{array}}} and M u l ( S ( x ) , y ) = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) ( S ( x ) , y ) by Def. M u l = ( A d d ∘ ( P 2 3 , P 3 3 ) ) ( x , M u l ( x , y ) , y ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = A d d ( M u l ( x , y ) , y ) by Def. ∘ , P 2 3 , P 3 3 = M u l ( x , y ) + y by property of A d d . {\displaystyle {\begin{array}{lll}&Mul(S(x),y)\\=&\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))\;(S(x),y)&{\text{ by Def. }}Mul\\=&(Add\circ (P_{2}^{3},P_{3}^{3}))\;(x,Mul(x,y),y)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&Add(Mul(x,y),y)&{\text{ by Def. }}\circ ,P_{2}^{3},P_{3}^{3}\\=&Mul(x,y)+y&{\text{ by property of }}Add.\\\end{array}}} === Predecessor === The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rules P r e d ( 0 ) = 0 {\displaystyle Pred(0)=0} and P r e d ( S ( n ) ) = n {\displaystyle Pred(S(n))=n} . A primitive recursive definition is P r e d = ρ ( C 0 0 , P 1 2 ) {\displaystyle Pred=\rho (C_{0}^{0},P_{1}^{2})} . 
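The machinery used in these examples can be transcribed into Python for experimentation (an illustrative encoding, not part of the article: ρ is written `rho`, C_k^n is `C(k, n)`, and P_i^n is `P(i, n)`):

```python
# Illustrative encoding of the basic functions and operators: successor S,
# constant functions C(k, n), projections P(i, n) (1-indexed, as in the
# article), composition, and the primitive recursion operator rho.
def S(x):
    return x + 1

def C(k, n):
    return lambda *args: k

def P(i, n):
    return lambda *args: args[i - 1]

def compose(f, *gs):                     # f ∘ (g_1, ..., g_m)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                           # rho(g,h)(0, xs)    = g(xs)
    def f(y, *xs):                       # rho(g,h)(S(y), xs) = h(y, f(y, xs), xs)
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Add  = rho(P(1, 1), compose(S, P(2, 3)))             # as defined above
Mul  = rho(C(0, 1), compose(Add, P(2, 3), P(3, 3)))
Pred = rho(C(0, 0), P(1, 2))

assert Add(1, 7) == 8
assert Mul(3, 4) == 12
assert Pred(8) == 7
```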
As a computation example, P r e d ( 8 ) = ρ ( C 0 0 , P 1 2 ) ( S ( 7 ) ) by Def. P r e d , S = P 1 2 ( 7 , P r e d ( 7 ) ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = 7 by Def. P 1 2 . {\displaystyle {\begin{array}{lll}&Pred(8)\\=&\rho (C_{0}^{0},P_{1}^{2})\;(S(7))&{\text{ by Def. }}Pred,S\\=&P_{1}^{2}(7,Pred(7))&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&7&{\text{ by Def. }}P_{1}^{2}.\\\end{array}}} === Truncated subtraction === The limited subtraction function (also called "monus", and denoted " − . {\displaystyle {\stackrel {.}{-}}} ") is definable from the predecessor function. It satisfies the equations y − . 0 = y and y − . S ( x ) = P r e d ( y − . x ) . {\displaystyle {\begin{array}{rcll}y{\stackrel {.}{-}}0&=&y&{\text{and}}\\y{\stackrel {.}{-}}S(x)&=&Pred(y{\stackrel {.}{-}}x)&.\\\end{array}}} Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction, R S u b ( y , x ) = x − . y {\displaystyle RSub(y,x)=x{\stackrel {.}{-}}y} . Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, as R S u b = ρ ( P 1 1 , P r e d ∘ P 2 3 ) {\displaystyle RSub=\rho (P_{1}^{1},Pred\circ P_{2}^{3})} . To get rid of the reversed argument order, then define S u b = R S u b ∘ ( P 2 2 , P 1 2 ) {\displaystyle Sub=RSub\circ (P_{2}^{2},P_{1}^{2})} . As a computation example, S u b ( 8 , 1 ) = ( R S u b ∘ ( P 2 2 , P 1 2 ) ) ( 8 , 1 ) by Def. S u b = R S u b ( 1 , 8 ) by Def. ∘ , P 2 2 , P 1 2 = ρ ( P 1 1 , P r e d ∘ P 2 3 ) ( S ( 0 ) , 8 ) by Def. R S u b , S = ( P r e d ∘ P 2 3 ) ( 0 , R S u b ( 0 , 8 ) , 8 ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = P r e d ( R S u b ( 0 , 8 ) ) by Def. ∘ , P 2 3 = P r e d ( ρ ( P 1 1 , P r e d ∘ P 2 3 ) ( 0 , 8 ) ) by Def. R S u b = P r e d ( P 1 1 ( 8 ) ) by case ρ ( g , h ) ( 0 , . . . ) = P r e d ( 8 ) by Def. P 1 1 = 7 by property of P r e d . 
{\displaystyle {\begin{array}{lll}&Sub(8,1)\\=&(RSub\circ (P_{2}^{2},P_{1}^{2}))\;(8,1)&{\text{ by Def. }}Sub\\=&RSub(1,8)&{\text{ by Def. }}\circ ,P_{2}^{2},P_{1}^{2}\\=&\rho (P_{1}^{1},Pred\circ P_{2}^{3})\;(S(0),8)&{\text{ by Def. }}RSub,S\\=&(Pred\circ P_{2}^{3})\;(0,RSub(0,8),8)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&Pred(RSub(0,8))&{\text{ by Def. }}\circ ,P_{2}^{3}\\=&Pred(\;\rho (P_{1}^{1},Pred\circ P_{2}^{3})\;(0,8)\;)&{\text{ by Def. }}RSub\\=&Pred(P_{1}^{1}(8))&{\text{ by case }}\rho (g,h)\;(0,...)\\=&Pred(8)&{\text{ by Def. }}P_{1}^{1}\\=&7&{\text{ by property of }}Pred.\\\end{array}}} === Converting predicates to numeric functions === In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values (that is t {\displaystyle t} for true and f {\displaystyle f} for false), or that produce truth values as outputs. This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t {\displaystyle t} with the number 1 {\displaystyle 1} and the truth value f {\displaystyle f} with the number 0 {\displaystyle 0} . Once this identification has been made, the characteristic function of a set A {\displaystyle A} , which always returns 1 {\displaystyle 1} or 0 {\displaystyle 0} , can be viewed as a predicate that tells whether a number is in the set A {\displaystyle A} . Such an identification of predicates with numeric functions will be assumed for the remainder of this article. === Predicate "Is zero" === As an example for a primitive recursive predicate, the 1-ary function I s Z e r o {\displaystyle IsZero} shall be defined such that I s Z e r o ( x ) = 1 {\displaystyle IsZero(x)=1} if x = 0 {\displaystyle x=0} , and I s Z e r o ( x ) = 0 {\displaystyle IsZero(x)=0} , otherwise. This can be achieved by defining I s Z e r o = ρ ( C 1 0 , C 0 2 ) {\displaystyle IsZero=\rho (C_{1}^{0},C_{0}^{2})} . 
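Truncated subtraction can be executed in the same illustrative Python encoding used earlier (the primitive operators are repeated here so the snippet is self-contained):

```python
# Self-contained re-encoding (same illustrative scheme as before).
def C(k, n): return lambda *args: k
def P(i, n): return lambda *args: args[i - 1]
def compose(f, *gs): return lambda *args: f(*(g(*args) for g in gs))
def rho(g, h):
    def f(y, *xs):
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Pred = rho(C(0, 0), P(1, 2))
RSub = rho(P(1, 1), compose(Pred, P(2, 3)))   # RSub(y, x) = x ∸ y
Sub  = compose(RSub, P(2, 2), P(1, 2))        # Sub(x, y)  = x ∸ y

assert Sub(8, 1) == 7
assert Sub(3, 8) == 0    # truncated at zero
```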
Then, I s Z e r o ( 0 ) = ρ ( C 1 0 , C 0 2 ) ( 0 ) = C 1 0 ( 0 ) = 1 {\displaystyle IsZero(0)=\rho (C_{1}^{0},C_{0}^{2})(0)=C_{1}^{0}(0)=1} and e.g. I s Z e r o ( 8 ) = ρ ( C 1 0 , C 0 2 ) ( S ( 7 ) ) = C 0 2 ( 7 , I s Z e r o ( 7 ) ) = 0 {\displaystyle IsZero(8)=\rho (C_{1}^{0},C_{0}^{2})(S(7))=C_{0}^{2}(7,IsZero(7))=0} . === Predicate "Less or equal" === Using the property x ≤ y ⟺ x − . y = 0 {\displaystyle x\leq y\iff x{\stackrel {.}{-}}y=0} , the 2-ary function L e q {\displaystyle Leq} can be defined by L e q = I s Z e r o ∘ S u b {\displaystyle Leq=IsZero\circ Sub} . Then L e q ( x , y ) = 1 {\displaystyle Leq(x,y)=1} if x ≤ y {\displaystyle x\leq y} , and L e q ( x , y ) = 0 {\displaystyle Leq(x,y)=0} , otherwise. As a computation example, L e q ( 8 , 3 ) = I s Z e r o ( S u b ( 8 , 3 ) ) by Def. L e q = I s Z e r o ( 5 ) by property of S u b = 0 by property of I s Z e r o {\displaystyle {\begin{array}{lll}&Leq(8,3)\\=&IsZero(Sub(8,3))&{\text{ by Def. }}Leq\\=&IsZero(5)&{\text{ by property of }}Sub\\=&0&{\text{ by property of }}IsZero\\\end{array}}} === Predicate "Greater or equal" === Once a definition of L e q {\displaystyle Leq} is obtained, the converse predicate can be defined as G e q = L e q ∘ ( P 2 2 , P 1 2 ) {\displaystyle Geq=Leq\circ (P_{2}^{2},P_{1}^{2})} . Then, G e q ( x , y ) = L e q ( y , x ) {\displaystyle Geq(x,y)=Leq(y,x)} is true (more precisely: has value 1) if, and only if, x ≥ y {\displaystyle x\geq y} . === If-then-else === The 3-ary if-then-else operator known from programming languages can be defined by If = ρ ( P 2 2 , P 3 4 ) {\displaystyle {\textit {If}}=\rho (P_{2}^{2},P_{3}^{4})} . Then, for arbitrary x {\displaystyle x} , If ( S ( x ) , y , z ) = ρ ( P 2 2 , P 3 4 ) ( S ( x ) , y , z ) by Def. If = P 3 4 ( x , If ( x , y , z ) , y , z ) by case ρ ( S ( . . . ) , . . . ) = y by Def. P 3 4 {\displaystyle {\begin{array}{lll}&{\textit {If}}(S(x),y,z)\\=&\rho (P_{2}^{2},P_{3}^{4})\;(S(x),y,z)&{\text{ by Def. 
}}{\textit {If}}\\=&P_{3}^{4}(x,{\textit {If}}(x,y,z),y,z)&{\text{ by case }}\rho (S(...),...)\\=&y&{\text{ by Def. }}P_{3}^{4}\\\end{array}}} and If ( 0 , y , z ) = ρ ( P 2 2 , P 3 4 ) ( 0 , y , z ) by Def. If = P 2 2 ( y , z ) by case ρ ( 0 , . . . ) = z by Def. P 2 2 . {\displaystyle {\begin{array}{lll}&{\textit {If}}(0,y,z)\\=&\rho (P_{2}^{2},P_{3}^{4})\;(0,y,z)&{\text{ by Def. }}{\textit {If}}\\=&P_{2}^{2}(y,z)&{\text{ by case }}\rho (0,...)\\=&z&{\text{ by Def. }}P_{2}^{2}.\\\end{array}}} . That is, If ( x , y , z ) {\displaystyle {\textit {If}}(x,y,z)} returns the then-part, y {\displaystyle y} , if the if-part, x {\displaystyle x} , is true, and the else-part, z {\displaystyle z} , otherwise. === Junctors === Based on the If {\displaystyle {\textit {If}}} function, it is easy to define logical junctors. For example, defining A n d = If ∘ ( P 1 2 , P 2 2 , C 0 2 ) {\displaystyle And={\textit {If}}\circ (P_{1}^{2},P_{2}^{2},C_{0}^{2})} , one obtains A n d ( x , y ) = If ( x , y , 0 ) {\displaystyle And(x,y)={\textit {If}}(x,y,0)} , that is, A n d ( x , y ) {\displaystyle And(x,y)} is true if, and only if, both x {\displaystyle x} and y {\displaystyle y} are true (logical conjunction of x {\displaystyle x} and y {\displaystyle y} ). Similarly, O r = If ∘ ( P 1 2 , C 1 2 , P 2 2 ) {\displaystyle Or={\textit {If}}\circ (P_{1}^{2},C_{1}^{2},P_{2}^{2})} and N o t = If ∘ ( P 1 1 , C 0 1 , C 1 1 ) {\displaystyle Not={\textit {If}}\circ (P_{1}^{1},C_{0}^{1},C_{1}^{1})} lead to appropriate definitions of disjunction and negation: O r ( x , y ) = If ( x , 1 , y ) {\displaystyle Or(x,y)={\textit {If}}(x,1,y)} and N o t ( x ) = If ( x , 0 , 1 ) {\displaystyle Not(x)={\textit {If}}(x,0,1)} . === Equality predicate === Using the above functions L e q {\displaystyle Leq} , G e q {\displaystyle Geq} and A n d {\displaystyle And} , the definition E q = A n d ∘ ( L e q , G e q ) {\displaystyle Eq=And\circ (Leq,Geq)} implements the equality predicate. 
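The predicates defined in this and the preceding subsections can likewise be run in the illustrative Python encoding (again self-contained; 1 encodes true and 0 false, as above):

```python
# Self-contained re-encoding of the predicates (1 = true, 0 = false).
def C(k, n): return lambda *args: k
def P(i, n): return lambda *args: args[i - 1]
def compose(f, *gs): return lambda *args: f(*(g(*args) for g in gs))
def rho(g, h):
    def f(y, *xs):
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Pred   = rho(C(0, 0), P(1, 2))
RSub   = rho(P(1, 1), compose(Pred, P(2, 3)))
Sub    = compose(RSub, P(2, 2), P(1, 2))      # truncated subtraction
IsZero = rho(C(1, 0), C(0, 2))
Leq    = compose(IsZero, Sub)                 # x ≤ y  iff  x ∸ y = 0
Geq    = compose(Leq, P(2, 2), P(1, 2))
If     = rho(P(2, 2), P(3, 4))                # If(x, y, z): y if x > 0 else z
And    = compose(If, P(1, 2), P(2, 2), C(0, 2))
Eq     = compose(And, Leq, Geq)

assert Leq(3, 8) == 1 and Leq(8, 3) == 0
assert If(1, 42, 7) == 42 and If(0, 42, 7) == 7
assert Eq(5, 5) == 1 and Eq(5, 6) == 0
```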
In fact, E q ( x , y ) = A n d ( L e q ( x , y ) , G e q ( x , y ) ) {\displaystyle Eq(x,y)=And(Leq(x,y),Geq(x,y))} is true if, and only if, x {\displaystyle x} equals y {\displaystyle y} . Similarly, the definition L t = N o t ∘ G e q {\displaystyle Lt=Not\circ Geq} implements the predicate "less-than", and G t = N o t ∘ L e q {\displaystyle Gt=Not\circ Leq} implements "greater-than". === Other operations on natural numbers === Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e {\displaystyle e} , f {\displaystyle f} , g {\displaystyle g} , and h {\displaystyle h} , a function that returns the value of g {\displaystyle g} when e ≤ f {\displaystyle e\leq f} and the value of h {\displaystyle h} otherwise is primitive recursive. === Operations on integers and rational numbers === By using Gödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers and rational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then the field operations are all primitive recursive. === Some common primitive recursive functions === The following examples and definitions are from Kleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos, Burgess & Jeffrey (2002, pp. 63–70), who add the logarithm lo(x, y) or lg(x, y), depending on the exact derivation. In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =def a'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed as Gödel numbers.
Addition: a+b
Multiplication: a×b
Exponentiation: a^b
Factorial a!: 0! = 1, a'! = a!×a'
pred(a) (predecessor or decrement): If a > 0 then a−1 else 0
Proper subtraction a ∸ b: If a ≥ b then a−b else 0
Minimum(a1, ..., an)
Maximum(a1, ..., an)
Absolute difference: | a−b | =def (a ∸ b) + (b ∸ a)
~sg(a): NOT[signum(a)]: If a=0 then 1 else 0
sg(a): signum(a): If a=0 then 0 else 1
a | b (a divides b): If b=k×a for some k then 0 else 1
Remainder(a, b): the leftover if b does not divide a "evenly". Also called MOD(a, b)
a = b: sg| a−b | (Kleene's convention was to represent true by 0 and false by 1; presently, especially in computers, the most common convention is the reverse, namely to represent true by 1 and false by 0, which amounts to changing sg into ~sg here and in the next item)
a < b: sg( a' ∸ b )
Pr(a): a is a prime number: Pr(a) =def a>1 & NOT(Exists c)1<c<a [ c|a ]
p_i: the i+1-th prime number
(a)_i: exponent of p_i in a: the unique x such that p_i^x | a & NOT(p_i^x' | a)
lh(a): the "length" or number of non-vanishing exponents in a
lo(a, b) (logarithm of a to base b): If a, b > 1 then the greatest x such that b^x | a, else 0
In the following, the abbreviation x =def x1, ..., xn; subscripts may be applied if the meaning requires.
#A: A function φ definable explicitly from functions Ψ and constants q1, ..., qn is primitive recursive in Ψ.
#B: The finite sum Σ_{y<z} ψ(x, y) and product Π_{y<z} ψ(x, y) are primitive recursive in ψ.
#C: A predicate P obtained by substituting functions χ1, ..., χm for the respective variables of a predicate Q is primitive recursive in χ1, ..., χm, Q.
#D: The following predicates are primitive recursive in Q and R:
NOT_Q(x),
Q OR R: Q(x) V R(x),
Q AND R: Q(x) & R(x),
Q IMPLIES R: Q(x) → R(x),
Q is equivalent to R: Q(x) ≡ R(x).
#E: The following predicates are primitive recursive in the predicate R:
(Ey)_{y<z} R(x, y), where (Ey)_{y<z} denotes "there exists at least one y that is less than z such that",
(y)_{y<z} R(x, y), where (y)_{y<z} denotes "for all y less than z it is true that",
μy_{y<z} R(x, y). The operator μy_{y<z} R(x, y) is a bounded form of the so-called minimization- or mu-operator, defined as "the least value of y less than z such that R(x, y) is true; or z if there is no such value."
#F: Definition by cases: The function defined thus, where Q1, ..., Qm are mutually exclusive predicates (or "ψ(x) shall have the value given by the first clause that applies"), is primitive recursive in φ1, ..., φm+1, Q1, ..., Qm:
φ(x) = φ1(x) if Q1(x) is true,
...
φm(x) if Qm(x) is true,
φm+1(x) otherwise.
#G: If φ satisfies the equation φ(y, x) = χ(y, COURSE-φ(y; x2, ..., xn), x2, ..., xn), then φ is primitive recursive in χ. The value COURSE-φ(y; x2, ..., xn) of the course-of-values function encodes the sequence of values φ(0, x2, ..., xn), ..., φ(y−1, x2, ..., xn) of the original function. == Relationship to recursive functions == The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, but that does not necessarily have a value for every argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provably total) that is not primitive recursive.
There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function. An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely:
For every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n, and
For every m, the function h(n) = f(m,n) is primitive recursive.
f can be explicitly constructed by iterating over all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not itself primitive recursive: if it were, so would be h(n) = f(n,n)+1. But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), contradicting h(m) = f(m,m)+1. However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true.
== Limitations ==
Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be.
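The diagonalization step can be made concrete with a toy enumeration. In the sketch below, a three-row table stands in for the infinite enumeration f(m, n) of primitive recursive functions; the point is only that the diagonal function differs from every row (an illustrative sketch, with hypothetical example rows, not the actual enumeration):

```python
# A toy stand-in for the enumeration f(m, n): row m is the m-th function.
table = [lambda n: 0,          # the zero function
         lambda n: n + 1,      # the successor function
         lambda n: 2 * n]      # doubling

def f(m, n):
    """Evaluate the m-th enumerated function at n."""
    return table[m](n)

def h(n):
    """Diagonal function: h(n) = f(n, n) + 1 differs from row n at input n."""
    return f(n, n) + 1
```

Since h disagrees with every row m at the argument m, h cannot occur anywhere in the table; applied to a genuine enumeration of the primitive recursive functions, this is exactly the contradiction described above.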
Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: the unary primitive recursive functions can be effectively enumerated as f_0, f_1, f_2, ..., since each is built from the basic functions by a finite sequence of applications of composition and primitive recursion. Define g(n) = f_n(n) + 1. Then g is total and computable, but it differs from every f_n at the argument n, so g is not primitive recursive. This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known:
The function that takes m to Ackermann(m,m) is a unary total recursive function that is not primitive recursive.
The Paris–Harrington theorem involves a total recursive function that is not primitive recursive.
The Sudan function
The Goodstein function
== Variants ==
=== Constant functions ===
Instead of C n k {\displaystyle C_{n}^{k}} , alternative definitions use just one 0-ary zero function C 0 0 {\displaystyle C_{0}^{0}} as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
=== Iterative functions ===
Robinson considered various restrictions of the recursion rule. One is the so-called iteration rule where the function h does not have access to the parameters xi (in this case, we may assume without loss of generality that the function g is just the identity, as the general case can be obtained by substitution): f ( 0 , x ) = x , f ( S ( y ) , x ) = h ( y , f ( y , x ) ) .
{\displaystyle {\begin{aligned}f(0,x)&=x,\\f(S(y),x)&=h(y,f(y,x)).\end{aligned}}} He proved that the class of all primitive recursive functions can still be obtained in this way.
=== Pure recursion ===
Another restriction considered by Robinson is pure recursion, where h does not have access to the induction variable y: f ( 0 , x 1 , … , x k ) = g ( x 1 , … , x k ) , f ( S ( y ) , x 1 , … , x k ) = h ( f ( y , x 1 , … , x k ) , x 1 , … , x k ) . {\displaystyle {\begin{aligned}f(0,x_{1},\ldots ,x_{k})&=g(x_{1},\ldots ,x_{k}),\\f(S(y),x_{1},\ldots ,x_{k})&=h(f(y,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k}).\end{aligned}}} Gladstone proved that this rule is enough to generate all primitive recursive functions. He improved this further, showing that even the combination of these two restrictions, i.e., the pure iteration rule below, is enough: f ( 0 , x ) = x , f ( S ( y ) , x ) = h ( f ( y , x ) ) . {\displaystyle {\begin{aligned}f(0,x)&=x,\\f(S(y),x)&=h(f(y,x)).\end{aligned}}} Further improvements are possible: Severin proved that even the pure iteration rule without parameters, namely f ( 0 ) = 0 , f ( S ( y ) ) = h ( f ( y ) ) , {\displaystyle {\begin{aligned}f(0)&=0,\\f(S(y))&=h(f(y)),\end{aligned}}} suffices to generate all unary primitive recursive functions if we extend the set of initial functions with truncated subtraction x ∸ y. We get all primitive recursive functions if we additionally include + as an initial function.
=== Additional primitive recursive forms ===
Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions. The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions.
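The correspondence with the LOOP language can be sketched with Python for loops standing in for LOOP's bounded loop (an informal illustration, not Meyer and Ritchie's actual syntax; loop_add and loop_mul are our names):

```python
def loop_add(a, b):
    """Addition in LOOP style: the loop bound b is fixed before the loop is entered."""
    for _ in range(b):   # the body may not modify the bound
        a = a + 1
    return a

def loop_mul(a, b):
    """Multiplication as a bounded loop over bounded addition."""
    result = 0
    for _ in range(b):
        result = loop_add(result, a)
    return result
```

Because each bound is evaluated before its loop starts and cannot be changed by the body, such a program always halts; this mirrors the totality of primitive recursive functions, whereas adding an unbounded while loop would give full Turing completeness.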
The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. === Computer language definition === An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language. The LOOP language, introduced in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie, is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language is Douglas Hofstadter's BlooP in Gödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the language general recursive and Turing-complete, as are all real-world computer programming languages. The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, the halting problem is undecidable for general recursive functions. == Finitism and consistency results == The primitive recursive functions are closely related to mathematical finitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired. Primitive recursive arithmetic (PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose. PRA is much weaker than Peano arithmetic, which is not a finitistic system. Nevertheless, many results in number theory and in proof theory can be proved in PRA. 
For example, Gödel's incompleteness theorem can be formalized into PRA, giving the following theorem: If T is a theory of arithmetic satisfying certain hypotheses, with Gödel sentence GT, then PRA proves the implication Con(T)→GT. Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs. In proof theory and set theory, there is an interest in finitistic consistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theory T implies the consistency of a theory S by producing a primitive recursive function that can transform any proof of an inconsistency from S into a proof of an inconsistency from T. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained by forcing can be recast as syntactic proofs that can be formalized in PRA. == History == Recursive definitions had been used more or less formally in mathematics before, but the construction of primitive recursion is traced back to Richard Dedekind's theorem 126 of his Was sind und was sollen die Zahlen? (1888). This work was the first to give a proof that a certain recursive construction defines a unique function. Primitive recursive arithmetic was first proposed by Thoralf Skolem in 1923. The current terminology was coined by Rózsa Péter (1934) after Ackermann had proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions. 
== See also ==
Grzegorczyk hierarchy
Recursion (computer science)
Primitive recursive functional
Double recursion
Primitive recursive set function
Primitive recursive ordinal function
Tail call
== Notes ==
== References ==
Brainerd, W.S.; Landweber, L.H. (1974), Theory of Computation, Wiley, ISBN 0471095850
Hartmanis, Juris (1989), "Overview of Computational Complexity Theory", Computational Complexity Theory, Proceedings of Symposia in Applied Mathematics, vol. 38, American Mathematical Society, pp. 1–17, ISBN 978-0-8218-0131-4, MR 1020807
Soare, Robert I. (1987), Recursively Enumerable Sets and Degrees, Springer-Verlag, ISBN 0-387-15299-7
Kleene, Stephen Cole (1952), Introduction to Metamathematics (7th [1974] reprint; 2nd ed.), North-Holland Publishing Company, ISBN 0444100881, OCLC 3757798. Chapter XI. General Recursive Functions §57
Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge University Press, pp. 70–71, ISBN 9780521007580
Soare, Robert I. (1996), "Computability and recursion", The Bulletin of Symbolic Logic, 2 (3): 284–321, doi:10.2307/420992, JSTOR 420992, MR 1416870
Severin, Daniel E. (2008), "Unary primitive recursive functions", The Journal of Symbolic Logic, 73 (4): 1122–1138, arXiv:cs/0603063, doi:10.2178/jsl/1230396909, JSTOR 275903221, MR 2467207
Robinson, Raphael M. (1947), "Primitive recursive functions", Bulletin of the American Mathematical Society, 53 (10): 925–942, doi:10.1090/S0002-9904-1947-08911-4, MR 0022536
Gladstone, M. D. (1967), "A reduction of the recursion scheme", The Journal of Symbolic Logic, 32 (4): 505–508, doi:10.2307/2270177, JSTOR 2270177, MR 0224460
Gladstone, M. D. (1971), "Simplifications of the recursion scheme", The Journal of Symbolic Logic, 36 (4): 653–665, doi:10.2307/2272468, JSTOR 2272468, MR 0305993
Wikipedia:Primitive recursive set function#0 | In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops (that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strict subset of those general recursive functions that are also total functions. The importance of primitive recursive functions lies in the fact that most computable functions that are studied in number theory (and more generally in mathematics) are primitive recursive. For example, addition and division, the factorial and exponential function, and the function which returns the nth prime are all primitive recursive. In fact, for showing that a computable function is primitive recursive, it suffices to show that its time complexity is bounded above by a primitive recursive function of the input size. It is hence not particularly easy to devise a computable function that is not primitive recursive; some examples are shown in section § Limitations below. The set of primitive recursive functions is known as PR in computational complexity theory.
== Definition ==
A primitive recursive function takes a fixed number of arguments, each a natural number (nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takes n arguments it is called n-ary. The basic primitive recursive functions are given by these axioms:
Constant functions C_n^k: for each natural number n and every k, the k-ary constant function defined by C_n^k(x_1, ..., x_k) = n is primitive recursive.
Successor function: the 1-ary successor function S, defined by S(x) = x + 1, is primitive recursive.
Projection functions P_i^k: for all k ≥ 1 and each i with 1 ≤ i ≤ k, the k-ary projection function defined by P_i^k(x_1, ..., x_k) = x_i is primitive recursive.
More complex primitive recursive functions can be obtained by applying the operations given by these axioms:
Composition operator ∘: given an m-ary function h and m k-ary functions g_1, ..., g_m, the composition h ∘ (g_1, ..., g_m), that is, the k-ary function f(x_1, ..., x_k) = h(g_1(x_1, ..., x_k), ..., g_m(x_1, ..., x_k)), is primitive recursive.
Primitive recursion operator ρ: given a k-ary function g and a (k+2)-ary function h, the (k+1)-ary function f = ρ(g, h) defined by f(0, x_1, ..., x_k) = g(x_1, ..., x_k) and f(S(y), x_1, ..., x_k) = h(y, f(y, x_1, ..., x_k), x_1, ..., x_k) is primitive recursive.
The primitive recursive functions are the basic functions and those obtained from the basic functions by applying these operations a finite number of times.
== Examples ==
=== Addition ===
A definition of the 2-ary function A d d {\displaystyle Add} , to compute the sum of its arguments, can be obtained using the primitive recursion operator ρ {\displaystyle \rho } .
To this end, the well-known equations 0 + y = y and S ( x ) + y = S ( x + y ) . {\displaystyle {\begin{array}{rcll}0+y&=&y&{\text{ and}}\\S(x)+y&=&S(x+y)&.\\\end{array}}} are "rephrased in primitive recursive function terminology": In the definition of ρ ( g , h ) {\displaystyle \rho (g,h)} , the first equation suggests choosing g = P 1 1 {\displaystyle g=P_{1}^{1}} to obtain A d d ( 0 , y ) = g ( y ) = y {\displaystyle Add(0,y)=g(y)=y} ; the second equation suggests choosing h = S ∘ P 2 3 {\displaystyle h=S\circ P_{2}^{3}} to obtain A d d ( S ( x ) , y ) = h ( x , A d d ( x , y ) , y ) = ( S ∘ P 2 3 ) ( x , A d d ( x , y ) , y ) = S ( A d d ( x , y ) ) {\displaystyle Add(S(x),y)=h(x,Add(x,y),y)=(S\circ P_{2}^{3})(x,Add(x,y),y)=S(Add(x,y))} . Therefore, the addition function can be defined as A d d = ρ ( P 1 1 , S ∘ P 2 3 ) {\displaystyle Add=\rho (P_{1}^{1},S\circ P_{2}^{3})} . As a computation example, A d d ( 1 , 7 ) = ρ ( P 1 1 , S ∘ P 2 3 ) ( S ( 0 ) , 7 ) by Def. A d d , S = ( S ∘ P 2 3 ) ( 0 , A d d ( 0 , 7 ) , 7 ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = S ( A d d ( 0 , 7 ) ) by Def. ∘ , P 2 3 = S ( ρ ( P 1 1 , S ∘ P 2 3 ) ( 0 , 7 ) ) by Def. A d d = S ( P 1 1 ( 7 ) ) by case ρ ( g , h ) ( 0 , . . . ) = S ( 7 ) by Def. P 1 1 = 8 by Def. S . {\displaystyle {\begin{array}{lll}&Add(1,7)\\=&\rho (P_{1}^{1},S\circ P_{2}^{3})\;(S(0),7)&{\text{ by Def. }}Add,S\\=&(S\circ P_{2}^{3})(0,Add(0,7),7)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&S(Add(0,7))&{\text{ by Def. }}\circ ,P_{2}^{3}\\=&S(\;\rho (P_{1}^{1},S\circ P_{2}^{3})\;(0,7)\;)&{\text{ by Def. }}Add\\=&S(P_{1}^{1}(7))&{\text{ by case }}\rho (g,h)\;(0,...)\\=&S(7)&{\text{ by Def. }}P_{1}^{1}\\=&8&{\text{ by Def. 
}}S.\\\end{array}}} === Doubling === Given A d d {\displaystyle Add} , the 1-ary function A d d ∘ ( P 1 1 , P 1 1 ) {\displaystyle Add\circ (P_{1}^{1},P_{1}^{1})} doubles its argument, ( A d d ∘ ( P 1 1 , P 1 1 ) ) ( x ) = A d d ( x , x ) = x + x {\displaystyle (Add\circ (P_{1}^{1},P_{1}^{1}))(x)=Add(x,x)=x+x} . === Multiplication === In a similar way as addition, multiplication can be defined by M u l = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) {\displaystyle Mul=\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))} . This reproduces the well-known multiplication equations: M u l ( 0 , y ) = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) ( 0 , y ) by Def. M u l = C 0 1 ( y ) by case ρ ( g , h ) ( 0 , . . . ) = 0 by Def. C 0 1 . {\displaystyle {\begin{array}{lll}&Mul(0,y)\\=&\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))\;(0,y)&{\text{ by Def. }}Mul\\=&C_{0}^{1}(y)&{\text{ by case }}\rho (g,h)\;(0,...)\\=&0&{\text{ by Def. }}C_{0}^{1}.\\\end{array}}} and M u l ( S ( x ) , y ) = ρ ( C 0 1 , A d d ∘ ( P 2 3 , P 3 3 ) ) ( S ( x ) , y ) by Def. M u l = ( A d d ∘ ( P 2 3 , P 3 3 ) ) ( x , M u l ( x , y ) , y ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = A d d ( M u l ( x , y ) , y ) by Def. ∘ , P 2 3 , P 3 3 = M u l ( x , y ) + y by property of A d d . {\displaystyle {\begin{array}{lll}&Mul(S(x),y)\\=&\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))\;(S(x),y)&{\text{ by Def. }}Mul\\=&(Add\circ (P_{2}^{3},P_{3}^{3}))\;(x,Mul(x,y),y)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&Add(Mul(x,y),y)&{\text{ by Def. }}\circ ,P_{2}^{3},P_{3}^{3}\\=&Mul(x,y)+y&{\text{ by property of }}Add.\\\end{array}}} === Predecessor === The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rules P r e d ( 0 ) = 0 {\displaystyle Pred(0)=0} and P r e d ( S ( n ) ) = n {\displaystyle Pred(S(n))=n} . A primitive recursive definition is P r e d = ρ ( C 0 0 , P 1 2 ) {\displaystyle Pred=\rho (C_{0}^{0},P_{1}^{2})} . 
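The definitions above (Add = ρ(P_1^1, S ∘ P_2^3), Mul = ρ(C_0^1, Add ∘ (P_2^3, P_3^3)), Pred = ρ(C_0^0, P_1^2)) can be transcribed almost literally into Python, treating ∘ and ρ as higher-order functions. This is an illustrative sketch; C, P, compose and rho are our names for C_n^k, P_i^k, composition and primitive recursion:

```python
def S(x):
    """Successor function."""
    return x + 1

def C(n, k):
    """k-ary constant function that always returns n."""
    return lambda *args: n

def P(i, k):
    """k-ary projection onto the i-th argument (1-indexed)."""
    return lambda *args: args[i - 1]

def compose(h, *gs):
    """Composition h ∘ (g1, ..., gm)."""
    return lambda *args: h(*(g(*args) for g in gs))

def rho(g, h):
    """Primitive recursion ρ(g, h): f(0, x) = g(x); f(S(y), x) = h(y, f(y, x), x)."""
    def f(y, *x):
        if y == 0:
            return g(*x)
        return h(y - 1, f(y - 1, *x), *x)
    return f

Add = rho(P(1, 1), compose(S, P(2, 3)))              # Add = rho(P_1^1, S ∘ P_2^3)
Mul = rho(C(0, 1), compose(Add, P(2, 3), P(3, 3)))   # Mul = rho(C_0^1, Add ∘ (P_2^3, P_3^3))
Pred = rho(C(0, 0), P(1, 2))                         # Pred = rho(C_0^0, P_1^2)
```

Evaluating Add(1, 7) with these definitions unfolds exactly as in the step-by-step computation shown in the Addition section.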
As a computation example, P r e d ( 8 ) = ρ ( C 0 0 , P 1 2 ) ( S ( 7 ) ) by Def. P r e d , S = P 1 2 ( 7 , P r e d ( 7 ) ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = 7 by Def. P 1 2 . {\displaystyle {\begin{array}{lll}&Pred(8)\\=&\rho (C_{0}^{0},P_{1}^{2})\;(S(7))&{\text{ by Def. }}Pred,S\\=&P_{1}^{2}(7,Pred(7))&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&7&{\text{ by Def. }}P_{1}^{2}.\\\end{array}}} === Truncated subtraction === The limited subtraction function (also called "monus", and denoted " − . {\displaystyle {\stackrel {.}{-}}} ") is definable from the predecessor function. It satisfies the equations y − . 0 = y and y − . S ( x ) = P r e d ( y − . x ) . {\displaystyle {\begin{array}{rcll}y{\stackrel {.}{-}}0&=&y&{\text{and}}\\y{\stackrel {.}{-}}S(x)&=&Pred(y{\stackrel {.}{-}}x)&.\\\end{array}}} Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction, R S u b ( y , x ) = x − . y {\displaystyle RSub(y,x)=x{\stackrel {.}{-}}y} . Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, as R S u b = ρ ( P 1 1 , P r e d ∘ P 2 3 ) {\displaystyle RSub=\rho (P_{1}^{1},Pred\circ P_{2}^{3})} . To get rid of the reversed argument order, then define S u b = R S u b ∘ ( P 2 2 , P 1 2 ) {\displaystyle Sub=RSub\circ (P_{2}^{2},P_{1}^{2})} . As a computation example, S u b ( 8 , 1 ) = ( R S u b ∘ ( P 2 2 , P 1 2 ) ) ( 8 , 1 ) by Def. S u b = R S u b ( 1 , 8 ) by Def. ∘ , P 2 2 , P 1 2 = ρ ( P 1 1 , P r e d ∘ P 2 3 ) ( S ( 0 ) , 8 ) by Def. R S u b , S = ( P r e d ∘ P 2 3 ) ( 0 , R S u b ( 0 , 8 ) , 8 ) by case ρ ( g , h ) ( S ( . . . ) , . . . ) = P r e d ( R S u b ( 0 , 8 ) ) by Def. ∘ , P 2 3 = P r e d ( ρ ( P 1 1 , P r e d ∘ P 2 3 ) ( 0 , 8 ) ) by Def. R S u b = P r e d ( P 1 1 ( 8 ) ) by case ρ ( g , h ) ( 0 , . . . ) = P r e d ( 8 ) by Def. P 1 1 = 7 by property of P r e d . 
{\displaystyle {\begin{array}{lll}&Sub(8,1)\\=&(RSub\circ (P_{2}^{2},P_{1}^{2}))\;(8,1)&{\text{ by Def. }}Sub\\=&RSub(1,8)&{\text{ by Def. }}\circ ,P_{2}^{2},P_{1}^{2}\\=&\rho (P_{1}^{1},Pred\circ P_{2}^{3})\;(S(0),8)&{\text{ by Def. }}RSub,S\\=&(Pred\circ P_{2}^{3})\;(0,RSub(0,8),8)&{\text{ by case }}\rho (g,h)\;(S(...),...)\\=&Pred(RSub(0,8))&{\text{ by Def. }}\circ ,P_{2}^{3}\\=&Pred(\;\rho (P_{1}^{1},Pred\circ P_{2}^{3})\;(0,8)\;)&{\text{ by Def. }}RSub\\=&Pred(P_{1}^{1}(8))&{\text{ by case }}\rho (g,h)\;(0,...)\\=&Pred(8)&{\text{ by Def. }}P_{1}^{1}\\=&7&{\text{ by property of }}Pred.\\\end{array}}} === Converting predicates to numeric functions === In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values (that is t {\displaystyle t} for true and f {\displaystyle f} for false), or that produce truth values as outputs. This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t {\displaystyle t} with the number 1 {\displaystyle 1} and the truth value f {\displaystyle f} with the number 0 {\displaystyle 0} . Once this identification has been made, the characteristic function of a set A {\displaystyle A} , which always returns 1 {\displaystyle 1} or 0 {\displaystyle 0} , can be viewed as a predicate that tells whether a number is in the set A {\displaystyle A} . Such an identification of predicates with numeric functions will be assumed for the remainder of this article. === Predicate "Is zero" === As an example for a primitive recursive predicate, the 1-ary function I s Z e r o {\displaystyle IsZero} shall be defined such that I s Z e r o ( x ) = 1 {\displaystyle IsZero(x)=1} if x = 0 {\displaystyle x=0} , and I s Z e r o ( x ) = 0 {\displaystyle IsZero(x)=0} , otherwise. This can be achieved by defining I s Z e r o = ρ ( C 1 0 , C 0 2 ) {\displaystyle IsZero=\rho (C_{1}^{0},C_{0}^{2})} . 
Then, I s Z e r o ( 0 ) = ρ ( C 1 0 , C 0 2 ) ( 0 ) = C 1 0 ( 0 ) = 1 {\displaystyle IsZero(0)=\rho (C_{1}^{0},C_{0}^{2})(0)=C_{1}^{0}(0)=1} and e.g. I s Z e r o ( 8 ) = ρ ( C 1 0 , C 0 2 ) ( S ( 7 ) ) = C 0 2 ( 7 , I s Z e r o ( 7 ) ) = 0 {\displaystyle IsZero(8)=\rho (C_{1}^{0},C_{0}^{2})(S(7))=C_{0}^{2}(7,IsZero(7))=0} . === Predicate "Less or equal" === Using the property x ≤ y ⟺ x − . y = 0 {\displaystyle x\leq y\iff x{\stackrel {.}{-}}y=0} , the 2-ary function L e q {\displaystyle Leq} can be defined by L e q = I s Z e r o ∘ S u b {\displaystyle Leq=IsZero\circ Sub} . Then L e q ( x , y ) = 1 {\displaystyle Leq(x,y)=1} if x ≤ y {\displaystyle x\leq y} , and L e q ( x , y ) = 0 {\displaystyle Leq(x,y)=0} , otherwise. As a computation example, L e q ( 8 , 3 ) = I s Z e r o ( S u b ( 8 , 3 ) ) by Def. L e q = I s Z e r o ( 5 ) by property of S u b = 0 by property of I s Z e r o {\displaystyle {\begin{array}{lll}&Leq(8,3)\\=&IsZero(Sub(8,3))&{\text{ by Def. }}Leq\\=&IsZero(5)&{\text{ by property of }}Sub\\=&0&{\text{ by property of }}IsZero\\\end{array}}} === Predicate "Greater or equal" === Once a definition of L e q {\displaystyle Leq} is obtained, the converse predicate can be defined as G e q = L e q ∘ ( P 2 2 , P 1 2 ) {\displaystyle Geq=Leq\circ (P_{2}^{2},P_{1}^{2})} . Then, G e q ( x , y ) = L e q ( y , x ) {\displaystyle Geq(x,y)=Leq(y,x)} is true (more precisely: has value 1) if, and only if, x ≥ y {\displaystyle x\geq y} . === If-then-else === The 3-ary if-then-else operator known from programming languages can be defined by If = ρ ( P 2 2 , P 3 4 ) {\displaystyle {\textit {If}}=\rho (P_{2}^{2},P_{3}^{4})} . Then, for arbitrary x {\displaystyle x} , If ( S ( x ) , y , z ) = ρ ( P 2 2 , P 3 4 ) ( S ( x ) , y , z ) by Def. If = P 3 4 ( x , If ( x , y , z ) , y , z ) by case ρ ( S ( . . . ) , . . . ) = y by Def. P 3 4 {\displaystyle {\begin{array}{lll}&{\textit {If}}(S(x),y,z)\\=&\rho (P_{2}^{2},P_{3}^{4})\;(S(x),y,z)&{\text{ by Def. 
}}{\textit {If}}\\=&P_{3}^{4}(x,{\textit {If}}(x,y,z),y,z)&{\text{ by case }}\rho (S(...),...)\\=&y&{\text{ by Def. }}P_{3}^{4}\\\end{array}}} and If ( 0 , y , z ) = ρ ( P 2 2 , P 3 4 ) ( 0 , y , z ) by Def. If = P 2 2 ( y , z ) by case ρ ( 0 , . . . ) = z by Def. P 2 2 . {\displaystyle {\begin{array}{lll}&{\textit {If}}(0,y,z)\\=&\rho (P_{2}^{2},P_{3}^{4})\;(0,y,z)&{\text{ by Def. }}{\textit {If}}\\=&P_{2}^{2}(y,z)&{\text{ by case }}\rho (0,...)\\=&z&{\text{ by Def. }}P_{2}^{2}.\\\end{array}}} . That is, If ( x , y , z ) {\displaystyle {\textit {If}}(x,y,z)} returns the then-part, y {\displaystyle y} , if the if-part, x {\displaystyle x} , is true, and the else-part, z {\displaystyle z} , otherwise. === Junctors === Based on the If {\displaystyle {\textit {If}}} function, it is easy to define logical junctors. For example, defining A n d = If ∘ ( P 1 2 , P 2 2 , C 0 2 ) {\displaystyle And={\textit {If}}\circ (P_{1}^{2},P_{2}^{2},C_{0}^{2})} , one obtains A n d ( x , y ) = If ( x , y , 0 ) {\displaystyle And(x,y)={\textit {If}}(x,y,0)} , that is, A n d ( x , y ) {\displaystyle And(x,y)} is true if, and only if, both x {\displaystyle x} and y {\displaystyle y} are true (logical conjunction of x {\displaystyle x} and y {\displaystyle y} ). Similarly, O r = If ∘ ( P 1 2 , C 1 2 , P 2 2 ) {\displaystyle Or={\textit {If}}\circ (P_{1}^{2},C_{1}^{2},P_{2}^{2})} and N o t = If ∘ ( P 1 1 , C 0 1 , C 1 1 ) {\displaystyle Not={\textit {If}}\circ (P_{1}^{1},C_{0}^{1},C_{1}^{1})} lead to appropriate definitions of disjunction and negation: O r ( x , y ) = If ( x , 1 , y ) {\displaystyle Or(x,y)={\textit {If}}(x,1,y)} and N o t ( x ) = If ( x , 0 , 1 ) {\displaystyle Not(x)={\textit {If}}(x,0,1)} . === Equality predicate === Using the above functions L e q {\displaystyle Leq} , G e q {\displaystyle Geq} and A n d {\displaystyle And} , the definition E q = A n d ∘ ( L e q , G e q ) {\displaystyle Eq=And\circ (Leq,Geq)} implements the equality predicate. 
In fact, E q ( x , y ) = A n d ( L e q ( x , y ) , G e q ( x , y ) ) {\displaystyle Eq(x,y)=And(Leq(x,y),Geq(x,y))} is true if, and only if, x {\displaystyle x} equals y {\displaystyle y} . Similarly, the definition L t = N o t ∘ G e q {\displaystyle Lt=Not\circ Geq} implements the predicate "less-than", and G t = N o t ∘ L e q {\displaystyle Gt=Not\circ Leq} implements "greater-than".
=== Other operations on natural numbers ===
Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e {\displaystyle e} , f {\displaystyle f} , g {\displaystyle g} , and h {\displaystyle h} , a function that returns the value of g {\displaystyle g} when e ≤ f {\displaystyle e\leq f} and the value of h {\displaystyle h} otherwise is primitive recursive.
=== Operations on integers and rational numbers ===
By using Gödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers and rational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then the field operations are all primitive recursive.
=== Some common primitive recursive functions ===
The following examples and definitions are from Kleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos, Burgess & Jeffrey (2002, pp. 63–70), which adds the logarithm lo(x, y) or lg(x, y), depending on the exact derivation. In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =def a'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed as Gödel numbers.
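The predicate constructions of the preceding sections (Pred, Sub, IsZero, Leq, Geq, Eq, and If) can be sketched with plain Python functions that follow the same defining equations, using 1 for true and 0 for false. This is an illustrative translation; the lowercase names are ours:

```python
def pred(n):
    """Predecessor: pred(0) = 0, pred(S(n)) = n."""
    return n - 1 if n > 0 else 0

def sub(x, y):
    """Truncated subtraction x ∸ y: apply pred to x, y times."""
    for _ in range(y):
        x = pred(x)
    return x

def is_zero(x):
    """1 if x = 0, else 0."""
    return 1 if x == 0 else 0

def leq(x, y):
    """x ≤ y if and only if x ∸ y = 0."""
    return is_zero(sub(x, y))

def geq(x, y):
    """x ≥ y, by swapping the arguments of leq."""
    return leq(y, x)

def if_then_else(x, y, z):
    """Returns the then-part y when x is true (nonzero), the else-part z otherwise."""
    return y if x != 0 else z

def and_(x, y):
    """Conjunction via If: And(x, y) = If(x, y, 0)."""
    return if_then_else(x, y, 0)

def eq(x, y):
    """Equality: Eq = And ∘ (Leq, Geq)."""
    return and_(leq(x, y), geq(x, y))
```

Each definition mirrors the combinator form given in the text, with projections and composition replaced by ordinary Python argument handling.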
Addition: a+b Multiplication: a×b Exponentiation: ab Factorial a! : 0! = 1, a'! = a!×a' pred(a): (Predecessor or decrement): If a > 0 then a−1 else 0 Proper subtraction a ∸ b: If a ≥ b then a−b else 0 Minimum(a1, ... an) Maximum(a1, ... an) Absolute difference: | a−b | =def (a ∸ b) + (b ∸ a) ~sg(a): NOT[signum(a)]: If a=0 then 1 else 0 sg(a): signum(a): If a=0 then 0 else 1 a | b: (a divides b): If b=k×a for some k then 0 else 1 Remainder(a, b): the leftover if b does not divide a "evenly". Also called MOD(a, b) a = b: sg | a − b | (Kleene's convention was to represent true by 0 and false by 1; presently, especially in computers, the most common convention is the reverse, namely to represent true by 1 and false by 0, which amounts to changing sg into ~sg here and in the next item) a < b: sg( a' ∸ b ) Pr(a): a is a prime number Pr(a) =def a>1 & NOT(Exists c)1<c<a [ c|a ] pi: the i+1th prime number (a)i: exponent of pi in a: the unique x such that pix|a & NOT(pix'|a) lh(a): the "length" or number of non-vanishing exponents in a lo(a, b): (logarithm of a to base b): If a, b > 1 then the greatest x such that bx | a else 0 In the following, the abbreviation x =def x1, ... xn; subscripts may be applied if the meaning requires. #A: A function φ definable explicitly from functions Ψ and constants q1, ... qn is primitive recursive in Ψ. #B: The finite sum Σy<z ψ(x, y) and product Πy<zψ(x, y) are primitive recursive in ψ. #C: A predicate P obtained by substituting functions χ1,..., χm for the respective variables of a predicate Q is primitive recursive in χ1,..., χm, Q. #D: The following predicates are primitive recursive in Q and R: NOT_Q(x) . 
Q OR R: Q(x) V R(x), Q AND R: Q(x) & R(x), Q IMPLIES R: Q(x) → R(x) Q is equivalent to R: Q(x) ≡ R(x) #E: The following predicates are primitive recursive in the predicate R: (Ey)y<z R(x, y) where (Ey)y<z denotes "there exists at least one y that is less than z such that" (y)y<z R(x, y) where (y)y<z denotes "for all y less than z it is true that" μyy<z R(x, y). The operator μyy<z R(x, y) is a bounded form of the so-called minimization- or mu-operator: Defined as "the least value of y less than z such that R(x, y) is true; or z if there is no such value." #F: Definition by cases: The function defined thus, where Q1, ..., Qm are mutually exclusive predicates (or "ψ(x) shall have the value given by the first clause that applies), is primitive recursive in φ1, ..., Q1, ... Qm: φ(x) = φ1(x) if Q1(x) is true, . . . . . . . . . . . . . . . . . . . φm(x) if Qm(x) is true φm+1(x) otherwise #G: If φ satisfies the equation: φ(y,x) = χ(y, COURSE-φ(y; x2, ... xn ), x2, ... xn then φ is primitive recursive in χ. The value COURSE-φ(y; x2 to n ) of the course-of-values function encodes the sequence of values φ(0,x2 to n), ..., φ(y-1,x2 to n) of the original function. == Relationship to recursive functions == The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, but does not necessarily have any value for any argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. 
There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function. An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely: For every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n, and For every m, the function h(n) = f(m,n) is primitive recursive. f can be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not recursive primitive in itself: had it been such, so would be h(n) = f(n,n)+1. But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), leading to contradiction. However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true. == Limitations == Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. 
Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function; this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: enumerate the unary primitive recursive functions as f0, f1, f2, ...; then the function g(n) = fn(n) + 1 is total and computable, yet differs from every fn at the argument n, so it cannot be primitive recursive. This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known:
The function that takes m to Ackermann(m, m) is a unary total recursive function that is not primitive recursive.
The Paris–Harrington theorem involves a total recursive function that is not primitive recursive.
The Sudan function
The Goodstein function

== Variants ==

=== Constant functions ===
Instead of C_n^k, alternative definitions use just one 0-ary zero function C_0^0 as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function, and the composition operator.

=== Iterative functions ===
Robinson considered various restrictions of the recursion rule. One is the so-called iteration rule, where the function h does not have access to the parameters xi (in this case, we may assume without loss of generality that the function g is just the identity, as the general case can be obtained by substitution):
f(0, x) = x,
f(S(y), x) = h(y, f(y, x)).
He proved that the class of all primitive recursive functions can still be obtained in this way.

=== Pure recursion ===
Another restriction considered by Robinson is pure recursion, where h does not have access to the induction variable y:
f(0, x1, ..., xk) = g(x1, ..., xk),
f(S(y), x1, ..., xk) = h(f(y, x1, ..., xk), x1, ..., xk).
Gladstone proved that this rule is enough to generate all primitive recursive functions. Gladstone later improved this so that even the combination of these two restrictions, i.e., the pure iteration rule below, is enough:
f(0, x) = x,
f(S(y), x) = h(f(y, x)).
Further improvements are possible: Severin proved that even the pure iteration rule without parameters, namely
f(0) = 0,
f(S(y)) = h(f(y)),
suffices to generate all unary primitive recursive functions if we extend the set of initial functions with truncated subtraction x ∸ y. We get all primitive recursive functions if we additionally include + as an initial function.

=== Additional primitive recursive forms ===
Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions. The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions.
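The correspondence with bounded loops can be made concrete. The sketch below mimics LOOP programs in Python: every `for` loop has an iteration count that is fixed before the loop body runs and cannot be changed by it (the function names here are illustrative, not from any standard library):

```python
def add(x: int, y: int) -> int:
    # LOOP-style addition: apply the successor y times; the bound y is
    # evaluated before the loop starts and the body cannot modify it.
    for _ in range(y):
        x = x + 1
    return x

def mul(x: int, y: int) -> int:
    # Multiplication as y bounded repetitions of addition.
    acc = 0
    for _ in range(y):
        acc = add(acc, x)
    return acc

def pred(x: int) -> int:
    # Predecessor without subtraction: remember the last counter value
    # seen while counting up to x (pred(0) = 0, matching truncation at zero).
    p = 0
    for i in range(x):
        p = i
    return p
```

Because every loop bound is computed before the loop executes, programs in this fragment always halt, mirroring the totality of primitive recursive functions; allowing a `while` loop would leave the fragment and reach the general recursive functions.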
The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. === Computer language definition === An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language. The LOOP language, introduced in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie, is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language is Douglas Hofstadter's BlooP in Gödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the language general recursive and Turing-complete, as are all real-world computer programming languages. The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, the halting problem is undecidable for general recursive functions. == Finitism and consistency results == The primitive recursive functions are closely related to mathematical finitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired. Primitive recursive arithmetic (PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose. PRA is much weaker than Peano arithmetic, which is not a finitistic system. Nevertheless, many results in number theory and in proof theory can be proved in PRA. 
For example, Gödel's incompleteness theorem can be formalized in PRA, giving the following theorem: If T is a theory of arithmetic satisfying certain hypotheses, with Gödel sentence GT, then PRA proves the implication Con(T)→GT. Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs. In proof theory and set theory, there is an interest in finitistic consistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theory T implies the consistency of a theory S by producing a primitive recursive function that can transform any proof of an inconsistency from S into a proof of an inconsistency from T. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained by forcing can be recast as syntactic proofs that can be formalized in PRA.

== History ==
Recursive definitions had been used more or less formally in mathematics before, but the construction of primitive recursion is traced back to Richard Dedekind's theorem 126 of his Was sind und was sollen die Zahlen? (1888). This work was the first to give a proof that a certain recursive construction defines a unique function. Primitive recursive arithmetic was first proposed by Thoralf Skolem in 1923. The current terminology was coined by Rózsa Péter (1934) after Ackermann had proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.
== See also ==
Grzegorczyk hierarchy
Recursion (computer science)
Primitive recursive functional
Double recursion
Primitive recursive set function
Primitive recursive ordinal function
Tail call

== Notes ==

== References ==
Brainerd, W.S.; Landweber, L.H. (1974), Theory of Computation, Wiley, ISBN 0471095850
Hartmanis, Juris (1989), "Overview of Computational Complexity Theory", Computational Complexity Theory, Proceedings of Symposia in Applied Mathematics, vol. 38, American Mathematical Society, pp. 1–17, ISBN 978-0-8218-0131-4, MR 1020807
Robert I. Soare, Recursively Enumerable Sets and Degrees, Springer-Verlag, 1987. ISBN 0-387-15299-7
Kleene, Stephen Cole (1952), Introduction to Metamathematics (7th [1974] reprint; 2nd ed.), North-Holland Publishing Company, ISBN 0444100881, OCLC 3757798. Chapter XI. General Recursive Functions §57
Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge University Press, pp. 70–71, ISBN 9780521007580
Soare, Robert I. (1996), "Computability and recursion", The Bulletin of Symbolic Logic, 2 (3): 284–321, doi:10.2307/420992, JSTOR 420992, MR 1416870
Severin, Daniel E. (2008), "Unary primitive recursive functions", The Journal of Symbolic Logic, 73 (4): 1122–1138, arXiv:cs/0603063, doi:10.2178/jsl/1230396909, JSTOR 275903221, MR 2467207
Robinson, Raphael M. (1947), "Primitive recursive functions", Bulletin of the American Mathematical Society, 53 (10): 925–942, doi:10.1090/S0002-9904-1947-08911-4, MR 0022536
Gladstone, M. D. (1967), "A reduction of the recursion scheme", The Journal of Symbolic Logic, 32 (4): 505–508, doi:10.2307/2270177, JSTOR 2270177, MR 0224460
Gladstone, M. D. (1971), "Simplifications of the recursion scheme", The Journal of Symbolic Logic, 36 (4): 653–665, doi:10.2307/2272468, JSTOR 2272468, MR 0305993