Wikipedia:Eric de Sturler#0
Eric de Sturler (born 15 January 1966, Groningen) is a Professor of Mathematics at Virginia Tech in Blacksburg, Virginia. He is on the editorial board of Applied Numerical Mathematics and the Open Applied Mathematics Journal. Prof. de Sturler completed his Ph.D. under the direction of Henk van der Vorst at Technische Universiteit Delft in 1994. His thesis is entitled Iterative Methods on Distributed Memory Computers. He was a second-place winner of the Leslie Fox Prize for Numerical Analysis in 1997. His research focuses on preconditioned iterative methods for solving linear and nonlinear systems, with applications in computational physics, materials science, and mathematical biology. == References == == External links == Eric de Sturler's personal webpage Archived 2006-05-30 at the Wayback Machine
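De Sturler's specialty, preconditioned iterative methods, can be illustrated with a minimal textbook sketch: a Jacobi (diagonal) preconditioned conjugate gradient solver for a symmetric positive definite system. This is a generic classical method, not code from his research.

```python
def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A (list of lists)
    by conjugate gradient with a Jacobi (diagonal) preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                # residual r = b - A x, with x = 0
    z = [r[i] / A[i][i] for i in range(n)]     # apply M^-1 = diag(A)^-1
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # small SPD test system
b = [1.0, 2.0]
x = jacobi_pcg(A, b)           # exact solution is (1/11, 7/11)
```

In exact arithmetic CG converges in at most n steps; the diagonal preconditioner is the simplest of the preconditioning strategies de Sturler's field studies.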
Wikipedia:Eric-Jan Wagenmakers#0
Eric-Jan Wagenmakers (born May 21, 1972) is a Dutch mathematical psychologist. He is a professor at the Methodology Unit in the Department of Psychology at the University of Amsterdam (UvA). Since 2012, he has also been Professor of Neurocognitive Modeling: Interdisciplinary Integration at UvA's Faculty of Social and Behavioral Sciences. A noted expert on research methods in psychology, he has been highly critical of some dubious practices by his fellow psychologists, including Daryl Bem's research purporting to find support for the parapsychological concept of extrasensory perception, and the tendency for psychologists in general to favor the publication of studies with surprising, eye-catching results. He has also been actively addressing the replication crisis in psychology by helping to conduct a series of studies aimed at reproducing a 1988 study on the supposed effects of smiling on the perceived funniness of cartoons. František Bartoš, Wagenmakers, Alexandra Sarafoglou, Henrik Godmann, and many colleagues were awarded the 2024 Ig Nobel Probability Prize for "showing, both in theory and by 350,757 experiments, that when you flip a coin, it tends to land on the same side as it started." == References == == External links == Official website Eric-Jan Wagenmakers publications indexed by Google Scholar
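The same-side coin-flip effect described above can be illustrated with a small simulation. The sketch assumes a same-side landing probability of roughly 0.508, the approximate figure reported by the Bartoš et al. study; the constant and function names are illustrative only.

```python
import random

SAME_SIDE_PROB = 0.508  # approximate same-side chance reported by the study

def flip(start_side, rng):
    """Simulate one flip: return the landing side given the starting side."""
    return start_side if rng.random() < SAME_SIDE_PROB else 1 - start_side

rng = random.Random(42)
n = 200_000
same = sum(flip(0, rng) == 0 for _ in range(n))
estimate = same / n  # fraction of flips landing on the starting side
```

With 200,000 simulated flips the sampling error is about 0.001, so the estimate sits clearly above the 0.5 expected from a symmetric model.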
Wikipedia:Erik Albert Holmgren#0
Erik Albert Holmgren (7 July 1872 – 18 March 1943) was a Swedish mathematician known for contributions to partial differential equations. Holmgren's uniqueness theorem is named after him. Torsten Carleman was one of his students. His father was the mathematician Hjalmar Holmgren (1822–1885) and his siblings include the forester Anders Holmgren and the zoologist Nils Holmgren. Holmgren enrolled at Uppsala University in the autumn of 1890 and received his Bachelor of Arts degree on January 31, 1893. He received his Licentiate of Philosophy degree on December 14, 1895 and defended his dissertation for graduation on February 19, 1898. Holmgren became a docent in mathematics on March 8, 1898 and received his Doctor of Philosophy degree on May 31, 1898, all at Uppsala University. He was an extraordinary professor of mathematics at Uppsala University from October 1, 1901 to January 1, 1902 and from October 15 to March 15, 1907. When Holmgren was called to a professorship in Uppsala at the age of thirty-five, he had already conducted research at the universities of Göttingen (1900–01), Paris (1902), and Rome (1905–06), which were among the most important centers of mathematics at that time. He was appointed professor of mathematics at Uppsala University on March 15, 1907 and held the position until January 1, 1937. Holmgren was a member of the Royal Swedish Academy of Sciences from 1910 and a member of the Royal Swedish Academy of Letters from 1924. Holmgren's research primarily concerned differential equations, especially the theory of partial differential equations and associated problems in function theory. Holmgren also appreciated the purely aesthetic side of mathematics, as seen in the problems he composed for mathematical seminar exercises. As a teacher, he devoted himself with equal zeal and clarity to both elementary teaching and teaching for licentiate or doctoral degrees.
Although Holmgren exhibited a strong temper during mathematical seminars, several of his students carried on his work and won fame throughout the mathematical world. Holmgren also had an interest in art history. He had an interest in French culture since his youth, and often traveled to France and Italy to combine his holidays with his research trips. During his early years as a professor, Holmgren participated in student discussions and political activities that led to the peasant armament support march and the departure of the second Staafian government some years before World War I. He never married. == Notes == == External links == Erik Albert Holmgren at the Mathematics Genealogy Project Holmgren, Erik Albert (Swedish Biographical Dictionary ID: 13747)
Wikipedia:Erland Samuel Bring#0
Erland Samuel Bring (19 August 1736 – 20 May 1798) was a Swedish mathematician. Bring studied at Lund University between 1750 and 1757. In 1762 he obtained a position as a reader in history and was promoted to professor in 1779. At Lund he wrote eight volumes of mathematical work in the fields of algebra, geometry, analysis and astronomy, including Meletemata quaedam mathematica circa transformationem aequationum algebraicarum (1786). This work describes Bring's contribution to the algebraic solution of equations. Bring had developed an important transformation to simplify a quintic equation to the form x^5 + px + q = 0 (see Bring radical). In 1832–35 the same transformation was independently derived by George Jerrard. However, whereas Jerrard knew from the earlier work of Paolo Ruffini and Niels Henrik Abel that a general quintic equation cannot be solved in radicals, this fact was not known to Bring, putting him at a disadvantage. Bring's curve is named after him. == References ==
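Bring's reduction can be sketched in modern notation (a standard textbook formulation, not Bring's own presentation):

```latex
% Starting from a general (depressed) quintic
x^5 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0,
% apply a Tschirnhaus substitution of degree four:
y = x^4 + b_3 x^3 + b_2 x^2 + b_1 x + b_0 .
% The coefficients b_i can be chosen, solving only linear, quadratic
% and cubic auxiliary equations, so that the transformed equation
% loses its y^4, y^3 and y^2 terms, leaving the Bring--Jerrard form
y^5 + p\,y + q = 0 .
```

The key point, due to Bring, is that eliminating three intermediate terms at once never requires solving anything of degree higher than three.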
Wikipedia:Ermil Pangrati#0
Ermil A. Pangrati (July 21, 1864 in Iaşi – September 19, 1931 in Bucharest) was a Romanian politician, engineer, mathematician, historian of science and architect. == Career == He was a professor of geometry at the University of Bucharest and director of the Ion Mincu University of Architecture and Urban Planning. A member of the National Liberal Party, he sat in the Chamber of Deputies. He was also Minister of Public Works in the Titu Maiorescu cabinet during 1912. == References ==
Wikipedia:Ernest Vessiot#0
Ernest Vessiot (French: [vesjo]; 8 March 1865 – 17 October 1952) was a French mathematician. He was born in Marseille, France, and died in La Bauche, Savoie, France. He entered the École Normale Supérieure in 1884. He was Maître de Conférences at Lille University of Science and Technology in 1892–1893, then moved to Toulouse and later to Lyon. After 1910, he was a professor of analytical mechanics and celestial mechanics at the University of Paris. He presided over entrance examinations at the École Polytechnique. As director of the École Normale Supérieure until 1935, he oversaw the construction of its new physics, chemistry and geology buildings at 24, Rue Lhomond. He was elected a member of the Académie des Sciences in 1943. Vessiot's work on Picard–Vessiot theory dealt with the integrability of ordinary differential equations. == Works == Leçons De Géométrie Supérieure (Hermann, 1919) Vessiot, Ernest (1910), "Méthodes d'intégration élémentaires", in Molk, Jules (ed.), Encyclopédie des sciences mathématiques pures et appliquées, vol. 3, Gauthier-Villars & Teubner, pp. 58–170 == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Ernest Vessiot", MacTutor History of Mathematics Archive, University of St Andrews Ernest Vessiot at the Mathematics Genealogy Project Works by Ernest Vessiot at Project Gutenberg Works by or about Ernest Vessiot at the Internet Archive
Wikipedia:Ernesto Lupercio#0
Ernesto Lupercio is a Mexican mathematician. He was awarded the ICTP Ramanujan Prize in 2009, "for his outstanding contributions to algebraic topology, geometry and mathematical physics." Lupercio earned a Ph.D. from Stanford University in 1997 under the guidance of Ralph L. Cohen. He was a member of the Global Young Academy (2011-2016) and a member of the Third World Academy of Sciences. == Selected publications == Lupercio, Ernesto; Uribe, Bernardo (2004), "Gerbes over orbifolds and twisted K-theory", Communications in Mathematical Physics, 245 (3): 449–489, arXiv:math/0105039, doi:10.1007/s00220-003-1035-x de Fernex, Tommaso; Lupercio, Ernesto; Nevins, Thomas; Uribe, Bernardo (30 January 2007), "Stringy Chern classes of singular varieties", Advances in Mathematics, 208 (2): 597–621, arXiv:math/0407314, doi:10.1016/j.aim.2006.03.005 Lupercio, Ernesto; Poddar, Mainak (July 2004), "The global McKay–Ruan correspondence via motivic integration", Bulletin of the London Mathematical Society, 36 (4): 509–515, arXiv:math/0308200, doi:10.1112/S002460930300290X == References ==
Wikipedia:Ernst Anton Henrik Sinding#0
Ernst Anton Henrik Sinding (8 December 1839 – 11 January 1924) was a Norwegian school director. == Personal life == He was born in Larvik as a son of vicar Otto Ludvig Sinding (1809–1890) and Dorothea Magdalene Lammers. He was a brother of Elisabeth Sinding and Gustav Adolf Sinding, a nephew of Gustav Adolph Lammers and Matthias Wilhelm Sinding and a first cousin of Alfred Sinding-Larsen and the three siblings Christian, Otto and Stephan Sinding. In April 1864 in Kristiania he married Alfhild Bassøe (1846–1919). Through his son Bjarne he was a grandfather of economist and statistician Thomas Sinding. == Career == He finished his secondary education in 1856, and graduated from university with the cand.real. degree in 1863. He worked as a teacher from 1864 to February 1873. He also worked part-time at the Royal Frederick University from 1865 to 1873. From 1873 to 1915 he was the first director of Kristiania Technical School. He also lectured in mathematics. The school was important in educating Norwegian technicians and engineers before the Norwegian Institute of Technology was founded (in 1910). He was also a member of the Patent Commission, and, when it was founded, of the Norwegian Industrial Property Office board from 1911 to 1921. He was a member of Kristiania city council from 1885 to 1898, and of several governmental commissions. He died in January 1924 in Kristiania. == References ==
Wikipedia:Ernst Hölder#0
Ludwig Otto Hölder (December 22, 1859 – August 29, 1937) was a German mathematician born in Stuttgart. == Early life and education == Hölder was the youngest of three sons of professor Otto Hölder (1811–1890), and a grandson of professor Christian Gottlieb Hölder (1776–1847); his two brothers also became professors. He first studied at the Polytechnikum (which today is the University of Stuttgart) and then in 1877 entered the University of Berlin, where he was a student of Leopold Kronecker, Karl Weierstrass, and Ernst Kummer. He took his doctorate from the University of Tübingen in 1882. The title of his doctoral thesis was "Beiträge zur Potentialtheorie" ("Contributions to potential theory"). Following this, he went to the University of Leipzig but was unable to habilitate there, instead earning a second doctorate and habilitation at the University of Göttingen, both in 1884. == Academic career and later life == He was unable to get government approval for a faculty position in Göttingen, and instead was offered a position as extraordinary professor at Tübingen in 1889. Temporary mental incapacitation delayed his acceptance, but he began working there in 1890. In 1899, he took the former chair of Sophus Lie as a full professor at the University of Leipzig. There he served as dean from 1912 to 1913, and as rector in 1918. He married Helene, the daughter of a bank director and politician, in 1899. They had two sons and two daughters. His son Ernst Hölder became another mathematician, and his daughter Irmgard married mathematician Aurel Wintner. In 1933, Hölder signed the Vow of allegiance of the Professors of the German Universities and High-Schools to Adolf Hitler and the National Socialistic State. == Mathematical contributions == Hölder's inequality, named for Hölder, was actually proven earlier by Leonard James Rogers.
It is named after a paper in which Hölder, citing Rogers, reproved it; the same paper also includes a proof of what is now called Jensen's inequality, with some side conditions that were later removed by Jensen. Hölder is also noted for many other theorems, including the Jordan–Hölder theorem, the theorem that every linearly ordered group satisfying an Archimedean property is isomorphic to a subgroup of the additive group of real numbers, the classification of simple groups of order up to 200, the anomalous outer automorphisms of the symmetric group S6, and Hölder's theorem, which implies that the Gamma function satisfies no algebraic differential equation. Another concept bearing his name is the Hölder condition (or Hölder continuity), which is used in many areas of analysis, including the theories of partial differential equations and function spaces. == References ==
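For reference, the inequality now carrying Hölder's name states, in its discrete form for conjugate exponents p and q, together with the Hölder condition mentioned above:

```latex
% Hölder's inequality (discrete form)
\sum_{k=1}^{n} |x_k y_k|
  \;\le\;
  \Bigl( \sum_{k=1}^{n} |x_k|^p \Bigr)^{1/p}
  \Bigl( \sum_{k=1}^{n} |y_k|^q \Bigr)^{1/q},
  \qquad \frac{1}{p} + \frac{1}{q} = 1, \quad p, q > 1.
% The Hölder condition: f is Hölder continuous with exponent \alpha if,
% for some constants C > 0 and 0 < \alpha \le 1,
|f(x) - f(y)| \;\le\; C\,\lvert x - y \rvert^{\alpha}.
```

For p = q = 2 the inequality reduces to the Cauchy–Schwarz inequality.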
Wikipedia:Ernst Kolman#0
Ernst Kolman or Arnošt Yaromirovich Kolman (Russian: Арношт Яромирович Кольман; 6 December 1892 – 22 January 1979) was a Marxist philosopher, who renounced his former activities as an ideological enforcer in Soviet science. At the age of 84 he sought asylum in Sweden and published a retraction of his previous activity. == Biography == He was born in Prague to a Jewish family and studied at Charles University. During World War I he fought in the Austro-Hungarian army and was taken prisoner by the Russian forces. After the Russian Revolution he joined the Bolshevik party and worked as a party functionary in the Red Army and the Communist International. In 1923 Kolman was assigned to the party apparatus in Moscow, where he quickly assumed the role of ideological watchdog in the scientific community. He became deputy head of the Moscow Party Science Department in 1936. In 1930 Dmitri Egorov, the president of the Moscow Mathematical Society, was arrested by the Soviet secret police. Under threat of the society's closure, Ernst Kolman was elected its new president, a position he held from 1930 to 1932. Kolman attended the Second International Congress of the History of Science and Technology held in London in June–July 1931, as part of a delegation of Soviet scientists led by Nikolai Bukharin. He attacked a number of prominent Soviet mathematicians and physicists, accusing them of wrecking and various other political crimes. Kolman initiated the so-called "Academician Luzin case": in July–August 1936, Nikolai Luzin was criticised in Pravda in a series of anonymous articles, whose authorship was later attributed to Kolman. Luzin was accused of publishing his works in foreign scientific journals and denounced for being close to the "slightly modernized ideology of the black hundreds, orthodoxy, and monarchy." After World War II Kolman was sent to Czechoslovakia, where he worked as head of the propaganda department of the Communist Party of Czechoslovakia Central Committee.
He helped to establish communist party control over the Czechoslovak scientific community. At the 10th International Congress of Philosophy in Amsterdam, Kolman attacked all non-Marxist philosophies as "fascist and imperialist." In 1948 Kolman criticized Rudolf Slánský and Klement Gottwald. He was summoned back to the USSR and spent three years in the Lubyanka prison, until Stalin's death. He lived in Czechoslovakia again from 1958 to 1963, and then in Moscow, where he became increasingly disaffected with Soviet communism. Kolman authored several books on dialectical materialism and historical materialism. == Defection == In 1976 he applied for political asylum in Sweden, making him, at 84, the oldest asylum seeker from the Soviet Union at the time. He terminated his 58-year membership of the Communist Party of the Soviet Union on September 22, 1976, in an open letter addressed to party general secretary Leonid Brezhnev. On 9 December 1976, the Czechoslovak government revoked his membership of the Czechoslovak Academy of Sciences. He died on 22 January 1979 in Stockholm. == Publications (incomplete list) == Karl Marx and Mathematics (1968) "Hegel and Mathematics" (1931), published in Under the Banner of Marxism, 1931. The adventure of cybernetics in the Soviet Union, Minerva vol. 16, no. 3 (September 1978), 416–424. Die verirrte Generation (with Hanswilhelm Haefs and Frantisek Janouch). Fischer Taschenbuch-Verlag, 1979, extended 1982, ISBN 978-3-596-23464-6. (In German; translations into Swedish, Danish, and Czech (ISBN 978-8-090-45733-1) exist.) == Bibliography == Pavel Kovaly, "Arnoŝt Kolman: Portrait of a Marxist-Leninist philosopher," Studies in East European Thought 12 (1972): 337–366. == References ==
Wikipedia:Ernst Sejersted Selmer#0
Ernst Sejersted Selmer (11 February 1920 – 8 November 2006) was a Norwegian mathematician, who worked in number theory, as well as a cryptologist. The Selmer group of an Abelian variety is named after him. His primary contributions to mathematics reside within the field of Diophantine equations. He started working as a cryptologist during the Second World War; due to his work, Norway became a leading NATO nation in the field of encryption. == Biography == === Early life === Ernst S. Selmer was born in Oslo into the family of Professor Ernst W. Selmer and Ella Selmer (née Sejersted). He was the brother of Knut S. Selmer, who married Elisabeth Schweigaard, and a first cousin of Francis Sejersted. Selmer demonstrated mathematical talent early in school. While attending Stabekk high school he was an editor of the school's magazine Tall og tanker ("numbers and thoughts"). In 1938, he won Crown Prince Olav's Mathematics Prize for high school graduates. From 1942 to 1943, he studied at the University of Oslo. As a student there during World War II, Selmer was involved in encrypting secret messages for the Norwegian resistance movement. During the autumn of 1943, when the Germans forced the university to close, he escaped to Sweden just before the Gestapo arrested the male students. In 1944 Selmer was sent to London, where he took technical responsibility for all Norwegian military and civilian cipher machines. The communication was mainly carried out using the Hagelin cipher machine. When the war ended, Selmer returned to Norway, and in 1946 was hired as a lecturer at the University of Oslo. In the same year, he started working for the Cipher Department of the Armed Forces Security Service as a consultant. With colleagues, he built a communication system for Norway's equivalent of MI5, which was used from 1949 until 1960.
Selmer spent the spring of 1949 at Cambridge University working with the mathematician J. W. S. Cassels. More than a decade after their initial collaboration, Cassels introduced a group associated with an Abelian variety and named it the Selmer group in Selmer's honor. In 1993, Andrew Wiles used the Selmer group in his proof of Fermat's Last Theorem. === Middle years === Selmer received his dr.philos. in 1952 from the University of Oslo and was at the same time hired as a lecturer by the university. Among Selmer's lectures, his lectures on data processing are of particular note, as they helped lay the foundation for the Department of Informatics at the university. He received a Rockefeller Foundation Fellowship to study in the United States during the years 1951–1952. Selmer arrived in January 1951 as a visiting scholar at the Institute for Advanced Study in Princeton, N.J., where the IAS machine was being constructed for John von Neumann. During his stay in Princeton he also met people such as Albert Einstein, J. Robert Oppenheimer and his countryman Atle Selberg. Einstein is said to have been the first person Selmer met on arrival in Princeton on a Saturday afternoon, and apparently took on the role of campus guide with open arms. From Princeton, Selmer traveled to Berkeley, where he contributed to Paul Morton's construction of the CALDIC computer. He was hired by Consolidated Engineering Corporation (CEC) on von Neumann's recommendation in late 1951 and designed much of the logic for their Datatron computer, working closely with other CEC employees such as Sibyl M. Rock. The computer was later renamed the Burroughs 205, and it was the most serious competitor to the IBM 650. He returned to the Institute for Advanced Study as a visiting scholar in 1952. In late 1952, Selmer returned to Oslo and started working on a military computer.
A product of this work was implemented in a computer, which was installed at the Norwegian Defence Research Establishment in 1957. On September 25, 1953, Selmer applied for a U.S. patent for an electronic adder. This patent, No. 2,947,479, was granted on August 2, 1960. === Later life === At the age of just 37, Selmer took a position as a full professor in mathematics at the University of Bergen, a remarkable achievement in 1957. At the university he was involved in designing two ciphers for NATO. In 1963, a hotline between the Kremlin and Washington was established via the Norwegian-developed encryption equipment ETCRRM II (Electronic Teleprinter Cryptographic Regenerative Repeater Mixer) from STK. At the University of Bergen, Selmer started studying linear shift registers and lectured on the subject. He developed a theoretical basis for linear shift register sequences in the 1960s on behalf of the Cipher Department. His lecture notes were published several times, under the title "Linear Recurrence Relations over Finite Fields". In his lecture at EUROCRYPT '93, Ernst Sejersted Selmer gave an overview of what he had contributed to the field of cryptography. From 1960 to 1966, Selmer served as vice dean of the Faculty of Mathematics and Natural Sciences at the University of Bergen, and as dean from 1966 to 1968. Selmer was a member of the Council for Electronic Data Processing in the Norwegian state from its establishment in 1961 to 1973. === Personal relations === Selmer was married to Signe Randi Johanne Faanes and had one daughter, the microbiologist Johanne-Sophie Selmer, who was educated at Karlstad University. His wife supported him throughout his life, and his great efforts in many fields would probably not have been possible without her. While work was his life, he also gave his home and family high priority. On one occasion, Selmer declined a meeting with Fields Medal winner Alan Baker rather than break a promise to his daughter.
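The linear shift registers mentioned above generate bit streams from a linear recurrence over GF(2). A minimal illustrative sketch (a generic 4-bit Fibonacci LFSR, not a reconstruction of Selmer's lecture notes): with feedback taps taken from a primitive degree-4 polynomial, the register visits all 2^4 - 1 = 15 nonzero states before repeating.

```python
def lfsr_period(taps, nbits, seed=1):
    """Count the period of a Fibonacci LFSR over GF(2).

    taps lists the (1-indexed) bit positions whose XOR forms the
    feedback bit, which is shifted in at the top of the register."""
    state = seed
    period = 0
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1                       # XOR the tapped bits
        state = ((state >> 1) | (fb << (nbits - 1))) & ((1 << nbits) - 1)
        period += 1
        if state == seed:
            return period

# Taps [4, 1] correspond to the primitive polynomial x^4 + x + 1
# (up to tap-numbering convention), giving maximal period 2^4 - 1 = 15.
print(lfsr_period(taps=[4, 1], nbits=4))  # -> 15
```

Because the feedback polynomial is primitive, every nonzero seed lies on the same maximal-length cycle, which is what makes such sequences useful as keystreams.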
Selmer was also fond of gardening as a hobby, and the famous botanist Knut Fægri used to make excursions to Selmer's garden. In 1990 he retired with his wife to Ski and remained in good physical and mental shape until he suffered a stroke in the fall of 2004, from which he never fully recovered. Selmer died quietly on 8 November 2006. Selmer was elected a member of the Norwegian Academy of Science and Letters in 1961, and became a knight of the 1st class of the Order of St. Olav in 1983. In 2020, the University of Bergen published the book "Professor in Secret Service", a biography of Selmer. == Legacy == In honor of Prof. Ernst Sejersted Selmer, the University of Bergen established the Selmer Center in 2003. The Selmer Center has held a leading position in the field of cryptography nationally and internationally, with roots going back 70 years. Selmer devised the algorithm used to calculate the check digits in Norwegian birth numbers. Norwegian-developed mathematical theory became an important contribution to the modernization of crypto-algorithms in NATO and the NSA. Selmer's research formed part of the basis for the National Security Agency's development of modern crypto machines. == Publications == Selmer, Ernst S. (1966), Linear recurrence relations over finite fields, Department of Mathematics, University of Bergen == References == == External links == Interview with Selmer Selmer Center
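The check-digit scheme for Norwegian birth numbers (fødselsnummer) attributed to Selmer above is publicly documented: the two final control digits are weighted sums of the preceding digits modulo 11. A sketch based on the published weights (the example number is synthetic, and the function does not validate the date portion):

```python
def check_digits(first9):
    """Compute the two mod-11 control digits for the first nine digits
    (DDMMYY + three-digit individual number) of a Norwegian birth number.
    Returns None when a weighted sum leaves remainder 1, which would
    require the disallowed control 'digit' 10."""
    d = [int(c) for c in first9]
    w1 = [3, 7, 6, 1, 8, 9, 4, 5, 2]          # weights for the first control digit
    k1 = (11 - sum(w * x for w, x in zip(w1, d)) % 11) % 11
    if k1 == 10:
        return None
    w2 = [5, 4, 3, 2, 7, 6, 5, 4, 3, 2]       # weights for the second control digit
    k2 = (11 - sum(w * x for w, x in zip(w2, d + [k1])) % 11) % 11
    if k2 == 10:
        return None
    return k1, k2

# Synthetic example: birth date 01.01.00, individual number 001
print(check_digits("010100001"))  # -> (1, 0)
```

About one candidate number in eleven is rejected by each mod-11 test, which is why some date/individual-number combinations cannot be issued.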
Wikipedia:Ernst Snapper#0
Ernst Snapper (December 2, 1913, Groningen – February 5, 2011, Chapel Hill, North Carolina) was a Dutch-American mathematician, known for his research in "commutative algebra, algebraic geometry, cohomology of groups, character theory, and combinatorics." == Biography == Ernst Snapper, born to a Jewish family in the Netherlands, received in 1936 the equivalent of a master's degree from the University of Amsterdam. In 1938 his father, Isidore Snapper, an internationally known physician and medical researcher, accepted an offer to become the director of medical research at the Rockefeller Foundation's Peking Union Medical College. Acting on a suggestion from Abraham Flexner, Isidore Snapper encouraged Ernst Snapper to apply to Princeton University to become a graduate student. As a doctoral student of Joseph Wedderburn, Ernst Snapper graduated with a Ph.D. from Princeton University in 1941. In China, his father and mother were interned by the Japanese, but were later released in an exchange. Ernst Snapper was an instructor from 1941 to 1945 at Princeton University. He was a professor of mathematics from 1945 to 1955 at the University of Southern California, from 1955 to 1958 at Miami University of Ohio, from 1958 to 1963 at Indiana University, and from 1963 to 1979 at Dartmouth College, where he retired as professor emeritus. He was a visiting professor for the academic years 1949–1950 and 1954–1955 at Princeton University and for the academic year 1953–1954 at Harvard University. An early sequence of papers extended the Steinitz field theory to completely primary rings using ideas from the work of Krull. During his visits at Princeton and Harvard, Snapper studied algebraic geometry and the homological and sheaf-theoretic methods of Serre and Grothendieck. Later he applied those methods in several important papers on the polynomial properties of the Euler characteristic associated with divisor classes of an irreducible normal projective variety. 
He continued using homological methods in a sequence of papers in which he extended the classical cohomology of groups to the cohomology of arbitrary permutation representations of finite groups. Snapper then applied these methods to obtain a classical result on Frobenius kernels. In the area of combinatorial mathematics, Snapper extended de Bruijn's theory of the cycle index of a finite group to that of an arbitrary permutation representation. A subsequent paper coauthored with Arunas Rudvalis extended this cycle index to a generalized cycle index of a permutation representation paired with a class function. They then obtained the theorem of Frobenius that every simple character of the symmetric group is an integral linear combination of transitive permutation characters. His doctoral students include Arunas Rudvalis. Snapper's paper The Three Crises in Mathematics: Logicism, Intuitionism and Formalism won the 1980 Carl B. Allendoerfer Award. He was married to Ethel K. Snapper (1917–1995) for nearly 60 years. Upon his death he was survived by his two sons, John and James, both of whom were graduates of Princeton University, and two granddaughters. John Snapper received his Ph.D. in philosophy from the University of Chicago and became a professor at Illinois Institute of Technology. James Robert Snapper received his M.D. from Harvard Medical School in 1974 and became a pulmonologist and consulting professor in the department of medicine of Duke University School of Medicine. Ernst Snapper corresponded with Leo Vroman, who was his cousin. == Selected publications == === Articles === Snapper, Ernst (1947). "Polynomial Matrices in One Variable, Differential Equations and Module Theory". American Journal of Mathematics. 69 (2): 299–326. doi:10.2307/2371854. JSTOR 2371854. S2CID 123982270. Snapper, Ernst (1949). "Completely Indecomposable Modules". Canadian Journal of Mathematics. 1 (2): 125–152. doi:10.4153/CJM-1949-013-3. S2CID 124525144. Snapper, E. (1950).
"Completely Primary Rings. I". Annals of Mathematics. 52 (3): 666–693. doi:10.2307/1969441. JSTOR 1969441. Snapper, Ernst (1950). "Periodic Linear Transformations of Affine and Projective Geometries". Canadian Journal of Mathematics. 2: 149–151. doi:10.4153/CJM-1950-013-9. S2CID 124520395. Snapper, E. (1951). "Completely Primary Rings. III. Imbedding and Isomorphism Theorems". Annals of Mathematics. 53 (2): 207–234. doi:10.2307/1969539. JSTOR 1969539. Snapper, E. (1952). "Completely Primary Rings: IV. Chain Conditions". Annals of Mathematics. 55 (1): 46–64. doi:10.2307/1969419. JSTOR 1969419. Snapper, E. (1956). "Higher-dimensional field theory. I. The integral closure of a module" (PDF). Compositio Math. 13: 1–15. MR 0083172. Snapper, Ernst (1959). "Multiples of Divisors". Journal of Mathematics and Mechanics. 8 (6): 967–992. JSTOR 24900666. Snapper, Ernst (1960). "Polynomials Associated with Divisors". Journal of Mathematics and Mechanics. 9 (1): 123–139. JSTOR 24900514. Snapper, Ernst (1964). "Cohomology of Permutation Representations: I. Spectral Sequences". Journal of Mathematics and Mechanics. 13 (1): 133–161. JSTOR 24901188. Snapper, Ernst (1964). "Cohomology of Permutation Representations: II. Cup Product". Journal of Mathematics and Mechanics. 13 (6): 1047–1064. JSTOR 24901252. Snapper, Ernst (1965). "Spectral Sequences and Frobenius Groups". Transactions of the American Mathematical Society. 114 (1): 133–146. doi:10.2307/1993992. JSTOR 1993992. Snapper, Ernst (1965). "Inflation and deflation for all dimensions". Pacific Journal of Mathematics. 15 (3): 1061–1081. doi:10.2140/pjm.1965.15.1061. Rudvalis, A.; Snapper, E. (1971). "Numerical polynomials for arbitrary characters". Journal of Combinatorial Theory, Series A. 10 (2): 145–159. doi:10.1016/0097-3165(71)90018-5. Snapper, Ernst (1979). "What is Mathematics?". The American Mathematical Monthly. 86 (7): 551–557. doi:10.1080/00029890.1979.11994852. Snapper, Ernst (1981).
"An Affine Generalization of the Euler Line". The American Mathematical Monthly. 88 (3): 196–198. doi:10.1080/00029890.1981.11995225. === Books === Snapper, E. (1959). Cohomology Theory and Algebraic Correspondences. Memoirs of the American Mathematical Society, Number 33. American Mathematical Soc. ISBN 0-8218-1233-5. {{cite book}}: ISBN / Date incompatibility (help) Snapper, Ernst; Troyer, Robert J. (10 May 2014). Metric Affine Geometry. Academic Press. ISBN 9781483269337. (reprint of 1971 original) == References ==
Wikipedia:Ernst Specker#0
Ernst Paul Specker (11 February 1920, Zürich – 10 December 2011, Zürich) was a Swiss mathematician. Much of his most influential work was on Quine's New Foundations, a set theory with a universal set, but he is most famous for the Kochen–Specker theorem in quantum mechanics, showing that certain types of hidden-variable theories are impossible. He also proved the ordinal partition relation ω2 → (ω2, 3)2, thereby solving a problem of Erdős. Specker received his Ph.D. in 1949 from ETH Zurich, where he remained throughout his professional career. == See also == Specker sequence Baer–Specker group == References == == External links == Biography Archived 2008-03-17 at the Wayback Machine at the University of St. Andrews Ernst Specker (1920-2011), Martin Fürer, January 25, 2012. Ernst Specker: Selecta, Birkhauser, 1990.
Wikipedia:Ernst Witt#0
Ernst Witt (26 June 1911 – 3 July 1991) was a German mathematician, one of the leading algebraists of his time. == Biography == Witt was born on the island of Alsen, then a part of the German Empire. Shortly after his birth, his parents moved the family to China to work as missionaries, and he did not return to Europe until he was nine. After his schooling, Witt went to the University of Freiburg and the University of Göttingen. He joined the NSDAP (Nazi Party) and was an active party member. Witt was awarded a Ph.D. at the University of Göttingen in 1933 with a thesis titled "Riemann-Roch Theorem and Zeta Function in Hypercomplex Systems" (Riemann-Rochscher Satz und Zeta-Funktion im Hyperkomplexen), supervised by Gustav Herglotz, with Emmy Noether suggesting the topic for the doctorate. He qualified to become a lecturer and gave guest lectures in Göttingen and Hamburg. He became associated with the team led by Helmut Hasse, who oversaw his habilitation. In June 1936, he gave his habilitation lecture. During World War II he joined a group of five mathematicians recruited by Wilhelm Fenner, which included Georg Aumann, Alexander Aigner, Oswald Teichmüller, Johann Friedrich Schultze and their leader professor Wolfgang Franz, to form the backbone of the new mathematical research department in the late 1930s that would eventually be called Section IVc of the Cipher Department of the High Command of the Wehrmacht (abbr. OKW/Chi). From 1937 until 1979, he taught at the University of Hamburg. He died in Hamburg in 1991, shortly after his 80th birthday. == Work == Witt's work has been highly influential. His invention of the Witt vectors clarifies and generalizes the structure of the p-adic numbers. It has become fundamental to p-adic Hodge theory. Witt was the founder of the theory of quadratic forms over an arbitrary field. He proved several of the key results, including the Witt cancellation theorem.
He defined the Witt ring of all quadratic forms over a field, now a central object in the theory. The Poincaré–Birkhoff–Witt theorem is basic to the study of Lie algebras. In algebraic geometry, the Hasse–Witt matrix of an algebraic curve over a finite field determines the cyclic étale coverings of degree p of a curve in characteristic p. In the 1970s, Witt claimed that in 1940 he had discovered what would eventually be named the "Leech lattice" many years before John Leech discovered it in 1965, but Witt did not publish his discovery and the details of exactly what he did are unclear. == See also == Leech lattice Verschiebung operator Wedderburn's little theorem List of things named after Ernst Witt == References == == Bibliography == Schappacher, Norbert; Scholz, Erhard (1996). "How to Write about Teichmüller". Mathematical Intelligencer. 18 (1): 5–6. doi:10.1007/BF03024810. S2CID 189882910. Witt, Ernst (1998), Kersten, Ina (ed.), Collected papers. Gesammelte Abhandlungen, Berlin, New York: Springer-Verlag, ISBN 978-3-540-57061-5, MR 1643949 == External links == O'Connor, John J.; Robertson, Edmund F., "Ernst Witt", MacTutor History of Mathematics Archive, University of St Andrews Ernst Witt at the Mathematics Genealogy Project
Wikipedia:Erwin Engeler#0
Erwin Engeler (born 13 February 1930) is a Swiss mathematician who did pioneering work on the interrelations between logic, computer science and scientific computation in the 20th century. He was one of Paul Bernays' students at the ETH Zürich. After completing his doctorate in 1958, Engeler spent fourteen years in the United States, teaching at the University of Minnesota and at the University of California, Berkeley. In 1959 he contributed an independent proof of several equivalent conditions to omega-categoricity, an important concept in model theory. He returned to Switzerland in 1972, where he served as a professor of logic and computer science at the ETH until his retirement in 1997. Engeler was named a Fellow of the Association for Computing Machinery in 1995. == Selected publications == Engeler, Erwin (1993). Algorithmic Properties of Structures: Selected Papers of Erwin Engeler. World Scientific. ISBN 978-981-02-0872-1. == External links == Erwin Engeler at the Mathematics Genealogy Project Professor Engeler's home page at the ETH Zurich.
Wikipedia:Erwin Kreyszig#0
Erwin Otto Kreyszig (6 January 1922 in Pirna, Germany – 12 December 2008) was a German Canadian applied mathematician and Professor of Mathematics at Carleton University in Ottawa, Ontario, Canada. He was a pioneer in the field of applied mathematics: non-wave replicating linear systems. He was also a distinguished author, having written Advanced Engineering Mathematics, a leading undergraduate textbook in engineering mathematics for civil, mechanical, electrical, and chemical engineering students. Kreyszig received his PhD degree in 1949 at the University of Darmstadt under the supervision of Alwin Walther. He then continued his research activities at the universities of Tübingen and Münster. Prior to joining Carleton University in 1984, he held positions at Stanford University (1954/1955), the University of Ottawa (1955/1956), Ohio State University (1956–1960; professor from 1957), and he completed his habilitation at the University of Mainz. In 1960 he became professor at the Technical University of Graz and organized the Graz 1964 Mathematical Congress. He worked at the University of Düsseldorf (1967–1971) and at the University of Karlsruhe (1971–1973). From 1973 through 1984 he worked at the University of Windsor, and from 1984 he was at Carleton University. He was awarded the title of Distinguished Research Professor in 1991 in recognition of a research career during which he published 176 papers in refereed journals, and 37 in refereed conference proceedings. Kreyszig was also an administrator, developing a Computer Centre at the University of Graz, and at the Mathematics Institute at the University of Düsseldorf. In 1964, he took a leave of absence from Graz to initiate a doctoral program in mathematics at Texas A&M University. Kreyszig authored 14 books, including Advanced Engineering Mathematics, which was published in its 10th edition in 2011. He supervised 104 master's and 22 doctoral students as well as 12 postdoctoral researchers. 
Together with his son he founded the Erwin and Herbert Kreyszig Scholarship which has funded graduate students since 2001. == Books == Statistische Methoden und ihre Anwendungen, Vandenhoeck & Ruprecht, Göttingen, 1965. Introduction to Differential Geometry and Riemannian Geometry (English Translation), University of Toronto Press, 1968. (with Kracht, Manfred): Methods of Complex Analysis in Partial Differential Equations with Applications, Wiley, 1988, ISBN 978-0-471-83091-7. Introductory Functional Analysis with Applications, Wiley, 1989, ISBN 978-0-471-50459-7. Differentialgeometrie. Leipzig 1957; engl. Differential Geometry, Dover, 1991, ISBN 978-0-486-66721-8. Advanced Engineering Mathematics, Wiley, (First edition 1962; ninth edition 2006, ISBN 978-0-471-48885-9; tenth edition (posthumous) 2011, ISBN 978-0-470-45836-5). == Literature == Kracht, Manfred W. (1992). "In honor of professor Erwin Kreyszig on the occasion of his seventieth birthday". Complex Variables, Theory and Application. 18 (1–2): 1–2. doi:10.1080/17476939208814521. ISSN 0278-1077. Obituary by Martin Muldoon == External links == Erwin Kreyszig at the Mathematics Genealogy Project
Wikipedia:Esprit Jouffret#0
Esprit Jouffret (15 March 1837 – 6 November 1904) was a French artillery officer, insurance actuary and mathematician, author of Traité élémentaire de géométrie à quatre dimensions (Elementary Treatise on the Geometry of Four Dimensions, 1903), a popularization of Henri Poincaré's Science and Hypothesis in which Jouffret described hypercubes and other complex polyhedra in four dimensions and projected them onto the two-dimensional page. Maurice Princet brought the Traité to the attention of the artist Pablo Picasso. Picasso's sketchbooks for his 1907 painting Les Demoiselles d'Avignon illustrate Jouffret's influence on the artist's work. == See also == Maurice Princet == References ==
Wikipedia:Esther Arkin#0
Esther M. (Estie) Arkin (Hebrew: אסתר ארקין) is an Israeli–American mathematician and computer scientist whose research interests include operations research, computational geometry, combinatorial optimization, and the design and analysis of algorithms. She is a professor of applied mathematics and statistics at Stony Brook University. At Stony Brook, she also directs the undergraduate program in applied mathematics and statistics, and is an affiliated faculty member with the department of computer science. == Education and career == Arkin graduated from Tel Aviv University in 1981. She earned a master's degree at Stanford University in 1983, and completed her Ph.D. at Stanford in 1986. Her doctoral dissertation, Complexity of Cycle and Path Problems in Graphs, was supervised by Christos Papadimitriou. After working as a visiting professor at Cornell University, she joined the Stony Brook faculty in 1991. == Selected publications == Arkin, Esther M.; Silverberg, Ellen B. (September 1987), "Scheduling jobs with fixed start and end times", Discrete Applied Mathematics, 18 (1): 1–8, doi:10.1016/0166-218X(87)90037-0, MR 0905173 Arkin, Esther; Joneja, Dev; Roundy, Robin (April 1989), "Computational complexity of uncapacitated multi-echelon production planning problems", Operations Research Letters, 8 (2): 61–66, doi:10.1016/0167-6377(89)90001-1 Arkin, E. M.; Chew, L. P.; Huttenlocher, D. P.; Kedem, K.; Mitchell, J. S. B. (March 1991), "An efficiently computable metric for comparing polygonal shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 13 (3): 209–216, doi:10.1109/34.75509, hdl:1813/8729, S2CID 8247618 Arkin, Esther M.; Hassin, Refael (December 1994), "Approximation algorithms for the geometric covering salesman problem", Discrete Applied Mathematics, 55 (3): 197–218, doi:10.1016/0166-218X(94)90008-6, MR 1308878 Arkin, Esther M.; Fekete, Sándor P.; Mitchell, Joseph S. B. 
(October 2000), "Approximation algorithms for lawn mowing and milling", Computational Geometry: Theory and Applications, 17 (1–2): 25–50, doi:10.1016/S0925-7721(00)00015-8, MR 1794471 Arkin, Esther M.; Bender, Michael A.; Demaine, Erik D.; Fekete, Sándor P.; Mitchell, Joseph S. B.; Sethia, Saurabh (January 2005), "Optimal covering tours with turn costs", SIAM Journal on Computing, 35 (3): 531–566, arXiv:cs/0309014, doi:10.1137/S0097539703434267, MR 2201447, S2CID 1174606 == References == == External links == Home page Esther Arkin publications indexed by Google Scholar
Wikipedia:Esther Seiden#0
Esther Seiden (Hebrew: אסתר זיידן; March 9, 1908 – June 3, 2014) was a mathematical statistician known for her research on the design of experiments and combinatorial design theory. In the study of finite geometry, she introduced the concept of the complement of an oval, and her work with Rita Zemach on orthogonal arrays of strength four was described as "the first significant progress" on the subject. == Early life and education == Seiden was born to a Polish-speaking Jewish middle-class family in West Galicia, and educated at a Zionist gymnasium in Kraków. Against her father's wishes, she went into mathematics. She began her university studies at the University of Kraków but moved after a year to Stefan Batory University in Vilnius, where an uncle was a high school mathematics teacher. There, as well as pure mathematics, she also studied physics and mathematical logic. Although she planned a teaching career with the master's degree she earned, her instructors provided support to continue her studies for another year. By that time, violence between anti-Jewish student groups and Jewish counter-protesters in Vilnius had led to the death of a student, so she was sent away to the University of Warsaw, where she studied logic with Alfred Tarski and Stanisław Leśniewski. == Activism in Palestine == After completing her studies, Seiden became a schoolteacher at a Jewish school from 1932 to 1934. By this time, she had long felt like a second-class citizen in Europe and wished to move to Mandatory Palestine. With the help of recommendations from Tarski and one of her Vilnius professors, she obtained admission to the Hebrew University of Jerusalem, which allowed her to move there in 1935. In Palestine, she continued her work as a teacher, and studied mathematics at the Hebrew University under Abraham Fraenkel. However, her interest in mathematics diminished as she became involved in the paramilitary Haganah and then worked in the Red Cross during World War II. 
== Statistics == At the end of the war, Seiden came to work for the Palestine Census of Industry and began studying statistics under Aryeh Dvoretzky. On the recommendation of Tarski, she entered graduate study in statistics at the University of California, Berkeley in 1947 as an assistant to Jerzy Neyman. She began her work in experimental design, a topic she came to through lectures from Berkeley visitor Raj Chandra Bose. She completed her Ph.D. in 1949. Her dissertation, supervised by Neyman, was On a problem of confounding in symmetrical factorial design. Contribution to the theory of tests of composite hypotheses. After shorter positions on the faculties of the University of Buffalo, University of Chicago, American University, Northwestern University, and the Indian Statistical Institute, she moved to Michigan State University in 1960. She retired from Michigan State in 1978, only to return to the Hebrew University as a faculty member, and she remained active at the Hebrew University for many more years. == Recognition == In 1976, Seiden was elected as a member of the International Statistical Institute. She was also a Fellow of the Institute of Mathematical Statistics. == References ==
Wikipedia:Esther Szekeres#0
Esther Szekeres, also known as Esther Klein (Hungarian: Klein Eszter; 20 February 1910 – 28 August 2005) was a Hungarian–Australian mathematician. == Biography == Esther Klein was born to Ignaz Klein in a Jewish family in Budapest, Kingdom of Hungary in 1910. As a young physics student in Budapest, Klein was a member of a group of Hungarians including Paul Erdős, George Szekeres and Pál Turán that convened over interesting mathematical problems. In 1933, Klein proposed to the group a combinatorial problem that Erdős named as the Happy Ending problem as it led to her marriage to George Szekeres in 1937, with whom she had two children. Following the outbreak of World War II, Esther and George Szekeres emigrated to Australia after spending several years in Hongkew, a community of refugees located in Shanghai, China. In Australia, they originally shared an apartment in Adelaide with Márta Svéd, an old school friend of Szekeres, before moving to Sydney in 1964. In Sydney, Esther lectured at Macquarie University and was actively involved in mathematics enrichment for high-school students. In 1984, she jointly founded a weekly mathematics enrichment meeting that has since expanded into a programme of about 30 groups that continue to meet weekly and inspire high school students throughout Australia and New Zealand. In 2004, she and George moved back to Adelaide, where, on 28 August 2005, she and her husband died within an hour of each other. == Recognition == In 1990, Macquarie gave Szekeres an honorary doctorate. In 1993, she won the BH Neumann Award of the Australian Mathematics Trust. == References ==
Wikipedia:Eta invariant#0
In mathematics, the eta invariant of a self-adjoint elliptic differential operator on a compact manifold is formally the number of positive eigenvalues minus the number of negative eigenvalues. In practice both numbers are often infinite so are defined using zeta function regularization. It was introduced by Atiyah, Patodi, and Singer (1973, 1975) who used it to extend the Hirzebruch signature theorem to manifolds with boundary. The name comes from the fact that it is a generalization of the Dirichlet eta function. They also later used the eta invariant of a self-adjoint operator to define the eta invariant of a compact odd-dimensional smooth manifold. Michael Francis Atiyah, H. Donnelly, and I. M. Singer (1983) defined the signature defect of the boundary of a manifold as the eta invariant, and used this to show that Hirzebruch's signature defect of a cusp of a Hilbert modular surface can be expressed in terms of the value at s=0 or 1 of a Shimizu L-function. == Definition == The eta invariant of self-adjoint operator A is given by ηA(0), where η is the analytic continuation of η ( s ) = ∑ λ ≠ 0 sign ⁡ ( λ ) | λ | s {\displaystyle \eta (s)=\sum _{\lambda \neq 0}{\frac {\operatorname {sign} (\lambda )}{|\lambda |^{s}}}} and the sum is over the nonzero eigenvalues λ of A. == References == Atiyah, Michael Francis; Patodi, V. K.; Singer, I. M. (1973), "Spectral asymmetry and Riemannian geometry", The Bulletin of the London Mathematical Society, 5 (2): 229–234, CiteSeerX 10.1.1.597.6432, doi:10.1112/blms/5.2.229, ISSN 0024-6093, MR 0331443 Atiyah, Michael Francis; Patodi, V. K.; Singer, I. M. (1975), "Spectral asymmetry and Riemannian geometry. I", Mathematical Proceedings of the Cambridge Philosophical Society, 77 (1): 43–69, Bibcode:1975MPCPS..77...43A, doi:10.1017/S0305004100049410, ISSN 0305-0041, MR 0397797, S2CID 17638224 Atiyah, Michael Francis; Donnelly, H.; Singer, I. M. 
(1983), "Eta invariants, signature defects of cusps, and values of L-functions", Annals of Mathematics, Second Series, 118 (1): 131–177, doi:10.2307/2006957, ISSN 0003-486X, JSTOR 2006957, MR 0707164
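For a finite spectrum the sum defining η(s) converges for every s, and η(0) reduces to the signature count described above (positive minus negative eigenvalues), with no regularization needed. A minimal numerical sketch of this finite case, illustrative only, with a symmetric matrix standing in for the elliptic operator A:

```python
import numpy as np

def eta(eigenvalues, s=0.0):
    """Evaluate eta(s) = sum over nonzero eigenvalues of sign(lam)/|lam|^s.

    For a finite spectrum the sum is finite for every s, and eta(0) is just
    the number of positive eigenvalues minus the number of negative ones.
    (For a genuine elliptic operator the spectrum is infinite and eta(0)
    must be defined by analytic continuation, as in the article.)
    """
    lams = np.asarray([l for l in eigenvalues if l != 0], dtype=float)
    return float(np.sum(np.sign(lams) / np.abs(lams) ** s))

# A finite self-adjoint "operator": a symmetric matrix standing in for A.
A = np.diag([3.0, 1.0, -2.0, -5.0, 4.0])
spectrum = np.linalg.eigvalsh(A)

print(eta(spectrum))  # eta(0): 3 positive minus 2 negative eigenvalues
```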
Wikipedia:Ethel Raybould#0
Ethel Harriet Raybould B.A., M.A. (1899–1987), was the University of Queensland's first female Mathematics Lecturer who taught at the University from 1928 to 1955. She was one of the University's most generous benefactors with her bequest to the University of almost $1 million upon her death supporting fellowships, prizes, and a teaching facility. == Education == Ethel Harriet Raybould was born on 31 July 1899 in Brisbane, Queensland. She was brought up in Paddington, Brisbane. While a student at Petrie Terrace State School, the Head of the school noticed her intelligence and dedication to her studies and she was appointed a Pupil-Teacher at the age of 14, while she completed her senior school studies. She worked as an Assistant Teacher in domestic science studies at Kangaroo Point (Girls) School, Mundubbera State School, Rockhampton High, Domestic Science High, Bulimba State School, and Central Technical College (now QUT). She would attend night classes at the Central Technical College, studying physics, whilst teaching domestic science at the College during the day. In 1921, Raybould won a teacher scholarship to the University of Queensland. She completed her B.A. in mathematics part-time whilst teaching domestic science, and took a year away from her teaching duties to take first class honours in mathematics in 1927. She was awarded the University Gold Medal in 1927, only the 10th person to receive it. == Career == In 1928, Raybould was seconded to the University of Queensland by the Department of Public Instruction, as a temporary lecturer in pure mathematics. In 1931, she was appointed as a permanent lecturer in the Department, one of the few women to be employed as such by UQ. She took her M.A. in 1931 with a thesis in mathematics on "The Transfinite and its Significance in Analysis". From 1937 to 1939 she undertook postgraduate study at Columbia University. She returned to the University as Lecturer and later Senior Lecturer from 1951 to 1955. 
She retired in 1955. Raybould died in 1987 and left her estate to the University of Queensland. == Legacy == The Ethel Harriet Raybould Trust was established at the University in 1988. Two Fellowships were made available from the money - the Raybould Tutorial Fellowship and the Raybould Visiting Fellowship. The Fellowships provide an opportunity for people to work in a university mathematics department and to develop a project that supports senior secondary mathematics. A prize is also given in her name. The Raybould lecture theatre was constructed with part of the bequest and was opened in 1990. Another portion of the estate went to the Dorothy Hill Engineering and Sciences Library which sits adjacent to the lecture theatre. == References ==
Wikipedia:Euclidean space#0
Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, which are called Euclidean n-spaces when one wants to specify their dimension. For n equal to one or two, they are commonly called respectively Euclidean lines and Euclidean planes. The qualifier "Euclidean" is used to distinguish Euclidean spaces from other spaces that were later considered in physics and modern mathematics. Ancient Greek geometers introduced Euclidean space for modeling the physical space. Their work was collected by the ancient Greek mathematician Euclid in his Elements, with the great innovation of proving all properties of the space as theorems, by starting from a few fundamental properties, called postulates, which either were considered as evident (for example, there is exactly one straight line passing through two points), or seemed impossible to prove (parallel postulate). After the introduction at the end of the 19th century of non-Euclidean geometries, the old postulates were re-formalized to define Euclidean spaces through axiomatic theory. Another definition of Euclidean spaces by means of vector spaces and linear algebra has been shown to be equivalent to the axiomatic definition. It is this definition that is more commonly used in modern mathematics, and detailed in this article. In all definitions, Euclidean spaces consist of points, which are defined only by the properties that they must have for forming a Euclidean space. There is essentially only one Euclidean space of each dimension; that is, all Euclidean spaces of a given dimension are isomorphic. 
Therefore, it is usually possible to work with a specific Euclidean space, denoted E n {\displaystyle \mathbf {E} ^{n}} or E n {\displaystyle \mathbb {E} ^{n}} , which can be represented using Cartesian coordinates as the real n-space R n {\displaystyle \mathbb {R} ^{n}} equipped with the standard dot product. == Definition == === History of the definition === Euclidean space was introduced by the ancient Greeks as an abstraction of our physical space. Their great innovation, appearing in Euclid's Elements, was to build and prove all geometry by starting from a few very basic properties, which are abstracted from the physical world, and cannot be mathematically proved because of the lack of more basic tools. These properties are called postulates, or axioms in modern language. This way of defining Euclidean space is still in use under the name of synthetic geometry. In 1637, René Descartes introduced Cartesian coordinates, and showed that these allow reducing geometric problems to algebraic computations with numbers. This reduction of geometry to algebra was a major change in point of view, as, until then, the real numbers were defined in terms of lengths and distances. Euclidean geometry was not applied in spaces of dimension more than three until the 19th century. Ludwig Schläfli generalized Euclidean geometry to spaces of dimension n, using both synthetic and algebraic methods, and discovered all of the regular polytopes (higher-dimensional analogues of the Platonic solids) that exist in Euclidean spaces of any dimension. Despite the wide use of Descartes' approach, which was called analytic geometry, the definition of Euclidean space remained unchanged until the end of the 19th century. The introduction of abstract vector spaces allowed their use in defining Euclidean spaces with a purely algebraic definition. This new definition has been shown to be equivalent to the classical definition in terms of geometric axioms. 
It is this algebraic definition that is now most often used for introducing Euclidean spaces. === Motivation of the modern definition === One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angles. For example, there are two fundamental operations (referred to as motions) on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation around a fixed point in the plane, in which all points in the plane turn around that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (usually considered as subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections (see below). In order to make all of this mathematically precise, the theory must clearly define what is a Euclidean space, and the related notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, measurement instruments, and so on. A purely mathematical definition of Euclidean space also ignores questions of units of length and other physical dimensions: the distance in a "mathematical" space is a number, not something expressed in inches or metres. The standard way to mathematically define a Euclidean space, as carried out in the remainder of this article, is as a set of points on which a real vector space acts – the space of translations which is equipped with an inner product. The action of translations makes the space an affine space, and this allows defining lines, planes, subspaces, dimension, and parallelism. The inner product allows defining distance and angles. 
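The distinction between points and translation vectors can be made concrete in code. A minimal sketch for the plane (the Point and Vector classes are illustrative, not from any standard library): vectors can be added to vectors and act on points, while points can only be subtracted from each other, yielding the unique connecting vector.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    """An element of the space of translations (here, of the plane)."""
    x: float
    y: float

    def __add__(self, other: "Vector") -> "Vector":
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    """A point of the Euclidean plane; points cannot be added to points."""
    x: float
    y: float

    def __add__(self, v: Vector) -> "Point":
        # The action of the translation v on the point self, giving P + v.
        return Point(self.x + v.x, self.y + v.y)

    def __sub__(self, other: "Point") -> Vector:
        # The unique vector v with other + v == self, i.e. Q - P.
        return Vector(self.x - other.x, self.y - other.y)

P = Point(1.0, 2.0)
v, w = Vector(3.0, 0.0), Vector(0.0, -1.0)
assert P + (v + w) == (P + v) + w                     # P + (v + w) = (P + v) + w
assert P + (Point(4.0, 1.0) - P) == Point(4.0, 1.0)   # the action is free and transitive
```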
The set R n {\displaystyle \mathbb {R} ^{n}} of n-tuples of real numbers equipped with the dot product is a Euclidean space of dimension n. Conversely, the choice of a point called the origin and an orthonormal basis of the space of translations is equivalent with defining an isomorphism between a Euclidean space of dimension n and R n {\displaystyle \mathbb {R} ^{n}} viewed as a Euclidean space. It follows that everything that can be said about a Euclidean space can also be said about R n . {\displaystyle \mathbb {R} ^{n}.} Therefore, many authors, especially at elementary level, call R n {\displaystyle \mathbb {R} ^{n}} the standard Euclidean space of dimension n, or simply the Euclidean space of dimension n. A reason for introducing such an abstract definition of Euclidean spaces, and for working with E n {\displaystyle \mathbb {E} ^{n}} instead of R n {\displaystyle \mathbb {R} ^{n}} is that it is often preferable to work in a coordinate-free and origin-free manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is no standard origin nor any standard basis in the physical world. === Technical definition === A Euclidean vector space is a finite-dimensional inner product space over the real numbers. A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces. If E is a Euclidean space, its associated vector space (Euclidean vector space) is often denoted E → . {\displaystyle {\overrightarrow {E}}.} The dimension of a Euclidean space is the dimension of its associated vector space. The elements of E are called points, and are commonly denoted by capital letters. The elements of E → {\displaystyle {\overrightarrow {E}}} are called Euclidean vectors or free vectors. 
They are also called translations, although, properly speaking, a translation is the geometric transformation resulting from the action of a Euclidean vector on the Euclidean space. The action of a translation v on a point P provides a point that is denoted P + v. This action satisfies P + ( v + w ) = ( P + v ) + w . {\displaystyle P+(v+w)=(P+v)+w.} Note: The second + in the left-hand side is a vector addition; each other + denotes an action of a vector on a point. This notation is not ambiguous, as, to distinguish between the two meanings of +, it suffices to look at the nature of its left argument. The fact that the action is free and transitive means that, for every pair of points (P, Q), there is exactly one displacement vector v such that P + v = Q. This vector v is denoted Q − P or P Q → ) . {\displaystyle {\overrightarrow {PQ}}{\vphantom {\frac {)}{}}}.} As previously explained, some of the basic properties of Euclidean spaces result from the structure of affine space. They are described in § Affine structure and its subsections. The properties resulting from the inner product are explained in § Metric structure and its subsections. == Prototypical examples == For any vector space, the addition acts freely and transitively on the vector space itself. Thus a Euclidean vector space can be viewed as a Euclidean space that has itself as the associated vector space. A typical case of Euclidean vector space is R n {\displaystyle \mathbb {R} ^{n}} viewed as a vector space equipped with the dot product as an inner product. The importance of this particular example of Euclidean space lies in the fact that every Euclidean space is isomorphic to it. More precisely, given a Euclidean space E of dimension n, the choice of a point, called an origin and an orthonormal basis of E → {\displaystyle {\overrightarrow {E}}} defines an isomorphism of Euclidean spaces from E to R n . 
{\displaystyle \mathbb {R} ^{n}.} As every Euclidean space of dimension n is isomorphic to it, the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is sometimes called the standard Euclidean space of dimension n. == Affine structure == Some basic properties of Euclidean spaces depend only on the fact that a Euclidean space is an affine space. They are called affine properties and include the concepts of lines, subspaces, and parallelism, which are detailed in next subsections. === Subspaces === Let E be a Euclidean space and E → {\displaystyle {\overrightarrow {E}}} its associated vector space. A flat, Euclidean subspace or affine subspace of E is a subset F of E such that F → = { P Q → | P ∈ F , Q ∈ F } ( {\displaystyle {\overrightarrow {F}}={\Bigl \{}{\overrightarrow {PQ}}\mathrel {\Big |} P\in F,Q\in F{\Bigr \}}{\vphantom {\frac {(}{}}}} as the associated vector space of F is a linear subspace (vector subspace) of E → . {\displaystyle {\overrightarrow {E}}.} A Euclidean subspace F is a Euclidean space with F → {\displaystyle {\overrightarrow {F}}} as the associated vector space. This linear subspace F → {\displaystyle {\overrightarrow {F}}} is also called the direction of F. If P is a point of F then F = { P + v | v ∈ F → } . {\displaystyle F={\Bigl \{}P+v\mathrel {\Big |} v\in {\overrightarrow {F}}{\Bigr \}}.} Conversely, if P is a point of E and V → {\displaystyle {\overrightarrow {V}}} is a linear subspace of E → , {\displaystyle {\overrightarrow {E}},} then P + V → = { P + v | v ∈ V → } {\displaystyle P+{\overrightarrow {V}}={\Bigl \{}P+v\mathrel {\Big |} v\in {\overrightarrow {V}}{\Bigr \}}} is a Euclidean subspace of direction V → {\displaystyle {\overrightarrow {V}}} . (The associated vector space of this subspace is V → {\displaystyle {\overrightarrow {V}}} .) 
A Euclidean vector space E → {\displaystyle {\overrightarrow {E}}} (that is, a Euclidean space that is equal to E → {\displaystyle {\overrightarrow {E}}} ) has two sorts of subspaces: its Euclidean subspaces and its linear subspaces. Linear subspaces are Euclidean subspaces and a Euclidean subspace is a linear subspace if and only if it contains the zero vector. === Lines and segments === In a Euclidean space, a line is a Euclidean subspace of dimension one. Since a vector space of dimension one is spanned by any nonzero vector, a line is a set of the form { P + λ P Q → | λ ∈ R } , ( {\displaystyle {\Bigl \{}P+\lambda {\overrightarrow {PQ}}\mathrel {\Big |} \lambda \in \mathbb {R} {\Bigr \}},{\vphantom {\frac {(}{}}}} where P and Q are two distinct points of the Euclidean space as a part of the line. It follows that there is exactly one line that passes through (contains) two distinct points. This implies that two distinct lines intersect in at most one point. A more symmetric representation of the line passing through P and Q is { O + ( 1 − λ ) O P → + λ O Q → | λ ∈ R } , ( {\displaystyle {\Bigl \{}O+(1-\lambda ){\overrightarrow {OP}}+\lambda {\overrightarrow {OQ}}\mathrel {\Big |} \lambda \in \mathbb {R} {\Bigr \}},{\vphantom {\frac {(}{}}}} where O is an arbitrary point (not necessary on the line). In a Euclidean vector space, the zero vector is usually chosen for O; this allows simplifying the preceding formula into { ( 1 − λ ) P + λ Q | λ ∈ R } . {\displaystyle {\bigl \{}(1-\lambda )P+\lambda Q\mathrel {\big |} \lambda \in \mathbb {R} {\bigr \}}.} A standard convention allows using this formula in every Euclidean space, see Affine space § Affine combinations and barycenter. The line segment, or simply segment, joining the points P and Q is the subset of points such that 0 ≤ 𝜆 ≤ 1 in the preceding formulas. It is denoted PQ or QP; that is P Q = Q P = { P + λ P Q → | 0 ≤ λ ≤ 1 } . 
( {\displaystyle PQ=QP={\Bigl \{}P+\lambda {\overrightarrow {PQ}}\mathrel {\Big |} 0\leq \lambda \leq 1{\Bigr \}}.{\vphantom {\frac {(}{}}}} === Parallelism === Two subspaces S and T of the same dimension in a Euclidean space are parallel if they have the same direction (i.e., the same associated vector space). Equivalently, they are parallel, if there is a translation vector v that maps one to the other: T = S + v . {\displaystyle T=S+v.} Given a point P and a subspace S, there exists exactly one subspace that contains P and is parallel to S, which is P + S → . {\displaystyle P+{\overrightarrow {S}}.} In the case where S is a line (subspace of dimension one), this property is Playfair's axiom. It follows that in a Euclidean plane, two lines either meet in one point or are parallel. The concept of parallel subspaces has been extended to subspaces of different dimensions: two subspaces are parallel if the direction of one of them is contained in the direction to the other. == Metric structure == The vector space E → {\displaystyle {\overrightarrow {E}}} associated to a Euclidean space E is an inner product space. This implies a symmetric bilinear form E → × E → → R ( x , y ) ↦ ⟨ x , y ⟩ {\displaystyle {\begin{aligned}{\overrightarrow {E}}\times {\overrightarrow {E}}&\to \mathbb {R} \\(x,y)&\mapsto \langle x,y\rangle \end{aligned}}} that is positive definite (that is ⟨ x , x ⟩ {\displaystyle \langle x,x\rangle } is always positive for x ≠ 0). The inner product of a Euclidean space is often called dot product and denoted x ⋅ y. This is specially the case when a Cartesian coordinate system has been chosen, as, in this case, the inner product of two vectors is the dot product of their coordinate vectors. For this reason, and for historical reasons, the dot notation is more commonly used than the bracket notation for the inner product of Euclidean spaces. 
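Identifying the space with R^n via coordinates, the line and segment parametrizations given above can be sketched numerically (the helper function is a hypothetical name, not from any library): λ in [0, 1] traces out the segment PQ, while other values of λ give the rest of the line through P and Q.

```python
import numpy as np

def affine_combination(P, Q, lam):
    """Return the point (1 - lam) P + lam Q on the line through P and Q.

    lam in [0, 1] gives the segment PQ; other values give the rest of the line.
    """
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    return (1.0 - lam) * P + lam * Q

P, Q = [0.0, 0.0], [4.0, 2.0]
midpoint = affine_combination(P, Q, 0.5)   # lam = 1/2: the midpoint of PQ
outside = affine_combination(P, Q, 2.0)    # lam = 2: on the line, beyond Q
print(midpoint, outside)                   # [2. 1.] [8. 4.]
```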
This article will follow this usage; that is, ⟨ x , y ⟩ {\displaystyle \langle x,y\rangle } will be denoted x ⋅ y in the remainder of this article. The Euclidean norm of a vector x is ‖ x ‖ = x ⋅ x . {\displaystyle \|x\|={\sqrt {x\cdot x}}.} The inner product and the norm allow expressing and proving metric and topological properties of Euclidean geometry. The next subsections describe the most fundamental ones. In these subsections, E denotes an arbitrary Euclidean space, and E → {\displaystyle {\overrightarrow {E}}} denotes its vector space of translations. === Distance and length === The distance (more precisely the Euclidean distance) between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is d ( P , Q ) = ‖ P Q → ‖ . {\displaystyle d(P,Q)={\Bigl \|}{\overrightarrow {PQ}}{\Bigr \|}.} The length of a segment PQ is the distance d(P, Q) between its endpoints P and Q. It is often denoted | P Q | {\displaystyle |PQ|} . The distance is a metric, as it is positive definite, symmetric, and satisfies the triangle inequality d ( P , Q ) ≤ d ( P , R ) + d ( R , Q ) . {\displaystyle d(P,Q)\leq d(P,R)+d(R,Q).} Moreover, the equality is true if and only if the point R belongs to the segment PQ. This inequality means that the length of any edge of a triangle is smaller than the sum of the lengths of the other edges. This is the origin of the term triangle inequality. With the Euclidean distance, every Euclidean space is a complete metric space. === Orthogonality === Two nonzero vectors u and v of E → {\displaystyle {\overrightarrow {E}}} (the associated vector space of a Euclidean space E) are perpendicular or orthogonal if their inner product is zero: u ⋅ v = 0 {\displaystyle u\cdot v=0} Two linear subspaces of E → {\displaystyle {\overrightarrow {E}}} are orthogonal if every nonzero vector of the first one is perpendicular to every nonzero vector of the second one. 
This implies that the intersection of the linear subspaces is reduced to the zero vector. Two lines, and more generally two Euclidean subspaces (a line can be considered as a Euclidean subspace of dimension one), are orthogonal if their directions (the associated vector spaces of the Euclidean subspaces) are orthogonal. Two orthogonal lines that intersect are said to be perpendicular. Two segments AB and AC that share a common endpoint A are perpendicular or form a right angle if the vectors A B → {\displaystyle {\overrightarrow {AB}}} and A C → {\displaystyle {\overrightarrow {AC}}} are orthogonal. If AB and AC form a right angle, one has | B C | 2 = | A B | 2 + | A C | 2 . {\displaystyle |BC|^{2}=|AB|^{2}+|AC|^{2}.} This is the Pythagorean theorem. Its proof is easy in this context, as, expressing this in terms of the inner product, one has, using bilinearity and symmetry of the inner product: | B C | 2 = B C → ⋅ B C → = ( B A → + A C → ) ⋅ ( B A → + A C → ) = B A → ⋅ B A → + A C → ⋅ A C → − 2 A B → ⋅ A C → = A B → ⋅ A B → + A C → ⋅ A C → = | A B | 2 + | A C | 2 . {\displaystyle {\begin{aligned}|BC|^{2}&={\overrightarrow {BC}}\cdot {\overrightarrow {BC}}\\[2mu]&={\Bigl (}{\overrightarrow {BA}}+{\overrightarrow {AC}}{\Bigr )}\cdot {\Bigl (}{\overrightarrow {BA}}+{\overrightarrow {AC}}{\Bigr )}\\[4mu]&={\overrightarrow {BA}}\cdot {\overrightarrow {BA}}+{\overrightarrow {AC}}\cdot {\overrightarrow {AC}}-2{\overrightarrow {AB}}\cdot {\overrightarrow {AC}}\\[6mu]&={\overrightarrow {AB}}\cdot {\overrightarrow {AB}}+{\overrightarrow {AC}}\cdot {\overrightarrow {AC}}\\[6mu]&=|AB|^{2}+|AC|^{2}.\end{aligned}}} Here, A B → ⋅ A C → = 0 {\displaystyle {\overrightarrow {AB}}\cdot {\overrightarrow {AC}}=0} is used since these two vectors are orthogonal. 
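The computation above can be replayed numerically. A minimal Python sketch, where the helper names `dot` and `vec` are ad hoc choices for this illustration, not from any library:

```python
import math

def dot(u, v):
    """Dot product of two coordinate vectors."""
    return sum(a * b for a, b in zip(u, v))

def vec(p, q):
    """Translation vector carrying the point p to the point q (q - p)."""
    return tuple(b - a for a, b in zip(p, q))

# A right angle at A: the vectors AB and AC are orthogonal.
A, B, C = (0.0, 0.0), (3.0, 0.0), (0.0, 4.0)
AB, AC, BC = vec(A, B), vec(A, C), vec(B, C)

assert dot(AB, AC) == 0.0                        # right angle at A
assert dot(BC, BC) == dot(AB, AB) + dot(AC, AC)  # |BC|^2 = |AB|^2 + |AC|^2
```

With these integer-valued coordinates the identity holds exactly; with general floating-point inputs one would compare up to a small tolerance.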
=== Angle === The (non-oriented) angle θ between two nonzero vectors x and y in E → {\displaystyle {\overrightarrow {E}}} is θ = arccos ⁡ ( x ⋅ y | x | | y | ) {\displaystyle \theta =\arccos \left({\frac {x\cdot y}{|x|\,|y|}}\right)} where arccos is the principal value of the arccosine function. By the Cauchy–Schwarz inequality, the argument of the arccosine is in the interval [−1, 1]. Therefore θ is real, and 0 ≤ θ ≤ π (or 0 ≤ θ ≤ 180 if angles are measured in degrees). Angles are not useful in a Euclidean line, as they can be only 0 or π. In an oriented Euclidean plane, one can define the oriented angle of two vectors. The oriented angle of two vectors x and y is then the opposite of the oriented angle of y and x. In this case, the angle of two vectors can have any value modulo an integer multiple of 2π. In particular, a reflex angle π < θ < 2π equals the negative angle −π < θ − 2π < 0. The angle of two vectors does not change if they are multiplied by positive numbers. More precisely, if x and y are two vectors, and λ and μ are real numbers, then angle ⁡ ( λ x , μ y ) = { angle ⁡ ( x , y ) if λ and μ have the same sign π − angle ⁡ ( x , y ) otherwise . {\displaystyle \operatorname {angle} (\lambda x,\mu y)={\begin{cases}\operatorname {angle} (x,y)\qquad \qquad {\text{if }}\lambda {\text{ and }}\mu {\text{ have the same sign}}\\\pi -\operatorname {angle} (x,y)\qquad {\text{otherwise}}.\end{cases}}} If A, B, and C are three points in a Euclidean space, the angle of the segments AB and AC is the angle of the vectors A B → {\displaystyle {\overrightarrow {AB}}} and A C → . {\displaystyle {\overrightarrow {AC}}.} As the multiplication of vectors by positive numbers does not change the angle, the angle of two half-lines with initial point A can be defined: it is the angle of the segments AB and AC, where B and C are arbitrary points, one on each half-line. 
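The arccosine formula translates directly into code. A small sketch (the function names are illustrative), clamping the cosine to [−1, 1] to guard against floating-point rounding pushing it slightly outside the mathematical range:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Non-oriented angle between two nonzero vectors, in [0, pi]."""
    c = dot(u, v) / (norm(u) * norm(v))
    # Clamp: rounding can produce e.g. 1.0000000000000002.
    return math.acos(max(-1.0, min(1.0, c)))
```

For instance, `angle((1, 0), (0, 1))` is π/2, `angle((1, 0), (-1, 0))` is π, and scaling either argument by a positive number leaves the result unchanged, as stated above.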
Although this is less used, one can similarly define the angle of segments or half-lines that do not share an initial point. The angle of two lines is defined as follows. If θ is the angle of two segments, one on each line, the angle of any two other segments, one on each line, is either θ or π − θ. One of these angles is in the interval [0, π/2], and the other is in [π/2, π]. The non-oriented angle of the two lines is the one in the interval [0, π/2]. In an oriented Euclidean plane, the oriented angle of two lines belongs to the interval [−π/2, π/2]. === Cartesian coordinates === Every Euclidean vector space has an orthonormal basis (in fact, infinitely many in dimension higher than one, and two in dimension one), that is, a basis ( e 1 , … , e n ) {\displaystyle (e_{1},\dots ,e_{n})} of unit vectors ( ‖ e i ‖ = 1 {\displaystyle \|e_{i}\|=1} ) that are pairwise orthogonal ( e i ⋅ e j = 0 {\displaystyle e_{i}\cdot e_{j}=0} for i ≠ j). More precisely, given any basis ( b 1 , … , b n ) , {\displaystyle (b_{1},\dots ,b_{n}),} the Gram–Schmidt process computes an orthonormal basis such that, for every i, the linear spans of ( e 1 , … , e i ) {\displaystyle (e_{1},\dots ,e_{i})} and ( b 1 , … , b i ) {\displaystyle (b_{1},\dots ,b_{i})} are equal. Given a Euclidean space E, a Cartesian frame is a set of data consisting of an orthonormal basis of E → , {\displaystyle {\overrightarrow {E}},} and a point of E, called the origin and often denoted O. A Cartesian frame ( O , e 1 , … , e n ) {\displaystyle (O,e_{1},\dots ,e_{n})} allows defining Cartesian coordinates for both E and E → {\displaystyle {\overrightarrow {E}}} in the following way. The Cartesian coordinates of a vector v of E → {\displaystyle {\overrightarrow {E}}} are the coefficients of v on the orthonormal basis e 1 , … , e n . 
{\displaystyle e_{1},\dots ,e_{n}.} For example, the Cartesian coordinates of a vector v {\displaystyle v} on an orthonormal basis ( e 1 , e 2 , e 3 ) {\displaystyle (e_{1},e_{2},e_{3})} (whose elements may be named ( x , y , z ) {\displaystyle (x,y,z)} by convention) in a 3-dimensional Euclidean space are ( α 1 , α 2 , α 3 ) {\displaystyle (\alpha _{1},\alpha _{2},\alpha _{3})} if v = α 1 e 1 + α 2 e 2 + α 3 e 3 {\displaystyle v=\alpha _{1}e_{1}+\alpha _{2}e_{2}+\alpha _{3}e_{3}} . As the basis is orthonormal, the i-th coefficient α i {\displaystyle \alpha _{i}} is equal to the dot product v ⋅ e i . {\displaystyle v\cdot e_{i}.} The Cartesian coordinates of a point P of E are the Cartesian coordinates of the vector O P → . {\displaystyle {\overrightarrow {OP}}.} === Other coordinates === As a Euclidean space is an affine space, one can consider an affine frame on it, which is the same as a Euclidean frame, except that the basis is not required to be orthonormal. This defines affine coordinates, sometimes called skew coordinates to emphasize that the basis vectors are not pairwise orthogonal. An affine basis of a Euclidean space of dimension n is a set of n + 1 points that are not contained in a hyperplane. An affine basis defines barycentric coordinates for every point. Many other coordinate systems can be defined on a Euclidean space E of dimension n, in the following way. Let f be a homeomorphism (or, more often, a diffeomorphism) from a dense open subset of E to an open subset of R n . {\displaystyle \mathbb {R} ^{n}.} The coordinates of a point x of E are the components of f(x). The polar coordinate system (dimension 2) and the spherical and cylindrical coordinate systems (dimension 3) are defined this way. 
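In dimension 2, such a map f can be sketched as the polar coordinate pair below, defined away from the origin (the helper names `to_polar` and `from_polar` are ad hoc, not library functions; `atan2` returns the angle in (−π, π]):

```python
import math

def to_polar(x, y):
    """Polar coordinates (r, theta) of a plane point, theta in (-pi, pi].
    A homeomorphism on the plane minus the non-positive x-axis."""
    return math.hypot(x, y), math.atan2(y, x)

def from_polar(r, theta):
    """Inverse map: Cartesian coordinates from polar coordinates."""
    return r * math.cos(theta), r * math.sin(theta)

# Round trip on a generic point of the domain.
x, y = from_polar(*to_polar(1.0, 2.0))
assert abs(x - 1.0) < 1e-12 and abs(y - 2.0) < 1e-12
```

The excluded half-axis is exactly where the angle jumps from −π to π, matching the discontinuity of the longitude at the antimeridian discussed next.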
For points that are outside the domain of f, coordinates may sometimes be defined as the limit of coordinates of neighbouring points, but these coordinates may not be uniquely defined, and may not be continuous in the neighborhood of the point. For example, for the spherical coordinate system, the longitude is not defined at the pole, and on the antimeridian, the longitude passes discontinuously from –180° to +180°. This way of defining coordinates extends easily to other mathematical structures, and in particular to manifolds. == Isometries == An isometry between two metric spaces is a bijection preserving the distance, that is d ( f ( x ) , f ( y ) ) = d ( x , y ) . {\displaystyle d(f(x),f(y))=d(x,y).} In the case of a Euclidean vector space, an isometry that maps the origin to the origin preserves the norm ‖ f ( x ) ‖ = ‖ x ‖ , {\displaystyle \|f(x)\|=\|x\|,} since the norm of a vector is its distance from the zero vector. It also preserves the inner product f ( x ) ⋅ f ( y ) = x ⋅ y , {\displaystyle f(x)\cdot f(y)=x\cdot y,} since x ⋅ y = 1 2 ( ‖ x + y ‖ 2 − ‖ x ‖ 2 − ‖ y ‖ 2 ) . {\displaystyle x\cdot y={\tfrac {1}{2}}\left(\|x+y\|^{2}-\|x\|^{2}-\|y\|^{2}\right).} An isometry of Euclidean vector spaces is a linear isomorphism. An isometry f : E → F {\displaystyle f\colon E\to F} of Euclidean spaces defines an isometry f → : E → → F → {\displaystyle {\overrightarrow {f}}\colon {\overrightarrow {E}}\to {\overrightarrow {F}}} of the associated Euclidean vector spaces. This implies that two isometric Euclidean spaces have the same dimension. Conversely, if E and F are Euclidean spaces, O ∈ E, O′ ∈ F, and f → : E → → F → {\displaystyle {\overrightarrow {f}}\colon {\overrightarrow {E}}\to {\overrightarrow {F}}} is an isometry, then the map f : E → F {\displaystyle f\colon E\to F} defined by f ( P ) = O ′ + f → ( O P → ) {\displaystyle f(P)=O'+{\overrightarrow {f}}{\Bigl (}{\overrightarrow {OP}}{\Bigr )}} is an isometry of Euclidean spaces. 
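The polarization identity, and the fact that a norm-preserving map fixing the origin preserves the inner product, can be checked numerically. A sketch using an ad hoc plane rotation as the isometry (all helper names are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def polarization(u, v):
    """x . y = (||x + y||^2 - ||x||^2 - ||y||^2) / 2:
    the inner product recovered from the norm alone."""
    s = tuple(a + b for a, b in zip(u, v))
    return (norm(s) ** 2 - norm(u) ** 2 - norm(v) ** 2) / 2

def rotate(v, t):
    """A plane rotation through angle t: a linear isometry fixing the origin."""
    x, y = v
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

u, v, t = (1.0, 2.0), (3.0, -1.0), 0.7
assert abs(polarization(u, v) - dot(u, v)) < 1e-9
# Since the rotation preserves norms, it preserves the inner product too.
assert abs(dot(rotate(u, t), rotate(v, t)) - dot(u, v)) < 1e-9
```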
It follows from the preceding results that an isometry of Euclidean spaces maps lines to lines, and, more generally, Euclidean subspaces to Euclidean subspaces of the same dimension, and that the restrictions of the isometry to these subspaces are isometries of these subspaces. === Isometry with prototypical examples === If E is a Euclidean space, its associated vector space E → {\displaystyle {\overrightarrow {E}}} can be considered as a Euclidean space. Every point O ∈ E defines an isometry of Euclidean spaces P ↦ O P → , {\displaystyle P\mapsto {\overrightarrow {OP}},} which maps O to the zero vector and has the identity as associated linear map. The inverse isometry is the map v ↦ O + v . {\displaystyle v\mapsto O+v.} A Euclidean frame ( O , e 1 , … , e n ) {\displaystyle (O,e_{1},\dots ,e_{n})} allows defining the map E → R n P ↦ ( e 1 ⋅ O P → , … , e n ⋅ O P → ) , {\displaystyle {\begin{aligned}E&\to \mathbb {R} ^{n}\\P&\mapsto {\Bigl (}e_{1}\cdot {\overrightarrow {OP}},\dots ,e_{n}\cdot {\overrightarrow {OP}}{\Bigr )},\end{aligned}}} which is an isometry of Euclidean spaces. The inverse isometry is R n → E ( x 1 , … , x n ) ↦ ( O + x 1 e 1 + ⋯ + x n e n ) . {\displaystyle {\begin{aligned}\mathbb {R} ^{n}&\to E\\(x_{1},\dots ,x_{n})&\mapsto \left(O+x_{1}e_{1}+\dots +x_{n}e_{n}\right).\end{aligned}}} This means that, up to an isomorphism, there is exactly one Euclidean space of a given dimension. This justifies that many authors speak of R n {\displaystyle \mathbb {R} ^{n}} as the Euclidean space of dimension n. === Euclidean group === An isometry from a Euclidean space onto itself is called a Euclidean isometry, Euclidean transformation or rigid transformation. The rigid transformations of a Euclidean space form a group (under composition), called the Euclidean group and often denoted E(n) or ISO(n). The simplest Euclidean transformations are translations P → P + v . 
{\displaystyle P\to P+v.} They are in bijective correspondence with vectors. This is a reason for calling the vector space associated to a Euclidean space its space of translations. The translations form a normal subgroup of the Euclidean group. A Euclidean isometry f of a Euclidean space E defines a linear isometry f → {\displaystyle {\overrightarrow {f}}} of the associated vector space (by linear isometry, it is meant an isometry that is also a linear map) in the following way: denoting by Q − P the vector P Q → , {\displaystyle {\overrightarrow {PQ}},} if O is an arbitrary point of E, one has f → ( O P → ) = f ( P ) − f ( O ) . {\displaystyle {\overrightarrow {f}}{\Bigl (}{\overrightarrow {OP}}{\Bigr )}=f(P)-f(O).} It is straightforward to prove that this is a linear map that does not depend on the choice of O. The map f → f → {\displaystyle f\to {\overrightarrow {f}}} is a group homomorphism from the Euclidean group onto the group of linear isometries, called the orthogonal group. The kernel of this homomorphism is the translation group, showing that it is a normal subgroup of the Euclidean group. The isometries that fix a given point P form the stabilizer subgroup of the Euclidean group with respect to P. The restriction of the above group homomorphism to this stabilizer is an isomorphism. So the isometries that fix a given point form a group isomorphic to the orthogonal group. Let P be a point, f an isometry, and t the translation that maps P to f(P). The isometry g = t − 1 ∘ f {\displaystyle g=t^{-1}\circ f} fixes P. So f = t ∘ g , {\displaystyle f=t\circ g,} and the Euclidean group is the semidirect product of the translation group and the orthogonal group. The special orthogonal group is the normal subgroup of the orthogonal group that preserves handedness. It is a subgroup of index two of the orthogonal group. 
Its inverse image by the group homomorphism f → f → {\displaystyle f\to {\overrightarrow {f}}} is a normal subgroup of index two of the Euclidean group, which is called the special Euclidean group or the displacement group. Its elements are called rigid motions or displacements. Rigid motions include the identity, translations, rotations (the rigid motions that fix at least one point), and also screw motions. Typical examples of rigid transformations that are not rigid motions are reflections, which are rigid transformations that fix a hyperplane and are not the identity. They are also the transformations that consist of changing the sign of one coordinate with respect to some Euclidean frame. As the special Euclidean group is a subgroup of index two of the Euclidean group, given a reflection r, every rigid transformation that is not a rigid motion is the product of r and a rigid motion. A glide reflection is an example of a rigid transformation that is not a rigid motion or a reflection. All groups that have been considered in this section are Lie groups and algebraic groups. == Topology == The Euclidean distance makes a Euclidean space a metric space, and thus a topological space. This topology is called the Euclidean topology. In the case of R n , {\displaystyle \mathbb {R} ^{n},} this topology is also the product topology. The open sets are the subsets that contain an open ball around each of their points. In other words, open balls form a base of the topology. The topological dimension of a Euclidean space equals its dimension. This implies that Euclidean spaces of different dimensions are not homeomorphic. Moreover, the theorem of invariance of domain asserts that a subset of a Euclidean space is open (for the subspace topology) if and only if it is homeomorphic to an open subset of a Euclidean space of the same dimension. Euclidean spaces are complete and locally compact. 
That is, a closed subset of a Euclidean space is compact if and only if it is bounded (that is, contained in a ball). In particular, closed balls are compact. == Axiomatic definitions == The definition of Euclidean spaces that has been described in this article differs fundamentally from Euclid's. In reality, Euclid did not formally define the space, because it was thought of as a description of the physical world that exists independently of the human mind. The need for a formal definition appeared only at the end of the 19th century, with the introduction of non-Euclidean geometries. Two different approaches have been used. Felix Klein suggested defining geometries through their symmetries. The presentation of Euclidean spaces given in this article essentially derives from his Erlangen program, with emphasis on the groups of translations and isometries. On the other hand, David Hilbert proposed a set of axioms, inspired by Euclid's postulates. They belong to synthetic geometry, as they do not involve any definition of real numbers. Later G. D. Birkhoff and Alfred Tarski proposed simpler sets of axioms, which use real numbers (see Birkhoff's axioms and Tarski's axioms). In Geometric Algebra, Emil Artin proved that all these definitions of a Euclidean space are equivalent. It is rather easy to prove that all definitions of Euclidean spaces satisfy Hilbert's axioms, and that those involving real numbers (including the above given definition) are equivalent. The difficult part of Artin's proof is the following. In Hilbert's axioms, congruence is an equivalence relation on segments. One can thus define the length of a segment as its equivalence class. One must thus prove that this length satisfies properties that characterize nonnegative real numbers. Artin proved this with axioms equivalent to those of Hilbert. == Usage == Since the ancient Greeks, Euclidean space has been used for modeling shapes in the physical world. 
It is thus used in many sciences, such as physics, mechanics, and astronomy. It is also widely used in all technical areas that are concerned with shapes, figures, location, and position, such as architecture, geodesy, topography, navigation, industrial design, or technical drawing. Spaces of dimension higher than three occur in several modern theories of physics; see Higher dimension. They also occur in configuration spaces of physical systems. Besides Euclidean geometry, Euclidean spaces are also widely used in other areas of mathematics. Tangent spaces of differentiable manifolds are Euclidean vector spaces. More generally, a manifold is a space that is locally approximated by Euclidean spaces. Most non-Euclidean geometries can be modeled by a manifold, and embedded in a Euclidean space of higher dimension. For example, an elliptic space can be modeled by an ellipsoid. It is common to represent in a Euclidean space mathematical objects that are a priori not of a geometrical nature. An example among many is the usual representation of graphs. == Other geometric spaces == Since the introduction, at the end of the 19th century, of non-Euclidean geometries, many sorts of spaces have been considered, about which one can do geometric reasoning in the same way as with Euclidean spaces. In general, they share some properties with Euclidean spaces, but may also have properties that could appear as rather strange. Some of these spaces use Euclidean geometry for their definition, or can be modeled as subspaces of a Euclidean space of higher dimension. When such a space is defined by geometrical axioms, embedding the space in a Euclidean space is a standard way for proving consistency of its definition, or, more precisely, for proving that its theory is consistent if Euclidean geometry is consistent (which cannot be proved). === Affine space === A Euclidean space is an affine space equipped with a metric. Affine spaces have many other uses in mathematics. 
In particular, as they are defined over any field, they allow doing geometry in other contexts. As soon as non-linear questions are considered, it is generally useful to consider affine spaces over the complex numbers as an extension of Euclidean spaces. For example, a circle and a line always have two intersection points (possibly not distinct) in the complex affine space. Therefore, most of algebraic geometry is built in complex affine spaces and affine spaces over algebraically closed fields. The shapes that are studied in algebraic geometry in these affine spaces are therefore called affine algebraic varieties. Affine spaces over the rational numbers and more generally over algebraic number fields provide a link between (algebraic) geometry and number theory. For example, Fermat's Last Theorem can be stated as "a Fermat curve of degree higher than two has no nontrivial point in the affine plane over the rationals." Geometry in affine spaces over finite fields has also been widely studied. For example, elliptic curves over finite fields are widely used in cryptography. === Projective space === Originally, projective spaces were introduced by adding "points at infinity" to Euclidean spaces, and, more generally, to affine spaces, in order to make the assertion "two coplanar lines meet in exactly one point" true. Projective spaces share with Euclidean and affine spaces the property of being isotropic, that is, there is no property of the space that allows distinguishing between two points or two lines. Therefore, a more isotropic definition is commonly used, which consists of defining a projective space as the set of the vector lines in a vector space of dimension one more. As for affine spaces, projective spaces are defined over any field, and are fundamental spaces of algebraic geometry. === Non-Euclidean geometries === Non-Euclidean geometry usually refers to geometrical spaces where the parallel postulate is false. 
They include elliptic geometry, where the sum of the angles of a triangle is more than 180°, and hyperbolic geometry, where this sum is less than 180°. Their introduction in the second half of the 19th century, and the proof that their theory is consistent (if Euclidean geometry is not contradictory), are among the paradoxes at the origin of the foundational crisis in mathematics of the beginning of the 20th century, and motivated the systematization of axiomatic theories in mathematics. === Curved spaces === A manifold is a space that in the neighborhood of each point resembles a Euclidean space. In technical terms, a manifold is a topological space such that each point has a neighborhood that is homeomorphic to an open subset of a Euclidean space. Manifolds can be classified by increasing degree of this "resemblance" into topological manifolds, differentiable manifolds, smooth manifolds, and analytic manifolds. However, none of these types of "resemblance" respect distances and angles, even approximately. Distances and angles can be defined on a smooth manifold by providing a smoothly varying Euclidean metric on the tangent spaces at the points of the manifold (these tangent spaces are thus Euclidean vector spaces). This results in a Riemannian manifold. Generally, straight lines do not exist in a Riemannian manifold, but their role is played by geodesics, which are the "shortest paths" between two points. This allows defining distances, which are measured along geodesics, and angles between geodesics, which are the angles of their tangents in the tangent space at their intersection. So, Riemannian manifolds behave locally like a Euclidean space that has been bent. Euclidean spaces are trivially Riemannian manifolds. An example illustrating this well is the surface of a sphere. In this case, geodesics are arcs of great circles, which are called orthodromes in the context of navigation. 
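On a sphere, the length of such a geodesic between two points given by latitude and longitude can be computed with the standard haversine formula; a sketch (the function name is illustrative, not from any library):

```python
import math

def great_circle_distance(p, q, radius=1.0):
    """Length of the shortest geodesic (orthodrome) between two points
    on a sphere, each given as (latitude, longitude) in radians."""
    lat1, lon1 = p
    lat2, lon2 = q
    # Haversine formula, numerically stable for nearby points.
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(h))

# From a point on the equator to the north pole: a quarter of a great circle.
assert abs(great_circle_distance((0.0, 0.0), (math.pi / 2, 0.0))
           - math.pi / 2) < 1e-12
```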
More generally, the spaces of non-Euclidean geometries can be realized as Riemannian manifolds. === Pseudo-Euclidean space === An inner product of a real vector space is a positive definite bilinear form, and is thus characterized by a positive definite quadratic form. A pseudo-Euclidean space is an affine space with an associated real vector space equipped with a non-degenerate quadratic form (that may be indefinite). A fundamental example of such a space is the Minkowski space, which is the space-time of Einstein's special relativity. It is a four-dimensional space, where the metric is defined by the quadratic form x 2 + y 2 + z 2 − t 2 , {\displaystyle x^{2}+y^{2}+z^{2}-t^{2},} where the last coordinate (t) is temporal, and the other three (x, y, z) are spatial. To take gravity into account, general relativity uses a pseudo-Riemannian manifold that has Minkowski spaces as tangent spaces. The curvature of this manifold at a point is a function of the value of the gravitational field at this point. == See also == Hilbert space, a generalization to infinite dimension, used in functional analysis Position space, an application in physics == Footnotes == == References == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Artin, Emil (1988) [1957], Geometric Algebra, Wiley Classics Library, New York: John Wiley & Sons Inc., pp. x+214, doi:10.1002/9781118164518, ISBN 0-471-60839-4, MR 1009557 Ball, W.W. Rouse (1960) [1908]. A Short Account of the History of Mathematics (4th ed.). Dover Publications. ISBN 0-486-20630-0. Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3 Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover. Schläfli ... discovered them before 1853 -- a time when Cayley, Grassman and Möbius were the only other people who had ever conceived of the possibility of geometry in more than three dimensions. Solomentsev, E.D. 
(2001) [1994], "Euclidean space", Encyclopedia of Mathematics, EMS Press
Wikipedia:Euclidean vector#0
In mathematics, physics, and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector or spatial vector) is a geometric object that has magnitude (or length) and direction. Euclidean vectors can be added and scaled to form a vector space. A vector quantity is a vector-valued physical quantity, including units of measurement and possibly a support, formulated as a directed line segment. A vector is frequently depicted graphically as an arrow connecting an initial point A with a terminal point B, and denoted by A B ⟶ . {\textstyle {\stackrel {\longrightarrow }{AB}}.} A vector is what is needed to "carry" the point A to the point B; the Latin word vector means 'carrier'. It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. 
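In coordinates, the vector carrying A to B, its magnitude, and its direction can be computed as follows (a minimal sketch using tuples; the helper names are illustrative, not from any library):

```python
import math

def vector(a, b):
    """The vector carrying the point a to the point b (componentwise b - a)."""
    return tuple(q - p for p, q in zip(a, b))

def magnitude(v):
    """Length of the vector: the distance between its two endpoints."""
    return math.sqrt(sum(c * c for c in v))

def direction(v):
    """Unit vector pointing the same way as a nonzero vector v."""
    m = magnitude(v)
    return tuple(c / m for c in v)

ab = vector((1.0, 1.0), (4.0, 5.0))   # arrow from A = (1, 1) to B = (4, 5)
assert magnitude(ab) == 5.0           # a 3-4-5 right triangle
```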
Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. == History == The vector concept, as it is known today, is the result of a gradual development over a period of more than 200 years. About a dozen people contributed significantly to its development. In 1835, Giusto Bellavitis abstracted the basic idea when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of parallel line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points (bipoints) in the plane, and thus erected the first space of vectors in the plane.: 52–4 The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s (also called scalar) and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. As complex numbers use an imaginary unit to complement the real line, Hamilton considered the vector v to be the imaginary part of a quaternion: The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion. Several other mathematicians developed vector-like systems in the middle of the nineteenth century, including Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O'Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis that is similar to today's system, and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870s. 
Peter Guthrie Tait carried the quaternion standard after Hamilton. His 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers—and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus. == Overview == In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a relative direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of abstract vectors, as they are elements of a special kind of vector space called Euclidean space. This particular article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors. 
A Euclidean vector may possess a definite initial point and terminal point; such a condition may be emphasized calling the result a bound vector. When only the magnitude and direction of the vector matter, and the particular initial or terminal points are of no importance, the vector is called a free vector. The distinction between bound and free vectors is especially relevant in mechanics, where a force applied to a body has a point of contact (see resultant force and couple). Two arrows A B ⟶ {\displaystyle {\stackrel {\,\longrightarrow }{AB}}} and A ′ B ′ ⟶ {\displaystyle {\stackrel {\,\longrightarrow }{A'B'}}} in space represent the same free vector if they have the same magnitude and direction: that is, they are equipollent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin. The term vector also has generalizations to higher dimensions, and to more formal approaches with much wider applications. === Further information === In classical Euclidean geometry (i.e., synthetic geometry), vectors were introduced (during the 19th century) as equivalence classes under equipollence, of ordered pairs of points; two pairs (A, B) and (C, D) being equipollent if the points A, B, D, C, in this order, form a parallelogram. Such an equivalence class is called a vector, more precisely, a Euclidean vector. The equivalence class of (A, B) is often denoted A B → . {\displaystyle {\overrightarrow {AB}}.} A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the line segment (A, B)) and same direction (e.g., the direction from A to B). In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction. 
For example, velocity, forces and acceleration are represented by vectors. In modern geometry, Euclidean spaces are often defined from linear algebra. More precisely, a Euclidean space E is defined as a set to which is associated an inner product space of finite dimension over the reals E → , {\displaystyle {\overrightarrow {E}},} and a group action of the additive group of E → , {\displaystyle {\overrightarrow {E}},} which is free and transitive (See Affine space for details of this construction). The elements of E → {\displaystyle {\overrightarrow {E}}} are called translations. It has been proven that the two definitions of Euclidean spaces are equivalent, and that the equivalence classes under equipollence may be identified with translations. Sometimes, Euclidean vectors are considered without reference to a Euclidean space. In this case, a Euclidean vector is an element of a normed vector space of finite dimension over the reals, or, typically, an element of the real coordinate space R n {\displaystyle \mathbb {R} ^{n}} equipped with the dot product. This makes sense, as the addition in such a vector space acts freely and transitively on the vector space itself. That is, R n {\displaystyle \mathbb {R} ^{n}} is a Euclidean space, with itself as an associated vector space, and the dot product as an inner product. The Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is often presented as the standard Euclidean space of dimension n. This is motivated by the fact that every Euclidean space of dimension n is isomorphic to the Euclidean space R n . {\displaystyle \mathbb {R} ^{n}.} More precisely, given such a Euclidean space, one may choose any point O as an origin. By Gram–Schmidt process, one may also find an orthonormal basis of the associated vector space (a basis such that the inner product of two basis vectors is 0 if they are different and 1 if they are equal). 
This defines Cartesian coordinates of any point P of the space, as the coordinates on this basis of the vector O P → . {\displaystyle {\overrightarrow {OP}}.} These choices define an isomorphism of the given Euclidean space onto R n , {\displaystyle \mathbb {R} ^{n},} by mapping any point to the n-tuple of its Cartesian coordinates, and every vector to its coordinate vector. === Examples in one dimension === Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters would be 4 m or −4 m, depending on its direction, and its magnitude would be 4 m regardless. === In physics and engineering === Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For instance, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y-axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction, but fail to follow the rules of vector addition, are angular displacement and electric current. 
Consequently, these are not vectors. === In Cartesian space === In the Cartesian coordinate system, a bound vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1, 0, 0) and B = (0, 1, 0) in space determine the bound vector A B → {\displaystyle {\overrightarrow {AB}}} pointing from the point x = 1 on the x-axis to the point y = 1 on the y-axis. In Cartesian coordinates, a free vector may be thought of in terms of a corresponding bound vector, in this sense, whose initial point has the coordinates of the origin O = (0, 0, 0). It is then determined by the coordinates of that bound vector's terminal point. Thus the free vector represented by (1, 0, 0) is a vector of unit length—pointing along the direction of the positive x-axis. This coordinate representation of free vectors allows their algebraic features to be expressed in a convenient numerical fashion. For example, the sum of the two (free) vectors (1, 2, 3) and (−2, 0, 4) is the (free) vector ( 1 , 2 , 3 ) + ( − 2 , 0 , 4 ) = ( 1 − 2 , 2 + 0 , 3 + 4 ) = ( − 1 , 2 , 7 ) . {\displaystyle (1,2,3)+(-2,0,4)=(1-2,2+0,3+4)=(-1,2,7)\,.} === Euclidean and affine vectors === In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a length or magnitude and a direction to vectors. In addition, the notion of direction is strictly associated with the notion of an angle between two vectors. If the dot product of two vectors is defined—a scalar-valued product of two vectors—then it is also possible to define a length; the dot product gives a convenient algebraic characterization of both angle (a function of the dot product between any two non-zero vectors) and length (the square root of the dot product of a vector by itself). 
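This algebraic characterization of length and angle can be illustrated with a short computation. The following is a minimal sketch in plain Python; the helper functions are illustrative and not drawn from any particular library:

```python
import math

def dot(a, b):
    """Sum of component-wise products: a1*b1 + a2*b2 + a3*b3."""
    return sum(x * y for x, y in zip(a, b))

def length(a):
    """Length as the square root of the dot product of a vector with itself."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between two non-zero vectors, recovered from the dot product."""
    return math.acos(dot(a, b) / (length(a) * length(b)))

# The free-vector sum from the text: (1, 2, 3) + (-2, 0, 4) == (-1, 2, 7)
s = tuple(x + y for x, y in zip((1, 2, 3), (-2, 0, 4)))
```

For example, length((3, 4, 0)) returns 5.0, and angle((1, 0, 0), (0, 1, 0)) returns π/2, as expected for perpendicular vectors.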
In three dimensions, it is further possible to define the cross product, which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram). In any dimension (and, in particular, higher dimensions), it is possible to define the exterior product, which (among other things) supplies an algebraic characterization of the area and orientation in space of the n-dimensional parallelotope defined by n vectors. In a pseudo-Euclidean space, a vector's squared length can be positive, negative, or zero. An important example is Minkowski space (which is important to our understanding of special relativity). However, it is not always possible or desirable to define the length of a vector. This more general type of spatial vector is the subject of vector spaces (for free vectors) and affine spaces (for bound vectors, as each represented by an ordered pair of "points"). One physical example comes from thermodynamics, where many quantities of interest can be considered vectors in a space with no notion of length or angle. === Generalizations === In physics, as well as mathematics, a vector is often identified with a tuple of components, or list of numbers, that act as scalar coefficients for a set of basis vectors. When the basis is transformed, for example by rotation or stretching, then the components of any vector in terms of that basis also transform in an opposite sense. The vector itself has not changed, but the basis has, so the components of the vector must change to compensate. The vector is called covariant or contravariant, depending on how the transformation of the vector's components is related to the transformation of the basis. 
In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement), or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance such as gradient. If you change units (a special case of a change of basis) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm—a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm—a covariant change in value (for more, see covariance and contravariance of vectors). Tensors are another type of quantity that behave in this way; a vector is one type of tensor. In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition, because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction". == Representations == Vectors are usually denoted in lowercase boldface, as in u {\displaystyle \mathbf {u} } , v {\displaystyle \mathbf {v} } and w {\displaystyle \mathbf {w} } , or in lowercase italic boldface, as in a. (Uppercase letters are typically used to represent matrices.) Other conventions include a → {\displaystyle {\vec {a}}} or a, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, e.g. a ∼ {\displaystyle {\underset {^{\sim }}{a}}} , which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as A B ⟶ {\displaystyle {\stackrel {\longrightarrow }{AB}}} or AB. In German literature, it was especially common to represent vectors with small fraktur letters such as a {\displaystyle {\mathfrak {a}}} . 
Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here, the point A is called the origin, tail, base, or initial point, and the point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction. On a two-dimensional diagram, a vector perpendicular to the plane of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head on and viewing the flights of an arrow from the back. In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system. As an example in two dimensions (see figure), the vector from the origin O = (0, 0) to the point A = (2, 3) is simply written as a = ( 2 , 3 ) . {\displaystyle \mathbf {a} =(2,3).} The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation O A → {\displaystyle {\overrightarrow {OA}}} is usually deemed not necessary (and is indeed rarely used). 
In three dimensional Euclidean space (or R3), vectors are identified with triples of scalar components: a = ( a 1 , a 2 , a 3 ) . {\displaystyle \mathbf {a} =(a_{1},a_{2},a_{3}).} also written, a = ( a x , a y , a z ) . {\displaystyle \mathbf {a} =(a_{x},a_{y},a_{z}).} This can be generalised to n-dimensional Euclidean space (or Rn). a = ( a 1 , a 2 , a 3 , ⋯ , a n − 1 , a n ) . {\displaystyle \mathbf {a} =(a_{1},a_{2},a_{3},\cdots ,a_{n-1},a_{n}).} These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices, as follows: a = [ a 1 a 2 a 3 ] = [ a 1 a 2 a 3 ] T . {\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\end{bmatrix}}=[a_{1}\ a_{2}\ a_{3}]^{\operatorname {T} }.} Another way to represent a vector in n-dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: e 1 = ( 1 , 0 , 0 ) , e 2 = ( 0 , 1 , 0 ) , e 3 = ( 0 , 0 , 1 ) . {\displaystyle {\mathbf {e} }_{1}=(1,0,0),\ {\mathbf {e} }_{2}=(0,1,0),\ {\mathbf {e} }_{3}=(0,0,1).} These have the intuitive interpretation as vectors of unit length pointing up the x-, y-, and z-axis of a Cartesian coordinate system, respectively. In terms of these, any vector a in R3 can be expressed in the form: a = ( a 1 , a 2 , a 3 ) = a 1 ( 1 , 0 , 0 ) + a 2 ( 0 , 1 , 0 ) + a 3 ( 0 , 0 , 1 ) , {\displaystyle \mathbf {a} =(a_{1},a_{2},a_{3})=a_{1}(1,0,0)+a_{2}(0,1,0)+a_{3}(0,0,1),\ } or a = a 1 + a 2 + a 3 = a 1 e 1 + a 2 e 2 + a 3 e 3 , {\displaystyle \mathbf {a} =\mathbf {a} _{1}+\mathbf {a} _{2}+\mathbf {a} _{3}=a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2}+a_{3}{\mathbf {e} }_{3},} where a1, a2, a3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z (see figure), while a1, a2, a3 are the respective scalar components (or scalar projections). 
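The decomposition into standard basis vectors described above can be verified directly. A minimal sketch in plain Python (the helpers scale and add are illustrative names, not from any library):

```python
# Standard basis vectors of R^3.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def scale(r, v):
    """Scalar multiple r*v, computed component by component."""
    return tuple(r * x for x in v)

def add(*vs):
    """Component-wise sum of any number of vectors."""
    return tuple(map(sum, zip(*vs)))

# a = a1*e1 + a2*e2 + a3*e3 reproduces the component tuple (a1, a2, a3).
a1, a2, a3 = 2, 3, 5
a = add(scale(a1, e1), scale(a2, e2), scale(a3, e3))
# a == (2, 3, 5)
```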
In introductory physics textbooks, the standard basis vectors are often denoted i , j , k {\displaystyle \mathbf {i} ,\mathbf {j} ,\mathbf {k} } instead (or x ^ , y ^ , z ^ {\displaystyle \mathbf {\hat {x}} ,\mathbf {\hat {y}} ,\mathbf {\hat {z}} } , in which the hat symbol ^ {\displaystyle \mathbf {\hat {}} } typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface). Thus, a = a x + a y + a z = a x i + a y j + a z k . {\displaystyle \mathbf {a} =\mathbf {a} _{x}+\mathbf {a} _{y}+\mathbf {a} _{z}=a_{x}{\mathbf {i} }+a_{y}{\mathbf {j} }+a_{z}{\mathbf {k} }.} The notation ei is compatible with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering. === Decomposition or resolution === As explained above, a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set. The decomposition or resolution of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected. Moreover, the use of Cartesian unit vectors such as x ^ , y ^ , z ^ {\displaystyle \mathbf {\hat {x}} ,\mathbf {\hat {y}} ,\mathbf {\hat {z}} } as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of an arbitrary basis, including the unit vectors of a cylindrical coordinate system ( ρ ^ , ϕ ^ , z ^ {\displaystyle {\boldsymbol {\hat {\rho }}},{\boldsymbol {\hat {\phi }}},\mathbf {\hat {z}} } ) or spherical coordinate system ( r ^ , θ ^ , ϕ ^ {\displaystyle \mathbf {\hat {r}} ,{\boldsymbol {\hat {\theta }}},{\boldsymbol {\hat {\phi }}}} ). 
The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively. The choice of a basis does not affect the properties of a vector or its behaviour under transformations. A vector can also be broken up with respect to "non-fixed" basis vectors that change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal, and tangent to a surface (see figure). Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it. In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame). == Properties and operations == The following section uses the Cartesian coordinate system with basis vectors e 1 = ( 1 , 0 , 0 ) , e 2 = ( 0 , 1 , 0 ) , e 3 = ( 0 , 0 , 1 ) {\displaystyle {\mathbf {e} }_{1}=(1,0,0),\ {\mathbf {e} }_{2}=(0,1,0),\ {\mathbf {e} }_{3}=(0,0,1)} and assumes that all vectors have the origin as a common base point. A vector a will be written as a = a 1 e 1 + a 2 e 2 + a 3 e 3 . {\displaystyle {\mathbf {a} }=a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2}+a_{3}{\mathbf {e} }_{3}.} === Equality === Two vectors are said to be equal if they have the same magnitude and direction. Equivalently they will be equal if their coordinates are equal. So two vectors a = a 1 e 1 + a 2 e 2 + a 3 e 3 {\displaystyle {\mathbf {a} }=a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2}+a_{3}{\mathbf {e} }_{3}} and b = b 1 e 1 + b 2 e 2 + b 3 e 3 {\displaystyle {\mathbf {b} }=b_{1}{\mathbf {e} }_{1}+b_{2}{\mathbf {e} }_{2}+b_{3}{\mathbf {e} }_{3}} are equal if a 1 = b 1 , a 2 = b 2 , a 3 = b 3 . 
{\displaystyle a_{1}=b_{1},\quad a_{2}=b_{2},\quad a_{3}=b_{3}.\,} === Opposite, parallel, and antiparallel vectors === Two vectors are opposite if they have the same magnitude but opposite direction; so two vectors a = a 1 e 1 + a 2 e 2 + a 3 e 3 {\displaystyle {\mathbf {a} }=a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2}+a_{3}{\mathbf {e} }_{3}} and b = b 1 e 1 + b 2 e 2 + b 3 e 3 {\displaystyle {\mathbf {b} }=b_{1}{\mathbf {e} }_{1}+b_{2}{\mathbf {e} }_{2}+b_{3}{\mathbf {e} }_{3}} are opposite if a 1 = − b 1 , a 2 = − b 2 , a 3 = − b 3 . {\displaystyle a_{1}=-b_{1},\quad a_{2}=-b_{2},\quad a_{3}=-b_{3}.\,} Two vectors are equidirectional (or codirectional) if they have the same direction but not necessarily the same magnitude. Two vectors are parallel if they have either the same or opposite direction, but not necessarily the same magnitude; two vectors are antiparallel if they have strictly opposite direction, but not necessarily the same magnitude. === Addition and subtraction === The sum of two vectors a and b may be defined as a + b = ( a 1 + b 1 ) e 1 + ( a 2 + b 2 ) e 2 + ( a 3 + b 3 ) e 3 . {\displaystyle \mathbf {a} +\mathbf {b} =(a_{1}+b_{1})\mathbf {e} _{1}+(a_{2}+b_{2})\mathbf {e} _{2}+(a_{3}+b_{3})\mathbf {e} _{3}.} The resulting vector is sometimes called the resultant vector of a and b. The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below: This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c). 
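The commutativity and associativity just mentioned can also be checked componentwise; a minimal sketch in plain Python:

```python
def add(a, b):
    """Component-wise vector addition."""
    return tuple(x + y for x, y in zip(a, b))

a, b, c = (1, 2, 3), (-2, 0, 4), (5, -1, 2)

assert add(a, b) == add(b, a)                  # a + b == b + a
assert add(add(a, b), c) == add(a, add(b, c))  # (a + b) + c == a + (b + c)
```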
The difference of a and b is a − b = ( a 1 − b 1 ) e 1 + ( a 2 − b 2 ) e 2 + ( a 3 − b 3 ) e 3 . {\displaystyle \mathbf {a} -\mathbf {b} =(a_{1}-b_{1})\mathbf {e} _{1}+(a_{2}-b_{2})\mathbf {e} _{2}+(a_{3}-b_{3})\mathbf {e} _{3}.} Subtraction of two vectors can be geometrically illustrated as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector (−b) + a, where (−b) is the opposite of b (see drawing); note that (−b) + a = a − b. === Scalar multiplication === A vector may also be multiplied, or re-scaled, by any real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is r a = ( r a 1 ) e 1 + ( r a 2 ) e 2 + ( r a 3 ) e 3 . {\displaystyle r\mathbf {a} =(ra_{1})\mathbf {e} _{1}+(ra_{2})\mathbf {e} _{2}+(ra_{3})\mathbf {e} _{3}.} Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = −1 and r = 2) are given below: Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b. === Length === The length, magnitude or norm of the vector a is denoted by ‖a‖ or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm"). 
The length of the vector a can be computed with the Euclidean norm, ‖ a ‖ = a 1 2 + a 2 2 + a 3 2 , {\displaystyle \left\|\mathbf {a} \right\|={\sqrt {a_{1}^{2}+a_{2}^{2}+a_{3}^{2}}},} which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors. This happens to be equal to the square root of the dot product, discussed below, of the vector with itself: ‖ a ‖ = a ⋅ a . {\displaystyle \left\|\mathbf {a} \right\|={\sqrt {\mathbf {a} \cdot \mathbf {a} }}.} ==== Unit vector ==== A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â. To normalize a vector a = (a1, a2, a3), scale the vector by the reciprocal of its length ‖a‖. That is: a ^ = a ‖ a ‖ = a 1 ‖ a ‖ e 1 + a 2 ‖ a ‖ e 2 + a 3 ‖ a ‖ e 3 {\displaystyle \mathbf {\hat {a}} ={\frac {\mathbf {a} }{\left\|\mathbf {a} \right\|}}={\frac {a_{1}}{\left\|\mathbf {a} \right\|}}\mathbf {e} _{1}+{\frac {a_{2}}{\left\|\mathbf {a} \right\|}}\mathbf {e} _{2}+{\frac {a_{3}}{\left\|\mathbf {a} \right\|}}\mathbf {e} _{3}} ==== Zero vector ==== The zero vector is the vector with length zero. Written out in coordinates, the vector is (0, 0, 0), and it is commonly denoted 0 → {\displaystyle {\vec {0}}} , 0, or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector that is a multiple of the zero vector). The sum of the zero vector with any vector a is a (that is, 0 + a = a). 
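Normalization as described above amounts to dividing each component by the length; a minimal sketch in plain Python, including the zero-vector caveat:

```python
import math

def norm(a):
    """Euclidean length: the square root of the sum of squared components."""
    return math.sqrt(sum(x * x for x in a))

def normalize(a):
    """Unit vector a / ||a||; undefined for the zero vector."""
    n = norm(a)
    if n == 0:
        raise ValueError("the zero vector cannot be normalized")
    return tuple(x / n for x in a)

# normalize((3, 4, 0)) == (0.6, 0.8, 0.0), a vector of length 1
```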
=== Dot product === The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b, and is defined as: a ⋅ b = ‖ a ‖ ‖ b ‖ cos ⁡ θ , {\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta ,} where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied with the length of the component of b that points in the same direction as a. The dot product can also be defined as the sum of the products of the components of each vector as a ⋅ b = a 1 b 1 + a 2 b 2 + a 3 b 3 . {\displaystyle \mathbf {a} \cdot \mathbf {b} =a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3}.} === Cross product === The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as a × b = ‖ a ‖ ‖ b ‖ sin ⁡ ( θ ) n {\displaystyle \mathbf {a} \times \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\sin(\theta )\,\mathbf {n} } where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (−n). The cross product a × b is defined so that a, b, and a × b also becomes a right-handed system (although a and b are not necessarily orthogonal). This is the right-hand rule. The length of a × b can be interpreted as the area of the parallelogram having a and b as sides. 
The cross product can be written as a × b = ( a 2 b 3 − a 3 b 2 ) e 1 + ( a 3 b 1 − a 1 b 3 ) e 2 + ( a 1 b 2 − a 2 b 1 ) e 3 . {\displaystyle {\mathbf {a} }\times {\mathbf {b} }=(a_{2}b_{3}-a_{3}b_{2}){\mathbf {e} }_{1}+(a_{3}b_{1}-a_{1}b_{3}){\mathbf {e} }_{2}+(a_{1}b_{2}-a_{2}b_{1}){\mathbf {e} }_{3}.} For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below). === Scalar triple product === The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as: ( a b c ) = a ⋅ ( b × c ) . {\displaystyle (\mathbf {a} \ \mathbf {b} \ \mathbf {c} )=\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ).} It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed. 
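The component formula for the cross product, and the box product built from it, can be sketched in plain Python (the helper names are illustrative):

```python
def dot(a, b):
    """Component-wise dot product."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product by the component formula above (right-handed basis)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def box(a, b, c):
    """Scalar triple product: a . (b x c)."""
    return dot(a, cross(b, c))

# cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1): e1 x e2 = e3
# box((1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1: the unit cube has volume 1
# box of three coplanar vectors is 0 (linear dependence)
```

The result of cross(a, b) is perpendicular to both arguments: dot(cross(a, b), a) and dot(cross(a, b), b) are both zero for any a and b.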
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows ( a b c ) = | a 1 a 2 a 3 b 1 b 2 b 3 c 1 c 2 c 3 | {\displaystyle (\mathbf {a} \ \mathbf {b} \ \mathbf {c} )={\begin{vmatrix}a_{1}&a_{2}&a_{3}\\b_{1}&b_{2}&b_{3}\\c_{1}&c_{2}&c_{3}\\\end{vmatrix}}} The scalar triple product is linear in all three entries and anti-symmetric in the following sense: ( a b c ) = ( c a b ) = ( b c a ) = − ( a c b ) = − ( b a c ) = − ( c b a ) . {\displaystyle (\mathbf {a} \ \mathbf {b} \ \mathbf {c} )=(\mathbf {c} \ \mathbf {a} \ \mathbf {b} )=(\mathbf {b} \ \mathbf {c} \ \mathbf {a} )=-(\mathbf {a} \ \mathbf {c} \ \mathbf {b} )=-(\mathbf {b} \ \mathbf {a} \ \mathbf {c} )=-(\mathbf {c} \ \mathbf {b} \ \mathbf {a} ).} === Conversion between multiple Cartesian bases === All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the e basis {e1, e2, e3}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the e basis, a vector a is expressed, by definition, as a = p e 1 + q e 2 + r e 3 . {\displaystyle \mathbf {a} =p\mathbf {e} _{1}+q\mathbf {e} _{2}+r\mathbf {e} _{3}.} The scalar components in the e basis are, by definition, p = a ⋅ e 1 , q = a ⋅ e 2 , r = a ⋅ e 3 . 
{\displaystyle {\begin{aligned}p&=\mathbf {a} \cdot \mathbf {e} _{1},\\q&=\mathbf {a} \cdot \mathbf {e} _{2},\\r&=\mathbf {a} \cdot \mathbf {e} _{3}.\end{aligned}}} In another orthonormal basis n = {n1, n2, n3} that is not necessarily aligned with e, the vector a is expressed as a = u n 1 + v n 2 + w n 3 {\displaystyle \mathbf {a} =u\mathbf {n} _{1}+v\mathbf {n} _{2}+w\mathbf {n} _{3}} and the scalar components in the n basis are, by definition, u = a ⋅ n 1 , v = a ⋅ n 2 , w = a ⋅ n 3 . {\displaystyle {\begin{aligned}u&=\mathbf {a} \cdot \mathbf {n} _{1},\\v&=\mathbf {a} \cdot \mathbf {n} _{2},\\w&=\mathbf {a} \cdot \mathbf {n} _{3}.\end{aligned}}} The values of p, q, r, and u, v, w relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector a in both cases. It is common to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed. One way to express u, v, w in terms of p, q, r is to use column matrices along with a direction cosine matrix containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form u = ( p e 1 + q e 2 + r e 3 ) ⋅ n 1 , v = ( p e 1 + q e 2 + r e 3 ) ⋅ n 2 , w = ( p e 1 + q e 2 + r e 3 ) ⋅ n 3 . {\displaystyle {\begin{aligned}u&=(p\mathbf {e} _{1}+q\mathbf {e} _{2}+r\mathbf {e} _{3})\cdot \mathbf {n} _{1},\\v&=(p\mathbf {e} _{1}+q\mathbf {e} _{2}+r\mathbf {e} _{3})\cdot \mathbf {n} _{2},\\w&=(p\mathbf {e} _{1}+q\mathbf {e} _{2}+r\mathbf {e} _{3})\cdot \mathbf {n} _{3}.\end{aligned}}} Distributing the dot-multiplication gives u = p e 1 ⋅ n 1 + q e 2 ⋅ n 1 + r e 3 ⋅ n 1 , v = p e 1 ⋅ n 2 + q e 2 ⋅ n 2 + r e 3 ⋅ n 2 , w = p e 1 ⋅ n 3 + q e 2 ⋅ n 3 + r e 3 ⋅ n 3 . 
{\displaystyle {\begin{aligned}u&=p\mathbf {e} _{1}\cdot \mathbf {n} _{1}+q\mathbf {e} _{2}\cdot \mathbf {n} _{1}+r\mathbf {e} _{3}\cdot \mathbf {n} _{1},\\v&=p\mathbf {e} _{1}\cdot \mathbf {n} _{2}+q\mathbf {e} _{2}\cdot \mathbf {n} _{2}+r\mathbf {e} _{3}\cdot \mathbf {n} _{2},\\w&=p\mathbf {e} _{1}\cdot \mathbf {n} _{3}+q\mathbf {e} _{2}\cdot \mathbf {n} _{3}+r\mathbf {e} _{3}\cdot \mathbf {n} _{3}.\end{aligned}}} Replacing each dot product with a unique scalar gives u = c 11 p + c 12 q + c 13 r , v = c 21 p + c 22 q + c 23 r , w = c 31 p + c 32 q + c 33 r , {\displaystyle {\begin{aligned}u&=c_{11}p+c_{12}q+c_{13}r,\\v&=c_{21}p+c_{22}q+c_{23}r,\\w&=c_{31}p+c_{32}q+c_{33}r,\end{aligned}}} and these equations can be expressed as the single matrix equation [ u v w ] = [ c 11 c 12 c 13 c 21 c 22 c 23 c 31 c 32 c 33 ] [ p q r ] . {\displaystyle {\begin{bmatrix}u\\v\\w\\\end{bmatrix}}={\begin{bmatrix}c_{11}&c_{12}&c_{13}\\c_{21}&c_{22}&c_{23}\\c_{31}&c_{32}&c_{33}\end{bmatrix}}{\begin{bmatrix}p\\q\\r\end{bmatrix}}.} This matrix equation relates the scalar components of a in the n basis (u,v, and w) with those in the e basis (p, q, and r). Each matrix element cjk is the direction cosine relating nj to ek. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product. 
Therefore, c 11 = n 1 ⋅ e 1 c 12 = n 1 ⋅ e 2 c 13 = n 1 ⋅ e 3 c 21 = n 2 ⋅ e 1 c 22 = n 2 ⋅ e 2 c 23 = n 2 ⋅ e 3 c 31 = n 3 ⋅ e 1 c 32 = n 3 ⋅ e 2 c 33 = n 3 ⋅ e 3 {\displaystyle {\begin{aligned}c_{11}&=\mathbf {n} _{1}\cdot \mathbf {e} _{1}\\c_{12}&=\mathbf {n} _{1}\cdot \mathbf {e} _{2}\\c_{13}&=\mathbf {n} _{1}\cdot \mathbf {e} _{3}\\c_{21}&=\mathbf {n} _{2}\cdot \mathbf {e} _{1}\\c_{22}&=\mathbf {n} _{2}\cdot \mathbf {e} _{2}\\c_{23}&=\mathbf {n} _{2}\cdot \mathbf {e} _{3}\\c_{31}&=\mathbf {n} _{3}\cdot \mathbf {e} _{1}\\c_{32}&=\mathbf {n} _{3}\cdot \mathbf {e} _{2}\\c_{33}&=\mathbf {n} _{3}\cdot \mathbf {e} _{3}\end{aligned}}} By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines). The properties of a rotation matrix are such that its inverse is equal to its transpose. This means that the "rotation matrix from e to n" is the transpose of "rotation matrix from n to e". The properties of a direction cosine matrix, C are: the determinant is unity, |C| = 1; the inverse is equal to the transpose; the rows and columns are orthogonal unit vectors, therefore their dot products are zero. The advantage of this method is that a direction cosine matrix can usually be obtained independently by using Euler angles or a quaternion to relate the two vector bases, so the basis conversions can be performed directly, without having to work out all the dot products described above. By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases. 
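As an illustration of the procedure above (a sketch, not part of the article: the bases, rotation angle, and components are invented for the example), the direction cosine matrix for an n basis obtained by rotating the e basis 30° about e3 converts components exactly as the matrix equation prescribes:

```python
import math

# Two orthonormal bases (a hypothetical example): the n basis is the e basis
# rotated by 30 degrees about e3.
theta = math.radians(30)
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
n = [(math.cos(theta), math.sin(theta), 0.0),
     (-math.sin(theta), math.cos(theta), 0.0),
     (0.0, 0.0, 1.0)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Direction cosine matrix: C[j][k] = n_j . e_k
C = [[dot(nj, ek) for ek in e] for nj in n]

# Components (p, q, r) in the e basis convert to (u, v, w) in the n basis
# via [u, v, w]^T = C [p, q, r]^T.
p, q, r = 2.0, 1.0, -1.0
u, v, w = (sum(C[j][k] * (p, q, r)[k] for k in range(3)) for j in range(3))

# Both component sets reconstruct the same physical vector a.
a_from_e = tuple(p * e[0][i] + q * e[1][i] + r * e[2][i] for i in range(3))
a_from_n = tuple(u * n[0][i] + v * n[1][i] + w * n[2][i] for i in range(3))
```

Because both bases are orthonormal, the inverse of C equals its transpose, so the conversion can be undone simply by multiplying by Cᵀ.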
=== Other dimensions === With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as ( a 1 e 1 + a 2 e 2 ) + ( b 1 e 1 + b 2 e 2 ) = ( a 1 + b 1 ) e 1 + ( a 2 + b 2 ) e 2 , {\displaystyle (a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2})+(b_{1}{\mathbf {e} }_{1}+b_{2}{\mathbf {e} }_{2})=(a_{1}+b_{1}){\mathbf {e} }_{1}+(a_{2}+b_{2}){\mathbf {e} }_{2},} and in four dimensions as ( a 1 e 1 + a 2 e 2 + a 3 e 3 + a 4 e 4 ) + ( b 1 e 1 + b 2 e 2 + b 3 e 3 + b 4 e 4 ) = ( a 1 + b 1 ) e 1 + ( a 2 + b 2 ) e 2 + ( a 3 + b 3 ) e 3 + ( a 4 + b 4 ) e 4 . {\displaystyle {\begin{aligned}(a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2}+a_{3}{\mathbf {e} }_{3}+a_{4}{\mathbf {e} }_{4})&+(b_{1}{\mathbf {e} }_{1}+b_{2}{\mathbf {e} }_{2}+b_{3}{\mathbf {e} }_{3}+b_{4}{\mathbf {e} }_{4})=\\(a_{1}+b_{1}){\mathbf {e} }_{1}+(a_{2}+b_{2}){\mathbf {e} }_{2}&+(a_{3}+b_{3}){\mathbf {e} }_{3}+(a_{4}+b_{4}){\mathbf {e} }_{4}.\end{aligned}}} The cross product does not readily generalise to other dimensions, though the closely related exterior product does, whose result is a bivector. In two dimensions this is simply a pseudoscalar ( a 1 e 1 + a 2 e 2 ) ∧ ( b 1 e 1 + b 2 e 2 ) = ( a 1 b 2 − a 2 b 1 ) e 1 e 2 . {\displaystyle (a_{1}{\mathbf {e} }_{1}+a_{2}{\mathbf {e} }_{2})\wedge (b_{1}{\mathbf {e} }_{1}+b_{2}{\mathbf {e} }_{2})=(a_{1}b_{2}-a_{2}b_{1})\mathbf {e} _{1}\mathbf {e} _{2}.} A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products. == Physics == Vectors have many uses in physics and other sciences. === Length and units === In abstract vector spaces, the length of the arrow depends on a dimensionless scale. If it represents, for example, a force, the "scale" is of physical dimension length/force. 
Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1 m:50 N and 1:250 respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also, the length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance. === Vector-valued functions === Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t. For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions. === Position, velocity and acceleration === The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin x = x 1 e 1 + x 2 e 2 + x 3 e 3 . {\displaystyle {\mathbf {x} }=x_{1}{\mathbf {e} }_{1}+x_{2}{\mathbf {e} }_{2}+x_{3}{\mathbf {e} }_{3}.} The position vector has dimensions of length. Given two points x = (x1, x2, x3), y = (y1, y2, y3) their displacement is a vector y − x = ( y 1 − x 1 ) e 1 + ( y 2 − x 2 ) e 2 + ( y 3 − x 3 ) e 3 . {\displaystyle {\mathbf {y} }-{\mathbf {x} }=(y_{1}-x_{1}){\mathbf {e} }_{1}+(y_{2}-x_{2}){\mathbf {e} }_{2}+(y_{3}-x_{3}){\mathbf {e} }_{3}.} which specifies the position of y relative to x. The length of this vector gives the straight-line distance from x to y. Displacement has the dimensions of length. The velocity v of a point or particle is a vector; its length gives the speed.
For constant velocity the position at time t will be x t = t v + x 0 , {\displaystyle {\mathbf {x} }_{t}=t{\mathbf {v} }+{\mathbf {x} }_{0},} where x0 is the position at time t = 0. Velocity is the time derivative of position. Its dimensions are length/time. Acceleration a of a point is a vector which is the time derivative of velocity. Its dimensions are length/time2. === Force, energy, work === Force is a vector with dimensions of mass×length/time2 (kg⋅m⋅s−2, the newton) and Newton's second law is the scalar multiplication F = m a {\displaystyle {\mathbf {F} }=m{\mathbf {a} }} Work is the dot product of force and displacement W = F ⋅ ( x 2 − x 1 ) . {\displaystyle W={\mathbf {F} }\cdot ({\mathbf {x} }_{2}-{\mathbf {x} }_{1}).} == Vectors, pseudovectors, and transformations == An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform opposite to the basis" under changes of basis. The vector itself does not change when the basis is transformed; instead, the components of the vector make a change that cancels the change in the basis. In other words, if the reference axes (and the basis derived from them) were rotated in one direction, the component representation of the vector would rotate in the opposite way to generate the same final vector. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the basis undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = M−1v {\displaystyle \mathbf {v} '=M^{-1}\mathbf {v} } . This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
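A minimal numeric sketch of this requirement (the 2×2 matrix M and the components are arbitrary choices for illustration): when the basis vectors are mixed by M, the components must be mixed by the inverse matrix (transposed, in the row-of-basis-vectors bookkeeping used here) so that the reconstructed vector comes out unchanged:

```python
# The fixed physical vector a, expressed in an initial 2D basis (rows).
e = [(1.0, 0.0), (0.0, 1.0)]
comp = (3.0, -2.0)                     # a = 3 e1 - 2 e2

# An arbitrary invertible basis change: new basis vectors e'_j = sum_k M[j][k] e_k.
M = [[2.0, 1.0],
     [1.0, 1.0]]                       # det M = 1
Minv = [[1.0, -1.0],
        [-1.0, 2.0]]                   # inverse of M (easy to write since det = 1)

new_e = [tuple(sum(M[j][k] * e[k][i] for k in range(2)) for i in range(2))
         for j in range(2)]
# Contravariant rule: components pick up the inverse matrix (transposed in this
# row-of-basis-vectors convention), cancelling the change in the basis.
new_comp = tuple(sum(Minv[k][j] * comp[k] for k in range(2)) for j in range(2))

# The reconstructed vector is identical in both descriptions.
a_old = tuple(sum(comp[j] * e[j][i] for j in range(2)) for i in range(2))
a_new = tuple(sum(new_comp[j] * new_e[j][i] for j in range(2)) for i in range(2))
```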
For example, if v consists of the x, y, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration. In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule. Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they flip and gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors. One example of a pseudovector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. 
If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the actual angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors. This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties. == See also == == Notes == == References == === Mathematical treatments === Apostol, Tom (1967). Calculus. Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra. Wiley. ISBN 978-0-471-00005-1. Apostol, Tom (1969). Calculus. Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications. Wiley. ISBN 978-0-471-00007-5. Heinbockel, J. H. (2001), Introduction to Tensor Calculus and Continuum Mechanics, Trafford Publishing, ISBN 1-55369-133-4. Itô, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4. Ivanov, A.B. (2001) [1994], "Vector", Encyclopedia of Mathematics, EMS Press. Kane, Thomas R.; Levinson, David A. (1996), Dynamics Online, Sunnyvale, California: OnLine Dynamics. Lang, Serge (1986). Introduction to Linear Algebra (2nd ed.). Springer. ISBN 0-387-96205-0. Pedoe, Daniel (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0. === Physical treatments === Aris, R. (1990). Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover. ISBN 978-0-486-66110-0. Feynman, Richard; Leighton, R.; Sands, M. (2005). "Chapter 11". The Feynman Lectures on Physics. Vol. I (2nd ed.). Addison Wesley. ISBN 978-0-8053-9046-9. == External links == "Vector", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Online vector identities (PDF) Introducing Vectors A conceptual introduction (applied mathematics)
Wikipedia:Eugene Dynkin#0
Eugene Borisovich Dynkin (Russian: Евгений Борисович Дынкин; 11 May 1924 – 14 November 2014) was a Soviet and American mathematician. He made contributions to the fields of probability and algebra, especially semisimple Lie groups, Lie algebras, and Markov processes. The Dynkin diagram, the Dynkin system, and Dynkin's lemma are named after him. == Biography == Dynkin was born into a Jewish family, living in Leningrad until 1935, when his family was exiled to Kazakhstan. Two years later, when Dynkin was 13, his father disappeared in the Gulag. === Moscow University === At the age of 16, in 1940, Dynkin was admitted to Moscow University. He avoided military service in World War II because of his poor eyesight, and received his MS in 1945 and his PhD in 1948. He became an assistant professor at Moscow, but was not awarded a "chair" until 1954 because of his political undesirability. His academic progress was made difficult due to his father's fate, as well as Dynkin's Jewish origin; the special efforts of Andrey Kolmogorov, his PhD supervisor, made it possible for Dynkin to progress through graduate school into a teaching position. === USSR Academy of Sciences === In 1968, Dynkin was forced to transfer from the Moscow University to the Central Economic Mathematical Institute of the USSR Academy of Sciences. He worked there on the theory of economic growth and economic equilibrium. === Cornell === He remained at the Institute until 1976, when he emigrated to the United States. In 1977, he became a professor at Cornell University. === Death === Dynkin died at the Cayuga Medical Center in Ithaca, New York, aged 90. Dynkin was an atheist. == Mathematical work == Dynkin is considered to be a rare example of a mathematician who made fundamental contributions to two very distinct areas of mathematics: algebra and probability theory. The algebraic period of Dynkin's mathematical work was between 1944 and 1954, though even during this time a probabilistic theme was noticeable. 
Indeed, Dynkin's first publication, written in 1945 jointly with N. A. Dmitriev, solved a problem on the eigenvalues of stochastic matrices. This problem was raised at Kolmogorov's seminar on Markov chains, while both Dynkin and Dmitriev were undergraduates. === Lie Theory === While Dynkin was a student at Moscow University, he attended Israel Gelfand's seminar on Lie groups. In 1944, Gelfand asked him to prepare a survey on the structure and classification of semisimple Lie groups, based on the papers by Hermann Weyl and Bartel Leendert van der Waerden. Dynkin found the papers difficult to read, and in an attempt to better understand the results, he invented the notion of a "simple root" in a root system. He represented the pairwise angles between these simple roots in the form of a Dynkin diagram. In this way he obtained a cleaner exposition of the classification of complex semisimple Lie algebras. Of Dynkin's 1947 paper "Structure of semisimple Lie algebras", Bertram Kostant wrote: In this paper, using only elementary mathematics, and starting with almost nothing, Dynkin, brilliantly and elegantly developed the structure and machinery of semisimple Lie algebras. What he accomplished in this paper was to take a hitherto esoteric subject, and to make it into beautiful and powerful mathematics. Dynkin's influential 1952 paper "Semisimple subalgebras of semisimple Lie algebras" contained large tables and lists, and studied the subalgebras of the exceptional Lie algebras. === Probability theory === Dynkin is considered one of the founders of the modern theory of Markov processes. The results obtained by Dynkin and other participants of his seminar at Moscow University were summarized in two books. The first of these, "Theory of Markov Processes", was published in 1959, and laid the foundations of the theory.
Dynkin's one-hour talk at the 1962 International Congress of Mathematicians in Stockholm was delivered by Kolmogorov, since, prior to his emigration, Dynkin was never permitted to travel to the West. This talk was titled "Markov processes and problems in analysis". == Prizes and awards == Prize of the Moscow Mathematical Society, 1951 Institute of Mathematical Statistics, Fellow, 1962 American Academy of Arts and Sciences, Fellow, 1978 National Academy of Sciences of the USA, Member, 1985 American Mathematical Society, Leroy P. Steele Prize for Total Mathematical Work, 1993 Moscow Mathematical Society, Honorary Member, 1995 Doctor Honoris Causa of the Pierre and Marie Curie University (Paris 6), 1997 Doctor of Science (honoris causa) of the University of Warwick, 2003 Doctor Honoris Causa of the Independent Moscow University (Russia), 2003 Fellow of the American Mathematical Society, 2012 == Publications == Theory of Markov Processes. Prentice-Hall. 1961. Die Grundlagen der Theorie der Markoffschen Prozesse. Grundlehren der mathematischen Wissenschaften, Band 108. Springer Verlag. 1961. Markov Processes. Grundlehren der mathematischen Wissenschaften. Springer Verlag. 1965. Controlled Markov Processes. Grundlehren der mathematischen Wissenschaften. Springer Verlag. 1979. Markov Processes and Related Problems of Analysis, Selected Papers. London Math. Soc. Lecture Notes Series, 54. Cambridge University Press. 1982. Dynkin, Eugene B. (2000). Yushkevich, A. A.; Seitz, G. M.; Onishchik, A. L. (eds.). Selected papers of E. B. Dynkin with commentary. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-1065-1. MR 1757976. Diffusions, Superdiffusions and Partial Differential Equations. AMS Colloquium Publications. 2002. Superdiffusions and Positive Solutions of Nonlinear Partial Differential Equations. American Mathematical Society. 2004.
== See also == Algebra Affine Dynkin diagram Coxeter–Dynkin diagram Dynkin index Dynkin–Specht–Wever Lemma Probability Dynkin's card trick Dynkin's formula Dynkin system == Notes == == External links == Eugene Dynkin at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Eugene Dynkin", MacTutor History of Mathematics Archive, University of St Andrews Department listing at Cornell University Personal web page Genealogy Tree of Dynkin's School Collection of interviews assembled by Dynkin
Wikipedia:Eugene P. Northrop#0
Eugene P. Northrop (1908–1969) was an American research mathematician and a math popularizer. Northrop received his PhD from Yale University in 1934 under thesis advisor Einar Hille. Northrop held the William Rainey Harper Chair of Mathematics at the University of Chicago, and frequently served in administrative roles and on technical commissions. He is most remembered for his 1944 book Riddles in Mathematics, which was well received by the mathematical community and remains in print as a Dover edition (first issued by Dover in 2014). == References == == External links == Eugene P. Northrop at the Mathematics Genealogy Project
Wikipedia:Eugenia Malinnikova#0
Eugenia Malinnikova (born 23 April 1974) is a mathematician, winner of the 2017 Clay Research Award, which she shared with Aleksandr Logunov "in recognition of their introduction of a novel geometric combinatorial method to study doubling properties of solutions to elliptic eigenvalue problems". == Education and career == As a high school student, she competed three times in the International Mathematical Olympiad, winning three gold medals (including two perfect scores). She is a member of the International Mathematical Olympiad Hall of Fame. She received her PhD from St. Petersburg State University in 1999, under the supervision of Viktor Petrovich Havin. She is currently a professor of mathematics at Stanford University, having previously worked at the Norwegian University of Science and Technology. == Recognition == In 2018 she was inducted into the Norwegian Academy of Science and Letters. She is also a member of the Royal Norwegian Society of Sciences and Letters and the Norwegian Academy of Technological Sciences. She was elected as a Fellow of the American Mathematical Society in the 2024 class of fellows. == References == == External links == Homepage at Norwegian University of Science and Technology
Wikipedia:Eugenia O'Reilly-Regueiro#0
Eugenia O'Reilly-Regueiro is a Mexican mathematician specializing in algebraic combinatorics, and in particular in the symmetries of combinatorial designs, circulant graphs, and abstract polytopes. She is a researcher in the Institute of Mathematics of the National Autonomous University of Mexico (UNAM). == Education and career == O'Reilly-Regueiro is originally from Mexico City. She was a mathematics student at UNAM, graduating in 1995. For the next two years she continued to work at UNAM as an assistant in the mathematics department of the Faculty of Chemistry, while studying harpsichord at UNAM's National School of Music, working there with musician Luisa Durón. Next, with a scholarship from the UNAM Dirección General de Asuntos del Personal Académico (DGAPA), she traveled to England for graduate study at Imperial College London, at that time part of the University of London system. She completed her PhD in 2003. Her dissertation, Flag-Transitive Symmetric Designs, was supervised by Martin Liebeck. On completing her doctorate, she returned to UNAM as a researcher for the Institute of Mathematics. == Recognition == O'Reilly-Regueiro was elected to the Mexican Academy of Sciences in 2022. == References == == External links == Home page Eugenia O'Reilly-Regueiro publications indexed by Google Scholar
Wikipedia:Eugenie Maria Morenus#0
Eugenie Maria Morenus (February 21, 1881 – October 15, 1966) was an American mathematician and college professor and one of the few women to earn a PhD in math before World War II. She taught Latin and mathematics at Sweet Briar College from 1909 to 1946. == Early life and education == Morenus was born in Cleveland, New York, the daughter of Eugene Morenus and Maria Euphemia Van Blarcom Morenus. Her father managed a glassworks. She graduated from Monogahela High School in 1898. She earned a bachelor's degree from Vassar College in 1904, and a master's degree from the same school in 1905. She completed doctoral studies in mathematics at Columbia University in 1922. Her dissertation under Edward Kasner was titled "Geometric properties completely characterizing the set of all the curves of constant pressure in a field of force". Morenus was also a student for briefer periods at the University of Chicago, and at Göttingen. == Career == Morenus taught mathematics and Latin at a school in Watertown, New York and at Poughkeepsie High School after her master's degree. She was a Latin instructor at Sweet Briar College from 1909 to 1916, and was a mathematics professor at the same school from 1916 to 1946. She was head of the mathematics department for much of that time. While at the school she was prominent in campus events, as a chorister, photographer, and play director. Her horse, October or "Toby", was a familiar figure on campus, and Morenus would lead ten-day rides for students over spring breaks. Morenus was a charter member of the Mathematical Association of America, belonged to the Virginia Academy of Science, and was active in the American Association of University Women (AAUW). She was active in the Order of the Eastern Star and the Daughters of the American Revolution. She received an Anna Brackett Fellowship by the AAUW in 1927, to study at Cambridge. 
After her retirement from Sweet Briar College in 1946, she taught briefly at Connecticut College for Women, and spent her winters in Florida. == Personal life == Morenus died in Lake Wales, Florida, in 1966, aged 85. A scholarship endowment fund named for Morenus was established at Sweet Briar College in 1960. == References ==
Wikipedia:Eugenius Nulty#0
Eugenius Nulty (1790 – July 3, 1871) was an Irish-born American mathematician of the 19th century. He served on the faculty of Dickinson College from 1814 to 1816, and later taught and tutored prominent Philadelphians, including the brothers Mathew Carey Lea and Henry Charles Lea. == Career == After arriving in the United States from his native Ireland, Nulty quickly became ensconced as a member of the new nation's small intelligentsia. Contemporaries described him as “brilliant”. In 1814, Nulty became a professor of mathematics at Dickinson College, where he remained for two years. In 1816 he moved to Philadelphia at the invitation of The Philadelphia Life Insurance Company and the Pennsylvania Company, who each recruited Nulty as one of the first U.S. actuarial scientists. His new countrymen also called on Nulty to assist with mathematics for the Survey of the Coast (which became the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878). In 1817, Nulty was elected a member of the American Philosophical Society. In 1823, the University of Pennsylvania awarded Nulty an honorary A.M. He was elected an Associate Fellow of the American Academy of Arts and Sciences in 1832. Nulty was also a correspondent of mathematician, chemist and natural philosopher Robert M. Patterson. Nulty contributed to the defunct Mathematical Diary, one of the three earliest learned mathematical journals published in the U.S. His Elements of Geometry, Theoretical and Practical (Philadelphia: J. Wetham, 1836) was one of the first two or three original geometry texts published in the United States and, over 150 years later, is still available from multiple publishers in historical reprints. In 1840, P.J. Walker, director of the National Institute for the Promotion of Science, called Nulty "unsurpassed at home or abroad" in pure mathematics. == References ==
Wikipedia:Eugénie Hunsicker#0
Eugénie Lee Hunsicker is an American mathematician who works at Loughborough University in England as a senior lecturer in pure mathematics and as director of equality and diversity for the school of science. Her research in pure mathematics has concerned topics "at the intersection of analysis, geometry and topology"; she has also worked on more applied topics in data science and image classification. == Education and career == Hunsicker grew up in Iowa City, and was inspired to do mathematics in part by a high school teacher who was married to a mathematics professor at the University of Iowa. She went to Haverford College, where she was mentored by mathematician Curtis Greene, including two summers of mathematical research with Greene. She also visited the University of Oxford as an exchange student, and earned an honorable mention for the 1992 Alice T. Schafer Prize for excellence in mathematics by an undergraduate woman, won that year by Zvezdelina Stankova. Hunsicker graduated from Haverford magna cum laude in 1992, and went on to graduate study at the University of Chicago, supported in part by a fellowship from the American Association of University Women. Her 1999 dissertation, L(2)-Cohomology and L(2)-Harmonic Forms for Complete Noncompact Kähler and Warped Product Metrics, was jointly supervised by Melvin G. Rothenberg and Kevin Corlette. She went straight from her doctorate to a faculty position at Lawrence University, a liberal arts college focused primarily on undergraduate teaching, but five years later found herself missing the research life, and after earning tenure she went on the academic job market again. She applied to Loughborough "almost on a whim" after a honeymoon visit to England, and moved there in 2006. == Film == In 2018, as Chair of the London Mathematical Society Women in Maths Committee, Hunsicker worked with filmmaker Irina Linke to produce a short film on Faces of Women in Mathematics. 
== Recognition == Hunsicker won the Trevor Evans Award of the Mathematical Association of America in 2003 for her work with Laura Taalman on the mathematics of modular architecture. In 2018, she won the Suffrage Science Award for Mathematics and Computing "for her achievements in science and for her work encouraging others to aim for leadership roles in the sector". She was selected as a Fellow of the Association for Women in Mathematics in the Class of 2021 "for leadership of the United Kingdom community of women in mathematics; tireless advocacy for women in mathematics everywhere through talks, writing, and the film 'Faces of Women in Mathematics'; and application of mathematical and statistical expertise to research into equity and diversity issues facing the mathematical community". == References == == External links == Eugénie Hunsicker publications indexed by Google Scholar
Wikipedia:Euler product#0
In number theory, an Euler product is an expansion of a Dirichlet series into an infinite product indexed by prime numbers. The original such product was given for the sum of all positive integers raised to a certain power as proven by Leonhard Euler. This series and its continuation to the entire complex plane would later become known as the Riemann zeta function. == Definition == In general, if a is a bounded multiplicative function, then the Dirichlet series ∑ n = 1 ∞ a ( n ) n s {\displaystyle \sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}} is equal to ∏ p ∈ P P ( p , s ) for Re ⁡ ( s ) > 1. {\displaystyle \prod _{p\in \mathbb {P} }P(p,s)\quad {\text{for }}\operatorname {Re} (s)>1.} where the product is taken over prime numbers p, and P(p, s) is the sum ∑ k = 0 ∞ a ( p k ) p k s = 1 + a ( p ) p s + a ( p 2 ) p 2 s + a ( p 3 ) p 3 s + ⋯ {\displaystyle \sum _{k=0}^{\infty }{\frac {a(p^{k})}{p^{ks}}}=1+{\frac {a(p)}{p^{s}}}+{\frac {a(p^{2})}{p^{2s}}}+{\frac {a(p^{3})}{p^{3s}}}+\cdots } In fact, if we consider these as formal generating functions, the existence of such a formal Euler product expansion is a necessary and sufficient condition that a(n) be multiplicative: this says exactly that a(n) is the product of the a(pk) whenever n factors as the product of the powers pk of distinct primes p. An important special case is that in which a(n) is totally multiplicative, so that P(p, s) is a geometric series. Then P ( p , s ) = 1 1 − a ( p ) p s , {\displaystyle P(p,s)={\frac {1}{1-{\frac {a(p)}{p^{s}}}}},} as is the case for the Riemann zeta function, where a(n) = 1, and more generally for Dirichlet characters. == Convergence == In practice all the important cases are such that the infinite series and infinite product expansions are absolutely convergent in some region Re ⁡ ( s ) > C , {\displaystyle \operatorname {Re} (s)>C,} that is, in some right half-plane in the complex numbers. 
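For the totally multiplicative case a(n) = 1 (the Riemann zeta function), the identity between the Dirichlet series and the Euler product can be spot-checked numerically; the truncation bounds below are arbitrary illustration choices, not part of the theory:

```python
def primes_below(limit):
    # Simple sieve of Eratosthenes.
    sieve = [True] * limit
    sieve[:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta_series(s, terms=100_000):
    # Truncated Dirichlet series: sum of 1/n^s.
    return sum(1.0 / n ** s for n in range(1, terms + 1))

def zeta_euler(s, bound=1_000):
    # Truncated Euler product; each factor is the geometric series P(p, s)
    # summed in closed form, since a(n) = 1 is totally multiplicative.
    prod = 1.0
    for p in primes_below(bound):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod
```

For s = 2 both truncations agree with ζ(2) = π²/6 ≈ 1.644934 to within the truncation error.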
This already gives some information, since the infinite product, to converge, must give a non-zero value; hence the function given by the infinite series is not zero in such a half-plane. In the theory of modular forms it is typical to have Euler products with quadratic polynomials in the denominator here. The general Langlands philosophy includes a comparable explanation of the connection of polynomials of degree m, and the representation theory for GLm. == Examples == The following examples will use the notation P {\displaystyle \mathbb {P} } for the set of all primes, that is: P = { p ∈ N | p is prime } . {\displaystyle \mathbb {P} =\{p\in \mathbb {N} \,|\,p{\text{ is prime}}\}.} The Euler product attached to the Riemann zeta function ζ(s), also using the sum of the geometric series, is ∏ p ∈ P ( 1 1 − 1 p s ) = ∏ p ∈ P ( ∑ k = 0 ∞ 1 p k s ) = ∑ n = 1 ∞ 1 n s = ζ ( s ) . {\displaystyle {\begin{aligned}\prod _{p\,\in \,\mathbb {P} }\left({\frac {1}{1-{\frac {1}{p^{s}}}}}\right)&=\prod _{p\ \in \ \mathbb {P} }\left(\sum _{k=0}^{\infty }{\frac {1}{p^{ks}}}\right)\\&=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\zeta (s).\end{aligned}}} while for the Liouville function λ(n) = (−1)Ω(n), where Ω(n) is the number of prime factors of n counted with multiplicity, it is ∏ p ∈ P ( 1 1 + 1 p s ) = ∑ n = 1 ∞ λ ( n ) n s = ζ ( 2 s ) ζ ( s ) . {\displaystyle \prod _{p\,\in \,\mathbb {P} }\left({\frac {1}{1+{\frac {1}{p^{s}}}}}\right)=\sum _{n=1}^{\infty }{\frac {\lambda (n)}{n^{s}}}={\frac {\zeta (2s)}{\zeta (s)}}.} Using their reciprocals, two Euler products for the Möbius function μ(n) are ∏ p ∈ P ( 1 − 1 p s ) = ∑ n = 1 ∞ μ ( n ) n s = 1 ζ ( s ) {\displaystyle \prod _{p\,\in \,\mathbb {P} }\left(1-{\frac {1}{p^{s}}}\right)=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}={\frac {1}{\zeta (s)}}} and ∏ p ∈ P ( 1 + 1 p s ) = ∑ n = 1 ∞ | μ ( n ) | n s = ζ ( s ) ζ ( 2 s ) .
{\displaystyle \prod _{p\,\in \,\mathbb {P} }\left(1+{\frac {1}{p^{s}}}\right)=\sum _{n=1}^{\infty }{\frac {|\mu (n)|}{n^{s}}}={\frac {\zeta (s)}{\zeta (2s)}}.} Taking the ratio of these two gives ∏ p ∈ P ( 1 + 1 p s 1 − 1 p s ) = ∏ p ∈ P ( p s + 1 p s − 1 ) = ζ ( s ) 2 ζ ( 2 s ) . {\displaystyle \prod _{p\,\in \,\mathbb {P} }\left({\frac {1+{\frac {1}{p^{s}}}}{1-{\frac {1}{p^{s}}}}}\right)=\prod _{p\,\in \,\mathbb {P} }\left({\frac {p^{s}+1}{p^{s}-1}}\right)={\frac {\zeta (s)^{2}}{\zeta (2s)}}.} Since for even values of s the Riemann zeta function ζ(s) has an analytic expression in terms of a rational multiple of πs, then for even exponents, this infinite product evaluates to a rational number. For example, since ζ(2) = ⁠π2/6⁠, ζ(4) = ⁠π4/90⁠, and ζ(8) = ⁠π8/9450⁠, then ∏ p ∈ P ( p 2 + 1 p 2 − 1 ) = 5 3 ⋅ 10 8 ⋅ 26 24 ⋅ 50 48 ⋅ 122 120 ⋯ = ζ ( 2 ) 2 ζ ( 4 ) = 5 2 , ∏ p ∈ P ( p 4 + 1 p 4 − 1 ) = 17 15 ⋅ 82 80 ⋅ 626 624 ⋅ 2402 2400 ⋯ = ζ ( 4 ) 2 ζ ( 8 ) = 7 6 , {\displaystyle {\begin{aligned}\prod _{p\,\in \,\mathbb {P} }\left({\frac {p^{2}+1}{p^{2}-1}}\right)&={\frac {5}{3}}\cdot {\frac {10}{8}}\cdot {\frac {26}{24}}\cdot {\frac {50}{48}}\cdot {\frac {122}{120}}\cdots &={\frac {\zeta (2)^{2}}{\zeta (4)}}&={\frac {5}{2}},\\[6pt]\prod _{p\,\in \,\mathbb {P} }\left({\frac {p^{4}+1}{p^{4}-1}}\right)&={\frac {17}{15}}\cdot {\frac {82}{80}}\cdot {\frac {626}{624}}\cdot {\frac {2402}{2400}}\cdots &={\frac {\zeta (4)^{2}}{\zeta (8)}}&={\frac {7}{6}},\end{aligned}}} and so on, with the first result known by Ramanujan. This family of infinite products is also equivalent to ∏ p ∈ P ( 1 + 2 p s + 2 p 2 s + ⋯ ) = ∑ n = 1 ∞ 2 ω ( n ) n s = ζ ( s ) 2 ζ ( 2 s ) , {\displaystyle \prod _{p\,\in \,\mathbb {P} }\left(1+{\frac {2}{p^{s}}}+{\frac {2}{p^{2s}}}+\cdots \right)=\sum _{n=1}^{\infty }{\frac {2^{\omega (n)}}{n^{s}}}={\frac {\zeta (s)^{2}}{\zeta (2s)}},} where ω(n) counts the number of distinct prime factors of n, and 2ω(n) is the number of square-free divisors. 
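The first of these rational evaluations, ∏(p²+1)/(p²−1) = ζ(2)²/ζ(4) = 5/2, can be checked numerically with a short sketch (the prime bound is an arbitrary choice; convergence is slow, since each tail factor is 1 + O(1/p²)):

```python
def primes_below(limit):
    # Simple sieve of Eratosthenes.
    sieve = [True] * limit
    sieve[:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

product = 1.0
for p in primes_below(10_000):
    product *= (p * p + 1) / (p * p - 1)

# The truncated product approaches zeta(2)^2 / zeta(4) = 5/2 from below,
# since every omitted factor exceeds 1.
```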
If χ(n) is a Dirichlet character of conductor N, so that χ is totally multiplicative and χ(n) only depends on n mod N, and χ(n) = 0 if n is not coprime to N, then ∏ p ∈ P 1 1 − χ ( p ) p s = ∑ n = 1 ∞ χ ( n ) n s . {\displaystyle \prod _{p\,\in \,\mathbb {P} }{\frac {1}{1-{\frac {\chi (p)}{p^{s}}}}}=\sum _{n=1}^{\infty }{\frac {\chi (n)}{n^{s}}}.} Here it is convenient to omit the primes p dividing the conductor N from the product. In his notebooks, Ramanujan generalized the Euler product for the zeta function as ∏ p ∈ P ( x − 1 p s ) ≈ 1 Li s ⁡ ( x ) {\displaystyle \prod _{p\,\in \,\mathbb {P} }\left(x-{\frac {1}{p^{s}}}\right)\approx {\frac {1}{\operatorname {Li} _{s}(x)}}} for s > 1 where Lis(x) is the polylogarithm. For x = 1 the product above is just ⁠1/ζ(s)⁠. == Notable constants == Many well known constants have Euler product expansions. The Leibniz formula for π π 4 = ∑ n = 0 ∞ ( − 1 ) n 2 n + 1 = 1 − 1 3 + 1 5 − 1 7 + ⋯ {\displaystyle {\frac {\pi }{4}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots } can be interpreted as a Dirichlet series using the (unique) Dirichlet character modulo 4, and converted to an Euler product of superparticular ratios (fractions where numerator and denominator differ by 1): π 4 = ( ∏ p ≡ 1 ( mod 4 ) p p − 1 ) ( ∏ p ≡ 3 ( mod 4 ) p p + 1 ) = 3 4 ⋅ 5 4 ⋅ 7 8 ⋅ 11 12 ⋅ 13 12 ⋯ , {\displaystyle {\frac {\pi }{4}}=\left(\prod _{p\equiv 1{\pmod {4}}}{\frac {p}{p-1}}\right)\left(\prod _{p\equiv 3{\pmod {4}}}{\frac {p}{p+1}}\right)={\frac {3}{4}}\cdot {\frac {5}{4}}\cdot {\frac {7}{8}}\cdot {\frac {11}{12}}\cdot {\frac {13}{12}}\cdots ,} where each numerator is a prime number and each denominator is the nearest multiple of 4. Other Euler products for known constants include: The Hardy–Littlewood twin prime constant: ∏ p > 2 ( 1 − 1 ( p − 1 ) 2 ) = 0.660161... 
{\displaystyle \prod _{p>2}\left(1-{\frac {1}{\left(p-1\right)^{2}}}\right)=0.660161...} The Landau–Ramanujan constant: π 4 ∏ p ≡ 1 ( mod 4 ) ( 1 − 1 p 2 ) 1 2 = 0.764223... 1 2 ∏ p ≡ 3 ( mod 4 ) ( 1 − 1 p 2 ) − 1 2 = 0.764223... {\displaystyle {\begin{aligned}{\frac {\pi }{4}}\prod _{p\equiv 1{\pmod {4}}}\left(1-{\frac {1}{p^{2}}}\right)^{\frac {1}{2}}&=0.764223...\\[6pt]{\frac {1}{\sqrt {2}}}\prod _{p\equiv 3{\pmod {4}}}\left(1-{\frac {1}{p^{2}}}\right)^{-{\frac {1}{2}}}&=0.764223...\end{aligned}}} Murata's constant (sequence A065485 in the OEIS): ∏ p ( 1 + 1 ( p − 1 ) 2 ) = 2.826419... {\displaystyle \prod _{p}\left(1+{\frac {1}{\left(p-1\right)^{2}}}\right)=2.826419...} The strongly carefree constant ×ζ(2)2 OEIS: A065472: ∏ p ( 1 − 1 ( p + 1 ) 2 ) = 0.775883... {\displaystyle \prod _{p}\left(1-{\frac {1}{\left(p+1\right)^{2}}}\right)=0.775883...} Artin's constant OEIS: A005596: ∏ p ( 1 − 1 p ( p − 1 ) ) = 0.373955... {\displaystyle \prod _{p}\left(1-{\frac {1}{p(p-1)}}\right)=0.373955...} Landau's totient constant OEIS: A082695: ∏ p ( 1 + 1 p ( p − 1 ) ) = 315 2 π 4 ζ ( 3 ) = 1.943596... {\displaystyle \prod _{p}\left(1+{\frac {1}{p(p-1)}}\right)={\frac {315}{2\pi ^{4}}}\zeta (3)=1.943596...} The carefree constant ×ζ(2) OEIS: A065463: ∏ p ( 1 − 1 p ( p + 1 ) ) = 0.704442... {\displaystyle \prod _{p}\left(1-{\frac {1}{p(p+1)}}\right)=0.704442...} and its reciprocal OEIS: A065489: ∏ p ( 1 + 1 p 2 + p − 1 ) = 1.419562... {\displaystyle \prod _{p}\left(1+{\frac {1}{p^{2}+p-1}}\right)=1.419562...} The Feller–Tornier constant OEIS: A065493: 1 2 + 1 2 ∏ p ( 1 − 2 p 2 ) = 0.661317... {\displaystyle {\frac {1}{2}}+{\frac {1}{2}}\prod _{p}\left(1-{\frac {2}{p^{2}}}\right)=0.661317...} The quadratic class number constant OEIS: A065465: ∏ p ( 1 − 1 p 2 ( p + 1 ) ) = 0.881513... {\displaystyle \prod _{p}\left(1-{\frac {1}{p^{2}(p+1)}}\right)=0.881513...} The totient summatory constant OEIS: A065483: ∏ p ( 1 + 1 p 2 ( p − 1 ) ) = 1.339784... 
{\displaystyle \prod _{p}\left(1+{\frac {1}{p^{2}(p-1)}}\right)=1.339784...} Sarnak's constant OEIS: A065476: ∏ p > 2 ( 1 − p + 2 p 3 ) = 0.723648... {\displaystyle \prod _{p>2}\left(1-{\frac {p+2}{p^{3}}}\right)=0.723648...} The carefree constant OEIS: A065464: ∏ p ( 1 − 2 p − 1 p 3 ) = 0.428249... {\displaystyle \prod _{p}\left(1-{\frac {2p-1}{p^{3}}}\right)=0.428249...} The strongly carefree constant OEIS: A065473: ∏ p ( 1 − 3 p − 2 p 3 ) = 0.286747... {\displaystyle \prod _{p}\left(1-{\frac {3p-2}{p^{3}}}\right)=0.286747...} Stephens' constant OEIS: A065478: ∏ p ( 1 − p p 3 − 1 ) = 0.575959... {\displaystyle \prod _{p}\left(1-{\frac {p}{p^{3}-1}}\right)=0.575959...} Barban's constant OEIS: A175640: ∏ p ( 1 + 3 p 2 − 1 p ( p + 1 ) ( p 2 − 1 ) ) = 2.596536... {\displaystyle \prod _{p}\left(1+{\frac {3p^{2}-1}{p(p+1)\left(p^{2}-1\right)}}\right)=2.596536...} Taniguchi's constant OEIS: A175639: ∏ p ( 1 − 3 p 3 + 2 p 4 + 1 p 5 − 1 p 6 ) = 0.678234... {\displaystyle \prod _{p}\left(1-{\frac {3}{p^{3}}}+{\frac {2}{p^{4}}}+{\frac {1}{p^{5}}}-{\frac {1}{p^{6}}}\right)=0.678234...} The Heath-Brown and Moroz constant OEIS: A118228: ∏ p ( 1 − 1 p ) 7 ( 1 + 7 p + 1 p 2 ) = 0.0013176... {\displaystyle \prod _{p}\left(1-{\frac {1}{p}}\right)^{7}\left(1+{\frac {7p+1}{p^{2}}}\right)=0.0013176...} == Notes == == References == == External links ==
Wikipedia:Euler's formula#0
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case. Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics". When x = π, Euler's formula may be rewritten as eiπ + 1 = 0 or eiπ = −1, which is known as Euler's identity. == History == In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of − 1 {\displaystyle {\sqrt {-1}}} ) as: i x = ln ⁡ ( cos ⁡ x + i sin ⁡ x ) . {\displaystyle ix=\ln(\cos x+i\sin x).} Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2πi. Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum. Johann Bernoulli had found that 1 1 + x 2 = 1 2 ( 1 1 − i x + 1 1 + i x ) . 
{\displaystyle {\frac {1}{1+x^{2}}}={\frac {1}{2}}\left({\frac {1}{1-ix}}+{\frac {1}{1+ix}}\right).} And since ∫ d x 1 + a x = 1 a ln ⁡ ( 1 + a x ) + C , {\displaystyle \int {\frac {dx}{1+ax}}={\frac {1}{a}}\ln(1+ax)+C,} the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral. Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values. The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel. == Definitions of complex exponentiation == The exponential function ex for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of ez for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of ex to the complex plane. === Differential equation definition === The exponential function f ( z ) = e z {\displaystyle f(z)=e^{z}} is the unique differentiable function of a complex variable for which the derivative equals the function d f d z = f {\displaystyle {\frac {df}{dz}}=f} and f ( 0 ) = 1. {\displaystyle f(0)=1.} === Power series definition === For complex z e z = 1 + z 1 ! + z 2 2 ! + z 3 3 ! + ⋯ = ∑ n = 0 ∞ z n n ! . 
{\displaystyle e^{z}=1+{\frac {z}{1!}}+{\frac {z^{2}}{2!}}+{\frac {z^{3}}{3!}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines ez for all complex z. === Limit definition === For complex z e z = lim n → ∞ ( 1 + z n ) n . {\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}.} Here, n is restricted to positive integers, so there is no question about what the power with exponent n means. == Proofs == Various proofs of the formula are possible. === Using differentiation === This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted). Consider the function f(θ) f ( θ ) = cos ⁡ θ + i sin ⁡ θ e i θ = e − i θ ( cos ⁡ θ + i sin ⁡ θ ) {\displaystyle f(\theta )={\frac {\cos \theta +i\sin \theta }{e^{i\theta }}}=e^{-i\theta }\left(\cos \theta +i\sin \theta \right)} for real θ. Differentiating gives by the product rule f ′ ( θ ) = e − i θ ( i cos ⁡ θ − sin ⁡ θ ) − i e − i θ ( cos ⁡ θ + i sin ⁡ θ ) = 0 {\displaystyle f'(\theta )=e^{-i\theta }\left(i\cos \theta -\sin \theta \right)-ie^{-i\theta }\left(\cos \theta +i\sin \theta \right)=0} Thus, f(θ) is a constant. Since f(0) = 1, then f(θ) = 1 for all real θ, and thus e i θ = cos ⁡ θ + i sin ⁡ θ . 
{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta .} === Using power series === Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i: i 0 = 1 , i 1 = i , i 2 = − 1 , i 3 = − i , i 4 = 1 , i 5 = i , i 6 = − 1 , i 7 = − i ⋮ ⋮ ⋮ ⋮ {\displaystyle {\begin{aligned}i^{0}&=1,&i^{1}&=i,&i^{2}&=-1,&i^{3}&=-i,\\i^{4}&=1,&i^{5}&=i,&i^{6}&=-1,&i^{7}&=-i\\&\vdots &&\vdots &&\vdots &&\vdots \end{aligned}}} Using now the power-series definition from above, we see that for real values of x e i x = 1 + i x + ( i x ) 2 2 ! + ( i x ) 3 3 ! + ( i x ) 4 4 ! + ( i x ) 5 5 ! + ( i x ) 6 6 ! + ( i x ) 7 7 ! + ( i x ) 8 8 ! + ⋯ = 1 + i x − x 2 2 ! − i x 3 3 ! + x 4 4 ! + i x 5 5 ! − x 6 6 ! − i x 7 7 ! + x 8 8 ! + ⋯ = ( 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + x 8 8 ! − ⋯ ) + i ( x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ ) = cos ⁡ x + i sin ⁡ x , {\displaystyle {\begin{aligned}e^{ix}&=1+ix+{\frac {(ix)^{2}}{2!}}+{\frac {(ix)^{3}}{3!}}+{\frac {(ix)^{4}}{4!}}+{\frac {(ix)^{5}}{5!}}+{\frac {(ix)^{6}}{6!}}+{\frac {(ix)^{7}}{7!}}+{\frac {(ix)^{8}}{8!}}+\cdots \\[8pt]&=1+ix-{\frac {x^{2}}{2!}}-{\frac {ix^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\frac {ix^{5}}{5!}}-{\frac {x^{6}}{6!}}-{\frac {ix^{7}}{7!}}+{\frac {x^{8}}{8!}}+\cdots \\[8pt]&=\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+{\frac {x^{8}}{8!}}-\cdots \right)+i\left(x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \right)\\[8pt]&=\cos x+i\sin x,\end{aligned}}} where in the last step we recognize the two terms are the Maclaurin series for cos x and sin x. The rearrangement of terms is justified because each series is absolutely convergent. === Using polar coordinates === Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x, e i x = r ( cos ⁡ θ + i sin ⁡ θ ) . 
{\displaystyle e^{ix}=r\left(\cos \theta +i\sin \theta \right).} No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of eix is ieix. Therefore, differentiating both sides gives i e i x = ( cos ⁡ θ + i sin ⁡ θ ) d r d x + r ( − sin ⁡ θ + i cos ⁡ θ ) d θ d x . {\displaystyle ie^{ix}=\left(\cos \theta +i\sin \theta \right){\frac {dr}{dx}}+r\left(-\sin \theta +i\cos \theta \right){\frac {d\theta }{dx}}.} Substituting r(cos θ + i sin θ) for eix and equating real and imaginary parts in this formula gives ⁠dr/dx⁠ = 0 and ⁠dθ/dx⁠ = 1. Thus, r is a constant, and θ is x + C for some constant C. The initial values r(0) = 1 and θ(0) = 0 come from e0i = 1, giving r = 1 and θ = x. This proves the formula e i θ = 1 ( cos ⁡ θ + i sin ⁡ θ ) = cos ⁡ θ + i sin ⁡ θ . {\displaystyle e^{i\theta }=1(\cos \theta +i\sin \theta )=\cos \theta +i\sin \theta .} == Applications == === Applications in complex number theory === ==== Interpretation of the formula ==== This formula can be interpreted as saying that the function eiφ is a unit complex number, i.e., it traces out the unit circle in the complex plane as φ ranges through the real numbers. Here φ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The original proof is based on the Taylor series expansions of the exponential function ez (where z is a complex number) and of sin x and cos x for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers x. A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. 
The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy, and its complex conjugate, z = x − iy, can be written as z = x + i y = | z | ( cos ⁡ φ + i sin ⁡ φ ) = r e i φ , z ¯ = x − i y = | z | ( cos ⁡ φ − i sin ⁡ φ ) = r e − i φ , {\displaystyle {\begin{aligned}z&=x+iy=|z|(\cos \varphi +i\sin \varphi )=re^{i\varphi },\\{\bar {z}}&=x-iy=|z|(\cos \varphi -i\sin \varphi )=re^{-i\varphi },\end{aligned}}} where x = Re z is the real part, y = Im z is the imaginary part, r = |z| = √x2 + y2 is the magnitude of z and φ = arg z = atan2(y, x). φ is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. Many texts write φ = tan−1 ⁠y/x⁠ instead of φ = atan2(y, x), but the first equation needs adjustment when x ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors (x, y) and (−x, −y) differ by π radians, but have the identical value of tan φ = ⁠y/x⁠. ==== Use of the formula to define the logarithm of complex numbers ==== Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation): a = e ln ⁡ a , {\displaystyle a=e^{\ln a},} and that e a e b = e a + b , {\displaystyle e^{a}e^{b}=e^{a+b},} both valid for any complex numbers a and b. Therefore, one can write: z = | z | e i φ = e ln ⁡ | z | e i φ = e ln ⁡ | z | + i φ {\displaystyle z=\left|z\right|e^{i\varphi }=e^{\ln \left|z\right|}e^{i\varphi }=e^{\ln \left|z\right|+i\varphi }} for any z ≠ 0. Taking the logarithm of both sides shows that ln ⁡ z = ln ⁡ | z | + i φ , {\displaystyle \ln z=\ln \left|z\right|+i\varphi ,} and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued. 
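This multi-valuedness can be illustrated with Python's cmath module: cmath.log returns the principal value ln |z| + iφ with φ in (−π, π], and adding any integer multiple of 2πi yields another valid logarithm of the same number (a minimal sketch):

```python
import cmath
import math

z = -1 + 0j
w = cmath.log(z)               # principal value: ln|z| + i*phi = i*pi here

# Every branch w + 2*pi*i*k exponentiates back to the same z.
for k in (-1, 0, 1):
    assert abs(cmath.exp(w + 2j * math.pi * k) - z) < 1e-12

print(w)                       # approximately 3.141592653589793j
```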
Finally, the other exponential law ( e a ) k = e a k , {\displaystyle \left(e^{a}\right)^{k}=e^{ak},} which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula. ==== Relationship to trigonometry ==== Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function: cos ⁡ x = Re ⁡ ( e i x ) = e i x + e − i x 2 , sin ⁡ x = Im ⁡ ( e i x ) = e i x − e − i x 2 i . {\displaystyle {\begin{aligned}\cos x&=\operatorname {Re} \left(e^{ix}\right)={\frac {e^{ix}+e^{-ix}}{2}},\\\sin x&=\operatorname {Im} \left(e^{ix}\right)={\frac {e^{ix}-e^{-ix}}{2i}}.\end{aligned}}} The two equations above can be derived by adding or subtracting Euler's formulas: e i x = cos ⁡ x + i sin ⁡ x , e − i x = cos ⁡ ( − x ) + i sin ⁡ ( − x ) = cos ⁡ x − i sin ⁡ x {\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x,\\e^{-ix}&=\cos(-x)+i\sin(-x)=\cos x-i\sin x\end{aligned}}} and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have: cos ⁡ i y = e − y + e y 2 = cosh ⁡ y , sin ⁡ i y = e − y − e y 2 i = e y − e − y 2 i = i sinh ⁡ y . {\displaystyle {\begin{aligned}\cos iy&={\frac {e^{-y}+e^{y}}{2}}=\cosh y,\\\sin iy&={\frac {e^{-y}-e^{y}}{2i}}={\frac {e^{y}-e^{-y}}{2}}i=i\sinh y.\end{aligned}}} In addition cosh ⁡ i x = e i x + e − i x 2 = cos ⁡ x , sinh ⁡ i x = e i x − e − i x 2 = i sin ⁡ x . 
{\displaystyle {\begin{aligned}\cosh ix&={\frac {e^{ix}+e^{-ix}}{2}}=\cos x,\\\sinh ix&={\frac {e^{ix}-e^{-ix}}{2}}=i\sin x.\end{aligned}}} Complex exponentials can simplify trigonometry, because they are mathematically easier to manipulate than their sine and cosine components. One technique is simply to convert sines and cosines into equivalent expressions in terms of exponentials sometimes called complex sinusoids. After the manipulations, the simplified result is still real-valued. For example: cos ⁡ x cos ⁡ y = e i x + e − i x 2 ⋅ e i y + e − i y 2 = 1 2 ⋅ e i ( x + y ) + e i ( x − y ) + e i ( − x + y ) + e i ( − x − y ) 2 = 1 2 ( e i ( x + y ) + e − i ( x + y ) 2 + e i ( x − y ) + e − i ( x − y ) 2 ) = 1 2 ( cos ⁡ ( x + y ) + cos ⁡ ( x − y ) ) . {\displaystyle {\begin{aligned}\cos x\cos y&={\frac {e^{ix}+e^{-ix}}{2}}\cdot {\frac {e^{iy}+e^{-iy}}{2}}\\&={\frac {1}{2}}\cdot {\frac {e^{i(x+y)}+e^{i(x-y)}+e^{i(-x+y)}+e^{i(-x-y)}}{2}}\\&={\frac {1}{2}}{\bigg (}{\frac {e^{i(x+y)}+e^{-i(x+y)}}{2}}+{\frac {e^{i(x-y)}+e^{-i(x-y)}}{2}}{\bigg )}\\&={\frac {1}{2}}\left(\cos(x+y)+\cos(x-y)\right).\end{aligned}}} Another technique is to represent sines and cosines in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example: cos ⁡ n x = Re ⁡ ( e i n x ) = Re ⁡ ( e i ( n − 1 ) x ⋅ e i x ) = Re ⁡ ( e i ( n − 1 ) x ⋅ ( e i x + e − i x ⏟ 2 cos ⁡ x − e − i x ) ) = Re ⁡ ( e i ( n − 1 ) x ⋅ 2 cos ⁡ x − e i ( n − 2 ) x ) = cos ⁡ [ ( n − 1 ) x ] ⋅ [ 2 cos ⁡ x ] − cos ⁡ [ ( n − 2 ) x ] . 
{\displaystyle {\begin{aligned}\cos nx&=\operatorname {Re} \left(e^{inx}\right)\\&=\operatorname {Re} \left(e^{i(n-1)x}\cdot e^{ix}\right)\\&=\operatorname {Re} {\Big (}e^{i(n-1)x}\cdot {\big (}\underbrace {e^{ix}+e^{-ix}} _{2\cos x}-e^{-ix}{\big )}{\Big )}\\&=\operatorname {Re} \left(e^{i(n-1)x}\cdot 2\cos x-e^{i(n-2)x}\right)\\&=\cos[(n-1)x]\cdot [2\cos x]-\cos[(n-2)x].\end{aligned}}} This formula is used for recursive generation of cos nx for integer values of n and arbitrary x (in radians). Considering cos x a parameter in equation above yields recursive formula for Chebyshev polynomials of the first kind. === Topological interpretation === In the language of topology, Euler's formula states that the imaginary exponential function t ↦ e i t {\displaystyle t\mapsto e^{it}} is a (surjective) morphism of topological groups from the real line R {\displaystyle \mathbb {R} } to the unit circle S 1 {\displaystyle \mathbb {S} ^{1}} . In fact, this exhibits R {\displaystyle \mathbb {R} } as a covering space of S 1 {\displaystyle \mathbb {S} ^{1}} . Similarly, Euler's identity says that the kernel of this map is τ Z {\displaystyle \tau \mathbb {Z} } , where τ = 2 π {\displaystyle \tau =2\pi } . These observations may be combined and summarized in the commutative diagram below: === Other applications === In differential equations, the function eix is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation. In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. 
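The recursion cos nx = 2 cos x · cos[(n − 1)x] − cos[(n − 2)x] derived earlier in this section can be sketched in Python (the function name cos_nx is chosen for this example only):

```python
import math

def cos_nx(n, x):
    """Compute cos(n*x) for integer n >= 0 from cos(x) alone, using
    cos(n x) = 2 cos(x) cos((n-1) x) - cos((n-2) x)."""
    c = math.cos(x)
    prev, curr = 1.0, c        # cos(0*x) and cos(1*x)
    for _ in range(n - 1):
        prev, curr = curr, 2 * c * curr - prev
    return prev if n == 0 else curr

# Check the recursion against the library cosine.
x = 0.7
for n in range(8):
    assert abs(cos_nx(n, x) - math.cos(n * x)) < 1e-12
```

Only cos x itself is ever evaluated; all higher multiples come from the recurrence, which is exactly how the Chebyshev polynomials of the first kind arise.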
Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor. In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies: exp ⁡ x r = cos ⁡ x + r sin ⁡ x , {\displaystyle \exp xr=\cos x+r\sin x,} and the element is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space. == Other special cases == The special cases that evaluate to units illustrate rotation around the complex unit circle: The special case at x = τ (where τ = 2π, one turn) yields eiτ = 1 + 0. This is also argued to link five fundamental constants with three basic arithmetic operations, but, unlike Euler's identity, without rearranging the addends from the general case: e i τ = cos ⁡ τ + i sin ⁡ τ = 1 + 0 {\displaystyle {\begin{aligned}e^{i\tau }&=\cos \tau +i\sin \tau \\&=1+0\end{aligned}}} An interpretation of the simplified form eiτ = 1 is that rotating by a full turn is an identity function. == See also == Complex number Euler's identity Integration using Euler's formula History of Lorentz transformations List of topics named after Leonhard Euler == References == == Further reading == Nahin, Paul J. (2006). Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills. Princeton University Press. ISBN 978-0-691-11822-2. Wilson, Robin (2018). Euler's Pioneering Equation: The Most Beautiful Theorem in Mathematics. Oxford: Oxford University Press. ISBN 978-0-19-879492-9. MR 3791469. == External links == Elements of Algebra
Wikipedia:Euler's identity#0
In mathematics, Euler's identity (also known as Euler's equation) is the equality e i π + 1 = 0 {\displaystyle e^{i\pi }+1=0} where e {\displaystyle e} is Euler's number, the base of natural logarithms, i {\displaystyle i} is the imaginary unit, which by definition satisfies i 2 = − 1 {\displaystyle i^{2}=-1} , and π {\displaystyle \pi } is pi, the ratio of the circumference of a circle to its diameter. Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula e i x = cos ⁡ x + i sin ⁡ x {\displaystyle e^{ix}=\cos x+i\sin x} when evaluated for x = π {\displaystyle x=\pi } . Euler's identity is considered an exemplar of mathematical beauty, as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that π is transcendental, which implies the impossibility of squaring the circle. == Mathematical beauty == Euler's identity is often cited as an example of deep mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants: The number 0, the additive identity The number 1, the multiplicative identity The number π (π = 3.14159...), the fundamental circle constant The number e (e = 2.71828...), also known as Euler's number, which occurs widely in mathematical analysis The number i, the imaginary unit such that i 2 = − 1 {\displaystyle i^{2}=-1} The equation is often given in the form of an expression set equal to zero, which is common practice in several areas of mathematics. Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence". 
Paul Nahin, a professor emeritus at the University of New Hampshire who wrote a book dedicated to Euler's formula and its applications in Fourier analysis, said Euler's identity is "of exquisite beauty". Mathematics writer Constance Reid has said that Euler's identity is "the most famous formula in all mathematics". Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, said that it "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth". A 1990 poll of readers by The Mathematical Intelligencer named Euler's identity the "most beautiful theorem in mathematics". In a 2004 poll of readers by Physics World, Euler's identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever". At least three books in popular mathematics have been published about Euler's identity: Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills, by Paul Nahin (2011) A Most Elegant Equation: Euler's formula and the beauty of mathematics, by David Stipp (2017) Euler's Pioneering Equation: The most beautiful theorem in mathematics, by Robin Wilson (2018). == Explanations == === Imaginary exponents === Euler's identity asserts that e i π {\displaystyle e^{i\pi }} is equal to −1. The expression e i π {\displaystyle e^{i\pi }} is a special case of the expression e z {\displaystyle e^{z}} , where z is any complex number. In general, e z {\displaystyle e^{z}} is defined for complex z by extending one of the definitions of the exponential function from real exponents to complex exponents. For example, one common definition is: e z = lim n → ∞ ( 1 + z n ) n . 
{\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}.} Euler's identity therefore states that the limit, as n approaches infinity, of ( 1 + i π n ) n {\displaystyle (1+{\tfrac {i\pi }{n}})^{n}} is equal to −1. This limit is illustrated in the animation to the right. Euler's identity is a special case of Euler's formula, which states that for any real number x, e i x = cos ⁡ x + i sin ⁡ x {\displaystyle e^{ix}=\cos x+i\sin x} where the inputs of the trigonometric functions sine and cosine are given in radians. In particular, when x = π, e i π = cos ⁡ π + i sin ⁡ π . {\displaystyle e^{i\pi }=\cos \pi +i\sin \pi .} Since cos ⁡ π = − 1 {\displaystyle \cos \pi =-1} and sin ⁡ π = 0 , {\displaystyle \sin \pi =0,} it follows that e i π = − 1 + 0 i , {\displaystyle e^{i\pi }=-1+0i,} which yields Euler's identity: e i π + 1 = 0. {\displaystyle e^{i\pi }+1=0.} === Geometric interpretation === Any complex number z = x + i y {\displaystyle z=x+iy} can be represented by the point ( x , y ) {\displaystyle (x,y)} on the complex plane. This point can also be represented in polar coordinates as ( r , θ ) {\displaystyle (r,\theta )} , where r is the absolute value of z (distance from the origin), and θ {\displaystyle \theta } is the argument of z (angle counterclockwise from the positive x-axis). By the definitions of sine and cosine, this point has cartesian coordinates of ( r cos ⁡ θ , r sin ⁡ θ ) {\displaystyle (r\cos \theta ,r\sin \theta )} , implying that z = r ( cos ⁡ θ + i sin ⁡ θ ) {\displaystyle z=r(\cos \theta +i\sin \theta )} . According to Euler's formula, this is equivalent to saying z = r e i θ {\displaystyle z=re^{i\theta }} . Euler's identity says that − 1 = e i π {\displaystyle -1=e^{i\pi }} . 
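The limit definition above can be illustrated numerically: the finite expressions (1 + iπ/n)n drift toward −1 as n grows (a Python sketch; convergence is slow, with error on the order of π2/2n):

```python
import math

# Evaluate (1 + i*pi/n)^n for increasing n; the values approach -1 + 0i.
for n in (10, 1000, 100_000):
    value = (1 + 1j * math.pi / n) ** n
    print(n, value)
```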
Since e i π {\displaystyle e^{i\pi }} is r e i θ {\displaystyle re^{i\theta }} for r = 1 and θ = π {\displaystyle \theta =\pi } , this can be interpreted as a fact about the number −1 on the complex plane: its distance from the origin is 1, and its angle from the positive x-axis is π {\displaystyle \pi } radians. Additionally, when any complex number z is multiplied by e i θ {\displaystyle e^{i\theta }} , it has the effect of rotating z {\displaystyle z} counterclockwise by an angle of θ {\displaystyle \theta } on the complex plane. Since multiplication by −1 reflects a point across the origin, Euler's identity can be interpreted as saying that rotating any point π {\displaystyle \pi } radians around the origin has the same effect as reflecting the point across the origin. Similarly, setting θ {\displaystyle \theta } equal to 2 π {\displaystyle 2\pi } yields the related equation e 2 π i = 1 , {\displaystyle e^{2\pi i}=1,} which can be interpreted as saying that rotating any point by one turn around the origin returns it to its original position. == Generalizations == Euler's identity is also a special case of the more general identity that the nth roots of unity, for n > 1, add up to 0: ∑ k = 0 n − 1 e 2 π i k n = 0. {\displaystyle \sum _{k=0}^{n-1}e^{2\pi i{\frac {k}{n}}}=0.} Euler's identity is the case where n = 2. A similar identity also applies to quaternion exponential: let {i, j, k} be the basis quaternions; then, e 1 3 ( i ± j ± k ) π + 1 = 0. {\displaystyle e^{{\frac {1}{\sqrt {3}}}(i\pm j\pm k)\pi }+1=0.} More generally, let q be a quaternion with a zero real part and a norm equal to 1; that is, q = a i + b j + c k , {\displaystyle q=ai+bj+ck,} with a 2 + b 2 + c 2 = 1. {\displaystyle a^{2}+b^{2}+c^{2}=1.} Then one has e q π + 1 = 0. {\displaystyle e^{q\pi }+1=0.} The same formula applies to octonions, with a zero real part and a norm equal to 1. 
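The root-of-unity identity can be checked numerically (an illustrative Python sketch):

```python
import cmath
import math

# For each n > 1, the n-th roots of unity e^{2*pi*i*k/n} sum to zero;
# the case n = 2 is Euler's identity, 1 + e^{i*pi} = 0.
for n in range(2, 8):
    roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
    total = sum(roots)
    assert abs(total) < 1e-12
    print(n, abs(total))
```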
These formulas are a direct generalization of Euler's identity, since i {\displaystyle i} and − i {\displaystyle -i} are the only complex numbers with a zero real part and a norm (absolute value) equal to 1. == History == Euler's identity is a direct result of Euler's formula, published in his monumental 1748 work of mathematical analysis, Introductio in analysin infinitorum, but it is questionable whether the particular concept of linking five fundamental constants in a compact form can be attributed to Euler himself, as he may never have expressed it. Robin Wilson writes: We've seen how [Euler's identity] can easily be deduced from results of Johann Bernoulli and Roger Cotes, but that neither of them seem to have done so. Even Euler does not seem to have written it down explicitly—and certainly it doesn't appear in any of his publications—though he must surely have realized that it follows immediately from his identity [i.e. Euler's formula], eix = cos x + i sin x. Moreover, it seems to be unknown who first stated the result explicitly == See also == De Moivre's formula Exponential function Gelfond's constant == Notes == == References == === Sources === Conway, John H., and Guy, Richard K. (1996), The Book of Numbers, Springer ISBN 978-0-387-97993-9 Crease, Robert P. (10 May 2004), "The greatest equations ever", Physics World [registration required] Dunham, William (1999), Euler: The Master of Us All, Mathematical Association of America ISBN 978-0-88385-328-3 Euler, Leonhard (1922), Leonhardi Euleri opera omnia. 1, Opera mathematica. Volumen VIII, Leonhardi Euleri introductio in analysin infinitorum. Tomus primus, Leipzig: B. G. Teubneri Kasner, E., and Newman, J. (1940), Mathematics and the Imagination, Simon & Schuster Maor, Eli (1998), e: The Story of a number, Princeton University Press ISBN 0-691-05854-7 Nahin, Paul J. (2006), Dr. 
Euler's Fabulous Formula: Cures Many Mathematical Ills, Princeton University Press ISBN 978-0-691-11822-2 Paulos, John Allen (1992), Beyond Numeracy: An Uncommon Dictionary of Mathematics, Penguin Books ISBN 0-14-014574-5 Reid, Constance (various editions), From Zero to Infinity, Mathematical Association of America Sandifer, C. Edward (2007), Euler's Greatest Hits, Mathematical Association of America ISBN 978-0-88385-563-8 Stipp, David (2017), A Most Elegant Equation: Euler's formula and the beauty of mathematics, Basic Books Wells, David (1990). "Are these the most beautiful?". The Mathematical Intelligencer. 12 (3): 37–41. doi:10.1007/BF03024015. S2CID 121503263. Wilson, Robin (2018), Euler's Pioneering Equation: The most beautiful theorem in mathematics, Oxford University Press, ISBN 978-0-192-51406-6 Zeki, S.; Romaya, J. P.; Benincasa, D. M. T.; Atiyah, M. F. (2014), "The experience of mathematical beauty and its neural correlates", Frontiers in Human Neuroscience, 8: 68, doi:10.3389/fnhum.2014.00068, PMC 3923150, PMID 24592230 == External links == Intuitive understanding of Euler's formula
Wikipedia:Euler's totient function#0
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ ( n ) {\displaystyle \varphi (n)} or ϕ ( n ) {\displaystyle \phi (n)} , and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n. For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9, are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n). This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } ). It is also used for defining the RSA encryption system. == History, terminology, and notation == Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function. In 1879, J. J.
Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's. The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n. == Computing Euler's totient function == There are several formulae for computing φ(n). === Euler's product formula === It states φ ( n ) = n ∏ p ∣ n ( 1 − 1 p ) , {\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right),} where the product is over the distinct prime numbers dividing n. (For notation, see Arithmetical function.) An equivalent formulation is φ ( n ) = p 1 k 1 − 1 ( p 1 − 1 ) p 2 k 2 − 1 ( p 2 − 1 ) ⋯ p r k r − 1 ( p r − 1 ) , {\displaystyle \varphi (n)=p_{1}^{k_{1}-1}(p_{1}{-}1)\,p_{2}^{k_{2}-1}(p_{2}{-}1)\cdots p_{r}^{k_{r}-1}(p_{r}{-}1),} where n = p 1 k 1 p 2 k 2 ⋯ p r k r {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}} is the prime factorization of n {\displaystyle n} (that is, p 1 , p 2 , … , p r {\displaystyle p_{1},p_{2},\ldots ,p_{r}} are distinct prime numbers). The proof of these formulae depends on two important facts. ==== Phi is a multiplicative function ==== This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem. ==== Value of phi for a prime power argument ==== If p is prime and k ≥ 1, then φ ( p k ) = p k − p k − 1 = p k − 1 ( p − 1 ) = p k ( 1 − 1 p ) . 
{\displaystyle \varphi \left(p^{k}\right)=p^{k}-p^{k-1}=p^{k-1}(p-1)=p^{k}\left(1-{\tfrac {1}{p}}\right).} Proof: Since p is a prime number, the only possible values of gcd(pk, m) are 1, p, p2, ..., pk, and the only way to have gcd(pk, m) > 1 is if m is a multiple of p, that is, m ∈ {p, 2p, 3p, ..., pk − 1p = pk}, and there are pk − 1 such multiples not greater than pk. Therefore, the other pk − pk − 1 numbers are all relatively prime to pk. ==== Proof of Euler's product formula ==== The fundamental theorem of arithmetic states that if n > 1 there is a unique expression n = p 1 k 1 p 2 k 2 ⋯ p r k r , {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}},} where p1 < p2 < ... < pr are prime numbers and each ki ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(pk) gives φ ( n ) = φ ( p 1 k 1 ) φ ( p 2 k 2 ) ⋯ φ ( p r k r ) = p 1 k 1 ( 1 − 1 p 1 ) p 2 k 2 ( 1 − 1 p 2 ) ⋯ p r k r ( 1 − 1 p r ) = p 1 k 1 p 2 k 2 ⋯ p r k r ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) = n ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) . {\displaystyle {\begin{array}{rcl}\varphi (n)&=&\varphi (p_{1}^{k_{1}})\,\varphi (p_{2}^{k_{2}})\cdots \varphi (p_{r}^{k_{r}})\\[.1em]&=&p_{1}^{k_{1}}\left(1-{\frac {1}{p_{1}}}\right)p_{2}^{k_{2}}\left(1-{\frac {1}{p_{2}}}\right)\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&n\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right).\end{array}}} This gives both versions of Euler's product formula. An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set { 1 , 2 , … , n } {\displaystyle \{1,2,\ldots ,n\}} , excluding the sets of integers divisible by the prime divisors. 
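Both versions of the product formula can be checked against the counting definition. A small Python sketch (the function names are mine, not standard library API):

```python
from math import gcd

def phi_definition(n):
    """phi(n) by definition: count k in 1..n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_product(n):
    """Euler's product formula: n times the product of (1 - 1/p) over the
    distinct primes p dividing n, found by trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p) in exact integers
        p += 1
    if m > 1:                        # one prime factor above the square root may remain
        result -= result // m
    return result

assert phi_product(9) == 6 and phi_product(20) == 8
assert all(phi_definition(n) == phi_product(n) for n in range(1, 500))
```

The update `result -= result // p` multiplies `result` by (1 − 1/p) while staying in exact integer arithmetic, which works because the factors p − 1 always divide cleanly in this order.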
==== Example ==== φ ( 20 ) = φ ( 2 2 5 ) = 20 ( 1 − 1 2 ) ( 1 − 1 5 ) = 20 ⋅ 1 2 ⋅ 4 5 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5)=20\,(1-{\tfrac {1}{2}})\,(1-{\tfrac {1}{5}})=20\cdot {\tfrac {1}{2}}\cdot {\tfrac {4}{5}}=8.} In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19. The alternative formula uses only integers: φ ( 20 ) = φ ( 2 2 5 1 ) = 2 2 − 1 ( 2 − 1 ) 5 1 − 1 ( 5 − 1 ) = 2 ⋅ 1 ⋅ 1 ⋅ 4 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5^{1})=2^{2-1}(2{-}1)\,5^{1-1}(5{-}1)=2\cdot 1\cdot 1\cdot 4=8.} === Fourier transform === The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let F { x } [ m ] = ∑ k = 1 n x k ⋅ e − 2 π i m k n {\displaystyle {\mathcal {F}}\{\mathbf {x} \}[m]=\sum \limits _{k=1}^{n}x_{k}\cdot e^{{-2\pi i}{\frac {mk}{n}}}} where xk = gcd(k,n) for k ∈ {1, ..., n}. Then φ ( n ) = F { x } [ 1 ] = ∑ k = 1 n gcd ( k , n ) e − 2 π i k n . {\displaystyle \varphi (n)={\mathcal {F}}\{\mathbf {x} \}[1]=\sum \limits _{k=1}^{n}\gcd(k,n)e^{-2\pi i{\frac {k}{n}}}.} The real part of this formula is φ ( n ) = ∑ k = 1 n gcd ( k , n ) cos ⁡ 2 π k n . {\displaystyle \varphi (n)=\sum \limits _{k=1}^{n}\gcd(k,n)\cos {\tfrac {2\pi k}{n}}.} For example, using cos ⁡ π 5 = 5 + 1 4 {\displaystyle \cos {\tfrac {\pi }{5}}={\tfrac {{\sqrt {5}}+1}{4}}} and cos ⁡ 2 π 5 = 5 − 1 4 {\displaystyle \cos {\tfrac {2\pi }{5}}={\tfrac {{\sqrt {5}}-1}{4}}} : φ ( 10 ) = gcd ( 1 , 10 ) cos ⁡ 2 π 10 + gcd ( 2 , 10 ) cos ⁡ 4 π 10 + gcd ( 3 , 10 ) cos ⁡ 6 π 10 + ⋯ + gcd ( 10 , 10 ) cos ⁡ 20 π 10 = 1 ⋅ ( 5 + 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( − 5 + 1 4 ) + 5 ⋅ ( − 1 ) + 2 ⋅ ( − 5 + 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( 5 + 1 4 ) + 10 ⋅ ( 1 ) = 4. 
{\displaystyle {\begin{array}{rcl}\varphi (10)&=&\gcd(1,10)\cos {\tfrac {2\pi }{10}}+\gcd(2,10)\cos {\tfrac {4\pi }{10}}+\gcd(3,10)\cos {\tfrac {6\pi }{10}}+\cdots +\gcd(10,10)\cos {\tfrac {20\pi }{10}}\\&=&1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+5\cdot (-1)\\&&+\ 2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+10\cdot (1)\\&=&4.\end{array}}} Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway. === Divisor sum === The property established by Gauss, that ∑ d ∣ n φ ( d ) = n , {\displaystyle \sum _{d\mid n}\varphi (d)=n,} where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.) One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group Cd ; specifically, if Cd = ⟨g⟩ with gd = 1, then gk is a generator for every k coprime to d. Since every element of Cn generates a cyclic subgroup, and each subgroup Cd ⊆ Cn is generated by precisely φ(d) elements of Cn, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity. The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20: 1 20 , 2 20 , 3 20 , 4 20 , 5 20 , 6 20 , 7 20 , 8 20 , 9 20 , 10 20 , 11 20 , 12 20 , 13 20 , 14 20 , 15 20 , 16 20 , 17 20 , 18 20 , 19 20 , 20 20 . 
{\displaystyle {\tfrac {1}{20}},\,{\tfrac {2}{20}},\,{\tfrac {3}{20}},\,{\tfrac {4}{20}},\,{\tfrac {5}{20}},\,{\tfrac {6}{20}},\,{\tfrac {7}{20}},\,{\tfrac {8}{20}},\,{\tfrac {9}{20}},\,{\tfrac {10}{20}},\,{\tfrac {11}{20}},\,{\tfrac {12}{20}},\,{\tfrac {13}{20}},\,{\tfrac {14}{20}},\,{\tfrac {15}{20}},\,{\tfrac {16}{20}},\,{\tfrac {17}{20}},\,{\tfrac {18}{20}},\,{\tfrac {19}{20}},\,{\tfrac {20}{20}}.} Put them into lowest terms: 1 20 , 1 10 , 3 20 , 1 5 , 1 4 , 3 10 , 7 20 , 2 5 , 9 20 , 1 2 , 11 20 , 3 5 , 13 20 , 7 10 , 3 4 , 4 5 , 17 20 , 9 10 , 19 20 , 1 1 {\displaystyle {\tfrac {1}{20}},\,{\tfrac {1}{10}},\,{\tfrac {3}{20}},\,{\tfrac {1}{5}},\,{\tfrac {1}{4}},\,{\tfrac {3}{10}},\,{\tfrac {7}{20}},\,{\tfrac {2}{5}},\,{\tfrac {9}{20}},\,{\tfrac {1}{2}},\,{\tfrac {11}{20}},\,{\tfrac {3}{5}},\,{\tfrac {13}{20}},\,{\tfrac {7}{10}},\,{\tfrac {3}{4}},\,{\tfrac {4}{5}},\,{\tfrac {17}{20}},\,{\tfrac {9}{10}},\,{\tfrac {19}{20}},\,{\tfrac {1}{1}}} These twenty fractions are all the positive ⁠k/d⁠ ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely ⁠1/20⁠, ⁠3/20⁠, ⁠7/20⁠, ⁠9/20⁠, ⁠11/20⁠, ⁠13/20⁠, ⁠17/20⁠, ⁠19/20⁠; by definition this is φ(20) fractions. Similarly, there are φ(10) fractions with denominator 10, and φ(5) fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n. Möbius inversion applied to the divisor sum formula gives φ ( n ) = ∑ d ∣ n μ ( d ) ⋅ n d = n ∑ d ∣ n μ ( d ) d , {\displaystyle \varphi (n)=\sum _{d\mid n}\mu \left(d\right)\cdot {\frac {n}{d}}=n\sum _{d\mid n}{\frac {\mu (d)}{d}},} where μ is the Möbius function, the multiplicative function defined by μ ( p ) = − 1 {\displaystyle \mu (p)=-1} and μ ( p k ) = 0 {\displaystyle \mu (p^{k})=0} for each prime p and k ≥ 2. 
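Gauss's divisor-sum property and its Möbius inversion can both be verified directly for small n. A Python sketch (the helper names are my own):

```python
from math import gcd

def phi(n):
    """phi(n) by definition: count k in 1..n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    """Möbius function by trial division: 0 on a squared prime factor,
    otherwise (-1) to the number of prime factors."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:       # p^2 divides n
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

for n in (1, 12, 20, 36, 97):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    assert sum(phi(d) for d in divs) == n                 # Gauss: sum over d | n of phi(d) = n
    assert sum(mu(d) * (n // d) for d in divs) == phi(n)  # Möbius inversion
```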
This formula may also be derived from the product formula by multiplying out ∏ p ∣ n ( 1 − 1 p ) {\textstyle \prod _{p\mid n}(1-{\frac {1}{p}})} to get ∑ d ∣ n μ ( d ) d . {\textstyle \sum _{d\mid n}{\frac {\mu (d)}{d}}.} An example: φ ( 20 ) = μ ( 1 ) ⋅ 20 + μ ( 2 ) ⋅ 10 + μ ( 4 ) ⋅ 5 + μ ( 5 ) ⋅ 4 + μ ( 10 ) ⋅ 2 + μ ( 20 ) ⋅ 1 = 1 ⋅ 20 − 1 ⋅ 10 + 0 ⋅ 5 − 1 ⋅ 4 + 1 ⋅ 2 + 0 ⋅ 1 = 8. {\displaystyle {\begin{aligned}\varphi (20)&=\mu (1)\cdot 20+\mu (2)\cdot 10+\mu (4)\cdot 5+\mu (5)\cdot 4+\mu (10)\cdot 2+\mu (20)\cdot 1\\[.5em]&=1\cdot 20-1\cdot 10+0\cdot 5-1\cdot 4+1\cdot 2+0\cdot 1=8.\end{aligned}}} == Some values == The first 100 values (sequence A000010 in the OEIS) are shown in the table and graph below: In the graph at right the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ ( n ) ≥ n / 2 {\displaystyle \varphi (n)\geq {\sqrt {n/2}}} , which is rather loose: in fact, the lower limit of the graph is proportional to ⁠n/log log n⁠. == Euler's theorem == This states that if a and n are relatively prime then a φ ( n ) ≡ 1 mod n . {\displaystyle a^{\varphi (n)}\equiv 1\mod n.} The special case where n is prime is known as Fermat's little theorem. This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n. The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ ae mod n, where e is the (public) encryption exponent, is the function b ↦ bd mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem which can be solved by factoring n. 
The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty of factoring large numbers, there is no known practical way for anyone else to recover the factorization. == Other formulae == a ∣ b ⟹ φ ( a ) ∣ φ ( b ) {\displaystyle a\mid b\implies \varphi (a)\mid \varphi (b)} m ∣ φ ( a m − 1 ) {\displaystyle m\mid \varphi (a^{m}-1)} φ ( m n ) = φ ( m ) φ ( n ) ⋅ d φ ( d ) where d = gcd ⁡ ( m , n ) {\displaystyle \varphi (mn)=\varphi (m)\varphi (n)\cdot {\frac {d}{\varphi (d)}}\quad {\text{where }}d=\operatorname {gcd} (m,n)} In particular: φ ( 2 m ) = { 2 φ ( m ) if m is even φ ( m ) if m is odd {\displaystyle \varphi (2m)={\begin{cases}2\varphi (m)&{\text{ if }}m{\text{ is even}}\\\varphi (m)&{\text{ if }}m{\text{ is odd}}\end{cases}}} φ ( n m ) = n m − 1 φ ( n ) {\displaystyle \varphi \left(n^{m}\right)=n^{m-1}\varphi (n)} φ ( lcm ⁡ ( m , n ) ) ⋅ φ ( gcd ⁡ ( m , n ) ) = φ ( m ) ⋅ φ ( n ) {\displaystyle \varphi (\operatorname {lcm} (m,n))\cdot \varphi (\operatorname {gcd} (m,n))=\varphi (m)\cdot \varphi (n)} Compare this to the formula lcm ⁡ ( m , n ) ⋅ gcd ⁡ ( m , n ) = m ⋅ n {\textstyle \operatorname {lcm} (m,n)\cdot \operatorname {gcd} (m,n)=m\cdot n} (see least common multiple). φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, 2r | φ(n). For any a > 1 and n > 6 such that 4 ∤ n there exists an l ≥ 2n such that l | φ(an − 1). φ ( n ) n = φ ( rad ⁡ ( n ) ) rad ⁡ ( n ) {\displaystyle {\frac {\varphi (n)}{n}}={\frac {\varphi (\operatorname {rad} (n))}{\operatorname {rad} (n)}}} where rad(n) is the radical of n (the product of all distinct primes dividing n).
∑ d ∣ n μ 2 ( d ) φ ( d ) = n φ ( n ) {\displaystyle \sum _{d\mid n}{\frac {\mu ^{2}(d)}{\varphi (d)}}={\frac {n}{\varphi (n)}}} ∑ 1 ≤ k ≤ n − 1 g c d ( k , n ) = 1 k = 1 2 n φ ( n ) for n > 1 {\displaystyle \sum _{1\leq k\leq n-1 \atop gcd(k,n)=1}\!\!k={\tfrac {1}{2}}n\varphi (n)\quad {\text{for }}n>1} ∑ k = 1 n φ ( k ) = 1 2 ( 1 + ∑ k = 1 n μ ( k ) ⌊ n k ⌋ 2 ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\tfrac {1}{2}}\left(1+\sum _{k=1}^{n}\mu (k)\left\lfloor {\frac {n}{k}}\right\rfloor ^{2}\right)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ∑ k = 1 n φ ( k ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} [Liu (2016)] ∑ k = 1 n φ ( k ) k = ∑ k = 1 n μ ( k ) k ⌊ n k ⌋ = 6 π 2 n + O ( ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {\varphi (k)}{k}}=\sum _{k=1}^{n}{\frac {\mu (k)}{k}}\left\lfloor {\frac {n}{k}}\right\rfloor ={\frac {6}{\pi ^{2}}}n+O\left((\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ∑ k = 1 n k φ ( k ) = 315 ζ ( 3 ) 2 π 4 n − log ⁡ n 2 + O ( ( log ⁡ n ) 2 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {k}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}n-{\frac {\log n}{2}}+O\left((\log n)^{\frac {2}{3}}\right)} ∑ k = 1 n 1 φ ( k ) = 315 ζ ( 3 ) 2 π 4 ( log ⁡ n + γ − ∑ p prime log ⁡ p p 2 − p + 1 ) + O ( ( log ⁡ n ) 2 3 n ) {\displaystyle \sum _{k=1}^{n}{\frac {1}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}\left(\log n+\gamma -\sum _{p{\text{ prime}}}{\frac {\log p}{p^{2}-p+1}}\right)+O\left({\frac {(\log n)^{\frac {2}{3}}}{n}}\right)} (where γ is the Euler–Mascheroni constant). === Menon's identity === In 1965 P.
Kesava Menon proved ∑ gcd ( k , n ) = 1 1 ≤ k ≤ n gcd ( k − 1 , n ) = φ ( n ) d ( n ) , {\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\!\!\!\!\gcd(k-1,n)=\varphi (n)d(n),} where d(n) = σ0(n) is the number of divisors of n. === Divisibility by any fixed positive integer === The following property, which is part of the "folklore" (i.e., apparently unpublished as a specific result: see the introduction of this article, in which it is stated as having "long been known"), has important consequences. For instance it rules out uniform distribution of the values of φ ( n ) {\displaystyle \varphi (n)} in the arithmetic progressions modulo q {\displaystyle q} for any integer q > 1 {\displaystyle q>1} . For every fixed positive integer q {\displaystyle q} , the relation q | φ ( n ) {\displaystyle q|\varphi (n)} holds for almost all n {\displaystyle n} , meaning for all but o ( x ) {\displaystyle o(x)} values of n ≤ x {\displaystyle n\leq x} as x → ∞ {\displaystyle x\rightarrow \infty } . This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 modulo q {\displaystyle q} diverges, which itself is a corollary of the proof of Dirichlet's theorem on arithmetic progressions. == Generating functions == The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as: ∑ n = 1 ∞ φ ( n ) n s = ζ ( s − 1 ) ζ ( s ) {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)}{n^{s}}}={\frac {\zeta (s-1)}{\zeta (s)}}} where the left-hand side converges for ℜ ( s ) > 2 {\displaystyle \Re (s)>2} . The Lambert series generating function is ∑ n = 1 ∞ φ ( n ) q n 1 − q n = q ( 1 − q ) 2 {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)q^{n}}{1-q^{n}}}={\frac {q}{(1-q)^{2}}}} which converges for |q| < 1. Both of these are proved by elementary series manipulations and the formulae for φ(n). == Growth rate == In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'."
First lim sup φ ( n ) n = 1 , {\displaystyle \lim \sup {\frac {\varphi (n)}{n}}=1,} but as n goes to infinity, for all δ > 0 φ ( n ) n 1 − δ → ∞ . {\displaystyle {\frac {\varphi (n)}{n^{1-\delta }}}\rightarrow \infty .} These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n). In fact, during the proof of the second formula, the inequality 6 π 2 < φ ( n ) σ ( n ) n 2 < 1 , {\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\varphi (n)\sigma (n)}{n^{2}}}<1,} true for n > 1, is proved. We also have lim inf φ ( n ) n log ⁡ log ⁡ n = e − γ . {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}\log \log n=e^{-\gamma }.} Here γ is Euler's constant, γ = 0.577215665..., so eγ = 1.7810724... and e−γ = 0.56145948.... Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that lim inf φ ( n ) n = 0. {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}=0.} In fact, more is true. φ ( n ) > n e γ log ⁡ log ⁡ n + 3 log ⁡ log ⁡ n for n > 2 {\displaystyle \varphi (n)>{\frac {n}{e^{\gamma }\;\log \log n+{\frac {3}{\log \log n}}}}\quad {\text{for }}n>2} and φ ( n ) < n e γ log ⁡ log ⁡ n for infinitely many n . {\displaystyle \varphi (n)<{\frac {n}{e^{\gamma }\log \log n}}\quad {\text{for infinitely many }}n.} The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption.": 173 For the average order, we have φ ( 1 ) + φ ( 2 ) + ⋯ + φ ( n ) = 3 n 2 π 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) as n → ∞ , {\displaystyle \varphi (1)+\varphi (2)+\cdots +\varphi (n)={\frac {3n^{2}}{\pi ^{2}}}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)\quad {\text{as }}n\rightarrow \infty ,} due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. 
Korobov. By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu (On Euler's function, Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775) improved the error term to O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n2). This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is ⁠6/π2⁠. == Ratio of consecutive values == In 1950 Somayajulu proved lim inf φ ( n + 1 ) φ ( n ) = 0 and lim sup φ ( n + 1 ) φ ( n ) = ∞ . {\displaystyle {\begin{aligned}\lim \inf {\frac {\varphi (n+1)}{\varphi (n)}}&=0\quad {\text{and}}\\[5px]\lim \sup {\frac {\varphi (n+1)}{\varphi (n)}}&=\infty .\end{aligned}}} In 1954 Schinzel and Sierpiński strengthened this, proving that the set { φ ( n + 1 ) φ ( n ) , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n+1)}{\varphi (n)}},\;\;n=1,2,\ldots \right\}} is dense in the positive real numbers. They also proved that the set { φ ( n ) n , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n)}{n}},\;\;n=1,2,\ldots \right\}} is dense in the interval (0,1). == Totient number == A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient.
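Totient numbers, nontotients, and multiplicities are easy to explore computationally. The lower bound φ(n) ≥ √(n/2) guarantees that every totient value up to a limit L already appears among φ(1), …, φ(2L²), which the following Python sketch exploits (the limit and the names are my own choices):

```python
from math import gcd
from collections import Counter

def phi(n):
    """phi(n) by definition: count k in 1..n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

LIMIT = 30
# Since phi(n) >= sqrt(n/2), any totient value m <= LIMIT is attained by some
# n <= 2 * LIMIT**2, so scanning that far finds every totient value up to LIMIT.
multiplicity = Counter(phi(n) for n in range(1, 2 * LIMIT**2 + 1))
totients = {m for m in multiplicity if m <= LIMIT}
nontotients = sorted(set(range(1, LIMIT + 1)) - totients)

assert all(m % 2 == 1 for m in nontotients if m not in (14, 26))  # odd m > 1 are trivially nontotients
assert 14 in nontotients and 26 in nontotients                    # the first even nontotients
assert multiplicity[8] == 5   # phi(n) = 8 exactly for n = 15, 16, 20, 24, 30
```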
The number of totient numbers up to a given limit x is x log ⁡ x e ( C + o ( 1 ) ) ( log ⁡ log ⁡ log ⁡ x ) 2 {\displaystyle {\frac {x}{\log x}}e^{{\big (}C+o(1){\big )}(\log \log \log x)^{2}}} for a constant C = 0.8178146.... If counted according to multiplicity, the number of totient numbers up to a given limit x is | { n : φ ( n ) ≤ x } | = ζ ( 2 ) ζ ( 3 ) ζ ( 6 ) ⋅ x + R ( x ) {\displaystyle {\Big \vert }\{n:\varphi (n)\leq x\}{\Big \vert }={\frac {\zeta (2)\zeta (3)}{\zeta (6)}}\cdot x+R(x)} where the error term R is of order at most ⁠x/(log x)k⁠ for any positive k. It is known that the multiplicity of m exceeds mδ infinitely often for any δ < 0.55655. === Ford's theorem === Ford (1999) proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs does so infinitely often. However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m. === Perfect totient numbers === A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number. == Applications == === Cyclotomy === In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number the formula for the totient says its totient can be a power of two only if n is a first power and n − 1 is a power of 2.
The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more. Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... (sequence A003401 in the OEIS). === Prime number theorem for arithmetic progressions === === The RSA cryptosystem === Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private. A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = me (mod n). It is decrypted by computing t = Sd (mod n). Euler's Theorem can be used to show that if 0 < t < n, then t = m. The security of an RSA system would be compromised if the number n could be efficiently factored or if φ(n) could be efficiently computed without factoring n. == Unsolved problems == === Lehmer's conjecture === If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known. In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 1020 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 101937042 and ω(n) ≥ 298848. === Carmichael's conjecture === This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above. 
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10. === Riemann hypothesis === The Riemann hypothesis is true if and only if the inequality n φ ( n ) < e γ log ⁡ log ⁡ n + e γ ( 4 + γ − log ⁡ 4 π ) log ⁡ n {\displaystyle {\frac {n}{\varphi (n)}}<e^{\gamma }\log \log n+{\frac {e^{\gamma }(4+\gamma -\log 4\pi )}{\sqrt {\log n}}}} is true for all n ≥ p120569# where γ is Euler's constant and p120569# is the product of the first 120569 primes. == See also == Carmichael function (λ) Dedekind psi function (𝜓) Divisor function (σ) Duffin–Schaeffer conjecture Generalizations of Fermat's little theorem Highly composite number Multiplicative group of integers modulo n Ramanujan sum Totient summatory function (𝛷) == Notes == == References == == External links == "Totient function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Euler's Phi Function and the Chinese Remainder Theorem — proof that φ(n) is multiplicative Archived 2021-02-28 at the Wayback Machine Euler's totient function calculator in JavaScript — up to 20 digits Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine Plytage, Loomis, Polhill Summing Up The Euler Phi Function
Wikipedia:Euler–Maclaurin formula#0
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula. == The formula == If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m,n], then the integral I = ∫ m n f ( x ) d x {\displaystyle I=\int _{m}^{n}f(x)\,dx} can be approximated by the sum (or vice versa) S = f ( m + 1 ) + ⋯ + f ( n − 1 ) + f ( n ) {\displaystyle S=f(m+1)+\cdots +f(n-1)+f(n)} (see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f(k)(x) evaluated at the endpoints of the interval, that is to say x = m and x = n. Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m,n], we have S − I = ∑ k = 1 p B k k ! ( f ( k − 1 ) ( n ) − f ( k − 1 ) ( m ) ) + R p , {\displaystyle S-I=\sum _{k=1}^{p}{{\frac {B_{k}}{k!}}\left(f^{(k-1)}(n)-f^{(k-1)}(m)\right)}+R_{p},} where Bk is the kth Bernoulli number (with B1 = ⁠1/2⁠) and Rp is an error term which depends on n, m, p, and f and is usually small for suitable values of p. The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B1. In this case we have ∑ i = m n f ( i ) = ∫ m n f ( x ) d x + f ( n ) + f ( m ) 2 + ∑ k = 1 ⌊ p 2 ⌋ B 2 k ( 2 k ) ! 
( f ( 2 k − 1 ) ( n ) − f ( 2 k − 1 ) ( m ) ) + R p , {\displaystyle \sum _{i=m}^{n}f(i)=\int _{m}^{n}f(x)\,dx+{\frac {f(n)+f(m)}{2}}+\sum _{k=1}^{\left\lfloor {\frac {p}{2}}\right\rfloor }{\frac {B_{2k}}{(2k)!}}\left(f^{(2k-1)}(n)-f^{(2k-1)}(m)\right)+R_{p},} or alternatively ∑ i = m + 1 n f ( i ) = ∫ m n f ( x ) d x + f ( n ) − f ( m ) 2 + ∑ k = 1 ⌊ p 2 ⌋ B 2 k ( 2 k ) ! ( f ( 2 k − 1 ) ( n ) − f ( 2 k − 1 ) ( m ) ) + R p . {\displaystyle \sum _{i=m+1}^{n}f(i)=\int _{m}^{n}f(x)\,dx+{\frac {f(n)-f(m)}{2}}+\sum _{k=1}^{\left\lfloor {\frac {p}{2}}\right\rfloor }{\frac {B_{2k}}{(2k)!}}\left(f^{(2k-1)}(n)-f^{(2k-1)}(m)\right)+R_{p}.} === The remainder term === The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. The remainder term has an exact expression in terms of the periodized Bernoulli functions Pk(x). The Bernoulli polynomials may be defined recursively by B0(x) = 1 and, for k ≥ 1, B k ′ ( x ) = k B k − 1 ( x ) , ∫ 0 1 B k ( x ) d x = 0. {\displaystyle {\begin{aligned}B_{k}'(x)&=kB_{k-1}(x),\\\int _{0}^{1}B_{k}(x)\,dx&=0.\end{aligned}}} The periodized Bernoulli functions are defined as P k ( x ) = B k ( x − ⌊ x ⌋ ) , {\displaystyle P_{k}(x)=B_{k}{\bigl (}x-\lfloor x\rfloor {\bigr )},} where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1). With this notation, the remainder term Rp equals R p = ( − 1 ) p + 1 ∫ m n f ( p ) ( x ) P p ( x ) p ! d x . {\displaystyle R_{p}=(-1)^{p+1}\int _{m}^{n}f^{(p)}(x){\frac {P_{p}(x)}{p!}}\,dx.} When k > 0, it can be shown that for 0 ≤ x ≤ 1, | B k ( x ) | ≤ 2 ⋅ k ! 
( 2 π ) k ζ ( k ) , {\displaystyle {\bigl |}B_{k}(x){\bigr |}\leq {\frac {2\cdot k!}{(2\pi )^{k}}}\zeta (k),} where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials Bk(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as | R p | ≤ 2 ζ ( p ) ( 2 π ) p ∫ m n | f ( p ) ( x ) | d x . {\displaystyle \left|R_{p}\right|\leq {\frac {2\zeta (p)}{(2\pi )^{p}}}\int _{m}^{n}\left|f^{(p)}(x)\right|\,dx.} === Low-order cases === The Bernoulli numbers from B1 to B7 are ⁠1/2⁠, ⁠1/6⁠, 0, −⁠1/30⁠, 0, ⁠1/42⁠, 0. Therefore, the low-order cases of the Euler–Maclaurin formula are: ∑ i = m n f ( i ) − ∫ m n f ( x ) d x = f ( m ) + f ( n ) 2 + ∫ m n f ′ ( x ) P 1 ( x ) d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! − ∫ m n f ″ ( x ) P 2 ( x ) 2 ! d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! + ∫ m n f ‴ ( x ) P 3 ( x ) 3 ! d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! − 1 30 f ‴ ( n ) − f ‴ ( m ) 4 ! − ∫ m n f ( 4 ) ( x ) P 4 ( x ) 4 ! d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! − 1 30 f ‴ ( n ) − f ‴ ( m ) 4 ! + ∫ m n f ( 5 ) ( x ) P 5 ( x ) 5 ! d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! − 1 30 f ‴ ( n ) − f ‴ ( m ) 4 ! + 1 42 f ( 5 ) ( n ) − f ( 5 ) ( m ) 6 ! − ∫ m n f ( 6 ) ( x ) P 6 ( x ) 6 ! d x = f ( m ) + f ( n ) 2 + 1 6 f ′ ( n ) − f ′ ( m ) 2 ! − 1 30 f ‴ ( n ) − f ‴ ( m ) 4 ! + 1 42 f ( 5 ) ( n ) − f ( 5 ) ( m ) 6 ! + ∫ m n f ( 7 ) ( x ) P 7 ( x ) 7 ! d x . 
{\displaystyle {\begin{aligned}\sum _{i=m}^{n}f(i)-\int _{m}^{n}f(x)\,dx&={\frac {f(m)+f(n)}{2}}+\int _{m}^{n}f'(x)P_{1}(x)\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}-\int _{m}^{n}f''(x){\frac {P_{2}(x)}{2!}}\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}+\int _{m}^{n}f'''(x){\frac {P_{3}(x)}{3!}}\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}-{\frac {1}{30}}{\frac {f'''(n)-f'''(m)}{4!}}-\int _{m}^{n}f^{(4)}(x){\frac {P_{4}(x)}{4!}}\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}-{\frac {1}{30}}{\frac {f'''(n)-f'''(m)}{4!}}+\int _{m}^{n}f^{(5)}(x){\frac {P_{5}(x)}{5!}}\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}-{\frac {1}{30}}{\frac {f'''(n)-f'''(m)}{4!}}+{\frac {1}{42}}{\frac {f^{(5)}(n)-f^{(5)}(m)}{6!}}-\int _{m}^{n}f^{(6)}(x){\frac {P_{6}(x)}{6!}}\,dx\\&={\frac {f(m)+f(n)}{2}}+{\frac {1}{6}}{\frac {f'(n)-f'(m)}{2!}}-{\frac {1}{30}}{\frac {f'''(n)-f'''(m)}{4!}}+{\frac {1}{42}}{\frac {f^{(5)}(n)-f^{(5)}(m)}{6!}}+\int _{m}^{n}f^{(7)}(x){\frac {P_{7}(x)}{7!}}\,dx.\end{aligned}}} == Applications == === The Basel problem === The Basel problem is to determine the sum 1 + 1 4 + 1 9 + 1 16 + 1 25 + ⋯ = ∑ n = 1 ∞ 1 n 2 . {\displaystyle 1+{\frac {1}{4}}+{\frac {1}{9}}+{\frac {1}{16}}+{\frac {1}{25}}+\cdots =\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.} Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals ⁠π2/6⁠, which he proved in the same year. === Sums involving a polynomial === If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x3, we can choose p = 2 to obtain, after simplification, ∑ i = 0 n i 3 = ( n ( n + 1 ) 2 ) 2 . {\displaystyle \sum _{i=0}^{n}i^{3}=\left({\frac {n(n+1)}{2}}\right)^{2}.} === Approximation of integrals === The formula provides a means of approximating a finite integral. 
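The role of the derivative correction terms can be illustrated numerically. The following sketch (illustrative only; the integrand exp(x) on [0, 1] and the grid size are arbitrary choices, not taken from the article) compares a plain composite trapezoidal estimate with the same estimate plus the first Euler–Maclaurin correction h²/12 · (f′(a) − f′(b)):

```python
import math

def trapezoid(f, a, b, n_points):
    # composite trapezoidal rule on n_points equally spaced nodes
    h = (b - a) / (n_points - 1)
    xs = [a + i * h for i in range(n_points)]
    return h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)

def trapezoid_corrected(f, fprime, a, b, n_points):
    # add the first Euler-Maclaurin correction term, h^2/12 * (f'(a) - f'(b));
    # the leading error then drops from O(h^2) to O(h^4)
    h = (b - a) / (n_points - 1)
    return trapezoid(f, a, b, n_points) + h**2 / 12 * (fprime(a) - fprime(b))

exact = math.e - 1  # integral of exp(x) over [0, 1]
plain = trapezoid(math.exp, 0.0, 1.0, 11)
corrected = trapezoid_corrected(math.exp, math.exp, 0.0, 1.0, 11)
```

With 11 nodes (h = 0.1) the plain rule is off by about 10⁻³, while the corrected estimate is off by roughly 10⁻⁷, consistent with the O(h²) versus O(h⁴) error behaviour.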
Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set xi = a + (i − 1)h, so that x1 = a and xN = b. Then: I = ∫ a b f ( x ) d x ∼ h ( f ( x 1 ) 2 + f ( x 2 ) + ⋯ + f ( x N − 1 ) + f ( x N ) 2 ) + h 2 12 [ f ′ ( x 1 ) − f ′ ( x N ) ] − h 4 720 [ f ‴ ( x 1 ) − f ‴ ( x N ) ] + ⋯ {\displaystyle {\begin{aligned}I&=\int _{a}^{b}f(x)\,dx\\&\sim h\left({\frac {f(x_{1})}{2}}+f(x_{2})+\cdots +f(x_{N-1})+{\frac {f(x_{N})}{2}}\right)+{\frac {h^{2}}{12}}{\bigl [}f'(x_{1})-f'(x_{N}){\bigr ]}-{\frac {h^{4}}{720}}{\bigl [}f'''(x_{1})-f'''(x_{N}){\bigr ]}+\cdots \end{aligned}}} This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention. The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation. === Asymptotic expansion of sums === In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is ∑ n = a b f ( n ) ∼ ∫ a b f ( x ) d x + f ( b ) + f ( a ) 2 + ∑ k = 1 ∞ B 2 k ( 2 k ) ! 
( f ( 2 k − 1 ) ( b ) − f ( 2 k − 1 ) ( a ) ) , {\displaystyle \sum _{n=a}^{b}f(n)\sim \int _{a}^{b}f(x)\,dx+{\frac {f(b)+f(a)}{2}}+\sum _{k=1}^{\infty }\,{\frac {B_{2k}}{(2k)!}}\left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right),} where a and b are integers. Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, ∑ k = 0 ∞ 1 ( z + k ) 2 ∼ ∫ 0 ∞ 1 ( z + k ) 2 d k ⏟ = 1 z + 1 2 z 2 + ∑ t = 1 ∞ B 2 t z 2 t + 1 . {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{(z+k)^{2}}}\sim \underbrace {\int _{0}^{\infty }{\frac {1}{(z+k)^{2}}}\,dk} _{={\dfrac {1}{z}}}+{\frac {1}{2z^{2}}}+\sum _{t=1}^{\infty }{\frac {B_{2t}}{z^{2t+1}}}.} Here the left-hand side is equal to ψ(1)(z), namely the first-order polygamma function defined by ψ ( 1 ) ( z ) = d 2 d z 2 log ⁡ Γ ( z ) ; {\displaystyle \psi ^{(1)}(z)={\frac {d^{2}}{dz^{2}}}\log \Gamma (z);} the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer. This results in an asymptotic expansion for ψ(1)(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function. ==== Examples ==== If s is an integer greater than 1 we have: ∑ k = 1 n 1 k s ≈ 1 s − 1 + 1 2 − 1 ( s − 1 ) n s − 1 + 1 2 n s + ∑ i = 1 B 2 i ( 2 i ) ! [ ( s + 2 i − 2 ) ! ( s − 1 ) ! − ( s + 2 i − 2 ) ! ( s − 1 ) ! n s + 2 i − 1 ] . 
{\displaystyle \sum _{k=1}^{n}{\frac {1}{k^{s}}}\approx {\frac {1}{s-1}}+{\frac {1}{2}}-{\frac {1}{(s-1)n^{s-1}}}+{\frac {1}{2n^{s}}}+\sum _{i=1}{\frac {B_{2i}}{(2i)!}}\left[{\frac {(s+2i-2)!}{(s-1)!}}-{\frac {(s+2i-2)!}{(s-1)!n^{s+2i-1}}}\right].} Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion: ∑ k = 1 n 1 k s ∼ ζ ( s ) − 1 ( s − 1 ) n s − 1 + 1 2 n s − ∑ i = 1 B 2 i ( 2 i ) ! ( s + 2 i − 2 ) ! ( s − 1 ) ! n s + 2 i − 1 . {\displaystyle \sum _{k=1}^{n}{\frac {1}{k^{s}}}\sim \zeta (s)-{\frac {1}{(s-1)n^{s-1}}}+{\frac {1}{2n^{s}}}-\sum _{i=1}{\frac {B_{2i}}{(2i)!}}{\frac {(s+2i-2)!}{(s-1)!n^{s+2i-1}}}.} For s equal to 2 this simplifies to ∑ k = 1 n 1 k 2 ∼ ζ ( 2 ) − 1 n + 1 2 n 2 − ∑ i = 1 B 2 i n 2 i + 1 , {\displaystyle \sum _{k=1}^{n}{\frac {1}{k^{2}}}\sim \zeta (2)-{\frac {1}{n}}+{\frac {1}{2n^{2}}}-\sum _{i=1}{\frac {B_{2i}}{n^{2i+1}}},} or ∑ k = 1 n 1 k 2 ∼ π 2 6 − 1 n + 1 2 n 2 − 1 6 n 3 + 1 30 n 5 − 1 42 n 7 + ⋯ . {\displaystyle \sum _{k=1}^{n}{\frac {1}{k^{2}}}\sim {\frac {\pi ^{2}}{6}}-{\frac {1}{n}}+{\frac {1}{2n^{2}}}-{\frac {1}{6n^{3}}}+{\frac {1}{30n^{5}}}-{\frac {1}{42n^{7}}}+\cdots .} When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers: ∑ k = 1 n 1 k ∼ γ + log ⁡ n + 1 2 n − ∑ k = 1 ∞ B 2 k 2 k n 2 k , {\displaystyle \sum _{k=1}^{n}{\frac {1}{k}}\sim \gamma +\log n+{\frac {1}{2n}}-\sum _{k=1}^{\infty }{\frac {B_{2k}}{2kn^{2k}}},} where γ ≈ 0.5772... is the Euler–Mascheroni constant. == Proofs == === Derivation by mathematical induction === We outline the argument given in Apostol. The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above. 
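The defining recursion B_k′(x) = kB_{k−1}(x) and the normalization ∫₀¹ B_k(x) dx = 0 introduced earlier can be checked exactly for the first few polynomials. This small sketch (illustrative; the coefficient lists are hard-coded rather than generated) verifies both properties using exact rational arithmetic:

```python
from fractions import Fraction as F

# Bernoulli polynomials B_0 .. B_4, coefficients in ascending powers of x
B = [
    [F(1)],                                # B_0(x) = 1
    [F(-1, 2), F(1)],                      # B_1(x) = x - 1/2
    [F(1, 6), F(-1), F(1)],                # B_2(x) = x^2 - x + 1/6
    [F(0), F(1, 2), F(-3, 2), F(1)],       # B_3(x) = x^3 - (3/2)x^2 + (1/2)x
    [F(-1, 30), F(0), F(1), F(-2), F(1)],  # B_4(x) = x^4 - 2x^3 + x^2 - 1/30
]

def deriv(p):
    # coefficients of the derivative of a polynomial given in ascending powers
    return [k * c for k, c in enumerate(p)][1:]

def integral_0_1(p):
    # exact integral of the polynomial over [0, 1]
    return sum(c / (k + 1) for k, c in enumerate(p))

# check B_k'(x) = k * B_{k-1}(x) and the vanishing integral, for k = 1..4
for k in range(1, 5):
    assert deriv(B[k]) == [k * c for c in B[k - 1]]
    assert integral_0_1(B[k]) == 0
```

The same checks extend to higher degrees by appending further coefficient rows.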
The first several Bernoulli polynomials are B 0 ( x ) = 1 , B 1 ( x ) = x − 1 2 , B 2 ( x ) = x 2 − x + 1 6 , B 3 ( x ) = x 3 − 3 2 x 2 + 1 2 x , B 4 ( x ) = x 4 − 2 x 3 + x 2 − 1 30 , ⋮ {\displaystyle {\begin{aligned}B_{0}(x)&=1,\\B_{1}(x)&=x-{\tfrac {1}{2}},\\B_{2}(x)&=x^{2}-x+{\tfrac {1}{6}},\\B_{3}(x)&=x^{3}-{\tfrac {3}{2}}x^{2}+{\tfrac {1}{2}}x,\\B_{4}(x)&=x^{4}-2x^{3}+x^{2}-{\tfrac {1}{30}},\\&\,\,\,\vdots \end{aligned}}} The values Bn(1) are the Bernoulli numbers Bn. Notice that for n ≠ 1 we have B n = B n ( 1 ) = B n ( 0 ) , {\displaystyle B_{n}=B_{n}(1)=B_{n}(0),} and for n = 1, B 1 = B 1 ( 1 ) = − B 1 ( 0 ) . {\displaystyle B_{1}=B_{1}(1)=-B_{1}(0).} The functions Pn agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus, P n ( 0 ) = P n ( 1 ) = B n for n ≠ 1. {\displaystyle P_{n}(0)=P_{n}(1)=B_{n}\quad {\text{for }}n\neq 1.} Let k be an integer, and consider the integral ∫ k k + 1 f ( x ) d x = ∫ k k + 1 u d v , {\displaystyle \int _{k}^{k+1}f(x)\,dx=\int _{k}^{k+1}u\,dv,} where u = f ( x ) , d u = f ′ ( x ) d x , d v = P 0 ( x ) d x since P 0 ( x ) = 1 , v = P 1 ( x ) . {\displaystyle {\begin{aligned}u&=f(x),\\du&=f'(x)\,dx,\\dv&=P_{0}(x)\,dx&{\text{since }}P_{0}(x)&=1,\\v&=P_{1}(x).\end{aligned}}} Integrating by parts, we get ∫ k k + 1 f ( x ) d x = [ u v ] k k + 1 − ∫ k k + 1 v d u = [ f ( x ) P 1 ( x ) ] k k + 1 − ∫ k k + 1 f ′ ( x ) P 1 ( x ) d x = B 1 ( 1 ) f ( k + 1 ) − B 1 ( 0 ) f ( k ) − ∫ k k + 1 f ′ ( x ) P 1 ( x ) d x . 
{\displaystyle {\begin{aligned}\int _{k}^{k+1}f(x)\,dx&={\bigl [}uv{\bigr ]}_{k}^{k+1}-\int _{k}^{k+1}v\,du\\&={\bigl [}f(x)P_{1}(x){\bigr ]}_{k}^{k+1}-\int _{k}^{k+1}f'(x)P_{1}(x)\,dx\\&=B_{1}(1)f(k+1)-B_{1}(0)f(k)-\int _{k}^{k+1}f'(x)P_{1}(x)\,dx.\end{aligned}}} Using B1(0) = −⁠1/2⁠, B1(1) = ⁠1/2⁠, and summing the above from k = 0 to k = n − 1, we get ∫ 0 n f ( x ) d x = ∫ 0 1 f ( x ) d x + ⋯ + ∫ n − 1 n f ( x ) d x = f ( 0 ) 2 + f ( 1 ) + ⋯ + f ( n − 1 ) + f ( n ) 2 − ∫ 0 n f ′ ( x ) P 1 ( x ) d x . {\displaystyle {\begin{aligned}\int _{0}^{n}f(x)\,dx&=\int _{0}^{1}f(x)\,dx+\cdots +\int _{n-1}^{n}f(x)\,dx\\&={\frac {f(0)}{2}}+f(1)+\dotsb +f(n-1)+{\frac {f(n)}{2}}-\int _{0}^{n}f'(x)P_{1}(x)\,dx.\end{aligned}}} Adding ⁠f(n) − f(0)/2⁠ to both sides and rearranging, we have ∑ k = 1 n f ( k ) = ∫ 0 n f ( x ) d x + f ( n ) − f ( 0 ) 2 + ∫ 0 n f ′ ( x ) P 1 ( x ) d x . {\displaystyle \sum _{k=1}^{n}f(k)=\int _{0}^{n}f(x)\,dx+{\frac {f(n)-f(0)}{2}}+\int _{0}^{n}f'(x)P_{1}(x)\,dx.} This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term: ∫ k k + 1 f ′ ( x ) P 1 ( x ) d x = ∫ k k + 1 u d v , {\displaystyle \int _{k}^{k+1}f'(x)P_{1}(x)\,dx=\int _{k}^{k+1}u\,dv,} where u = f ′ ( x ) , d u = f ″ ( x ) d x , d v = P 1 ( x ) d x , v = 1 2 P 2 ( x ) . {\displaystyle {\begin{aligned}u&=f'(x),\\du&=f''(x)\,dx,\\dv&=P_{1}(x)\,dx,\\v&={\tfrac {1}{2}}P_{2}(x).\end{aligned}}} The result of integrating by parts is [ u v ] k k + 1 − ∫ k k + 1 v d u = [ f ′ ( x ) P 2 ( x ) 2 ] k k + 1 − 1 2 ∫ k k + 1 f ″ ( x ) P 2 ( x ) d x = B 2 2 ( f ′ ( k + 1 ) − f ′ ( k ) ) − 1 2 ∫ k k + 1 f ″ ( x ) P 2 ( x ) d x . 
{\displaystyle {\begin{aligned}{\bigl [}uv{\bigr ]}_{k}^{k+1}-\int _{k}^{k+1}v\,du&=\left[{\frac {f'(x)P_{2}(x)}{2}}\right]_{k}^{k+1}-{\frac {1}{2}}\int _{k}^{k+1}f''(x)P_{2}(x)\,dx\\&={\frac {B_{2}}{2}}(f'(k+1)-f'(k))-{\frac {1}{2}}\int _{k}^{k+1}f''(x)P_{2}(x)\,dx.\end{aligned}}} Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula, ∑ k = 1 n f ( k ) = ∫ 0 n f ( x ) d x + f ( n ) − f ( 0 ) 2 + B 2 2 ( f ′ ( n ) − f ′ ( 0 ) ) − 1 2 ∫ 0 n f ″ ( x ) P 2 ( x ) d x . {\displaystyle \sum _{k=1}^{n}f(k)=\int _{0}^{n}f(x)\,dx+{\frac {f(n)-f(0)}{2}}+{\frac {B_{2}}{2}}{\bigl (}f'(n)-f'(0){\bigr )}-{\frac {1}{2}}\int _{0}^{n}f''(x)P_{2}(x)\,dx.} This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions. == See also == Cesàro summation Euler summation Gauss–Kronrod quadrature formula Darboux's formula Euler–Boole summation == References == == Further reading == == External links == Weisstein, Eric W. "Euler–Maclaurin Integration Formulas". MathWorld.
Wikipedia:Eureka (University of Cambridge magazine)#0
Eureka is a journal published annually by The Archimedeans, the mathematical society of Cambridge University. It is one of the oldest recreational mathematics publications still in existence. Eureka includes many mathematical articles on a variety of different topics – written by students and mathematicians from all over the world – as well as a short summary of the activities of the society, problem sets, puzzles, artwork and book reviews. Eureka has been published 66 times since 1939, and authors include many famous mathematicians and scientists such as Paul Erdős, Martin Gardner, Douglas Hofstadter, G. H. Hardy, Béla Bollobás, John Conway, Stephen Hawking, Roger Penrose, W. T. Tutte (writing with friends under the pseudonym Blanche Descartes), popular maths writer Ian Stewart, Fields Medallist Timothy Gowers and Nobel laureate Paul Dirac. The journal was formerly distributed free of charge to all current members of the Archimedeans. Today, it is published electronically as well as in print. In 2020, the publication archive was made freely available online. Eureka is edited by students from the university. Among the mathematical articles is a paper by Freeman Dyson in which he defined the rank of a partition in an effort to prove combinatorially the partition congruences earlier discovered by Srinivasa Ramanujan. In the article, Dyson made a series of conjectures that were all eventually resolved. == References == == External links == Eureka at the website of The Archimedeans Archive of old issues
Wikipedia:European Study Groups with Industry#0
A European Study Group with Industry (ESGI) is usually a week-long meeting where applied mathematicians work on problems presented by industry and research centres. The aim of the meeting is to solve or at least make progress on the problems. The study group concept originated in Oxford, in 1968 (initiated by Leslie Fox and Alan Tayler). Subsequently, the format was adopted in other European countries to form ESGIs. Currently, with a variety of names, they appear in the same or a similar format throughout the world. More specific topics have also formed the subject of focussed meetings, such as the environment, medicine and agriculture. Problems successfully tackled at study groups are discussed in a number of textbooks as well as a collection of case studies, European Success Stories in Industrial Mathematics. A guide for organising and running study groups is provided by the European Consortium for Mathematics in Industry. == European Study Group with Industry == A European Study Group with Industry or ESGI is a type of workshop where mathematicians work on problems presented by industry representatives. The meetings typically last five days, from Monday to Friday. On the Monday morning the industry representatives present problems of current interest to an audience of applied mathematicians. Subsequently, the mathematicians split into working groups to investigate the suggested topics. On the Friday solutions and results are presented back to the industry representative. After the meeting a report is prepared for the company, detailing the progress made and usually with suggestions for further work or experiments. == History == The original Study Groups with Industry started in Oxford in 1968. 
The format provided a method for initiating interaction between universities and private industry which often led to further collaboration, student projects and new fields of research (many advances in the field of free or moving boundary problems are attributed to the industrial case studies of the 1970s). Study groups were later adopted in other countries, starting in Europe and then spreading throughout the world. The subject areas have also diversified, for example the Mathematics in Medicine Study Groups, Mathematics in the Plant Sciences Study Groups, the environment, uncertainty quantification and agriculture. The academics work on the problems for free. The following have been given as motivation for this work: Discovering new problems and research areas with practical applications. The possibility of further projects and collaboration with industry. The opportunity for future funding. A number of reasons have also been quoted for companies to attend ESGIs: The possibility of a quick solution to their problem, or at least guidance on a way forward. Mathematicians can help to identify and correctly formulate a problem for further study. Access to state-of-the-art techniques. Building contacts with top researchers in a given field. ESGIs are currently an activity of the European Consortium for Mathematics in Industry. Their ESGI webpage contains details of European meetings and contact details for prospective industry or academic participants. The current co-ordinator of the ESGIs is Prof. Tim Myers of the Centre de Recerca Matemàtica, Barcelona. Between 2015 and 2019 ESGIs were eligible for funding through the COST network MI-Net (Maths for Industry Network). == List of recent meetings == Past European meetings are listed on the European Consortium for Mathematics in Industry website. International meetings are covered by the Mathematics in Industry Information Service. 
Recent ESGIs include: ESGI 150, Basque Centre for Applied Mathematics, 21–25 October 2019 ESGI 144, Warsaw, 17–22 March 2019 ESGI 145, Cambridge, 8–12 April 2019 ESGI 147, Spain, 8–12 April 2019 ESGI 152, Palanga, Lithuania, 10–14 June 2019 ESGI 155, Polytechnic Institute of Leiria, Portugal, 1–5 July 2019 ESGI 154, University of Southern Denmark, 19–23 August 2019 ESGI 148/SWI 2019, Wageningen, Netherlands, 28 January – 1 February 2019 ESGI 151, Tartu, Estonia, 4–8 February 2019 ESGI 149, Innsbruck, 4–8 March 2019 == International study groups == As well as being held throughout Europe, annual study groups take place in Australia, Brazil, Canada, India, New Zealand, United States, Russia, and South Africa. A site dedicated solely to Dutch study groups may be found at Dutch ESGI. Information on past and upcoming meetings throughout the world may be found on the Mathematics in Industry Information Service website. == Literature == There are many books on mathematical modelling, a number of them containing problems arising from ESGIs or other study groups from around the world; examples include: Practical Applied Mathematics Modelling, Analysis, Approximation Topics in Industrial Mathematics: Case Studies and Related Mathematical Methods Industrial Mathematics: A Course in Solving Real-World Problems The book European Success Stories in Industrial Mathematics contains brief descriptions of a wide variety of industrial mathematics case studies. The Mathematics in Industry Information Service contains a large repository of past reports from study groups throughout the world. A guide for organising and running study groups, the ESGI Handbook, has been developed by the Mathematics for Industry Network. == References ==
Wikipedia:Eustachy Żyliński#0
Eustachy Karol Żyliński (19 September 1889 – 4 July 1954) was a Polish mathematician and university professor known for his work on number theory, algebra, and logic. He was a member of the Lwów School of Mathematics. == Biography == === Early life and career (1889–1919) === Żyliński was born into a landless noble family. In 1907 he graduated with a gold medal from the gymnasium in Kiev, and in 1911 with a first-degree diploma from the Faculty of Physics and Mathematics of the Imperial Saint Vladimir University in Kiev, where he then worked from 1912 to 1914, while doing internships in Göttingen, Cambridge and Marburg. In 1914 he obtained a master's degree (equivalent to today's PhD). In 1916 he was drafted into the Imperial Russian army. He graduated from a military-engineering school in Kiev and an electrical engineering school in St. Petersburg, and in 1917 put himself at the disposal of the 1st Polish Corps. From 1918, as an associate professor, he lectured at the Polish University College and the Higher Technical Institute in Kiev. In 1919 he stayed in Warsaw, performing military service as an officer of the Polish Army. === Lwów School of Mathematics (1919–1945) === From October 1919, he was associate professor at the Jan Kazimierz University in Lwów; in July 1922 Żyliński was appointed full professor there, and then head of the Department of Mathematics A at the Faculty of Philosophy. In 1925, he proved that "there are exactly two binary functors (namely, binegation and the Sheffer stroke) each of which is sufficient for defining all other unary and binary functors of classical propositional logic." His colleagues included mathematicians such as Stefan Banach and Hugo Steinhaus. In 1929 he was dean of the Faculty of Mathematics and Natural Sciences of the Jagiellonian University and was the promoter of Władysław Orlicz's doctoral dissertation. 
During the Soviet occupation of Lwów he worked as the head of the Algebra Department at the University of Lwów, and during the German occupation (1941–1944) he took part in clandestine teaching at the Jagiellonian University. === Postwar career (1945–1954) === After Lwów was reoccupied by the Red Army, he started working at the university. From March 15, 1945, Żyliński was a member of the Union of Polish Patriots in Lviv but was removed from the union in 1946. After the deportation of Poles from Lviv, he initially settled in Łódź. At that time, he worked at the Ministry of Foreign Affairs, and was also nominated as consul general in Kiev, but did not assume this office. In 1947 he moved to Gliwice, where in the years 1946–1951 he was the head of the Department of Mathematics at the Faculty of Engineering and Construction of the Silesian University of Technology. In 1951 he retired and returned to Łódź. Żyliński died there in 1954 of a cerebral hemorrhage. == References ==
Wikipedia:Eva Vedel Jensen#0
Eva Bjørn Vedel Jensen (born 14 June 1951) is a Danish mathematician and statistician known for her work in spatial statistics, stereology, stochastic geometry, and medical imaging. She is a professor emeritus in the Department of Mathematical Sciences at Aarhus University. == Education and career == After earning a master's degree at Aarhus University in 1976, she became a faculty member at the university in 1979. She completed a doctorate at Aarhus in 1987, and became full professor there in 2003. == Recognition == Vedel Jensen has been an Elected Member of the International Statistical Institute since 1992, and is also a member of the Royal Danish Academy of Sciences and Letters. She won the Villum Kann Rasmussen Annual Award for Technical and Scientific Research of the Villum Foundation in 2009. She was named a knight of the Order of the Dannebrog in 2010. The University of Bern gave her an honorary doctorate in 2013. == Selected publications == Vedel Jensen is the author of books including: Local Stereology (World Scientific, 1998) Stereology for Statisticians (with Adrian Baddeley, Chapman & Hall/CRC, 2005) She has also written several highly cited papers with Hans Jørgen G. Gundersen including: Jensen, E. B.; Gundersen, H. J. G.; Østerby, R. (January 1979), "Determination of membrane thickness distribution from orthogonal intercepts", Journal of Microscopy, 115 (1): 19–33, doi:10.1111/j.1365-2818.1979.tb00149.x, PMID 423237, S2CID 24466831 Gundersen, H. J. G.; Jensen, E. B. (May 1985), "Stereological estimation of the volume-weighted mean volume of arbitrary particles observed on random sections", Journal of Microscopy, 138 (2): 127–142, doi:10.1111/j.1365-2818.1985.tb02607.x, PMID 4020857, S2CID 40399794 Gundersen, H. J. G.; Jensen, E. B. (September 1987), "The efficiency of systematic sampling in stereology and its prediction", Journal of Microscopy, 147 (3): 229–263, doi:10.1111/j.1365-2818.1987.tb02837.x, PMID 3430576, S2CID 29713041 Vedel Jensen, E. 
B.; Gundersen, H. J. G. (April 1993), "The rotator", Journal of Microscopy, 170 (1): 35–44, doi:10.1111/j.1365-2818.1993.tb03321.x, S2CID 221874026 Gundersen, H. J. G.; Jensen, E. B. V.; Kieu, K.; Nielsen, J. (March 1999), "The efficiency of systematic sampling in stereology – reconsidered", Journal of Microscopy, 193 (3): 199–211, doi:10.1046/j.1365-2818.1999.00457.x, PMID 10348656, S2CID 35784656 == References == == External links == Eva Vedel Jensen at the Mathematics Genealogy Project
Wikipedia:Evarist Giné#0
Evarist Giné-Masdéu (July 31, 1944 – March 13, 2015), or simply Evarist Giné, was a Catalan mathematician and statistician. He is known for his pioneering works in probability in Banach spaces, empirical process theory, U-statistics and processes, and nonparametric statistics. == Education and career == Giné was born in Falset in Catalonia. He studied at the University of Barcelona, obtaining a Licenciatura degree in 1967. He went to the United States and completed his PhD at the Massachusetts Institute of Technology in 1973 under the supervision of Richard M. Dudley. He was a lecturer in statistics at University of California, Berkeley from 1974 to 1975. He spent time afterwards at the Venezuelan Institute for Scientific Research, where he was the head of the mathematics department, before moving back to the United States. In 1983, Giné became a professor at Texas A&M University and later moved to College of Staten Island of the City University of New York in 1988. He became a professor of mathematics at the University of Connecticut in 1990, and was the head of the department of mathematics from 2012. He stayed at the University of Connecticut until his death. == Bibliography == Araujo, Aloisio; Giné, Evarist (1980). The central limit theorem for real and Banach valued random variables. Wiley series in probability and mathematical statistics. New York: Wiley. ISBN 978-0-471-05304-0. Giné, Evarist; Grimmett, Geoffrey R.; Saloff-Coste, Laurent (1997). Lectures on Probability Theory and Statistics: Ecole d'Ete de Probabilites de Saint-Flour XXVI. Springer Berlin Heidelberg. ISBN 978-3-540-63190-3. de la Peña, Víctor H.; Giné, Evarist (1999). Decoupling: From Dependence to Independence. Springer New York. ISBN 978-1-4612-6808-6. Giné, Evarist; Koltchinskii, Vladimir; Norvaisa, Rimas, eds. (2010). Selected Works of R.M. Dudley. Springer New York. doi:10.1007/978-1-4419-5821-1. ISBN 978-1-4419-5820-4. Giné, Evarist; Nickl, Richard (18 November 2015). 
Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press. ISBN 978-1-107-04316-9. == References ==
Wikipedia:Eve Oja#0
Eve Oja (10 October 1948 – 27 January 2019) was an Estonian mathematician specializing in functional analysis. She was a professor at the University of Tartu. == Early life and education == Oja was born in Tallinn and studied at the Tartu State University (now the University of Tartu), completing her undergraduate studies in 1972 and earning a doctorate (Cand.Sc.) in 1975. Her dissertation, Безусловные шаудеровы разложения в локально выпуклых пространствах (Unconditional Schauder decompositions in locally convex spaces), was supervised by Gunnar Kangro. == Career == Oja was on the faculty of the University of Tartu from 1975, with a year (1977–78) teaching in Mali, and another (1980–81) doing postdoctoral research at Aix-Marseille University in France. She served several terms as head of the Institute of Pure Mathematics at the university, and from 2009 to 2015 she headed the Estonian School of Mathematics and Statistics. She was editor-in-chief of the mathematics journal Acta et Commentationes Universitatis Tartuensis de Mathematica from 1998. == Recognition == Oja was elected to the Estonian Academy of Sciences in 2010. She was also a member of the European Academy of Sciences and Arts. == Death == Oja died on 27 January 2019. == References ==
Wikipedia:Eve Torrence#0
Eve Alexandra Littig Torrence (born 1963) is an American mathematician, a professor emerita of mathematics at Randolph–Macon College, and a former president of the mathematics society Pi Mu Epsilon. She is known for her award-winning writing and books in mathematics, for her mathematical origami art, and for her efforts debunking overly broad claims regarding the ubiquity of the golden ratio. == Education, career, and service == Torrence was an undergraduate at Tufts University. She completed her Ph.D. in 1991 at the University of Virginia; her dissertation, The Coordination of a Hexagonal-Barbilian Plane by a Quadratic Jordan Algebra, was supervised by John Faulkner. She was Clare Boothe Luce assistant professor at Trinity Washington University from 1991 to 1994, before joining the Randolph–Macon College faculty in 1994. She earned tenure there in 1999, and became a full professor in 2008. She retired in 2021, and was given the Bruce M. Unger Award by Randolph–Macon College on the occasion of her retirement. She served as president of Pi Mu Epsilon, the US national honor society in mathematics, from 2011 to 2014. The Maryland-District of Columbia-Virginia Section of the Mathematical Association of America gave her their Sister Helen Christensen Service Award in 2019. 
== Selected works == Torrence won the 2007 Trevor Evans Award of the Mathematical Association of America for a paper she wrote with Adrian Rice on Dodgson condensation: Rice, Adrian; Torrence, Eve (November 2006), "Lewis Carroll's Condensation Method for Evaluating Determinants" (PDF), Math Horizons, 14 (2): 12–15, doi:10.1080/10724117.2006.11974674, JSTOR 25678651, S2CID 125114713 Her books include: Torrence, Bruce F.; Torrence, Eve (1999), The Student's Introduction to Mathematica: A Handbook for Precalculus, Calculus, and Linear Algebra, Cambridge University Press Torrence, Eve (2011), Cut and Assemble Icosahedra: Twelve Models in White and Color, Dover Publications A sculpture, "Sunshine", by Torrence is displayed in a Randolph–Macon College building lobby; it depicts the compound of five tetrahedra as five interlocked aluminum shapes, inspired by an origami version of the same compound folded by Tom Hull. She also won the "Best in Show" award in a 2015 juried mathematical art exhibit, for her pieces titled "Day" and "Night", mathematical origami using folded cardstock rhombi to make hyperbolic paraboloid surfaces, connected in the pattern of a rhombic dodecahedron: Torrence, Eve (2014), "Making Sunshine: A First Geometric Sculpture", in Greenfield, Gary; Hart, George; Sarhangi, Reza (eds.), Proceedings of Bridges 2014: Mathematics, Music, Art, Architecture, Culture, Phoenix, Arizona: Tessellations Publishing, pp. 461–464, ISBN 978-1-938664-11-3 Torrence, Eve (2015), "Day and Night", Mathematical Art Galleries: 2015 bridges conference, The Bridges Organization == References ==
Wikipedia:Evectant#0
In mathematical invariant theory, an evectant is a contravariant constructed from an invariant by acting on it with a differential operator called an evector. Evectants and evectors were introduced by Sylvester (1854, p.95). == References == Sylvester, James Joseph (1853), "On the calculus of forms, otherwise the theory of invariants", The Cambridge and Dublin Mathematical Journal, 8: 257–269 Sylvester, James Joseph (1854), "On the calculus of forms, otherwise the theory of invariants", The Cambridge and Dublin Mathematical Journal, 9: 85–103
Wikipedia:Evelyn Buckwar#0
Evelyn Buckwar is a German mathematician specializing in stochastic differential equations. She is Professor for Stochastics at the Johannes Kepler University Linz in Austria. == Education == Buckwar earned a diploma in mathematics in 1992 from the Free University of Berlin, and completed her doctorate there in 1997. Her dissertation, Iterative Approximation of the Positive Solutions of a Class of Nonlinear Volterra-type Integral Equations, was supervised by Rudolf Gorenflo. == Career == After working as a Marie Curie Fellow at the University of Manchester and then as a researcher at the Humboldt University of Berlin, where she completed a habilitation in 2005, she became a visiting professor at Otto von Guericke University Magdeburg and Technische Universität Berlin before becoming a lecturer at Heriot-Watt University in 2007. She took her present position at the Johannes Kepler University Linz in 2011. == References == == External links == Evelyn Buckwar publications indexed by Google Scholar
Wikipedia:Evelyn Prescott Wiggin#0
Evelyn Prescott Wiggin (1900–1964) was an American mathematician and university professor. She was one of the few women to earn a PhD in mathematics in the United States before World War II. == Early life == Evelyn Prescott Wiggin was born March 1, 1900, in Stratham, New Hampshire, to Margaret Prescott Green and George Herbert Wiggin. Her mother died shortly after giving birth to her second daughter in 1905. She attended the Robinson Seminary public secondary school in Exeter, New Hampshire, and then enrolled in Wellesley College in 1917, where she was a Durant scholar. She graduated as a mathematics major in 1921 and immediately began teaching mathematics at the Massena High School in New York. On March 28, 1922, Wiggin received a letter from professor Roland George Dwight Richardson at Brown University, who, "at the suggestion of my good friends, Misses [Helen A.] Merrill and [Clara E.] Smith at Wellesley," invited her to apply for a graduate assistant position in the mathematics department at Brown University. He noted that he usually had "four or five students studying for master's degrees each year, half of whom were women." Wiggin took the position and began work toward her M.A. degree in the autumn of 1922. As Richardson's assistant, she also taught classes of girls who were deficient in algebra and geometry. She completed her master's degree in 1924. For the next three years, Wiggin taught mathematics at Hood College in Frederick, Maryland, and in 1927, she received another note from Richardson encouraging her to continue graduate work, saying, "If you care to take some course for credit in absentia here at Brown, I think it could be arranged. I know you will want to keep on with your studies in some sort of way. Possibly you can go to Chicago some summer." She enrolled at the University of Chicago that same year. 
== Career == In 1929, Wiggin joined the faculty of Randolph-Macon Woman's College (R-MWC) (now Randolph College) in Lynchburg, Virginia, as an associate professor. On March 31, 1931, her Randolph colleague Gillie Larew wrote to Gilbert Ames Bliss, who had been Larew's own doctoral advisor at the University of Chicago, saying, "I want to thank you again for sending us Miss Wiggin and express the hope that we may keep her for a long, long time. She is everything she should be both as teacher and as a person." In 1935, Wiggin returned to the University of Chicago for a year and finished her dissertation, titled A Boundary Value Problem of the Calculus of Variations, supervised by Gilbert Ames Bliss and William Thomas Reid. She was awarded her doctorate in 1936 and rejoined Randolph-Macon, where, in 1941, she was promoted to full professor. She remained there until her retirement, though she did take several leaves to teach elsewhere, including at Wellesley College, Emory University and the University of Chicago. According to Judy Green, Wiggin belonged to several professional societies: the American Mathematical Society (elected 1923), the Mathematical Association of America, Sigma Delta Epsilon, the American Association of University Women, the American Association of University Professors, and Phi Beta Kappa. == Personal life == On June 20, 1956, at age 56, Evelyn Wiggin married Sidney Casner, a retired lawyer, in Chicago, and took the name Evelyn Wiggin Casner. Sidney Casner died around 1962. Wiggin died at the University of Virginia Hospital in Charlottesville, Virginia, at the age of 64 on November 5, 1964. == Selected publications == Wiggin, E. P., A boundary value problem of the calculus of variations. In Contributions to the Calculus of Variations, 1933-37, 243-75. Chicago: University of Chicago Press. Published version of PhD dissertation. Reviews: JFM 63.0483.03 (H. Boerner); Zbl 017.36203 (L. M. Graves). Review of volume: Bull. Amer. Math. Soc. 44:604-09 (A. Dresden). 
Published 1937. Wiggin, E., "The value of mathematics in a liberal education". Math. Mag. 19:418, 1945. == References ==
Wikipedia:Evgenii Landis#0
Evgenii Mikhailovich Landis (Russian: Евге́ний Миха́йлович Ла́ндис, Yevgeny Mikhaylovich Landis; 6 October 1921 – 12 December 1997) was a Soviet mathematician who worked mainly on partial differential equations. == Life == Landis was born in Kharkiv, Ukrainian SSR, Soviet Union. He was Jewish. He studied and worked at the Moscow State University, where his advisor was Alexander Kronrod, and later Ivan Petrovsky. In 1946, together with Kronrod, he rediscovered Sard's lemma, unknown in the USSR at the time. Later, he worked on uniqueness theorems for elliptic and parabolic differential equations, Harnack inequalities, and Phragmén–Lindelöf type theorems. With Georgy Adelson-Velsky, he invented the AVL tree data structure (where "AVL" stands for Adelson-Velsky and Landis). He died in Moscow. His students include Yulij Ilyashenko. == External links == Evgenii Mikhailovich Landis at the Mathematics Genealogy Project Biography of Y.M. Landis at the International Centre for Mathematical Sciences.
Wikipedia:Evgenii Nikishin#0
Evgenii Mikhailovich Nikishin (Евгений Михайлович Никишин; 23 June 1945, in Penza Oblast – 17 December 1986) was a Russian mathematician, who specialized in harmonic analysis. == Biography == Nikishin earned his candidate doctorate at Moscow State University at the age of 24, becoming the youngest candidate of sciences in the history of MSU, and in 1971 completed his habilitation (Russian doctorate) at the Steklov Institute under Pyotr Ulyanov (1928–2006). In 1977 he became a professor at Moscow State University, where he remained until his death after a long battle with cancer. He worked on approximation theory, especially Padé approximants. Nikishin systems of functions are named after him. Also named in his honour is the Nikishin-Stein factorisation theorem, a 1970 generalization by Nikishin of the Stein factorisation theorem. Nikishin also did research on rational approximations in number theory and wrote a monograph on such approximations in a unified approach that also treated rational approximations in function spaces. In 1972 he won the Lenin Komsomol Prize, and in 1973 he won the Salem Prize, which is awarded every year to a young mathematician judged to have done outstanding work worldwide. In 1978 he was an Invited Speaker (The Padé Approximants) at the International Congress of Mathematicians in Helsinki. Nikishin was a longtime friend and colleague of Anatoly Fomenko, with whom he worked on developing a revised historical chronology. == Selected publications == Nikishin, E. M. (1970). "Resonance theorems and super linear operators". Russian Mathematical Surveys. 25 (6): 125–187. Bibcode:1970RuMaS..25..125N. doi:10.1070/rm1970v025n06abeh001270. with Vladimir Nikolaevich Sorokin: Rational approximations and orthogonality. AMS. 1991. == References == == External links == Mathnet.ru
Wikipedia:Evgeny Golod#0
Evgenii Solomonovich Golod (Russian: Евгений Соломонович Голод, 21 October 1935 – 5 July 2018) was a Russian mathematician who proved the Golod–Shafarevich theorem on class field towers. As an application, he gave a negative solution to the Kurosh–Levitzky problem on the nilpotency of finitely generated nil algebras, and so to a weak form of Burnside's problem. Golod was a student of Igor Shafarevich. As of 2015, Golod had 39 academic descendants, most of them through his student Luchezar L. Avramov. == Selected publications == Golod, E.S; Shafarevich, I.R. (1964), "On the class field tower", Izv. Akad. Nauk SSSR (in Russian), 28: 261–272, MR 0161852 Golod, E.S (1964), "On nil-algebras and finitely approximable p-groups.", Izv. Akad. Nauk SSSR (in Russian), 28: 273–276, MR 0161878 == References ==
Wikipedia:Evgeny Moiseev#0
Evgeny Moiseev (Russian: Евге́ний Моисе́ев, IPA: [evgeˈnij moiˈsejev] ; 7 March 1948 – 25 December 2022) was a Russian mathematician, academician of the Russian Academy of Sciences, Dean of the Faculty of Computational Mathematics and Cybernetics at Moscow State University (MSU CMC), Head of the Department of Functional Analysis and its Applications at MSU CMC, Professor, Dr.Sc. == Biography == Evgeny Moiseev was born in Odintsovo, Moscow region on 7 March 1948, and attended a school with specialized training in programming in Reutov. In 1965, after graduating from high school, he entered Moscow State University, the Faculty of Physics. After graduating from the Faculty of Physics in 1971, he became a postgraduate student at the MSU Faculty of Computational Mathematics and Cybernetics and received his Candidate of Sciences (PhD) degree in Physics and Mathematics in 1974 for a thesis entitled «On the uniqueness of solutions of the second boundary value problem for an elliptic equation». Moiseev worked at the Faculty of Computational Mathematics and Cybernetics from 1974. He was an assistant (1974-1979), an assistant professor (1979-1983), a professor at the Department of General Mathematics (1983-2008). He was awarded a degree of Doctor of Science in Physics and Mathematics for his doctoral thesis «Some problems of mixed type equations spectral theory» in 1981. In 1999 Evgeny Moiseev was appointed Dean of the Faculty of Computational Mathematics and Cybernetics. Since 2008 he has been a professor and the Head of the Department of Functional Analysis and its Applications. He has also worked part-time at the Computational center of the Russian Academy of Sciences, most recently in the position of Chief Researcher. 
At Moscow State University, Evgeny Moiseev delivered the following lecture courses: Functional Analysis, Mathematical Analysis, Applied Functional Analysis, Mixed Equations, Singular Integral Equations, and Spectral Methods for Non-Classical Mathematical Physics Problems Solution. He also conducted special seminars. Moiseev supervised 7 Doctors of Science and 15 PhDs in Mathematics and Physics. Moiseev headed the MSU Young Researchers Council for five years (1983–1988). He worked as Academic Secretary of the CMC Council. He was a Deputy Chairman of the Expert Council of the Higher Attestation Commission, the Editor in Chief of the journal “Integral Transforms and Special Functions”, the Editor in Chief of the series “Computational Mathematics and Cybernetics” in “MSU Vestnik”, and an editorial board member of the journals “Differential Equations” and “RFBR Vestnik”. Moiseev died on 25 December 2022, at the age of 74. == Research career == Moiseev's research spans areas including computer science, mathematical modeling, spectral theory, and differential equations. He found the sectors in the complex plane which encompass the Tricomi problem spectrum for mixed equations in gas dynamics theory. The solutions of the Tricomi, Frankl and Gellerstedt problems have been efficiently presented in the form of biorthogonal series for both the two-dimensional and three-dimensional cases. He also researched the basis property of the relevant root systems. Moiseev developed different methods for solving boundary value problems with non-local boundary conditions arising in turbulent plasma theory. He solved the problem of determining the functional dependence of the Riemann space-time coordinates on the Minkowski space coordinates. He obtained the representation of forced oscillations in a coaxial layered waveguide in the form of finite sums of normal and adjoined waves and proved the possibility of approximation with such sums. 
In the theory of hyperbolic problems with boundary control, Moiseev solved the J.-L. Lions problem of a priori estimation of the function gradient. In later years, Evgeny Moiseev (in collaboration with Vladimir Il'in) published a great number of works on optimal boundary control of string oscillations with shift or elastic force. == Awards and honours == Evgeny Moiseev was awarded top national and international honours and prizes: Full Member of the International Higher Education Academy of Sciences (1994); Corresponding Member of the Russian Academy of Sciences (1997); Honorary Professor at Moscow State University (2001); Honorary Professor at Eurasian University (2001); Academician of the Russian Academy of Sciences (2003); Honorary Doctor at Eurasian University (Astana, Kazakhstan, 2004); Lenin Komsomol Prize in science and technology (1980); MSU Lomonosov Prize (1994); Medal "In Commemoration of the 850th Anniversary of Moscow" (1997); Order of Friendship (2005). == Main scientific publications == He published more than 140 research papers and 17 monographs. == References == == External links == Evgenij Moiseev on the website of the Russian Academy of Sciences (in Russian) Evgenij Moiseev — scientific works on the website Math-Net.Ru (in English) Biography of Evgenij Moiseev on the website of the MSU Faculty of Computational Mathematics and Cybernetics (in Russian) Evgenij Moiseev — scientific works on the website ISTINA MSU (in Russian)
Wikipedia:Evgeny Tyrtyshnikov#0
Evgeny Tyrtyshnikov (Russian: Евге́ний Евге́ньевич Тырты́шников) (born 1955) is a Russian mathematician, Dr.Sc., Professor, Academician of the Russian Academy of Sciences, and a professor at the Faculty of Computer Science at Moscow State University. He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1977, and has been working at Moscow State University since 2004. He defended the thesis "Matrices of the Toeplitz type and their applications" for the degree of Doctor of Physical and Mathematical Sciences (1990). He was awarded the title of Professor (1996) and was elected Corresponding Member of the Russian Academy of Sciences (2006) and Academician of the Russian Academy of Sciences (2016). He is the author of 12 books and more than 130 scientific articles. Research interests: linear algebra and its applications, asymptotic analysis of matrix spectra, integral equations of mathematical physics, and computational methods. == References == == Bibliography == Evgeny Grigoriev (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (print run of 1,500). Moscow: Publishing house of Moscow University. pp. 205–206. ISBN 978-5-211-05838-5. == External links == Russian Academy of Sciences (in Russian) Annals of the Moscow University (in Russian) MSU CMC (in Russian) Scientific works of Evgeny Tyrtyshnikov Scientific works of Evgeny Tyrtyshnikov (in English)
Wikipedia:Ewa Ligocka#0
Ewa Ligocka (13 October 1947 – 28 October 2022) was a Polish mathematician specializing in complex analysis, and a political activist. == Early life and education == Ligocka was born in Katowice on 13 October 1947, the daughter of Polish photography critic and historian Alfred Ligocki. As a high school student under the tutelage of Teodor Paliczka, she competed for Poland in the International Mathematical Olympiad in 1965. She earned a master's degree at the University of Warsaw in 1970, and completed a Ph.D. there in 1973 under the supervision of Wiesław Żelazko. During this period, her research concerned the theory of analytic functions on topological vector spaces. The story goes that, in 1972, she plucked and cooked the goose given to Per Enflo as the prize for solving Mazur's goose problem. == Career and later life == After completing her doctorate, Ligocka continued as a researcher at the University of Warsaw. As an assistant professor in 1976, she signed an open letter of protest regarding the June 1976 protests in Radom and Ursus. Despite the efforts of other mathematicians to protect her, this protest led to her transfer to a branch campus of the university in Białystok and then, in 1977, her dismissal from the university. Meanwhile, she had begun working with Maciej Skwarczyński on the Bergman kernel, and by 1978 she began her research with Massachusetts Institute of Technology student Steven R. Bell on Fefferman's theorem on the smooth extension of biholomorphisms to the boundaries of their domains. This work, published in Inventiones Mathematicae in 1980, already created a stir in Polish mathematics in the late 1970s, and in 1979 she was hired by Czesław Olech as a researcher at the Institute of Mathematics of the Polish Academy of Sciences, without any political restrictions. She completed a habilitation in 1986, and in 1992 returned to the University of Warsaw as an associate professor. She was given the degree of professor in 1994. 
She retired in 2008, and died on 28 October 2022. == Recognition == Ligocka was the 1986 recipient of the Stanisław Zaremba Grand Prize of the Polish Mathematical Society. She and Steven R. Bell received the 1991 Stefan Bergman Prize of the American Mathematical Society, given for their work on Fefferman's theorem. == References ==
Wikipedia:Examples of anonymous functions#0
In computer programming, an anonymous function (function literal, expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church in his invention of the lambda calculus, in which all functions are anonymous, in 1936, before electronic computers. In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. == Examples of anonymous functions == Numerous languages support anonymous functions, or something similar. === APL === Only some dialects support anonymous functions, either as dfns, in the tacit style or a combination of both. === C (non-standard extension) === Anonymous functions are not supported by the standard C programming language, but are supported by some C dialects, such as GCC and Clang. ==== GCC ==== The GNU Compiler Collection (GCC) supports anonymous functions, built by combining nested functions and statement expressions. It has the form: The following example works only with GCC. Because of how macros are expanded, the l_body cannot contain any commas outside of parentheses; GCC treats the comma as a delimiter between macro arguments. 
The argument l_ret_type can be removed if __typeof__ is available; in the example below using __typeof__ on array would return testtype *, which can be dereferenced for the actual value if needed. ==== Clang (C, C++, Objective-C, Objective-C++) ==== Clang supports anonymous functions, called blocks, which have the form: The type of the blocks above is return_type (^)(parameters). Using the aforementioned blocks extension and Grand Central Dispatch (libdispatch), the code could look simpler: The code with blocks should be compiled with -fblocks and linked with -lBlocksRuntime === C++ (since C++11) === C++11 supports anonymous functions (technically function objects), called lambda expressions, which have the form: where "specs" is of the form "specifiers exception attr trailing-return-type in that order; each of these components is optional". If it is absent, the return type is deduced from return statements as if for a function with declared return type auto. This is an example lambda expression: C++11 also supports closures, here called captures. Captures are defined between square brackets [and ] in the declaration of lambda expression. The mechanism allows these variables to be captured by value or by reference. The following table demonstrates this: Variables captured by value are constant by default. Adding mutable after the parameter list makes them non-constant. C++14 and newer versions support init-capture, for example: The following two examples demonstrate use of a lambda expression: This computes the total of all elements in the list. The variable total is stored as a part of the lambda function's closure. Since it is a reference to the stack variable total, it can change its value. This will cause total to be stored as a reference, but value will be stored as a copy. The capture of this is special. It can only be captured by value, not by reference. 
However, in C++17, the current object can be captured by value (denoted by *this), or can be captured by reference (denoted by this). this can only be captured if the closest enclosing function is a non-static member function. The lambda will have the same access as the member that created it, in terms of protected/private members. If this is captured, either explicitly or implicitly, then the scope of the enclosed class members is also tested. Accessing members of this does not need explicit use of this-> syntax. The specific internal implementation can vary, but the expectation is that a lambda function that captures everything by reference will store the actual stack pointer of the function it is created in, rather than individual references to stack variables. However, because most lambda functions are small and local in scope, they are likely candidates for inlining, and thus need no added storage for references. If a closure object containing references to local variables is invoked after the innermost block scope of its creation, the behaviour is undefined. Lambda functions are function objects of an implementation-dependent type; this type's name is only available to the compiler. If the user wishes to take a lambda function as a parameter, the parameter type must be a template type, or they must create a std::function or a similar object to capture the lambda value. The use of the auto keyword can help store the lambda function. Here is an example of storing anonymous functions in variables, vectors, and arrays; and passing them as named parameters: A lambda expression with an empty capture specification ([]) can be implicitly converted into a function pointer with the same type as the lambda was declared with. So this is legal: Since C++17, a lambda can be declared constexpr, and since C++20, consteval with the usual semantics. These specifiers go after the parameter list, like mutable. 
Starting from C++23, the lambda can also be static if it has no captures. The static and mutable specifiers are not allowed to be combined. Also since C++23 a lambda expression can be recursive through explicit this as first parameter: In addition to that, C++23 modified the syntax so that the parentheses can be omitted in the case of a lambda that takes no arguments even if the lambda has a specifier. It also made it so that an attribute specifier sequence that appears before the parameter list, lambda specifiers, or noexcept specifier (there must be one of them) applies to the function call operator or operator template of the closure type. Otherwise, it applies to the type of the function call operator or operator template. Previously, such a sequence always applied to the type of the function call operator or operator template of the closure type making e.g the [[noreturn]] attribute impossible to use with lambdas. The Boost library provides its own syntax for lambda functions as well, using the following syntax: Since C++14, the function parameters of a lambda can be declared with auto. The resulting lambda is called a generic lambda and is essentially an anonymous function template since the rules for type deduction of the auto parameters are the rules of template argument deduction. As of C++20, template parameters can also be declared explicitly with the following syntax: === C# === In C#, support for anonymous functions has deepened through the various versions of the language compiler. 
The language v3.0, released in November 2007 with .NET Framework v3.5, has full support of anonymous functions. C# names them lambda expressions, following the original version of anonymous functions, the lambda calculus. For example:

// the first int is x's type
// the second int is the return type
// <see href="http://msdn.microsoft.com/en-us/library/bb549151.aspx" />
Func<int,int> foo = x => x * x;
Console.WriteLine(foo(7));

While the function is anonymous, it cannot be assigned to an implicitly typed variable, because the lambda syntax may be used for denoting an anonymous function or an expression tree, and the choice cannot automatically be decided by the compiler. E.g., this does not work: However, a lambda expression can take part in type inference and can be used as a method argument, e.g. to use anonymous functions with the Map capability available with System.Collections.Generic.List (in the ConvertAll() method): Prior versions of C# had more limited support for anonymous functions. C# v1.0, introduced in February 2002 with the .NET Framework v1.0, provided partial anonymous function support through the use of delegates. This construct is somewhat similar to PHP delegates. In C# 1.0, delegates are like function pointers that refer to an explicitly named method within a class. (But unlike PHP, the name is unneeded at the time the delegate is used.) C# v2.0, released in November 2005 with the .NET Framework v2.0, introduced the concept of anonymous methods as a way to write unnamed inline statement blocks that can be executed in a delegate invocation. C# 3.0 continues to support these constructs, but also supports the lambda expression construct. 
This example will compile in C# 3.0, and exhibits the three forms: In the case of the C# 2.0 version, the C# compiler takes the code block of the anonymous function and creates a static private function. Internally, the function gets a generated name, of course; this generated name is based on the name of the method in which the Delegate is declared. But the name is not exposed to application code except by using reflection. In the case of the C# 3.0 version, the same mechanism applies. === ColdFusion Markup Language (CFML) === Using the function keyword: Or using an arrow function: CFML supports any statements within the function's definition, not simply expressions. CFML supports recursive anonymous functions: CFML anonymous functions implement closure. === D === D uses inline delegates to implement anonymous functions. The full syntax for an inline delegate is: If unambiguous, the return type and the keyword delegate can be omitted. Since version 2.0, D allocates closures on the heap unless the compiler can prove it is unnecessary; the scope keyword can be used for forcing stack allocation. Since version 2.058, it is possible to use shorthand notation: An anonymous function can be assigned to a variable and used like this: === Dart === Dart supports anonymous functions. === Delphi === Delphi introduced anonymous functions in version 2009. === PascalABC.NET === PascalABC.NET supports anonymous functions using lambda syntax. === Elixir === Elixir uses the closure fn for anonymous functions. === Erlang === Erlang uses a syntax for anonymous functions similar to that of named functions. === Go === Go supports anonymous functions. === Haskell === Haskell uses a concise syntax for anonymous functions (lambda expressions). The backslash is supposed to resemble λ. 
Lambda expressions are fully integrated with the type inference engine, and support all the syntax and features of "ordinary" functions (except for the use of multiple definitions for pattern-matching, since the argument list is only specified once). The following are all equivalent: === Haxe === In Haxe, anonymous functions are called lambda, and use the syntax function(argument-list) expression; . === Java === Java supports anonymous functions, named Lambda Expressions, starting with JDK 8. A lambda expression consists of a comma separated list of the formal parameters enclosed in parentheses, an arrow token (->), and a body. Data types of the parameters can always be omitted, as can the parentheses if there is only one parameter. The body can consist of one statement or a statement block. Lambda expressions are converted to "functional interfaces" (defined as interfaces that contain only one abstract method in addition to one or more default or static methods), as in the following example: In this example, a functional interface called IntegerMath is declared. Lambda expressions that implement IntegerMath are passed to the apply() method to be executed. Default methods like swap define methods on functions. Java 8 introduced another mechanism named method reference (the :: operator) to create a lambda on an existing method. A method reference does not indicate the number or types of arguments because those are extracted from the abstract method of the functional interface. In the example above, the functional interface IntBinaryOperator declares an abstract method int applyAsInt(int, int), so the compiler looks for a method int sum(int, int) in the class java.lang.Integer. ==== Differences compared to Anonymous Classes ==== Anonymous classes of lambda-compatible interfaces are similar, but not exactly equivalent, to lambda expressions. 
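A compact sketch combining the mechanisms described in this section: a functional interface implemented both by an anonymous class and by a lambda expression, plus a method reference. The interface and method names mirror the ones mentioned in the text, but the code itself is illustrative.

```java
import java.util.function.IntBinaryOperator;

public class LambdaDemo {
    // A functional interface: exactly one abstract method.
    interface IntegerMath {
        int operation(int a, int b);
    }

    static int apply(int a, int b, IntegerMath op) {
        return op.operation(a, b);
    }

    public static void main(String[] args) {
        // The same interface implemented three ways.
        IntegerMath anonymousClass = new IntegerMath() {
            @Override
            public int operation(int a, int b) { return a + b; }
        };
        IntegerMath lambdaExpression = (a, b) -> a + b;
        IntBinaryOperator methodReference = Integer::sum;

        if (apply(40, 2, anonymousClass) != 42) throw new AssertionError();
        if (apply(40, 2, lambdaExpression) != 42) throw new AssertionError();
        if (methodReference.applyAsInt(40, 2) != 42) throw new AssertionError();
        System.out.println(apply(40, 2, lambdaExpression)); // prints 42
    }
}
```

All three values behave identically here; the differences lie in allocation and compilation strategy, as discussed below.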
To illustrate, in the following example, anonymousClass and lambdaExpression are both instances of IntegerMath that add their two parameters: The main difference here is that the lambda expression does not necessarily need to allocate a new instance for the IntegerMath, and can return the same instance every time this code is run. Additionally, in the OpenJDK implementation at least, lambdas are compiled to invokedynamic instructions, with the lambda body inserted as a static method into the surrounding class, rather than generating a new class file entirely. ==== Java limitations ==== Java 8 lambdas have the following limitations: Lambdas can throw checked exceptions, but such lambdas will not work with the interfaces used by the Collection API. Variables that are in-scope where the lambda is declared may only be accessed inside the lambda if they are effectively final, i.e. if the variable is not mutated inside or outside of the lambda scope. === JavaScript === JavaScript/ECMAScript supports anonymous functions. ES6 supports "arrow function" syntax, where a => symbol separates the anonymous function's parameter list from the body: This construct is often used in Bookmarklets. For example, to change the title of the current document (visible in its window's title bar) to its URL, the following bookmarklet may seem to work. However, as the assignment statement returns a value (the URL itself), many browsers actually create a new page to display this value. Instead, an anonymous function, that does not return a value, can be used: The function statement in the first (outer) pair of parentheses declares an anonymous function, which is then executed when used with the last pair of parentheses. This is almost equivalent to the following, which populates the environment with f unlike an anonymous function. 
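The arrow-function, immediately-invoked, and void idioms discussed in this section can be sketched as follows (the variable names are illustrative):

```javascript
// An ES6 arrow function assigned to a variable.
const square = x => x * x;
console.assert(square(7) === 49);

// A classic anonymous function expression, invoked immediately (an IIFE),
// so no name is added to the enclosing scope.
const total = (function (a, b) {
  return a + b;
})(40, 2);
console.assert(total === 42);

// void discards the value of an expression, as in the bookmarklet idiom.
console.assert(void (1 + 1) === undefined);
```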
Use void() to avoid new pages for arbitrary anonymous functions: or just: JavaScript has syntactic subtleties in the semantics of defining, invoking and evaluating anonymous functions. These subtle nuances are a direct consequence of how parenthetical expressions are evaluated. The following constructs, called immediately-invoked function expressions, illustrate this: and Representing "function(){ ... }" by f, the forms of the constructs are a parenthetical within a parenthetical, (f()), and a parenthetical applied to a parenthetical, (f)(). Note the general syntactic ambiguity between a parenthetical expression, parenthesized arguments to a function, and the parentheses around the formal parameters in a function definition. In particular, JavaScript defines a , (comma) operator in the context of a parenthetical expression. It is no mere coincidence that the syntactic forms coincide for an expression and a function's arguments (ignoring the formal parameter syntax). If f is not identified in the constructs above, they become (()) and ()(). The first provides no syntactic hint of any resident function, but the second must evaluate the first parenthetical as a function to be legal JavaScript. (For instance, the ()'s could be ([],{},42,"abc",function(){}), as long as the expression evaluates to a function.) Also, a function is an Object instance, and the braces of a function body defined this way (as opposed to with new Function(...)) resemble the brackets of object literal notation. In a very broad, non-rigorous sense (especially since global bindings are compromised), an arbitrary sequence of braced JavaScript statements, {stuff}, can be considered to be a fixed point of More correctly but with caveats, Note the implications of the anonymous function in the JavaScript fragments that follow: function(){ ... }() without surrounding ()'s is generally not legal (f=function(){ ...
}) does not "forget" f globally unlike (function f(){ ... }) These last anonymous function constructs make it easy to implement performance metrics that analyze the space and time complexity of function calls, the call stack, etc. in a JavaScript interpreter engine. From the results, it is possible to deduce some of an engine's recursive versus iterative implementation details, especially tail-recursion. === Julia === In Julia anonymous functions are defined using the syntax (arguments)->(expression), === Kotlin === Kotlin supports anonymous functions with the syntax {arguments -> expression}, === Lisp === Lisp and Scheme support anonymous functions using the "lambda" construct, which is a reference to lambda calculus. Clojure supports anonymous functions with the "fn" special form and #() reader syntax. ==== Common Lisp ==== Common Lisp has the concept of lambda expressions. A lambda expression is written as a list with the symbol "lambda" as its first element. The list then contains the argument list, documentation or declarations, and a function body. Lambda expressions can be used inside lambda forms and with the special operator "function". "function" can be abbreviated as #'. There is also a lambda macro, which expands into a function form: One typical use of anonymous functions in Common Lisp is to pass them to higher-order functions like mapcar, which applies a function to each element of a list and returns a list of the results.
The lambda form in Common Lisp allows a lambda expression to be written in a function call: Anonymous functions in Common Lisp can also later be given global names: ==== Scheme ==== Scheme's named functions are simply syntactic sugar for anonymous functions bound to names: expands (and is equivalent) to ==== Clojure ==== Clojure supports anonymous functions through the "fn" special form: There is also a reader syntax to define a lambda: Like Scheme, Clojure's "named functions" are simply syntactic sugar for lambdas bound to names: expands to: === Lua === In Lua (much as in Scheme) all functions are anonymous. A named function in Lua is simply a variable holding a reference to a function object. Thus, in Lua is just syntactic sugar for An example of using anonymous functions for reverse-order sorting: === Wolfram Language, Mathematica === The Wolfram Language is the programming language of Mathematica. Anonymous functions are important in programming the latter. There are several ways to create them. Below are a few anonymous functions that increment a number. The first is the most common. #1 refers to the first argument and & marks the end of the anonymous function. So, for instance: Also, Mathematica has an added construct to make recursive anonymous functions. The symbol '#0' refers to the entire function. The following function calculates the factorial of its input: For example, 6 factorial would be: === MATLAB, Octave === Anonymous functions in MATLAB or Octave are defined using the syntax @(argument-list)expression. Any variables that are not found in the argument list are inherited from the enclosing scope and are captured by value. === Maxima === In Maxima anonymous functions are defined using the syntax lambda(argument-list,expression), === ML === The various dialects of ML support anonymous functions. ==== OCaml ==== Anonymous functions in OCaml are functions without a declared name.
Here is an example of an anonymous function that multiplies its input by two: In the example, fun is a keyword indicating that the function is an anonymous function. The argument x is passed in, and -> separates the argument from the body. ==== F# ==== F# supports anonymous functions, as follows: ==== Standard ML ==== Standard ML supports anonymous functions, as follows: fn arg => arg * arg === Nim === Nim supports multi-line multi-expression anonymous functions. Multi-line example: Anonymous functions may be passed as input parameters of other functions: An anonymous function is basically a function without a name. === Perl === ==== Perl 5 ==== Perl 5 supports anonymous functions, as follows: Other constructs take bare blocks as arguments, which serve a function similar to lambda functions of one parameter, but do not have the same parameter-passing convention as functions: @_ is not set. === PHP === Before 4.0.1, PHP had no anonymous function support. ==== PHP 4.0.1 to 5.3 ==== PHP 4.0.1 introduced create_function, which provided the initial anonymous function support. This function call makes a new randomly named function and returns its name (as a string). The argument list and function body must be in single quotes, or the dollar signs must be escaped. Otherwise, PHP assumes "$x" means the variable $x and will substitute it into the string (despite possibly not existing) instead of leaving "$x" in the string. For functions with quotes or functions with many variables, it can get quite tedious to ensure the intended function body is what PHP interprets. Each invocation of create_function makes a new function, which exists for the rest of the program, and cannot be garbage collected, using memory in the program irreversibly. If this is used to create anonymous functions many times, e.g., in a loop, it can cause problems such as memory bloat.
==== PHP 5.3 ==== PHP 5.3 added a new class called Closure and a magic method __invoke() that makes a class instance invocable. In this example, $func is an instance of Closure and echo $func($x) is equivalent to echo $func->__invoke($x). PHP 5.3 mimics anonymous functions, but does not support true anonymous functions because PHP functions are still not first-class objects. PHP 5.3 does support closures, but the variables must be explicitly indicated as such: The variable $x is bound by reference, so the invocation of $func modifies it and the changes are visible outside of the function. ==== PHP 7.4 ==== Arrow functions were introduced in PHP 7.4. === Prolog's dialects === ==== Logtalk ==== Logtalk uses the following syntax for anonymous predicates (lambda expressions): A simple example with no free variables and using a list mapping predicate is: Currying is also supported. The above example can be written as: ==== Visual Prolog ==== Anonymous functions (in general anonymous predicates) were introduced in Visual Prolog in version 7.2. Anonymous predicates can capture values from the context. If created in an object member, they can also access the object state (by capturing This). mkAdder returns an anonymous function, which has captured the argument X in the closure. The returned function is a function that adds X to its argument: === Python === Python supports simple anonymous functions through the lambda form. The executable body of the lambda must be an expression and can't be a statement, which is a restriction that limits its utility. The value returned by the lambda is the value of the contained expression. Lambda forms can be used anywhere ordinary functions can. However, these restrictions make it a very limited version of a normal function. Here is an example: In general, the Python convention encourages the use of named functions defined in the same scope as one might typically use an anonymous function in other languages.
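For instance, the two styles below are interchangeable here (an illustrative sketch, not from the article):

```python
# An anonymous function: the body must be a single expression.
square = lambda x: x * x

# The conventional alternative: a named function defined in the same scope.
def square_named(x):
    return x * x

print(square(5), square_named(5))  # 25 25

# Lambdas are handy as throwaway arguments, e.g. as a sort key:
words = ["pear", "fig", "banana"]
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'pear', 'banana']
```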
This is acceptable as locally defined functions implement the full power of closures and are almost as efficient as the use of a lambda in Python. In this example, the built-in power function can be said to have been curried: === R === In R, anonymous functions are defined using the syntax function(argument-list)expression, which since version 4.1.0 has the shorthand \, akin to Haskell. === Raku === In Raku, all blocks (even those associated with if, while, etc.) are anonymous functions. A block that is not used as an rvalue is executed immediately. fully anonymous, called as created assigned to a variable currying WhateverCode object === Ruby === Ruby supports anonymous functions by using a syntactic structure called a block. There are two data types for blocks in Ruby. Procs behave similarly to closures, whereas lambdas behave more analogously to an anonymous function. When passed to a method, a block is converted into a Proc in some circumstances. === Rust === In Rust, anonymous functions are called closures. They are defined using the following syntax: For example: With type inference, however, the compiler is able to infer the type of each parameter and the return type, so the above form can be written as: For closures with a single expression (i.e. a body with one line) and implicit return type, the curly braces may be omitted: Closures with no input parameter are written like so: Closures may be passed as input parameters of functions that expect a function pointer: However, one may need complex rules to describe how values in the body of the closure are captured. They are implemented using the Fn, FnMut, and FnOnce traits: Fn: the closure captures by reference (&T). They are used for functions that can still be called if they only have reference access (with &) to their environment. FnMut: the closure captures by mutable reference (&mut T). They are used for functions that can be called if they have mutable reference access (with &mut) to their environment.
FnOnce: the closure captures by value (T). They are used for functions that are only called once. With these traits, the compiler will capture variables in the least restrictive manner possible. They help govern how values are moved around between scopes, which is largely important since Rust follows a lifetime construct to ensure values are "borrowed" and moved in a predictable and explicit manner. The following demonstrates how one may pass a closure as an input parameter using the Fn trait: The previous function definition can also be shortened for convenience as follows: === Scala === In Scala, anonymous functions use the following syntax: In certain contexts, like when an anonymous function is a parameter being passed to another function, the compiler can infer the types of the parameters of the anonymous function and they can be omitted in the syntax. In such contexts, it is also possible to use a shorthand for anonymous functions using the underscore character to introduce unnamed parameters. === Smalltalk === In Smalltalk anonymous functions are called blocks and they are invoked (called) by sending them a "value" message. If several arguments are to be passed, a "value:...value:" message with a corresponding number of value arguments must be used. For example, in GNU Smalltalk, Smalltalk blocks are technically closures, allowing them to outlive their defining scope and still refer to the variables declared therein. === Swift === In Swift, anonymous functions are called closures. The syntax has following form: For example: For sake of brevity and expressiveness, the parameter types and return type can be omitted if these can be inferred: Similarly, Swift also supports implicit return statements for one-statement closures: Finally, the parameter names can be omitted as well; when omitted, the parameters are referenced using shorthand argument names, consisting of the $ symbol followed by their position (e.g. 
$0, $1, $2, etc.): === Tcl === In Tcl, applying the anonymous squaring function to 2 looks as follows: This example involves two candidates for what it means to be a function in Tcl. The most generic is usually called a command prefix, and if the variable f holds such a function, then the way to perform the function application f(x) would be where {*} is the expansion prefix (new in Tcl 8.5). The command prefix in the above example is apply {x {expr {$x*$x}}} Command names can be bound to command prefixes by means of the interp alias command. Command prefixes support currying. Command prefixes are very common in Tcl APIs. The other candidate for "function" in Tcl is usually called a lambda, and appears as the {x {expr {$x*$x}}} part of the above example. This is the part which caches the compiled form of the anonymous function, but it can only be invoked by being passed to the apply command. Lambdas do not support currying, unless paired with an apply to form a command prefix. Lambdas are rare in Tcl APIs. === Vala === In Vala, anonymous functions are supported as lambda expressions. === Visual Basic .NET === Visual Basic .NET 2008 introduced anonymous functions through the lambda form. Combined with implicit typing, VB provides an economical syntax for anonymous functions. As with Python, in VB.NET, anonymous functions must be defined on one line; they cannot be compound statements. Further, an anonymous function in VB.NET must truly be a VB.NET Function: it must return a value. Visual Basic .NET 2010 added support for multiline lambda expressions and anonymous functions without a return value. For example, a function for use in a Thread. == References ==
Wikipedia:Exceptional Lie algebra#0
In mathematics, an exceptional Lie algebra is a complex simple Lie algebra whose Dynkin diagram is of exceptional (nonclassical) type. There are exactly five of them: g 2 , f 4 , e 6 , e 7 , e 8 {\displaystyle {\mathfrak {g}}_{2},{\mathfrak {f}}_{4},{\mathfrak {e}}_{6},{\mathfrak {e}}_{7},{\mathfrak {e}}_{8}} ; their respective dimensions are 14, 52, 78, 133, 248. The corresponding diagrams are: G2 : F4 : E6 : E7 : E8 : In contrast, simple Lie algebras that are not exceptional are called classical Lie algebras (there are infinitely many of them). == Construction == There is no simple, universally accepted way to construct exceptional Lie algebras; in fact, they were discovered only in the process of the classification program. Here are some constructions: § 22.1-2 of (Fulton & Harris 1991) give a detailed construction of g 2 {\displaystyle {\mathfrak {g}}_{2}} . Exceptional Lie algebras may be realized as the derivation algebras of appropriate nonassociative algebras. Construct e 8 {\displaystyle {\mathfrak {e}}_{8}} first and then find e 6 , e 7 {\displaystyle {\mathfrak {e}}_{6},{\mathfrak {e}}_{7}} as subalgebras. Tits has given a uniform construction of the five exceptional Lie algebras. == References == == Further reading == https://www.encyclopediaofmath.org/index.php/Lie_algebra,_exceptional http://math.ucr.edu/home/baez/octonions/node13.html
Wikipedia:Exceptional isomorphism#0
In mathematics, an exceptional isomorphism, also called an accidental isomorphism, is an isomorphism between members ai and bj of two families, usually infinite, of mathematical objects, which is incidental, in that it is not an instance of a general pattern of such isomorphisms. These coincidences are at times considered a matter of trivia, but in other respects they can give rise to consequential phenomena, such as exceptional objects. In the following, coincidences are organized according to the structures where they occur. == Groups == === Finite simple groups === The exceptional isomorphisms between the series of finite simple groups mostly involve projective special linear groups and alternating groups, and are: PSL2(4) ≅ PSL2(5) ≅ A5, the smallest non-abelian simple group (order 60); PSL2(7) ≅ PSL3(2), the second-smallest non-abelian simple group (order 168) – PSL(2,7); PSL2(9) ≅ A6; PSL4(2) ≅ A8; PSU4(2) ≅ PSp4(3), between a projective special unitary group and a projective symplectic group. === Alternating groups and symmetric groups === There are coincidences between symmetric/alternating groups and small groups of Lie type/polyhedral groups: S3 ≅ PSL2(2) ≅ dihedral group of order 6, A4 ≅ PSL2(3), S4 ≅ PGL2(3) ≅ PSL2(Z / 4), A5 ≅ PSL2(4) ≅ PSL2(5), S5 ≅ PΓL2(4) ≅ PGL2(5), A6 ≅ PSL2(9) ≅ Sp4(2)′, S6 ≅ Sp4(2), A8 ≅ PSL4(2) ≅ O+6(2)′, S8 ≅ O+6(2). These can all be explained in a systematic way by using linear algebra (and the action of Sn on affine n-space) to define the isomorphism going from the right side to the left side. (The above isomorphisms for A8 and S8 are linked via the exceptional isomorphism SL4 / μ2 ≅ SO6.) There are also some coincidences with symmetries of regular polyhedra: the alternating group A5 agrees with the chiral icosahedral group (itself an exceptional object), and the double cover of the alternating group A5 is the binary icosahedral group. === Trivial group === The trivial group arises in numerous ways.
The trivial group is often omitted from the beginning of a classical family. For instance: C1, the cyclic group of order 1; A0 ≅ A1 ≅ A2, the alternating group on 0, 1, or 2 letters; S0 ≅ S1, the symmetric group on 0 or 1 letters; GL(0, K) ≅ SL(0, K) ≅ PGL(0, K) ≅ PSL(0, K), linear groups of a 0-dimensional vector space; SL(1, K) ≅ PGL(1, K) ≅ PSL(1, K), linear groups of a 1-dimensional vector space and many others. === Spheres === The spheres S0, S1, and S3 admit group structures, which can be described in many ways: S0 ≅ Spin(1) ≅ O(1) ≅ (Z / 2Z)+ ≅ Z×, the last being the group of units of the integers; S1 ≅ Spin(2) ≅ SO(2) ≅ U(1) ≅ R / Z ≅ circle group; S3 ≅ Spin(3) ≅ SU(2) ≅ Sp(1) ≅ unit quaternions. === Spin groups === In addition to Spin(1), Spin(2) and Spin(3) above, there are isomorphisms for higher dimensional spin groups: Spin(4) ≅ Sp(1) × Sp(1) ≅ SU(2) × SU(2) Spin(5) ≅ Sp(2) Spin(6) ≅ SU(4) Also, Spin(8) has an exceptional order 3 triality automorphism. == Coxeter–Dynkin diagrams == There are some exceptional isomorphisms of Dynkin diagrams, yielding isomorphisms of the corresponding Coxeter groups and of polytopes realizing the symmetries, as well as isomorphisms of Lie algebras whose root systems are described by the same diagrams. These are: == See also == Exceptional object Mathematical coincidence, for numerical coincidences == Notes == == References ==
Wikipedia:Exeter Mathematics School#0
Exeter Mathematics School is a maths school located in Exeter in the English county of Devon. It opened in September 2014 under the free schools initiative and is sponsored by Exeter College and the University of Exeter. It is intended to be a regional centre of excellence in mathematics for Cornwall, Devon, Dorset and Somerset. As a result, the school offers boarding facilities for pupils who live more than an hour's drive away from the school. A total of 120 students are catered for at the school with some boarding from Monday to Friday during term time. The school is highly selective, with prospective students expected to have GCSE qualifications at grade 8-9 in Mathematics and Physics or Computer Science. Prospective students must also have five GCSEs in total at grade 5 or above including English at grade 6. The course structure of Exeter Mathematics School requires all students to study A-level Mathematics and Further Mathematics and either A-level Physics or Computer Science. Students may choose to study both, but one may be chosen and an additional A-level from a wider range of options, which are taught at Exeter College, may be taken as an alternative. == References == == External links == Official website "Exeter Mathematics School". University of Exeter. "Exeter University backs free school for maths". The Guardian. 21 January 2013. "Rougemont House transformed into inspirational new Exeter Mathematics School". Western Morning News. 8 September 2015. "Gifted students secure places at Exeter Mathematics School". Dawlish Learning Partnership. Archived from the original on 22 April 2016. Retrieved 9 May 2016. "Exeter Mathematics School". EduBase2. Department for Education.
Wikipedia:Exhaustion by compact sets#0
In mathematics, especially general topology and analysis, an exhaustion by compact sets of a topological space X {\displaystyle X} is a nested sequence of compact subsets K i {\displaystyle K_{i}} of X {\displaystyle X} (i.e. K 1 ⊆ K 2 ⊆ K 3 ⊆ ⋯ {\displaystyle K_{1}\subseteq K_{2}\subseteq K_{3}\subseteq \cdots } ), such that each K i {\displaystyle K_{i}} is contained in the interior of K i + 1 {\displaystyle K_{i+1}} , i.e. K i ⊂ int ( K i + 1 ) {\displaystyle K_{i}\subset {\text{int}}(K_{i+1})} , and X = ⋃ i = 1 ∞ K i {\displaystyle X=\bigcup _{i=1}^{\infty }K_{i}} . A space admitting an exhaustion by compact sets is called exhaustible by compact sets. As an example, for the space X = R n {\displaystyle X=\mathbb {R} ^{n}} , the sequence of closed balls K i = { x : | x | ≤ i } {\displaystyle K_{i}=\{x:|x|\leq i\}} forms an exhaustion of the space by compact sets. There is a weaker condition that drops the requirement that K i {\displaystyle K_{i}} is in the interior of K i + 1 {\displaystyle K_{i+1}} , meaning the space is σ-compact (i.e., a countable union of compact subsets.) == Construction == If there is an exhaustion by compact sets, the space is necessarily locally compact (if Hausdorff). The converse is also often true. For example, for a locally compact Hausdorff space X {\displaystyle X} that is a countable union of compact subsets, we can construct an exhaustion as follows. We write X = ⋃ 1 ∞ K n {\displaystyle X=\bigcup _{1}^{\infty }K_{n}} as a union of compact sets K n {\displaystyle K_{n}} . Then inductively choose open sets V n ⊃ V n − 1 ¯ ∪ K n {\displaystyle V_{n}\supset {\overline {V_{n-1}}}\cup K_{n}} with compact closures, where V 0 = ∅ {\displaystyle V_{0}=\emptyset } . Then V n ¯ {\displaystyle {\overline {V_{n}}}} is a required exhaustion. For a locally compact Hausdorff space that is second-countable, a similar argument can be used to construct an exhaustion. 
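The closed-ball example can be checked numerically. The following sketch (Python, purely illustrative) verifies the two defining conditions on sample points of R²:

```python
import math

def in_closed_ball(i, x):
    """Membership in K_i = {x : |x| <= i}."""
    return math.dist(x, (0.0, 0.0)) <= i

def in_open_ball(i, x):
    """Membership in int(K_{i}), the open ball {x : |x| < i}."""
    return math.dist(x, (0.0, 0.0)) < i

# K_i is contained in the interior of K_{i+1}: any point with |x| <= i
# certainly has |x| < i + 1.  Check this on boundary points of K_i.
for i in range(1, 6):
    boundary_point = (float(i), 0.0)        # |x| = i exactly
    assert in_closed_ball(i, boundary_point)
    assert in_open_ball(i + 1, boundary_point)

# The union covers the space: every point lies in K_i once i >= |x|.
x = (3.0, 4.0)                              # |x| = 5
i = math.ceil(math.dist(x, (0.0, 0.0)))
assert in_closed_ball(i, x)
```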
== Application == For a Hausdorff space X {\displaystyle X} , an exhaustion by compact sets can be used to show the space is paracompact. Indeed, suppose we have an increasing sequence V 1 ⊂ V 2 ⊂ ⋯ {\displaystyle V_{1}\subset V_{2}\subset \cdots } of open subsets such that X = ⋃ V n {\displaystyle X=\bigcup V_{n}} and each V n ¯ {\displaystyle {\overline {V_{n}}}} is compact and is contained in V n + 1 {\displaystyle V_{n+1}} . Let U {\displaystyle {\mathcal {U}}} be an open cover of X {\displaystyle X} . We also let V n = ∅ , n ≤ 0 {\displaystyle V_{n}=\emptyset ,\,n\leq 0} . Then, for each n ≥ 1 {\displaystyle n\geq 1} , { ( V n + 1 − V n − 2 ¯ ) ∩ U ∣ U ∈ U } {\displaystyle \{(V_{n+1}-{\overline {V_{n-2}}})\cap U\mid U\in {\mathcal {U}}\}} is an open cover of the compact set V n ¯ − V n − 1 {\displaystyle {\overline {V_{n}}}-V_{n-1}} and thus admits a finite subcover V n {\displaystyle {\mathcal {V}}_{n}} . Then V := ⋃ n = 1 ∞ V n {\displaystyle {\mathcal {V}}:=\bigcup _{n=1}^{\infty }{\mathcal {V}}_{n}} is a locally finite refinement of U . {\displaystyle {\mathcal {U}}.} Remark: The proof in fact shows that each open cover admits a countable refinement consisting of open sets with compact closures and each of whose members intersects only finitely many others. The following type of converse also holds. A paracompact locally compact Hausdorff space with countably many connected components is a countable union of compact sets and thus admits an exhaustion by compact subsets. == Relation to other properties == The following are equivalent for a topological space X {\displaystyle X} : X {\displaystyle X} is exhaustible by compact sets. X {\displaystyle X} is σ-compact and weakly locally compact. X {\displaystyle X} is Lindelöf and weakly locally compact. (where weakly locally compact means locally compact in the weak sense that each point has a compact neighborhood). The hemicompact property is intermediate between exhaustible by compact sets and σ-compact. 
Every space exhaustible by compact sets is hemicompact and every hemicompact space is σ-compact, but the reverse implications do not hold. For example, the Arens-Fort space and the Appert space are hemicompact, but not exhaustible by compact sets (because not weakly locally compact), and the set Q {\displaystyle \mathbb {Q} } of rational numbers with the usual topology is σ-compact, but not hemicompact. Every regular Hausdorff space that is a countable union of compact sets is paracompact. == Notes == == References == Leon Ehrenpreis, Theory of Distributions for Locally Compact Spaces, American Mathematical Society, 1982. ISBN 0-8218-1221-1. Hans Grauert and Reinhold Remmert, Theory of Stein Spaces, Springer Verlag (Classics in Mathematics), 2004. ISBN 978-3540003731. Harder, Günter (2011). Lectures on algebraic geometry. 1: Sheaves, cohomology of sheaves, and applications to Riemann surfaces (2nd ed.). ISBN 978-3834818447. Lee, John M. (2011). Introduction to topological manifolds (2nd ed.). New York: Springer. ISBN 978-1-4419-7939-1. Warner, Frak W. (1983). Foundations of Differentiable Manifolds and Lie Groups. Graduate Texts in Mathematics. Springer-Verlag. Wall, C. T. C. (4 July 2016). Differential Topology. Cambridge University Press. ISBN 9781107153523. == External links == "Exhaustion by compact sets". PlanetMath. "Existence of exhaustion by compact sets". Mathematics Stack Exchange.
Wikipedia:Expander mixing lemma#0
The expander mixing lemma intuitively states that the edges of certain d {\displaystyle d} -regular graphs are evenly distributed throughout the graph. In particular, the number of edges between two vertex subsets S {\displaystyle S} and T {\displaystyle T} is always close to the expected number of edges between them in a random d {\displaystyle d} -regular graph, namely d n | S | | T | {\displaystyle {\frac {d}{n}}|S||T|} . == d-Regular Expander Graphs == Define an ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graph to be a d {\displaystyle d} -regular graph G {\displaystyle G} on n {\displaystyle n} vertices such that all of the eigenvalues of its adjacency matrix A G {\displaystyle A_{G}} except one have absolute value at most λ . {\displaystyle \lambda .} The d {\displaystyle d} -regularity of the graph guarantees that its largest absolute value of an eigenvalue is d . {\displaystyle d.} In fact, the all-1's vector 1 {\displaystyle \mathbf {1} } is an eigenvector of A G {\displaystyle A_{G}} with eigenvalue d {\displaystyle d} , and the eigenvalues of the adjacency matrix will never exceed the maximum degree of G {\displaystyle G} in absolute value. If we fix d {\displaystyle d} and λ {\displaystyle \lambda } then ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graphs form a family of expander graphs with a constant spectral gap. == Statement == Let G = ( V , E ) {\displaystyle G=(V,E)} be an ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graph. For any two subsets S , T ⊆ V {\displaystyle S,T\subseteq V} , let e ( S , T ) = | { ( x , y ) ∈ S × T : x y ∈ E ( G ) } | {\displaystyle e(S,T)=|\{(x,y)\in S\times T:xy\in E(G)\}|} be the number of edges between S and T (counting edges contained in the intersection of S and T twice). Then | e ( S , T ) − d | S | | T | n | ≤ λ | S | | T | . 
{\displaystyle \left|e(S,T)-{\frac {d|S||T|}{n}}\right|\leq \lambda {\sqrt {|S||T|}}\,.} === Tighter Bound === We can in fact show that | e ( S , T ) − d | S | | T | n | ≤ λ | S | | T | ( 1 − | S | / n ) ( 1 − | T | / n ) {\displaystyle \left|e(S,T)-{\frac {d|S||T|}{n}}\right|\leq \lambda {\sqrt {|S||T|(1-|S|/n)(1-|T|/n)}}\,} using similar techniques. === Biregular Graphs === For biregular graphs, we have the following variation, where we take λ {\displaystyle \lambda } to be the second largest eigenvalue. Let G = ( L , R , E ) {\displaystyle G=(L,R,E)} be a bipartite graph such that every vertex in L {\displaystyle L} is adjacent to d L {\displaystyle d_{L}} vertices of R {\displaystyle R} and every vertex in R {\displaystyle R} is adjacent to d R {\displaystyle d_{R}} vertices of L {\displaystyle L} . Let S ⊆ L , T ⊆ R {\displaystyle S\subseteq L,T\subseteq R} with | S | = α | L | {\displaystyle |S|=\alpha |L|} and | T | = β | R | {\displaystyle |T|=\beta |R|} . Let e ( G ) = | E ( G ) | {\displaystyle e(G)=|E(G)|} . Then | e ( S , T ) e ( G ) − α β | ≤ λ d L d R α β ( 1 − α ) ( 1 − β ) ≤ λ d L d R α β . {\displaystyle \left|{\frac {e(S,T)}{e(G)}}-\alpha \beta \right|\leq {\frac {\lambda }{\sqrt {d_{L}d_{R}}}}{\sqrt {\alpha \beta (1-\alpha )(1-\beta )}}\leq {\frac {\lambda }{\sqrt {d_{L}d_{R}}}}{\sqrt {\alpha \beta }}\,.} Note that d L d R {\displaystyle {\sqrt {d_{L}d_{R}}}} is the largest eigenvalue of G {\displaystyle G} . == Proofs == === Proof of First Statement === Let A G {\displaystyle A_{G}} be the adjacency matrix of G {\displaystyle G} and let λ 1 ≥ ⋯ ≥ λ n {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{n}} be the eigenvalues of A G {\displaystyle A_{G}} (these eigenvalues are real because A G {\displaystyle A_{G}} is symmetric). We know that λ 1 = d {\displaystyle \lambda _{1}=d} with corresponding eigenvector v 1 = 1 n 1 {\displaystyle v_{1}={\frac {1}{\sqrt {n}}}\mathbf {1} } , the normalization of the all-1's vector. 
Define λ = max { λ 2 2 , … , λ n 2 } {\displaystyle \lambda ={\sqrt {\max\{\lambda _{2}^{2},\dots ,\lambda _{n}^{2}\}}}} and note that max { λ 2 2 , … , λ n 2 } ≤ λ 2 ≤ λ 1 2 = d 2 {\displaystyle \max\{\lambda _{2}^{2},\dots ,\lambda _{n}^{2}\}\leq \lambda ^{2}\leq \lambda _{1}^{2}=d^{2}} . Because A G {\displaystyle A_{G}} is symmetric, we can pick eigenvectors v 2 , … , v n {\displaystyle v_{2},\ldots ,v_{n}} of A G {\displaystyle A_{G}} corresponding to eigenvalues λ 2 , … , λ n {\displaystyle \lambda _{2},\ldots ,\lambda _{n}} so that { v 1 , … , v n } {\displaystyle \{v_{1},\ldots ,v_{n}\}} forms an orthonormal basis of R n {\displaystyle \mathbf {R} ^{n}} . Let J {\displaystyle J} be the n × n {\displaystyle n\times n} matrix of all 1's. Note that v 1 {\displaystyle v_{1}} is an eigenvector of J {\displaystyle J} with eigenvalue n {\displaystyle n} and each other v i {\displaystyle v_{i}} , being perpendicular to v 1 = 1 {\displaystyle v_{1}=\mathbf {1} } , is an eigenvector of J {\displaystyle J} with eigenvalue 0. For a vertex subset U ⊆ V {\displaystyle U\subseteq V} , let 1 U {\displaystyle 1_{U}} be the column vector with v th {\displaystyle v^{\text{th}}} coordinate equal to 1 if v ∈ U {\displaystyle v\in U} and 0 otherwise. Then, | e ( S , T ) − d n | S | | T | | = | 1 S T ( A G − d n J ) 1 T | {\displaystyle \left|e(S,T)-{\frac {d}{n}}|S||T|\right|=\left|1_{S}^{\operatorname {T} }\left(A_{G}-{\frac {d}{n}}J\right)1_{T}\right|} . Let M = A G − d n J {\displaystyle M=A_{G}-{\frac {d}{n}}J} . Because A G {\displaystyle A_{G}} and J {\displaystyle J} share eigenvectors, the eigenvalues of M {\displaystyle M} are 0 , λ 2 , … , λ n {\displaystyle 0,\lambda _{2},\ldots ,\lambda _{n}} . By the Cauchy-Schwarz inequality, we have that | 1 S T M 1 T | = ⟨ 1 S , M 1 T ⟩ ≤ ‖ 1 S ‖ ‖ M 1 T ‖ {\displaystyle |1_{S}^{\operatorname {T} }M1_{T}|=\langle 1_{S},M1_{T}\rangle \leq \|1_{S}\|\|M1_{T}\|} . 
Furthermore, because M {\displaystyle M} is self-adjoint, we can write ‖ M 1 T ‖ 2 = ⟨ M 1 T , M 1 T ⟩ = ⟨ 1 T , M 2 1 T ⟩ = ⟨ 1 T , ∑ i = 1 n M 2 ⟨ 1 T , v i ⟩ v i ⟩ = ∑ i = 2 n λ i 2 ⟨ 1 T , v i ⟩ 2 ≤ λ 2 ‖ 1 T ‖ 2 {\displaystyle \|M1_{T}\|^{2}=\langle M1_{T},M1_{T}\rangle =\langle 1_{T},M^{2}1_{T}\rangle =\left\langle 1_{T},\sum _{i=1}^{n}M^{2}\langle 1_{T},v_{i}\rangle v_{i}\right\rangle =\sum _{i=2}^{n}\lambda _{i}^{2}\langle 1_{T},v_{i}\rangle ^{2}\leq \lambda ^{2}\|1_{T}\|^{2}} . This implies that ‖ M 1 T ‖ ≤ λ ‖ 1 T ‖ {\displaystyle \|M1_{T}\|\leq \lambda \|1_{T}\|} and | e ( S , T ) − d n | S | | T | | ≤ λ ‖ 1 S ‖ ‖ 1 T ‖ = λ | S | | T | {\displaystyle \left|e(S,T)-{\frac {d}{n}}|S||T|\right|\leq \lambda \|1_{S}\|\|1_{T}\|=\lambda {\sqrt {|S||T|}}} . === Proof Sketch of Tighter Bound === To show the tighter bound above, we instead consider the vectors 1 S − | S | n 1 {\displaystyle 1_{S}-{\frac {|S|}{n}}\mathbf {1} } and 1 T − | T | n 1 {\displaystyle 1_{T}-{\frac {|T|}{n}}\mathbf {1} } , which are both perpendicular to v 1 {\displaystyle v_{1}} . We can expand 1 S T A G 1 T = ( | S | n 1 ) T A G ( | T | n 1 ) + ( 1 S − | S | n 1 ) T A G ( 1 T − | T | n 1 ) {\displaystyle 1_{S}^{\operatorname {T} }A_{G}1_{T}=\left({\frac {|S|}{n}}\mathbf {1} \right)^{\operatorname {T} }A_{G}\left({\frac {|T|}{n}}\mathbf {1} \right)+\left(1_{S}-{\frac {|S|}{n}}\mathbf {1} \right)^{\operatorname {T} }A_{G}\left(1_{T}-{\frac {|T|}{n}}\mathbf {1} \right)} because the other two terms of the expansion are zero. 
The first term is equal to | S | | T | n 2 1 T A G 1 = d n | S | | T | {\displaystyle {\frac {|S||T|}{n^{2}}}\mathbf {1} ^{\operatorname {T} }A_{G}\mathbf {1} ={\frac {d}{n}}|S||T|} , so we find that | e ( S , T ) − d n | S | | T | | ≤ | ( 1 S − | S | n 1 ) T A G ( 1 T − | T | n 1 ) | {\displaystyle \left|e(S,T)-{\frac {d}{n}}|S||T|\right|\leq \left|\left(1_{S}-{\frac {|S|}{n}}\mathbf {1} \right)^{\operatorname {T} }A_{G}\left(1_{T}-{\frac {|T|}{n}}\mathbf {1} \right)\right|} . We can bound the right hand side by λ ‖ 1 S − | S | n 1 ‖ ‖ 1 T − | T | n 1 ‖ = λ | S | | T | ( 1 − | S | n ) ( 1 − | T | n ) {\displaystyle \lambda \left\|1_{S}-{\frac {|S|}{n}}\mathbf {1} \right\|\left\|1_{T}-{\frac {|T|}{n}}\mathbf {1} \right\|=\lambda {\sqrt {|S||T|\left(1-{\frac {|S|}{n}}\right)\left(1-{\frac {|T|}{n}}\right)}}} using the same methods as in the earlier proof. == Applications == The expander mixing lemma can be used to upper bound the size of an independent set within a graph. In particular, the size of an independent set in an ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graph is at most λ n / d . {\displaystyle \lambda n/d.} This is proved by letting T = S {\displaystyle T=S} in the statement above and using the fact that e ( S , S ) = 0. {\displaystyle e(S,S)=0.} An additional consequence is that, if G {\displaystyle G} is an ( n , d , λ ) {\displaystyle (n,d,\lambda )} -graph, then its chromatic number χ ( G ) {\displaystyle \chi (G)} is at least d / λ . {\displaystyle d/\lambda .} This is because, in a valid graph coloring, the set of vertices of a given color is an independent set. By the above fact, each independent set has size at most λ n / d , {\displaystyle \lambda n/d,} so at least d / λ {\displaystyle d/\lambda } such sets are needed to cover all of the vertices. A second application of the expander mixing lemma is to provide an upper bound on the maximum possible size of an independent set within a polarity graph.
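Before turning to that application, the mixing inequality and the independent-set bound can be checked numerically on a small example. The sketch below (our own illustration, not from the article) uses the Petersen graph, a (10, 3, 2)-graph whose adjacency spectrum {3, 1, −2} is classical and is assumed here rather than recomputed:

```python
from itertools import combinations
from math import sqrt

# Petersen graph: 3-regular on 10 vertices; its adjacency eigenvalues are
# 3, 1 (x5) and -2 (x4), so we may take lambda = 2 (a classical fact,
# assumed here rather than recomputed).
n, d, lam = 10, 3, 2
edges = (
    [(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
    + [(i, i + 5) for i in range(5)]                 # spokes
    + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
)
adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def e(S, T):
    """Ordered pairs (u, v) with u in S, v in T and uv an edge."""
    return sum(1 for u in S for v in T if v in adj[u])

# Mixing inequality |e(S,T) - d|S||T|/n| <= lam * sqrt(|S||T|),
# checked over all pairs of vertex subsets of sizes 2 and 3.
for k in (2, 3):
    for S in combinations(range(n), k):
        for T in combinations(range(n), k):
            assert abs(e(S, T) - d * k * k / n) <= lam * sqrt(k * k) + 1e-9

# Independent-set bound: any independent set has size <= lam*n/d = 20/3,
# and {0, 2, 8, 9} is an independent set of size 4, consistent with it.
assert e({0, 2, 8, 9}, {0, 2, 8, 9}) == 0
```

The exhaustive check is limited to small subsets only to keep the runtime modest; the lemma itself guarantees the inequality for all subset pairs.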
Given a finite projective plane π {\displaystyle \pi } with a polarity ⊥ , {\displaystyle \perp ,} the polarity graph is a graph where the vertices are the points of π {\displaystyle \pi } , and vertices x {\displaystyle x} and y {\displaystyle y} are connected if and only if x ∈ y ⊥ . {\displaystyle x\in y^{\perp }.} In particular, if π {\displaystyle \pi } has order q , {\displaystyle q,} then the expander mixing lemma can show that an independent set in the polarity graph can have size at most q 3 / 2 − q + 2 q 1 / 2 − 1 , {\displaystyle q^{3/2}-q+2q^{1/2}-1,} a bound proved by Hobart and Williford. == Converse == Bilu and Linial showed that a converse holds as well: if a d {\displaystyle d} -regular graph G = ( V , E ) {\displaystyle G=(V,E)} satisfies that for any two subsets S , T ⊆ V {\displaystyle S,T\subseteq V} with S ∩ T = ∅ {\displaystyle S\cap T=\emptyset } we have | e ( S , T ) − d | S | | T | n | ≤ λ | S | | T | , {\displaystyle \left|e(S,T)-{\frac {d|S||T|}{n}}\right|\leq \lambda {\sqrt {|S||T|}},} then its second-largest (in absolute value) eigenvalue is bounded by O ( λ ( 1 + log ⁡ ( d / λ ) ) ) {\displaystyle O(\lambda (1+\log(d/\lambda )))} . == Generalization to hypergraphs == Friedman and Wigderson proved the following generalization of the mixing lemma to hypergraphs. Let H {\displaystyle H} be a k {\displaystyle k} -uniform hypergraph, i.e. a hypergraph in which every "edge" is a tuple of k {\displaystyle k} vertices. For any choice of subsets V 1 , . . . , V k {\displaystyle V_{1},...,V_{k}} of vertices, | | e ( V 1 , . . . , V k ) | − k ! | E ( H ) | n k | V 1 | . . . | V k | | ≤ λ 2 ( H ) | V 1 | . . . | V k | . {\displaystyle \left||e(V_{1},...,V_{k})|-{\frac {k!|E(H)|}{n^{k}}}|V_{1}|...|V_{k}|\right|\leq \lambda _{2}(H){\sqrt {|V_{1}|...|V_{k}|}}.} == Notes == == References == Alon, N.; Chung, F. R. K.
(1988), "Explicit construction of linear sized tolerant networks", Discrete Mathematics, 72 (1–3): 15–19, CiteSeerX 10.1.1.300.7495, doi:10.1016/0012-365X(88)90189-6. F.C. Bussemaker, D.M. Cvetković, J.J. Seidel. Graphs related to exceptional root systems, Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), volume 18 of Colloq. Math. Soc. János Bolyai (1978), 185–191. Haemers, W. H. (1979). Eigenvalue Techniques in Design and Graph Theory (PDF) (Ph.D.). Haemers, W. H. (1995), "Interlacing Eigenvalues and Graphs", Linear Algebra Appl., 226: 593–616, doi:10.1016/0024-3795(95)00199-2. Hoory, S.; Linial, N.; Wigderson, A. (2006), "Expander Graphs and their Applications" (PDF), Bull. Amer. Math. Soc. (N.S.), 43 (4): 439–561, doi:10.1090/S0273-0979-06-01126-8. Friedman, J.; Wigderson, A. (1995), "On the second eigenvalue of hypergraphs" (PDF), Combinatorica, 15 (1): 43–65, doi:10.1007/BF01294459, S2CID 17896683.
Wikipedia:Exponentially equivalent measures#0
In mathematics, exponential equivalence of measures is a notion of when two sequences or families of probability measures are, from the point of view of large deviations theory, essentially "the same". == Definition == Let ( M , d ) {\displaystyle (M,d)} be a metric space and consider two one-parameter families of probability measures on M {\displaystyle M} , say ( μ ε ) ε > 0 {\displaystyle (\mu _{\varepsilon })_{\varepsilon >0}} and ( ν ε ) ε > 0 {\displaystyle (\nu _{\varepsilon })_{\varepsilon >0}} . These two families are said to be exponentially equivalent if there exist a one-parameter family of probability spaces ( Ω , Σ ε , P ε ) ε > 0 {\displaystyle (\Omega ,\Sigma _{\varepsilon },P_{\varepsilon })_{\varepsilon >0}} and two families of M {\displaystyle M} -valued random variables ( Y ε ) ε > 0 {\displaystyle (Y_{\varepsilon })_{\varepsilon >0}} and ( Z ε ) ε > 0 {\displaystyle (Z_{\varepsilon })_{\varepsilon >0}} , such that for each ε > 0 {\displaystyle \varepsilon >0} , the P ε {\displaystyle P_{\varepsilon }} -law (i.e. the push-forward measure) of Y ε {\displaystyle Y_{\varepsilon }} is μ ε {\displaystyle \mu _{\varepsilon }} , and the P ε {\displaystyle P_{\varepsilon }} -law of Z ε {\displaystyle Z_{\varepsilon }} is ν ε {\displaystyle \nu _{\varepsilon }} , for each δ > 0 {\displaystyle \delta >0} , " Y ε {\displaystyle Y_{\varepsilon }} and Z ε {\displaystyle Z_{\varepsilon }} are further than δ {\displaystyle \delta } apart" is a Σ ε {\displaystyle \Sigma _{\varepsilon }} -measurable event, i.e. { ω ∈ Ω | d ( Y ε ( ω ) , Z ε ( ω ) ) > δ } ∈ Σ ε , {\displaystyle {\big \{}\omega \in \Omega {\big |}d(Y_{\varepsilon }(\omega ),Z_{\varepsilon }(\omega ))>\delta {\big \}}\in \Sigma _{\varepsilon },} for each δ > 0 {\displaystyle \delta >0} , lim sup ε ↓ 0 ε log ⁡ P ε ( d ( Y ε , Z ε ) > δ ) = − ∞ .
{\displaystyle \limsup _{\varepsilon \downarrow 0}\,\varepsilon \log P_{\varepsilon }{\big (}d(Y_{\varepsilon },Z_{\varepsilon })>\delta {\big )}=-\infty .} The two families of random variables ( Y ε ) ε > 0 {\displaystyle (Y_{\varepsilon })_{\varepsilon >0}} and ( Z ε ) ε > 0 {\displaystyle (Z_{\varepsilon })_{\varepsilon >0}} are also said to be exponentially equivalent. == Properties == The main use of exponential equivalence is that as far as large deviations principles are concerned, exponentially equivalent families of measures are indistinguishable. More precisely, if a large deviations principle holds for ( μ ε ) ε > 0 {\displaystyle (\mu _{\varepsilon })_{\varepsilon >0}} with good rate function I {\displaystyle I} , and ( μ ε ) ε > 0 {\displaystyle (\mu _{\varepsilon })_{\varepsilon >0}} and ( ν ε ) ε > 0 {\displaystyle (\nu _{\varepsilon })_{\varepsilon >0}} are exponentially equivalent, then the same large deviations principle holds for ( ν ε ) ε > 0 {\displaystyle (\nu _{\varepsilon })_{\varepsilon >0}} with the same good rate function I {\displaystyle I} . == References == Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See section 4.2.2)
Wikipedia:Exposed point#0
In mathematics, an exposed point of a convex set C {\displaystyle C} is a point x ∈ C {\displaystyle x\in C} at which some continuous linear functional attains its strict maximum over C {\displaystyle C} . Such a functional is then said to expose x {\displaystyle x} . There can be many exposing functionals for x {\displaystyle x} . The set of exposed points of C {\displaystyle C} is usually denoted exp ⁡ ( C ) {\displaystyle \exp(C)} . A stronger notion is that of strongly exposed point of C {\displaystyle C} which is an exposed point x ∈ C {\displaystyle x\in C} such that some exposing functional f {\displaystyle f} of x {\displaystyle x} attains its strong maximum over C {\displaystyle C} at x {\displaystyle x} , i.e. for each sequence ( x n ) ⊂ C {\displaystyle (x_{n})\subset C} we have the following implication: f ( x n ) → max f ( C ) ⟹ ‖ x n − x ‖ → 0 {\displaystyle f(x_{n})\to \max f(C)\Longrightarrow \|x_{n}-x\|\to 0} . The set of all strongly exposed points of C {\displaystyle C} is usually denoted str ⁡ exp ⁡ ( C ) {\displaystyle \operatorname {str} \exp(C)} . There are two weaker notions, that of extreme point and that of support point of C {\displaystyle C} . == See also == Exposed face == References ==
Wikipedia:Expression (mathematics)#0
In mathematics, an expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers, variables, operations, and functions. Other symbols include punctuation marks and brackets, used for grouping where there is not a well-defined order of operations. Expressions are commonly distinguished from formulas: expressions are a kind of mathematical object, whereas formulas are statements about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8 x − 5 {\displaystyle 8x-5} is an expression, while the inequality 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} is a formula. To evaluate an expression means to find a numerical value equivalent to the expression. Expressions can be evaluated or simplified by replacing operations that appear in them with their result. For example, the expression 8 × 2 − 5 {\displaystyle 8\times 2-5} simplifies to 16 − 5 {\displaystyle 16-5} , and evaluates to 11. {\displaystyle 11.} An expression is often used to define a function, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, x ↦ x 2 + 1 {\displaystyle x\mapsto x^{2}+1} and f ( x ) = x 2 + 1 {\displaystyle f(x)=x^{2}+1} define the function that associates to each number its square plus one. An expression with no variables would define a constant function. Usually, two expressions are considered equal or equivalent if they define the same function. Such an equality is called a "semantic equality", that is, both expressions "mean the same thing." == History == === Early written mathematics === The earliest written mathematics likely began with tally marks, where each mark represented one unit, carved into wood or stone. 
An example of early counting is the Ishango bone, found near the Nile and dating back more than 20,000 years, which is thought to show a six-month lunar calendar. Ancient Egypt developed a symbolic system using hieroglyphics, assigning symbols for powers of ten and using addition and subtraction symbols resembling legs in motion. This system, recorded in texts like the Rhind Mathematical Papyrus (c. 2000–1800 BC), influenced other Mediterranean cultures. In Mesopotamia, a similar system evolved, with numbers written in a base-60 (sexagesimal) format on clay tablets written in cuneiform, a technique originating with the Sumerians around 3000 BC. This base-60 system persists today in measuring time and angles. === Syncopated stage === The "syncopated" stage of mathematics introduced symbolic abbreviations for commonly used operations and quantities, marking a shift from purely geometric reasoning. Ancient Greek mathematics, largely geometric in nature, drew on Egyptian numerical systems (especially Attic numerals), with little interest in algebraic symbols, until the arrival of Diophantus of Alexandria, who pioneered a form of syncopated algebra in his Arithmetica, which introduced symbolic manipulation of expressions. His notation represented unknowns and powers symbolically, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called ζ {\displaystyle \zeta } . The square of ζ {\displaystyle \zeta } was Δ v {\displaystyle \Delta ^{v}} ; the cube was K v {\displaystyle K^{v}} ; the fourth power was Δ v Δ {\displaystyle \Delta ^{v}\Delta } ; the fifth power was Δ K v {\displaystyle \Delta K^{v}} ; and ⋔ {\displaystyle \pitchfork } meant to subtract everything on the right from the left.
So for example, what would be written in modern notation as: x 3 − 2 x 2 + 10 x − 1 , {\displaystyle x^{3}-2x^{2}+10x-1,} would be written in Diophantus's syncopated notation as: K υ α ¯ ζ ι ¯ ⋔ Δ υ β ¯ M α ¯ {\displaystyle \mathrm {K} ^{\upsilon }{\overline {\alpha }}\;\zeta {\overline {\iota }}\;\,\pitchfork \;\,\Delta ^{\upsilon }{\overline {\beta }}\;\mathrm {M} {\overline {\alpha }}\,\;} In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. Greek and other ancient mathematical advances were often trapped in cycles of bursts of creativity, followed by long periods of stagnation, but this began to change as knowledge spread in the early modern period. === Symbolic stage and early arithmetic === The transition to fully symbolic algebra began with Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1482), who introduced symbols for operations using Arabic characters. The plus sign (+) appeared around 1351 with Nicole Oresme, likely derived from the Latin et (meaning "and"), while the minus sign (−) was first used in 1489 by Johannes Widmann. Luca Pacioli included these symbols in his works, though much was based on earlier contributions by Piero della Francesca. The radical symbol (√) for square root was introduced by Christoph Rudolff in the 1500s, and parentheses for precedence by Niccolò Tartaglia in 1556. François Viète's New Algebra (1591) formalized modern symbolic manipulation. The multiplication sign (×) was first used by William Oughtred and the division sign (÷) by Johann Rahn. René Descartes further advanced algebraic symbolism in La Géométrie (1637), where he introduced the use of letters at the end of the alphabet (x, y, z) for variables, along with the Cartesian coordinate system, which bridged algebra and geometry.
Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century, with Leibniz's notation becoming the standard. == Variables and evaluation == In elementary algebra, a variable in an expression is a letter that represents a number whose value may change. To evaluate an expression with a variable means to find the value of the expression when the variable is assigned a given number. Expressions can be evaluated or simplified by replacing operations that appear in them with their result, or by combining like terms. For example, take the expression 4 x 2 + 8 {\displaystyle 4x^{2}+8} ; it can be evaluated at x = 3 in the following steps: 4 ( 3 ) 2 + 8 {\textstyle 4(3)^{2}+8} (replace x with 3) 4 ⋅ ( 3 ⋅ 3 ) + 8 {\displaystyle 4\cdot (3\cdot 3)+8} (use definition of exponent) 4 ⋅ 9 + 8 {\displaystyle 4\cdot 9+8} (simplify) 36 + 8 {\displaystyle 36+8} 44 {\displaystyle 44} A term is a constant or the product of a constant and one or more variables. Some examples include 7 , 5 x , 13 x 2 y , 4 b {\displaystyle 7,\;5x,\;13x^{2}y,\;4b} The constant of the product is called the coefficient. Terms that are either constants or have the same variables raised to the same powers are called like terms. If there are like terms in an expression, one can simplify the expression by combining the like terms. One adds the coefficients and keeps the same variable. 4 x + 7 x + 2 x = 15 x {\displaystyle 4x+7x+2x=15x} Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents an operation over constants and free variables, whose output is the resulting value of the expression.
For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, for an individual expression it is not always possible to identify which variables are free and bound. For example, in ∑ i < k a i k {\textstyle \sum _{i<k}a_{ik}} , depending on the context, the variable i {\textstyle i} can be free and k {\textstyle k} bound, or vice versa, but they cannot both be free. Determining which variable is free and which is bound depends on context and semantics. === Equivalence === An expression is often used to define a function, or denote compositions of functions, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, x ↦ x 2 + 1 {\displaystyle x\mapsto x^{2}+1} and f ( x ) = x 2 + 1 {\displaystyle f(x)=x^{2}+1} define the function that associates to each number its square plus one. An expression with no variables would define a constant function. In this way, two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. The equivalence between two expressions is called an identity and is sometimes denoted with ≡ . {\displaystyle \equiv .} For example, in the expression ∑ n = 1 3 ( 2 n x ) , {\textstyle \sum _{n=1}^{3}(2nx),} the variable n is bound, and the variable x is free. This expression is equivalent to the simpler expression 12 x; that is, ∑ n = 1 3 ( 2 n x ) ≡ 12 x . {\displaystyle \sum _{n=1}^{3}(2nx)\equiv 12x.} The value for x = 3 is 36, which can be denoted ∑ n = 1 3 ( 2 n x ) | x = 3 = 36. {\displaystyle \sum _{n=1}^{3}(2nx){\Big |}_{x=3}=36.} === Polynomial evaluation === A polynomial consists of variables and coefficients, involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms.
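A polynomial given by its list of coefficients can be evaluated mechanically; the sketch below (function names are ours) contrasts the naive term-by-term method with Horner's rule, which the next paragraphs discuss:

```python
def eval_naive(coeffs, x):
    """a_0 + a_1*x + ... + a_n*x^n, computing each power separately."""
    return sum(a * x ** i for i, a in enumerate(coeffs))

def eval_horner(coeffs, x):
    """Horner's rule: a_0 + x*(a_1 + x*(a_2 + ...)); n multiplications."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# p(x) = 1 - 2x + 3x^2; both methods agree, e.g. p(2) = 1 - 4 + 12 = 9.
coeffs = [1, -2, 3]
assert eval_horner(coeffs, 2) == eval_naive(coeffs, 2) == 9
assert all(eval_horner(coeffs, x) == eval_naive(coeffs, x)
           for x in range(-5, 6))
```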
The problem of polynomial evaluation arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing. In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact. For evaluating the univariate polynomial a n x n + a n − 1 x n − 1 + ⋯ + a 0 , {\textstyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0},} the most naive method would use n {\displaystyle n} multiplications to compute a n x n {\displaystyle a_{n}x^{n}} , use n − 1 {\textstyle n-1} multiplications to compute a n − 1 x n − 1 {\displaystyle a_{n-1}x^{n-1}} and so on for a total of n ( n + 1 ) 2 {\textstyle {\frac {n(n+1)}{2}}} multiplications and n {\displaystyle n} additions. Using better methods, such as Horner's rule, this can be reduced to n {\displaystyle n} multiplications and n {\displaystyle n} additions. If some preprocessing is allowed, even more savings are possible. === Computation === A computation is any type of arithmetic or non-arithmetic calculation that is "well-defined". The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. 
Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. All statements written in modern programming languages, including C++, Python, and Java, are well-defined. Common examples of computation are basic arithmetic and the execution of computer algorithms. A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. ==== Rewriting ==== Expressions can be computed by means of an evaluation strategy. To illustrate, executing a function call f(a,b) may first evaluate the arguments a and b, store the results in references or memory locations ref_a and ref_b, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy. Evaluation strategy is part of the semantics of the programming language definition. Some languages, such as PureScript, have variants with different evaluation strategies.
Some declarative languages, such as Datalog, support multiple evaluation strategies. Some languages define a calling convention. In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. A rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term. One of the most common systems involves lambda calculus. == Well-defined expressions == The language of mathematics exhibits a kind of grammar (called formal grammar) about how expressions may be written. There are two considerations for well-definedness of mathematical expressions: syntax and semantics. Syntax is concerned with the rules used for constructing or transforming the symbols of an expression without regard to any interpretation or meaning given to them. Expressions that are syntactically correct are called well-formed. Semantics is concerned with the meaning of these well-formed expressions. Expressions that are semantically correct are called well-defined. === Well-formed === The syntax of mathematical expressions can be described somewhat informally as follows: the allowed operators must have the correct number of inputs in the correct places (usually written with infix notation), the sub-expressions that make up these inputs must be well-formed themselves, have a clear order of operations, etc. Strings of symbols that conform to the rules of syntax are called well-formed, and those that are not well-formed are called ill-formed, and do not constitute mathematical expressions. For example, in arithmetic, the expression 1 + 2 × 3 is well-formed, but × 4 ) x + , / y {\displaystyle \times 4)x+,/y} is not. However, being well-formed is not enough to be considered well-defined. For example, in arithmetic, the expression 1 0 {\textstyle {\frac {1}{0}}} is well-formed, but it is not well-defined. (See Division by zero).
Such expressions are called undefined. === Well-defined === Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions. An expression that defines a unique value or meaning is said to be well-defined. Otherwise, the expression is said to be ill-defined or ambiguous. In general, the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator ⊕ {\displaystyle \oplus } to designate an internal direct sum. In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. The same syntactic expression 1 + 2 × 3 can have different values (mathematically 7, but also 9), depending on the order of operations implied by the context (See also Operations § Calculators). For real numbers, the product a × b × c {\displaystyle a\times b\times c} is unambiguous because ( a × b ) × c = a × ( b × c ) {\displaystyle (a\times b)\times c=a\times (b\times c)} ; hence the notation is said to be well-defined. This property, also known as associativity of multiplication, guarantees the result does not depend on the sequence of multiplications; therefore, a specification of the sequence can be omitted. The subtraction operation is non-associative; despite that, there is a convention that a − b − c {\displaystyle a-b-c} is shorthand for ( a − b ) − c {\displaystyle (a-b)-c} , thus it is considered "well-defined".
On the other hand, division is non-associative, and in the case of a / b / c {\displaystyle a/b/c} , parenthesization conventions are not well established; therefore, this expression is often considered ill-defined. Unlike with functions, notational ambiguities can be overcome by means of additional definitions (e.g., rules of precedence, associativity of the operator). For example, in the programming language C, the operator - for subtraction is left-to-right-associative, which means that a-b-c is defined as (a-b)-c, and the operator = for assignment is right-to-left-associative, which means that a=b=c is defined as a=(b=c). In the programming language APL there is only one rule: from right to left – but parentheses first. == Formal definition == The term 'expression' is part of the language of mathematics, that is to say, it is not defined within mathematics, but taken as a primitive part of the language. To attempt to define the term would not be doing mathematics, but rather, one would be engaging in a kind of metamathematics (the metalanguage of mathematics), usually mathematical logic. Within mathematical logic, mathematics is usually described as a kind of formal language, and a well-formed expression can be defined recursively as follows: The alphabet consists of: A set of individual constants: Symbols representing fixed objects in the domain of discourse, such as numerals (1, 2.5, 1/7, ...), sets ( ∅ , { 1 , 2 , 3 } {\displaystyle \varnothing ,\{1,2,3\}} , ...), truth values (T or F), etc. A set of individual variables: A countably infinite number of symbols representing variables used for representing an unspecified object in the domain. (Usually letters like x, or y) A set of operations: Function symbols representing operations that can be performed on elements over the domain, like addition (+), multiplication (×), or set operations like union (∪), or intersection (∩).
(Functions can be understood as unary operations) Brackets ( ) With this alphabet, the recursive rules for forming a well-formed expression (WFE) are as follows: Any constant or variable as defined are the atomic expressions, the simplest well-formed expressions (WFE's). For instance, the constant 2 {\displaystyle 2} or the variable x {\displaystyle x} are syntactically correct expressions. Let F {\displaystyle F} be a metavariable for any n-ary operation over the domain, and let ϕ 1 , ϕ 2 , . . . ϕ n {\displaystyle \phi _{1},\phi _{2},...\phi _{n}} be metavariables for any WFE's. Then F ( ϕ 1 , ϕ 2 , . . . ϕ n ) {\displaystyle F(\phi _{1},\phi _{2},...\phi _{n})} is also well-formed. For the most often used operations, more convenient notations (like infix notation) have been developed over the centuries. For instance, if the domain of discourse is the real numbers, F {\displaystyle F} can denote the binary operation +, then ϕ 1 + ϕ 2 {\displaystyle \phi _{1}+\phi _{2}} is well-formed. Or F {\displaystyle F} can be the unary operation √ {\displaystyle \surd } so ϕ 1 {\displaystyle {\sqrt {\phi _{1}}}} is well-formed. Brackets are initially around each non-atomic expression, but they can be deleted in cases where there is a defined order of operations, or where order doesn't matter (i.e. where operations are associative). A well-formed expression can be thought as a syntax tree. The leaf nodes are always atomic expressions. Operations + {\displaystyle +} and ∪ {\displaystyle \cup } have exactly two child nodes, while operations x {\textstyle {\sqrt {x}}} , ln ( x ) {\textstyle {\text{ln}}(x)} and d d x {\textstyle {\frac {d}{dx}}} have exactly one. There are countably infinitely many WFE's, however, each WFE has a finite number of nodes. === Lambda calculus === Formal languages allow formalizing the concept of well-formed expressions. 
In the 1930s, a new type of expression, the lambda expression, was introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. The lambda operators (lambda abstraction and function application) form the basis for lambda calculus, a formal system used in mathematical logic and programming language theory. The equivalence of two lambda expressions is undecidable (but see unification (computer science)). This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations, the logarithm and the exponential (Richardson's theorem). == Types of expressions == === Algebraic expression === An algebraic expression is an expression built up from algebraic constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by a rational number). For example, 3x2 − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power ⁠1/2⁠, the following is also an algebraic expression: 1 − x 2 1 + x 2 {\displaystyle {\sqrt {\frac {1-x^{2}}{1+x^{2}}}}} See also: Algebraic equation and Algebraic closure === Polynomial expression === A polynomial expression is an expression built with scalars (numbers or elements of some field), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers; for example 3 ( x + 1 ) 2 − x y . {\displaystyle 3(x+1)^{2}-xy.} Using associativity, commutativity and distributivity, every polynomial expression is equivalent to a polynomial, that is, an expression that is a linear combination of products of integer powers of the indeterminates. For example, the above polynomial expression is equivalent to (that is, denotes the same polynomial as) 3 x 2 − x y + 6 x + 3. {\displaystyle 3x^{2}-xy+6x+3.} Many authors do not distinguish polynomials and polynomial expressions.
In this case the expression of a polynomial expression as a linear combination is called the canonical form, normal form, or expanded form of the polynomial. === Computational expression === In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value, or that may fail to terminate, in which case the expression is undefined. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation. In simple settings, the resulting value is usually one of various primitive types, such as string, Boolean, or numerical (such as integer, floating-point, or complex). In computer algebra, formulas are viewed as expressions that can be evaluated as a Boolean, depending on the values that are given to the variables occurring in the expressions. For example, 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} takes the value false if x is given a value less than 1, and the value true otherwise. Expressions are often contrasted with statements: syntactic entities, such as instructions, that have no value. Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, expressions are usually represented in this way. This representation is very flexible, and many things that do not seem to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with "=" as its operator, and a matrix may be represented as an expression with "matrix" as its operator and its rows as operands.
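As an illustration of this operator-and-operands representation (using SymPy here as one example of a computer algebra system; the article does not name a specific one), the polynomial expression from the previous section expands to its canonical form:

```python
import sympy as sp

# Illustrative sketch with SymPy: an expression is stored as an operator
# applied to operands; expand() rewrites 3*(x + 1)**2 - x*y into the
# canonical (expanded) form given earlier.
x, y = sp.symbols('x y')
expr = 3 * (x + 1)**2 - x * y
canonical = sp.expand(expr)
print(canonical)            # equals 3*x**2 - x*y + 6*x + 3
print(sp.srepr(x + 1))      # the operator-and-operands tree
```

`srepr` exposes the internal tree, showing the "operator followed by a sequence of operands" view described above.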
See: Computer algebra expression === Logical expression === In mathematical logic, a "logical expression" can refer to either terms or formulas. A term denotes a mathematical object, while a formula denotes a mathematical fact. In particular, terms appear as components of a formula. A first-order term is recursively constructed from constant symbols, variables, and function symbols. An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation. For example, ⁠ ( x + 1 ) ∗ ( x + 1 ) {\displaystyle (x+1)*(x+1)} ⁠ is a term built from the constant 1, the variable x, and the binary function symbols ⁠ + {\displaystyle +} ⁠ and ⁠ ∗ {\displaystyle *} ⁠; it is part of the atomic formula ⁠ ( x + 1 ) ∗ ( x + 1 ) ≥ 0 {\displaystyle (x+1)*(x+1)\geq 0} ⁠ which evaluates to true for every real value of x. === Formal expression === A formal expression is a kind of string of symbols, created by the same production rules as standard expressions; however, it is used without regard to the meaning of the expression. In this way, two formal expressions are considered equal only if they are syntactically equal, that is, if they are the exact same expression. For instance, the formal expressions "2" and "1+1" are not equal. == See also == == Notes == == References ==
Wikipedia:Exterior calculus identities#0
This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry. == Notation == The following summarizes short definitions and notations that are used in this article. === Manifold === M {\displaystyle M} , N {\displaystyle N} are n {\displaystyle n} -dimensional smooth manifolds, where n ∈ N {\displaystyle n\in \mathbb {N} } . That is, differentiable manifolds that can be differentiated enough times for the purposes on this page. p ∈ M {\displaystyle p\in M} , q ∈ N {\displaystyle q\in N} denote one point on each of the manifolds. The boundary of a manifold M {\displaystyle M} is a manifold ∂ M {\displaystyle \partial M} , which has dimension n − 1 {\displaystyle n-1} . An orientation on M {\displaystyle M} induces an orientation on ∂ M {\displaystyle \partial M} . We usually denote a submanifold by Σ ⊂ M {\displaystyle \Sigma \subset M} . === Tangent and cotangent bundles === T M {\displaystyle TM} , T ∗ M {\displaystyle T^{*}M} denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold M {\displaystyle M} . T p M {\displaystyle T_{p}M} , T q N {\displaystyle T_{q}N} denote the tangent spaces of M {\displaystyle M} , N {\displaystyle N} at the points p {\displaystyle p} , q {\displaystyle q} , respectively. T p ∗ M {\displaystyle T_{p}^{*}M} denotes the cotangent space of M {\displaystyle M} at the point p {\displaystyle p} . Sections of the tangent bundles, also known as vector fields, are typically denoted as X , Y , Z ∈ Γ ( T M ) {\displaystyle X,Y,Z\in \Gamma (TM)} such that at a point p ∈ M {\displaystyle p\in M} we have X | p , Y | p , Z | p ∈ T p M {\displaystyle X|_{p},Y|_{p},Z|_{p}\in T_{p}M} . 
Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as α , β ∈ Γ ( T ∗ M ) {\displaystyle \alpha ,\beta \in \Gamma (T^{*}M)} such that at a point p ∈ M {\displaystyle p\in M} we have α | p , β | p ∈ T p ∗ M {\displaystyle \alpha |_{p},\beta |_{p}\in T_{p}^{*}M} . An alternative notation for Γ ( T ∗ M ) {\displaystyle \Gamma (T^{*}M)} is Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} . === Differential k-forms === Differential k {\displaystyle k} -forms, which we refer to simply as k {\displaystyle k} -forms here, are differential forms defined on T M {\displaystyle TM} . We denote the set of all k {\displaystyle k} -forms as Ω k ( M ) {\displaystyle \Omega ^{k}(M)} . For 0 ≤ k , l , m ≤ n {\displaystyle 0\leq k,\ l,\ m\leq n} we usually write α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , β ∈ Ω l ( M ) {\displaystyle \beta \in \Omega ^{l}(M)} , γ ∈ Ω m ( M ) {\displaystyle \gamma \in \Omega ^{m}(M)} . 0 {\displaystyle 0} -forms f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} are just scalar functions C ∞ ( M ) {\displaystyle C^{\infty }(M)} on M {\displaystyle M} . 1 ∈ Ω 0 ( M ) {\displaystyle \mathbf {1} \in \Omega ^{0}(M)} denotes the constant 0 {\displaystyle 0} -form equal to 1 {\displaystyle 1} everywhere. === Omitted elements of a sequence === When we are given ( k + 1 ) {\displaystyle (k+1)} inputs X 0 , … , X k {\displaystyle X_{0},\ldots ,X_{k}} and a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} we denote omission of the i {\displaystyle i} th entry by writing α ( X 0 , … , X ^ i , … , X k ) := α ( X 0 , … , X i − 1 , X i + 1 , … , X k ) . {\displaystyle \alpha (X_{0},\ldots ,{\hat {X}}_{i},\ldots ,X_{k}):=\alpha (X_{0},\ldots ,X_{i-1},X_{i+1},\ldots ,X_{k}).} === Exterior product === The exterior product is also known as the wedge product. 
It is denoted by ∧ : Ω k ( M ) × Ω l ( M ) → Ω k + l ( M ) {\displaystyle \wedge :\Omega ^{k}(M)\times \Omega ^{l}(M)\rightarrow \Omega ^{k+l}(M)} . The exterior product of a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} and an l {\displaystyle l} -form β ∈ Ω l ( M ) {\displaystyle \beta \in \Omega ^{l}(M)} produces a ( k + l ) {\displaystyle (k+l)} -form α ∧ β ∈ Ω k + l ( M ) {\displaystyle \alpha \wedge \beta \in \Omega ^{k+l}(M)} . It can be written using the set S ( k , k + l ) {\displaystyle S(k,k+l)} of all permutations σ {\displaystyle \sigma } of { 1 , … , k + l } {\displaystyle \{1,\ldots ,k+l\}} such that σ ( 1 ) < … < σ ( k ) , σ ( k + 1 ) < … < σ ( k + l ) {\displaystyle \sigma (1)<\ldots <\sigma (k),\ \sigma (k+1)<\ldots <\sigma (k+l)} as ( α ∧ β ) ( X 1 , … , X k + l ) = ∑ σ ∈ S ( k , k + l ) sign ( σ ) α ( X σ ( 1 ) , … , X σ ( k ) ) ⊗ β ( X σ ( k + 1 ) , … , X σ ( k + l ) ) . {\displaystyle (\alpha \wedge \beta )(X_{1},\ldots ,X_{k+l})=\sum _{\sigma \in S(k,k+l)}{\text{sign}}(\sigma )\alpha (X_{\sigma (1)},\ldots ,X_{\sigma (k)})\otimes \beta (X_{\sigma (k+1)},\ldots ,X_{\sigma (k+l)}).} === Directional derivative === The directional derivative of a 0-form f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} along a section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} is a 0-form denoted ∂ X f . {\displaystyle \partial _{X}f.} === Exterior derivative === The exterior derivative d k : Ω k ( M ) → Ω k + 1 ( M ) {\displaystyle d_{k}:\Omega ^{k}(M)\rightarrow \Omega ^{k+1}(M)} is defined for all 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} . We generally omit the subscript when it is clear from the context.
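The shuffle-sum formula for the exterior product above translates directly into code. A minimal Python sketch (indices shifted to start at 0, forms modeled as multilinear callables; all names are illustrative):

```python
import itertools

def sign(perm):
    """Sign of a permutation given as a tuple of indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge(alpha, k, beta, l):
    """(alpha ∧ beta)(X_1, ..., X_{k+l}) as a sum over (k, l)-shuffles."""
    def gamma(*X):
        total = 0.0
        for perm in itertools.permutations(range(k + l)):
            # keep only shuffles: both halves of the permutation increasing
            if sorted(perm[:k]) == list(perm[:k]) and sorted(perm[k:]) == list(perm[k:]):
                total += (sign(perm)
                          * alpha(*(X[i] for i in perm[:k]))
                          * beta(*(X[i] for i in perm[k:])))
        return total
    return gamma

# dx ∧ dy on R^3, evaluated on the first two standard basis vectors
dx = lambda v: v[0]
dy = lambda v: v[1]
e1, e2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
dxdy = wedge(dx, 1, dy, 1)
print(dxdy(e1, e2), dxdy(e2, e1))  # 1.0 -1.0
```

The antisymmetry of the output matches the determinant expressions for exterior products given later in the article.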
For a 0 {\displaystyle 0} -form f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} we have d 0 f ∈ Ω 1 ( M ) {\displaystyle d_{0}f\in \Omega ^{1}(M)} as the 1 {\displaystyle 1} -form that gives the directional derivative, i.e., for the section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} we have ( d 0 f ) ( X ) = ∂ X f {\displaystyle (d_{0}f)(X)=\partial _{X}f} , the directional derivative of f {\displaystyle f} along X {\displaystyle X} . For 0 < k ≤ n {\displaystyle 0<k\leq n} , ( d k ω ) ( X 0 , … , X k ) = ∑ 0 ≤ j ≤ k ( − 1 ) j d 0 ( ω ( X 0 , … , X ^ j , … , X k ) ) ( X j ) + ∑ 0 ≤ i < j ≤ k ( − 1 ) i + j ω ( [ X i , X j ] , X 0 , … , X ^ i , … , X ^ j , … , X k ) . {\displaystyle (d_{k}\omega )(X_{0},\ldots ,X_{k})=\sum _{0\leq j\leq k}(-1)^{j}d_{0}(\omega (X_{0},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}))(X_{j})+\sum _{0\leq i<j\leq k}(-1)^{i+j}\omega ([X_{i},X_{j}],X_{0},\ldots ,{\hat {X}}_{i},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}).} === Lie bracket === The Lie bracket of sections X , Y ∈ Γ ( T M ) {\displaystyle X,Y\in \Gamma (TM)} is defined as the unique section [ X , Y ] ∈ Γ ( T M ) {\displaystyle [X,Y]\in \Gamma (TM)} that satisfies ∀ f ∈ Ω 0 ( M ) ⇒ ∂ [ X , Y ] f = ∂ X ∂ Y f − ∂ Y ∂ X f . {\displaystyle \forall f\in \Omega ^{0}(M)\Rightarrow \partial _{[X,Y]}f=\partial _{X}\partial _{Y}f-\partial _{Y}\partial _{X}f.} === Tangent maps === If ϕ : M → N {\displaystyle \phi :M\rightarrow N} is a smooth map, then d ϕ | p : T p M → T ϕ ( p ) N {\displaystyle d\phi |_{p}:T_{p}M\rightarrow T_{\phi (p)}N} defines a tangent map from M {\displaystyle M} to N {\displaystyle N} . It is defined through curves γ {\displaystyle \gamma } on M {\displaystyle M} with derivative γ ′ ( 0 ) = X ∈ T p M {\displaystyle \gamma '(0)=X\in T_{p}M} such that d ϕ ( X ) := ( ϕ ∘ γ ) ′ . {\displaystyle d\phi (X):=(\phi \circ \gamma )'.} Note that ϕ {\displaystyle \phi } is a 0 {\displaystyle 0} -form with values in N {\displaystyle N} . 
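The defining property of the Lie bracket above, ∂_[X,Y] f = ∂_X ∂_Y f − ∂_Y ∂_X f, can be sanity-checked numerically. A minimal sketch with central differences, for hand-picked vector fields on R² (the fields, the test function, and all names are illustrative assumptions):

```python
import math

# Check ∂_[X,Y] f = ∂_X ∂_Y f − ∂_Y ∂_X f by finite differences
# for X = (−y, x), Y = (1, 0) on R^2, where [X, Y] = (0, −1) by hand.
def f(p):
    return math.exp(p[0]) * math.sin(p[1])

def D(field, g, p, h=1e-5):
    """Directional derivative of g at p along the vector field `field`."""
    v = field(p)
    q_plus = [p[i] + h * v[i] for i in range(2)]
    q_minus = [p[i] - h * v[i] for i in range(2)]
    return (g(q_plus) - g(q_minus)) / (2 * h)

X = lambda p: [-p[1], p[0]]        # rotation field
Y = lambda p: [1.0, 0.0]           # constant field
XY = lambda p: [0.0, -1.0]         # [X, Y], computed by hand

p = [0.3, 0.8]
lhs = D(XY, f, p)
rhs = D(X, lambda q: D(Y, f, q), p) - D(Y, lambda q: D(X, f, q), p)
print(abs(lhs - rhs))  # small (finite-difference error)
```

Here [X, Y] was computed from the coordinate formula [X, Y]^i = X^j ∂_j Y^i − Y^j ∂_j X^i; the numerical agreement confirms the bracket's defining property at the chosen point.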
=== Pull-back === If ϕ : M → N {\displaystyle \phi :M\rightarrow N} is a smooth map, then the pull-back of a k {\displaystyle k} -form α ∈ Ω k ( N ) {\displaystyle \alpha \in \Omega ^{k}(N)} is defined such that for any k {\displaystyle k} -dimensional submanifold Σ ⊂ M {\displaystyle \Sigma \subset M} ∫ Σ ϕ ∗ α = ∫ ϕ ( Σ ) α . {\displaystyle \int _{\Sigma }\phi ^{*}\alpha =\int _{\phi (\Sigma )}\alpha .} The pull-back can also be expressed as ( ϕ ∗ α ) ( X 1 , … , X k ) = α ( d ϕ ( X 1 ) , … , d ϕ ( X k ) ) . {\displaystyle (\phi ^{*}\alpha )(X_{1},\ldots ,X_{k})=\alpha (d\phi (X_{1}),\ldots ,d\phi (X_{k})).} === Interior product === Also known as the interior derivative, the interior product given a section Y ∈ Γ ( T M ) {\displaystyle Y\in \Gamma (TM)} is a map ι Y : Ω k + 1 ( M ) → Ω k ( M ) {\displaystyle \iota _{Y}:\Omega ^{k+1}(M)\rightarrow \Omega ^{k}(M)} that effectively substitutes the first input of a ( k + 1 ) {\displaystyle (k+1)} -form with Y {\displaystyle Y} . If α ∈ Ω k + 1 ( M ) {\displaystyle \alpha \in \Omega ^{k+1}(M)} and X i ∈ Γ ( T M ) {\displaystyle X_{i}\in \Gamma (TM)} then ( ι Y α ) ( X 1 , … , X k ) = α ( Y , X 1 , … , X k ) . {\displaystyle (\iota _{Y}\alpha )(X_{1},\ldots ,X_{k})=\alpha (Y,X_{1},\ldots ,X_{k}).} === Metric tensor === Given a nondegenerate bilinear form g p ( ⋅ , ⋅ ) {\displaystyle g_{p}(\cdot ,\cdot )} on each T p M {\displaystyle T_{p}M} that is continuous on M {\displaystyle M} , the manifold becomes a pseudo-Riemannian manifold. We denote the metric tensor g {\displaystyle g} , defined pointwise by g ( X , Y ) | p = g p ( X | p , Y | p ) {\displaystyle g(X,Y)|_{p}=g_{p}(X|_{p},Y|_{p})} . We call s = sign ⁡ ( g ) {\displaystyle s=\operatorname {sign} (g)} the signature of the metric. A Riemannian manifold has s = 1 {\displaystyle s=1} , whereas Minkowski space has s = − 1 {\displaystyle s=-1} . 
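The pointwise pull-back formula above, (ϕ*α)(X₁, …, X_k) = α(dϕ(X₁), …, dϕ(X_k)), can be sanity-checked numerically for a 1-form, with the tangent map dϕ approximated by central differences. The map ϕ, the form, and all names are illustrative choices:

```python
import math

# Check (phi* alpha)(X) = alpha(d phi(X)) for a map phi: R^2 -> R^2
# and a constant-coefficient 1-form alpha (all choices illustrative).
def phi(p):
    u, v = p
    return [u * v, u + math.sin(v)]

def dphi(p, X, h=1e-6):
    # tangent map applied to X, via a central difference along X
    q_plus = [p[i] + h * X[i] for i in range(2)]
    q_minus = [p[i] - h * X[i] for i in range(2)]
    a, b = phi(q_plus), phi(q_minus)
    return [(a[i] - b[i]) / (2 * h) for i in range(2)]

alpha = lambda w: 3 * w[0] - 2 * w[1]

p, X = [0.5, 1.0], [1.0, -2.0]
pullback_val = alpha(dphi(p, X))      # (phi* alpha)(X) at p

# analytic tangent map of phi(u, v) = (uv, u + sin v): [[v, u], [1, cos v]]
J = [[p[1], p[0]], [1.0, math.cos(p[1])]]
JX = [J[0][0] * X[0] + J[0][1] * X[1],
      J[1][0] * X[0] + J[1][1] * X[1]]
print(abs(pullback_val - alpha(JX)))  # ≈ 0
```

The finite-difference tangent map agrees with the analytic Jacobian applied to X, as the curve definition of dϕ in the previous section suggests.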
=== Musical isomorphisms === The metric tensor g ( ⋅ , ⋅ ) {\displaystyle g(\cdot ,\cdot )} induces duality mappings between vector fields and one-forms: these are the musical isomorphisms flat ♭ {\displaystyle \flat } and sharp ♯ {\displaystyle \sharp } . A section A ∈ Γ ( T M ) {\displaystyle A\in \Gamma (TM)} corresponds to the unique one-form A ♭ ∈ Ω 1 ( M ) {\displaystyle A^{\flat }\in \Omega ^{1}(M)} such that for all sections X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} , we have: A ♭ ( X ) = g ( A , X ) . {\displaystyle A^{\flat }(X)=g(A,X).} A one-form α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} corresponds to the unique vector field α ♯ ∈ Γ ( T M ) {\displaystyle \alpha ^{\sharp }\in \Gamma (TM)} such that for all X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} , we have: α ( X ) = g ( α ♯ , X ) . {\displaystyle \alpha (X)=g(\alpha ^{\sharp },X).} These mappings extend via multilinearity to mappings from k {\displaystyle k} -vector fields to k {\displaystyle k} -forms and k {\displaystyle k} -forms to k {\displaystyle k} -vector fields through ( A 1 ∧ A 2 ∧ ⋯ ∧ A k ) ♭ = A 1 ♭ ∧ A 2 ♭ ∧ ⋯ ∧ A k ♭ {\displaystyle (A_{1}\wedge A_{2}\wedge \cdots \wedge A_{k})^{\flat }=A_{1}^{\flat }\wedge A_{2}^{\flat }\wedge \cdots \wedge A_{k}^{\flat }} ( α 1 ∧ α 2 ∧ ⋯ ∧ α k ) ♯ = α 1 ♯ ∧ α 2 ♯ ∧ ⋯ ∧ α k ♯ . {\displaystyle (\alpha _{1}\wedge \alpha _{2}\wedge \cdots \wedge \alpha _{k})^{\sharp }=\alpha _{1}^{\sharp }\wedge \alpha _{2}^{\sharp }\wedge \cdots \wedge \alpha _{k}^{\sharp }.} === Hodge star === For an n-manifold M, the Hodge star operator ⋆ : Ω k ( M ) → Ω n − k ( M ) {\displaystyle {\star }:\Omega ^{k}(M)\rightarrow \Omega ^{n-k}(M)} is a duality mapping taking a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} to an ( n − k ) {\displaystyle (n{-}k)} -form ( ⋆ α ) ∈ Ω n − k ( M ) {\displaystyle ({\star }\alpha )\in \Omega ^{n-k}(M)} . 
It can be defined in terms of an oriented frame ( X 1 , … , X n ) {\displaystyle (X_{1},\ldots ,X_{n})} for T M {\displaystyle TM} , orthonormal with respect to the given metric tensor g {\displaystyle g} : ( ⋆ α ) ( X 1 , … , X n − k ) = α ( X n − k + 1 , … , X n ) . {\displaystyle ({\star }\alpha )(X_{1},\ldots ,X_{n-k})=\alpha (X_{n-k+1},\ldots ,X_{n}).} === Co-differential operator === The co-differential operator δ : Ω k ( M ) → Ω k − 1 ( M ) {\displaystyle \delta :\Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)} on an n {\displaystyle n} dimensional manifold M {\displaystyle M} is defined by δ := ( − 1 ) k ⋆ − 1 d ⋆ = ( − 1 ) n k + n + 1 ⋆ d ⋆ . {\displaystyle \delta :=(-1)^{k}{\star }^{-1}d{\star }=(-1)^{nk+n+1}{\star }d{\star }.} The Hodge–Dirac operator, d + δ {\displaystyle d+\delta } , is a Dirac operator studied in Clifford analysis. === Oriented manifold === An n {\displaystyle n} -dimensional orientable manifold M is a manifold that can be equipped with a choice of an n-form μ ∈ Ω n ( M ) {\displaystyle \mu \in \Omega ^{n}(M)} that is continuous and nonzero everywhere on M. === Volume form === On an orientable manifold M {\displaystyle M} the canonical choice of a volume form given a metric tensor g {\displaystyle g} and an orientation is d e t := | det g | d X 1 ♭ ∧ … ∧ d X n ♭ {\displaystyle \mathbf {det} :={\sqrt {|\det g|}}\;dX_{1}^{\flat }\wedge \ldots \wedge dX_{n}^{\flat }} for any basis d X 1 , … , d X n {\displaystyle dX_{1},\ldots ,dX_{n}} ordered to match the orientation. === Area form === Given a volume form d e t {\displaystyle \mathbf {det} } and a unit normal vector N {\displaystyle N} we can also define an area form σ := ι N det {\displaystyle \sigma :=\iota _{N}{\textbf {det}}} on the boundary ∂ M . 
{\displaystyle \partial M.} === Bilinear form on k-forms === A generalization of the metric tensor, the symmetric bilinear form between two k {\displaystyle k} -forms α , β ∈ Ω k ( M ) {\displaystyle \alpha ,\beta \in \Omega ^{k}(M)} , is defined pointwise on M {\displaystyle M} by ⟨ α , β ⟩ | p := ⋆ ( α ∧ ⋆ β ) | p . {\displaystyle \langle \alpha ,\beta \rangle |_{p}:={\star }(\alpha \wedge {\star }\beta )|_{p}.} The L 2 {\displaystyle L^{2}} -bilinear form for the space of k {\displaystyle k} -forms Ω k ( M ) {\displaystyle \Omega ^{k}(M)} is defined by ⟨ ⟨ α , β ⟩ ⟩ := ∫ M α ∧ ⋆ β . {\displaystyle \langle \!\langle \alpha ,\beta \rangle \!\rangle :=\int _{M}\alpha \wedge {\star }\beta .} In the case of a Riemannian manifold, each is an inner product (i.e. is positive-definite). === Lie derivative === We define the Lie derivative L : Ω k ( M ) → Ω k ( M ) {\displaystyle {\mathcal {L}}:\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} through Cartan's magic formula for a given section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} as L X = d ∘ ι X + ι X ∘ d . {\displaystyle {\mathcal {L}}_{X}=d\circ \iota _{X}+\iota _{X}\circ d.} It describes the change of a k {\displaystyle k} -form along a flow ϕ t {\displaystyle \phi _{t}} associated to the section X {\displaystyle X} . === Laplace–Beltrami operator === The Laplacian Δ : Ω k ( M ) → Ω k ( M ) {\displaystyle \Delta :\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is defined as Δ = − ( d δ + δ d ) {\displaystyle \Delta =-(d\delta +\delta d)} . == Important definitions == === Definitions on Ωk(M) === α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} is called... 
closed if d α = 0 {\displaystyle d\alpha =0} exact if α = d β {\displaystyle \alpha =d\beta } for some β ∈ Ω k − 1 {\displaystyle \beta \in \Omega ^{k-1}} coclosed if δ α = 0 {\displaystyle \delta \alpha =0} coexact if α = δ β {\displaystyle \alpha =\delta \beta } for some β ∈ Ω k + 1 {\displaystyle \beta \in \Omega ^{k+1}} harmonic if closed and coclosed === Cohomology === The k {\displaystyle k} -th cohomology of a manifold M {\displaystyle M} and its exterior derivative operators d 0 , … , d n − 1 {\displaystyle d_{0},\ldots ,d_{n-1}} is given by H k ( M ) := ker ( d k ) im ( d k − 1 ) {\displaystyle H^{k}(M):={\frac {{\text{ker}}(d_{k})}{{\text{im}}(d_{k-1})}}} Two closed k {\displaystyle k} -forms α , β ∈ Ω k ( M ) {\displaystyle \alpha ,\beta \in \Omega ^{k}(M)} are in the same cohomology class if their difference is an exact form, i.e. [ α ] = [ β ] ⟺ α − β = d η for some η ∈ Ω k − 1 ( M ) {\displaystyle [\alpha ]=[\beta ]\ \ \Longleftrightarrow \ \ \alpha {-}\beta =d\eta \ {\text{ for some }}\eta \in \Omega ^{k-1}(M)} The first cohomology of a closed surface of genus g {\displaystyle g} has 2 g {\displaystyle 2g} generators, each of which can be represented by a harmonic 1-form.
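Every exact form is closed because d ∘ d = 0; for 0-forms on R³ this specializes to the classical identity curl(grad f) = 0. A quick numerical sanity check with central differences (the test function and all names are illustrative):

```python
import math

# curl(grad f) = 0, i.e. d(df) = 0 for a 0-form on R^3,
# checked by nested central differences on a smooth test function.
def f(p):
    x, y, z = p
    return math.sin(x * y) + z**3

def partial(g, p, i, h=1e-5):
    q_plus, q_minus = list(p), list(p)
    q_plus[i] += h
    q_minus[i] -= h
    return (g(q_plus) - g(q_minus)) / (2 * h)

def grad(p):
    return [partial(f, p, i) for i in range(3)]

def curl_of_grad(p, h=1e-4):
    comp = lambda i: (lambda q: grad(q)[i])
    d = lambda i, j: partial(comp(i), p, j, h)   # d(grad_i)/dx_j
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

p = [0.3, -1.2, 0.7]
print(max(abs(c) for c in curl_of_grad(p)))  # ≈ 0 (finite-difference error)
```

The result vanishes up to discretization error, consistent with df being a closed 1-form.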
=== Dirichlet energy === Given α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , its Dirichlet energy is E D ( α ) := 1 2 ⟨ ⟨ d α , d α ⟩ ⟩ + 1 2 ⟨ ⟨ δ α , δ α ⟩ ⟩ {\displaystyle {\mathcal {E}}_{\text{D}}(\alpha ):={\dfrac {1}{2}}\langle \!\langle d\alpha ,d\alpha \rangle \!\rangle +{\dfrac {1}{2}}\langle \!\langle \delta \alpha ,\delta \alpha \rangle \!\rangle } == Properties == === Exterior derivative properties === ∫ Σ d α = ∫ ∂ Σ α {\displaystyle \int _{\Sigma }d\alpha =\int _{\partial \Sigma }\alpha } ( Stokes' theorem ) d ∘ d = 0 {\displaystyle d\circ d=0} ( cochain complex ) d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta } for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( Leibniz rule ) d f ( X ) = ∂ X f {\displaystyle df(X)=\partial _{X}f} for f ∈ Ω 0 ( M ) , X ∈ Γ ( T M ) {\displaystyle f\in \Omega ^{0}(M),\ X\in \Gamma (TM)} ( directional derivative ) d α = 0 {\displaystyle d\alpha =0} for α ∈ Ω n ( M ) , dim ( M ) = n {\displaystyle \alpha \in \Omega ^{n}(M),\ {\text{dim}}(M)=n} === Exterior product properties === α ∧ β = ( − 1 ) k l β ∧ α {\displaystyle \alpha \wedge \beta =(-1)^{kl}\beta \wedge \alpha } for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( alternating ) ( α ∧ β ) ∧ γ = α ∧ ( β ∧ γ ) {\displaystyle (\alpha \wedge \beta )\wedge \gamma =\alpha \wedge (\beta \wedge \gamma )} ( associativity ) ( λ α ) ∧ β = λ ( α ∧ β ) {\displaystyle (\lambda \alpha )\wedge \beta =\lambda (\alpha \wedge \beta )} for λ ∈ R {\displaystyle \lambda \in \mathbb {R} } ( compatibility of scalar multiplication ) α ∧ ( β 1 + β 2 ) = α ∧ β 1 + α ∧ β 2 {\displaystyle \alpha \wedge (\beta _{1}+\beta _{2})=\alpha \wedge \beta _{1}+\alpha \wedge \beta _{2}} ( distributivity over addition ) α ∧ α = 0 {\displaystyle \alpha \wedge \alpha =0} for α ∈ Ω k ( M ) {\displaystyle 
\alpha \in \Omega ^{k}(M)} when k {\displaystyle k} is odd or rank ⁡ α ≤ 1 {\displaystyle \operatorname {rank} \alpha \leq 1} . The rank of a k {\displaystyle k} -form α {\displaystyle \alpha } means the minimum number of monomial terms (exterior products of one-forms) that must be summed to produce α {\displaystyle \alpha } . === Pull-back properties === d ( ϕ ∗ α ) = ϕ ∗ ( d α ) {\displaystyle d(\phi ^{*}\alpha )=\phi ^{*}(d\alpha )} ( commutative with d {\displaystyle d} ) ϕ ∗ ( α ∧ β ) = ( ϕ ∗ α ) ∧ ( ϕ ∗ β ) {\displaystyle \phi ^{*}(\alpha \wedge \beta )=(\phi ^{*}\alpha )\wedge (\phi ^{*}\beta )} ( distributes over ∧ {\displaystyle \wedge } ) ( ϕ 1 ∘ ϕ 2 ) ∗ = ϕ 2 ∗ ϕ 1 ∗ {\displaystyle (\phi _{1}\circ \phi _{2})^{*}=\phi _{2}^{*}\phi _{1}^{*}} ( contravariant ) ϕ ∗ f = f ∘ ϕ {\displaystyle \phi ^{*}f=f\circ \phi } for f ∈ Ω 0 ( N ) {\displaystyle f\in \Omega ^{0}(N)} ( function composition ) === Musical isomorphism properties === ( X ♭ ) ♯ = X {\displaystyle (X^{\flat })^{\sharp }=X} ( α ♯ ) ♭ = α {\displaystyle (\alpha ^{\sharp })^{\flat }=\alpha } === Interior product properties === ι X ∘ ι X = 0 {\displaystyle \iota _{X}\circ \iota _{X}=0} ( nilpotent ) ι X ∘ ι Y = − ι Y ∘ ι X {\displaystyle \iota _{X}\circ \iota _{Y}=-\iota _{Y}\circ \iota _{X}} ι X ( α ∧ β ) = ( ι X α ) ∧ β + ( − 1 ) k α ∧ ( ι X β ) {\displaystyle \iota _{X}(\alpha \wedge \beta )=(\iota _{X}\alpha )\wedge \beta +(-1)^{k}\alpha \wedge (\iota _{X}\beta )} for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( Leibniz rule ) ι X α = α ( X ) {\displaystyle \iota _{X}\alpha =\alpha (X)} for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} ι X f = 0 {\displaystyle \iota _{X}f=0} for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} ι X ( f α ) = f ι X α {\displaystyle \iota _{X}(f\alpha )=f\iota _{X}\alpha } for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} === Hodge star properties === ⋆ ( λ 1 α + λ 2 β ) = λ 1 ( ⋆ α ) + λ 2 ( ⋆ β 
) {\displaystyle {\star }(\lambda _{1}\alpha +\lambda _{2}\beta )=\lambda _{1}({\star }\alpha )+\lambda _{2}({\star }\beta )} for λ 1 , λ 2 ∈ R {\displaystyle \lambda _{1},\lambda _{2}\in \mathbb {R} } ( linearity ) ⋆ ⋆ α = s ( − 1 ) k ( n − k ) α {\displaystyle {\star }{\star }\alpha =s(-1)^{k(n-k)}\alpha } for α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , n = dim ⁡ ( M ) {\displaystyle n=\dim(M)} , and s = sign ⁡ ( g ) {\displaystyle s=\operatorname {sign} (g)} the sign of the metric ⋆ ( − 1 ) = s ( − 1 ) k ( n − k ) ⋆ {\displaystyle {\star }^{(-1)}=s(-1)^{k(n-k)}{\star }} ( inversion ) ⋆ ( f α ) = f ( ⋆ α ) {\displaystyle {\star }(f\alpha )=f({\star }\alpha )} for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} ( commutative with 0 {\displaystyle 0} -forms ) ⟨ ⟨ α , α ⟩ ⟩ = ⟨ ⟨ ⋆ α , ⋆ α ⟩ ⟩ {\displaystyle \langle \!\langle \alpha ,\alpha \rangle \!\rangle =\langle \!\langle {\star }\alpha ,{\star }\alpha \rangle \!\rangle } for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} ( Hodge star preserves 1 {\displaystyle 1} -form norm ) ⋆ 1 = d e t {\displaystyle {\star }\mathbf {1} =\mathbf {det} } ( Hodge dual of constant function 1 is the volume form ) === Co-differential operator properties === δ ∘ δ = 0 {\displaystyle \delta \circ \delta =0} ( nilpotent ) ⋆ δ = ( − 1 ) k d ⋆ {\displaystyle {\star }\delta =(-1)^{k}d{\star }} and ⋆ d = ( − 1 ) k + 1 δ ⋆ {\displaystyle {\star }d=(-1)^{k+1}\delta {\star }} ( Hodge adjoint to d {\displaystyle d} ) ⟨ ⟨ d α , β ⟩ ⟩ = ⟨ ⟨ α , δ β ⟩ ⟩ {\displaystyle \langle \!\langle d\alpha ,\beta \rangle \!\rangle =\langle \!\langle \alpha ,\delta \beta \rangle \!\rangle } if ∂ M = 0 {\displaystyle \partial M=0} ( δ {\displaystyle \delta } adjoint to d {\displaystyle d} ) In general, ∫ M d α ∧ ⋆ β = ∫ ∂ M α ∧ ⋆ β + ∫ M α ∧ ⋆ δ β {\displaystyle \int _{M}d\alpha \wedge \star \beta =\int _{\partial M}\alpha \wedge \star \beta +\int _{M}\alpha \wedge \star \delta \beta } δ f = 0 {\displaystyle \delta f=0} for f ∈ Ω 0 ( 
M ) {\displaystyle f\in \Omega ^{0}(M)} === Lie derivative properties === d ∘ L X = L X ∘ d {\displaystyle d\circ {\mathcal {L}}_{X}={\mathcal {L}}_{X}\circ d} ( commutative with d {\displaystyle d} ) ι X ∘ L X = L X ∘ ι X {\displaystyle \iota _{X}\circ {\mathcal {L}}_{X}={\mathcal {L}}_{X}\circ \iota _{X}} ( commutative with ι X {\displaystyle \iota _{X}} ) L X ( ι Y α ) = ι [ X , Y ] α + ι Y L X α {\displaystyle {\mathcal {L}}_{X}(\iota _{Y}\alpha )=\iota _{[X,Y]}\alpha +\iota _{Y}{\mathcal {L}}_{X}\alpha } L X ( α ∧ β ) = ( L X α ) ∧ β + α ∧ ( L X β ) {\displaystyle {\mathcal {L}}_{X}(\alpha \wedge \beta )=({\mathcal {L}}_{X}\alpha )\wedge \beta +\alpha \wedge ({\mathcal {L}}_{X}\beta )} ( Leibniz rule ) == Exterior calculus identities == ι X ( ⋆ 1 ) = ⋆ X ♭ {\displaystyle \iota _{X}({\star }\mathbf {1} )={\star }X^{\flat }} ι X ( ⋆ α ) = ( − 1 ) k ⋆ ( X ♭ ∧ α ) {\displaystyle \iota _{X}({\star }\alpha )=(-1)^{k}{\star }(X^{\flat }\wedge \alpha )} if α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} ι X ( ϕ ∗ α ) = ϕ ∗ ( ι d ϕ ( X ) α ) {\displaystyle \iota _{X}(\phi ^{*}\alpha )=\phi ^{*}(\iota _{d\phi (X)}\alpha )} ν , μ ∈ Ω n ( M ) , μ non-zero ⇒ ∃ f ∈ Ω 0 ( M ) : ν = f μ {\displaystyle \nu ,\mu \in \Omega ^{n}(M),\mu {\text{ non-zero }}\ \Rightarrow \ \exists \ f\in \Omega ^{0}(M):\ \nu =f\mu } X ♭ ∧ ⋆ Y ♭ = g ( X , Y ) ( ⋆ 1 ) {\displaystyle X^{\flat }\wedge {\star }Y^{\flat }=g(X,Y)({\star }\mathbf {1} )} ( bilinear form ) [ X , [ Y , Z ] ] + [ Y , [ Z , X ] ] + [ Z , [ X , Y ] ] = 0 {\displaystyle [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0} ( Jacobi identity ) === Dimensions === If n = dim ⁡ M {\displaystyle n=\dim M} dim ⁡ Ω k ( M ) = ( n k ) {\displaystyle \dim \Omega ^{k}(M)={\binom {n}{k}}} for 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} dim ⁡ Ω k ( M ) = 0 {\displaystyle \dim \Omega ^{k}(M)=0} for k < 0 , k > n {\displaystyle k<0,\ k>n} If X 1 , … , X n ∈ Γ ( T M ) {\displaystyle X_{1},\ldots ,X_{n}\in \Gamma (TM)} is a basis, then a basis of Ω k ( M ) 
{\displaystyle \Omega ^{k}(M)} is { X σ ( 1 ) ♭ ∧ … ∧ X σ ( k ) ♭ : σ ∈ S ( k , n ) } {\displaystyle \{X_{\sigma (1)}^{\flat }\wedge \ldots \wedge X_{\sigma (k)}^{\flat }\ :\ \sigma \in S(k,n)\}} === Exterior products === Let α , β , γ , α i ∈ Ω 1 ( M ) {\displaystyle \alpha ,\beta ,\gamma ,\alpha _{i}\in \Omega ^{1}(M)} and X , Y , Z , X i {\displaystyle X,Y,Z,X_{i}} be vector fields. α ( X ) = det [ α ( X ) ] {\displaystyle \alpha (X)=\det {\begin{bmatrix}\alpha (X)\\\end{bmatrix}}} ( α ∧ β ) ( X , Y ) = det [ α ( X ) α ( Y ) β ( X ) β ( Y ) ] {\displaystyle (\alpha \wedge \beta )(X,Y)=\det {\begin{bmatrix}\alpha (X)&\alpha (Y)\\\beta (X)&\beta (Y)\\\end{bmatrix}}} ( α ∧ β ∧ γ ) ( X , Y , Z ) = det [ α ( X ) α ( Y ) α ( Z ) β ( X ) β ( Y ) β ( Z ) γ ( X ) γ ( Y ) γ ( Z ) ] {\displaystyle (\alpha \wedge \beta \wedge \gamma )(X,Y,Z)=\det {\begin{bmatrix}\alpha (X)&\alpha (Y)&\alpha (Z)\\\beta (X)&\beta (Y)&\beta (Z)\\\gamma (X)&\gamma (Y)&\gamma (Z)\end{bmatrix}}} ( α 1 ∧ … ∧ α l ) ( X 1 , … , X l ) = det [ α 1 ( X 1 ) α 1 ( X 2 ) … α 1 ( X l ) α 2 ( X 1 ) α 2 ( X 2 ) … α 2 ( X l ) ⋮ ⋮ ⋱ ⋮ α l ( X 1 ) α l ( X 2 ) … α l ( X l ) ] {\displaystyle (\alpha _{1}\wedge \ldots \wedge \alpha _{l})(X_{1},\ldots ,X_{l})=\det {\begin{bmatrix}\alpha _{1}(X_{1})&\alpha _{1}(X_{2})&\dots &\alpha _{1}(X_{l})\\\alpha _{2}(X_{1})&\alpha _{2}(X_{2})&\dots &\alpha _{2}(X_{l})\\\vdots &\vdots &\ddots &\vdots \\\alpha _{l}(X_{1})&\alpha _{l}(X_{2})&\dots &\alpha _{l}(X_{l})\end{bmatrix}}} === Projection and rejection === ( − 1 ) k ι X ⋆ α = ⋆ ( X ♭ ∧ α ) {\displaystyle (-1)^{k}\iota _{X}{\star }\alpha ={\star }(X^{\flat }\wedge \alpha )} ( interior product ι X ⋆ {\displaystyle \iota _{X}{\star }} dual to wedge X ♭ ∧ {\displaystyle X^{\flat }\wedge } ) ( ι X α ) ∧ ⋆ β = α ∧ ⋆ ( X ♭ ∧ β ) {\displaystyle (\iota _{X}\alpha )\wedge {\star }\beta =\alpha \wedge {\star }(X^{\flat }\wedge \beta )} for α ∈ Ω k + 1 ( M ) , β ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k+1}(M),\beta \in \Omega 
^{k}(M)} If | X | = 1 , α ∈ Ω k ( M ) {\displaystyle |X|=1,\ \alpha \in \Omega ^{k}(M)} , then ι X ∘ ( X ♭ ∧ ) : Ω k ( M ) → Ω k ( M ) {\displaystyle \iota _{X}\circ (X^{\flat }\wedge ):\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is the projection of α {\displaystyle \alpha } onto the orthogonal complement of X {\displaystyle X} . ( X ♭ ∧ ) ∘ ι X : Ω k ( M ) → Ω k ( M ) {\displaystyle (X^{\flat }\wedge )\circ \iota _{X}:\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is the rejection of α {\displaystyle \alpha } , the remainder of the projection. thus ι X ∘ ( X ♭ ∧ ) + ( X ♭ ∧ ) ∘ ι X = id {\displaystyle \iota _{X}\circ (X^{\flat }\wedge )+(X^{\flat }\wedge )\circ \iota _{X}={\text{id}}} ( projection–rejection decomposition ) Given the boundary ∂ M {\displaystyle \partial M} with unit normal vector N {\displaystyle N} t := ι N ∘ ( N ♭ ∧ ) {\displaystyle \mathbf {t} :=\iota _{N}\circ (N^{\flat }\wedge )} extracts the tangential component of the boundary. n := ( id − t ) {\displaystyle \mathbf {n} :=({\text{id}}-\mathbf {t} )} extracts the normal component of the boundary. 
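The projection–rejection decomposition above can be checked pointwise. A minimal sketch on R³ for a 1-form and a unit vector X (the particular form, vectors, and names are illustrative):

```python
import math

# For |X| = 1 and a 1-form alpha on R^3,
#   iota_X(X_flat ∧ alpha) + X_flat ∧ (iota_X alpha) = alpha.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X = [1 / math.sqrt(3)] * 3                     # unit vector
alpha = lambda v: 2 * v[0] - v[1] + 5 * v[2]   # a 1-form

def proj(alpha, X):
    # iota_X (X_flat ∧ alpha): component of alpha orthogonal to X
    return lambda Y: alpha(Y) - dot(X, Y) * alpha(X)

def rej(alpha, X):
    # (X_flat ∧) iota_X alpha: component of alpha along X
    return lambda Y: alpha(X) * dot(X, Y)

Y = [0.4, -1.0, 2.5]
print(proj(alpha, X)(Y) + rej(alpha, X)(Y) - alpha(Y))  # ≈ 0
```

The two closed-form expressions follow from expanding ι_X(X♭ ∧ α) with the Leibniz rule for the interior product and using X♭(X) = |X|² = 1.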
=== Sum expressions === ( d α ) ( X 0 , … , X k ) = ∑ 0 ≤ j ≤ k ( − 1 ) j d ( α ( X 0 , … , X ^ j , … , X k ) ) ( X j ) + ∑ 0 ≤ i < j ≤ k ( − 1 ) i + j α ( [ X i , X j ] , X 0 , … , X ^ i , … , X ^ j , … , X k ) {\displaystyle (d\alpha )(X_{0},\ldots ,X_{k})=\sum _{0\leq j\leq k}(-1)^{j}d(\alpha (X_{0},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}))(X_{j})+\sum _{0\leq i<j\leq k}(-1)^{i+j}\alpha ([X_{i},X_{j}],X_{0},\ldots ,{\hat {X}}_{i},\ldots ,{\hat {X}}_{j},\ldots ,X_{k})} ( d α ) ( X 1 , … , X k ) = ∑ i = 1 k ( − 1 ) i + 1 ( ∇ X i α ) ( X 1 , … , X ^ i , … , X k ) {\displaystyle (d\alpha )(X_{1},\ldots ,X_{k})=\sum _{i=1}^{k}(-1)^{i+1}(\nabla _{X_{i}}\alpha )(X_{1},\ldots ,{\hat {X}}_{i},\ldots ,X_{k})} ( δ α ) ( X 1 , … , X k − 1 ) = − ∑ i = 1 n ( ι E i ( ∇ E i α ) ) ( X 1 , … , X k − 1 ) {\displaystyle (\delta \alpha )(X_{1},\ldots ,X_{k-1})=-\sum _{i=1}^{n}(\iota _{E_{i}}(\nabla _{E_{i}}\alpha ))(X_{1},\ldots ,X_{k-1})} given a positively oriented orthonormal frame E 1 , … , E n {\displaystyle E_{1},\ldots ,E_{n}} . ( L Y α ) ( X 1 , … , X k ) = ( ∇ Y α ) ( X 1 , … , X k ) − ∑ i = 1 k α ( X 1 , … , ∇ X i Y , … , X k ) {\displaystyle ({\mathcal {L}}_{Y}\alpha )(X_{1},\ldots ,X_{k})=(\nabla _{Y}\alpha )(X_{1},\ldots ,X_{k})-\sum _{i=1}^{k}\alpha (X_{1},\ldots ,\nabla _{X_{i}}Y,\ldots ,X_{k})} === Hodge decomposition === If ∂ M = ∅ {\displaystyle \partial M=\emptyset } , ω ∈ Ω k ( M ) ⇒ ∃ α ∈ Ω k − 1 , β ∈ Ω k + 1 , γ ∈ Ω k ( M ) , d γ = 0 , δ γ = 0 {\displaystyle \omega \in \Omega ^{k}(M)\Rightarrow \exists \alpha \in \Omega ^{k-1},\ \beta \in \Omega ^{k+1},\ \gamma \in \Omega ^{k}(M),\ d\gamma =0,\ \delta \gamma =0} such that ω = d α + δ β + γ {\displaystyle \omega =d\alpha +\delta \beta +\gamma } === Poincaré lemma === If a boundaryless manifold M {\displaystyle M} has trivial cohomology H k ( M ) = { 0 } {\displaystyle H^{k}(M)=\{0\}} , then any closed ω ∈ Ω k ( M ) {\displaystyle \omega \in \Omega ^{k}(M)} is exact.
This is the case if M is contractible. == Relations to vector calculus == === Identities in Euclidean 3-space === Let Euclidean metric g ( X , Y ) := ⟨ X , Y ⟩ = X ⋅ Y {\displaystyle g(X,Y):=\langle X,Y\rangle =X\cdot Y} . We use ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) {\displaystyle \nabla =\left({\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right)} differential operator R 3 {\displaystyle \mathbb {R} ^{3}} ι X α = g ( X , α ♯ ) = X ⋅ α ♯ {\displaystyle \iota _{X}\alpha =g(X,\alpha ^{\sharp })=X\cdot \alpha ^{\sharp }} for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} . d e t ( X , Y , Z ) = ⟨ X , Y × Z ⟩ = ⟨ X × Y , Z ⟩ {\displaystyle \mathbf {det} (X,Y,Z)=\langle X,Y\times Z\rangle =\langle X\times Y,Z\rangle } ( scalar triple product ) X × Y = ( ⋆ ( X ♭ ∧ Y ♭ ) ) ♯ {\displaystyle X\times Y=({\star }(X^{\flat }\wedge Y^{\flat }))^{\sharp }} ( cross product ) ι X α = − ( X × A ) ♭ {\displaystyle \iota _{X}\alpha =-(X\times A)^{\flat }} if α ∈ Ω 2 ( M ) , A = ( ⋆ α ) ♯ {\displaystyle \alpha \in \Omega ^{2}(M),\ A=({\star }\alpha )^{\sharp }} X ⋅ Y = ⋆ ( X ♭ ∧ ⋆ Y ♭ ) {\displaystyle X\cdot Y={\star }(X^{\flat }\wedge {\star }Y^{\flat })} ( scalar product ) ∇ f = ( d f ) ♯ {\displaystyle \nabla f=(df)^{\sharp }} ( gradient ) X ⋅ ∇ f = d f ( X ) {\displaystyle X\cdot \nabla f=df(X)} ( directional derivative ) ∇ ⋅ X = ⋆ d ⋆ X ♭ = − δ X ♭ {\displaystyle \nabla \cdot X={\star }d{\star }X^{\flat }=-\delta X^{\flat }} ( divergence ) ∇ × X = ( ⋆ d X ♭ ) ♯ {\displaystyle \nabla \times X=({\star }dX^{\flat })^{\sharp }} ( curl ) ⟨ X , N ⟩ σ = ⋆ X ♭ {\displaystyle \langle X,N\rangle \sigma ={\star }X^{\flat }} where N {\displaystyle N} is the unit normal vector of ∂ M {\displaystyle \partial M} and σ = ι N d e t {\displaystyle \sigma =\iota _{N}\mathbf {det} } is the area form on ∂ M {\displaystyle \partial M} . 
∫ Σ d ⋆ X ♭ = ∫ ∂ Σ ⋆ X ♭ = ∫ ∂ Σ ⟨ X , N ⟩ σ {\displaystyle \int _{\Sigma }d{\star }X^{\flat }=\int _{\partial \Sigma }{\star }X^{\flat }=\int _{\partial \Sigma }\langle X,N\rangle \sigma } ( divergence theorem ) === Lie derivatives === L X f = X ⋅ ∇ f {\displaystyle {\mathcal {L}}_{X}f=X\cdot \nabla f} ( 0 {\displaystyle 0} -forms ) L X α = ( ∇ X α ♯ ) ♭ + g ( α ♯ , ∇ X ) {\displaystyle {\mathcal {L}}_{X}\alpha =(\nabla _{X}\alpha ^{\sharp })^{\flat }+g(\alpha ^{\sharp },\nabla X)} ( 1 {\displaystyle 1} -forms ) ⋆ L X β = ( ∇ X B − ∇ B X + ( div X ) B ) ♭ {\displaystyle {\star }{\mathcal {L}}_{X}\beta =\left(\nabla _{X}B-\nabla _{B}X+({\text{div}}X)B\right)^{\flat }} if B = ( ⋆ β ) ♯ {\displaystyle B=({\star }\beta )^{\sharp }} ( 2 {\displaystyle 2} -forms on 3 {\displaystyle 3} -manifolds ) ⋆ L X ρ = d q ( X ) + ( div X ) q {\displaystyle {\star }{\mathcal {L}}_{X}\rho =dq(X)+({\text{div}}X)q} if ρ = ⋆ q ∈ Ω 0 ( M ) {\displaystyle \rho ={\star }q\in \Omega ^{0}(M)} ( n {\displaystyle n} -forms ) L X ( d e t ) = ( div ( X ) ) d e t {\displaystyle {\mathcal {L}}_{X}(\mathbf {det} )=({\text{div}}(X))\mathbf {det} } == References ==
Wikipedia:Exterior derivative#0
On a differentiable manifold, the exterior derivative extends the concept of the differential of a function to differential forms of higher degree. The exterior derivative was first described in its current form by Élie Cartan in 1899. The resulting calculus, known as exterior calculus, allows for a natural, metric-independent generalization of Stokes' theorem, Gauss's theorem, and Green's theorem from vector calculus. If a differential k-form is thought of as measuring the flux through an infinitesimal k-parallelotope at each point of the manifold, then its exterior derivative can be thought of as measuring the net flux through the boundary of a (k + 1)-parallelotope at each point. == Definition == The exterior derivative of a differential form of degree k (also differential k-form, or just k-form for brevity here) is a differential form of degree k + 1. If f is a smooth function (a 0-form), then the exterior derivative of f is the differential of f . That is, df is the unique 1-form such that for every smooth vector field X, df (X) = dX f , where dX f is the directional derivative of f in the direction of X. The exterior product of differential forms (denoted with the same symbol ∧) is defined as their pointwise exterior product. There are a variety of equivalent definitions of the exterior derivative of a general k-form. 
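The defining property df(X) = dX f can be checked directly in coordinates. The sympy sketch below (the function f and the field X = (a, b) are illustrative choices) compares the pairing of df with a constant vector field against the derivative of f along the line t ↦ p + tX:

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
f = x**2 * y + sp.sin(y)

# df = (∂f/∂x) dx + (∂f/∂y) dy, stored as its coefficient tuple
df = (sp.diff(f, x), sp.diff(f, y))

# directional derivative of f along X = (a, b): d/dt f(p + tX) at t = 0
along = f.subs([(x, x + t*a), (y, y + t*b)])
d_X_f = sp.diff(along, t).subs(t, 0)

# the pairing df(X) reproduces the directional derivative
assert sp.simplify(a*df[0] + b*df[1] - d_X_f) == 0
```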
=== In terms of axioms === The exterior derivative is defined to be the unique ℝ-linear mapping from k-forms to (k + 1)-forms that has the following properties: The operator d {\displaystyle d} applied to the 0 {\displaystyle 0} -form f {\displaystyle f} is the differential d f {\displaystyle df} of f {\displaystyle f} If α {\displaystyle \alpha } and β {\displaystyle \beta } are two k {\displaystyle k} -forms, then d ( a α + b β ) = a d α + b d β {\displaystyle d(a\alpha +b\beta )=ad\alpha +bd\beta } for any field elements a , b {\displaystyle a,b} If α {\displaystyle \alpha } is a k {\displaystyle k} -form and β {\displaystyle \beta } is an l {\displaystyle l} -form, then d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta } (graded product rule) If α {\displaystyle \alpha } is a k {\displaystyle k} -form, then d ( d α ) = 0 {\displaystyle d(d\alpha )=0} (Poincaré's lemma) If f {\displaystyle f} and g {\displaystyle g} are two 0 {\displaystyle 0} -forms (functions), then from the third property for the quantity d ( f ∧ g ) {\displaystyle d(f\wedge g)} , which is simply d ( f g ) {\displaystyle d(fg)} , the familiar product rule d ( f g ) = g d f + f d g {\displaystyle d(fg)=g\,df+f\,dg} is recovered. The third property can be generalised, for instance, if α {\displaystyle \alpha } is a k {\displaystyle k} -form, β {\displaystyle \beta } is an l {\displaystyle l} -form and γ {\displaystyle \gamma } is an m {\displaystyle m} -form, then d ( α ∧ β ∧ γ ) = d α ∧ β ∧ γ + ( − 1 ) k α ∧ d β ∧ γ + ( − 1 ) k + l α ∧ β ∧ d γ . {\displaystyle d(\alpha \wedge \beta \wedge \gamma )=d\alpha \wedge \beta \wedge \gamma +(-1)^{k}\alpha \wedge d\beta \wedge \gamma +(-1)^{k+l}\alpha \wedge \beta \wedge d\gamma .} === In terms of local coordinates === Alternatively, one can work entirely in a local coordinate system (x1, ..., xn). 
The coordinate differentials dx1, ..., dxn form a basis of the space of one-forms, each associated with a coordinate. Given a multi-index I = (i1, ..., ik) with 1 ≤ ip ≤ n for 1 ≤ p ≤ k (and denoting dxi1 ∧ ... ∧ dxik with dxI), the exterior derivative of a (simple) k-form φ = g d x I = g d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k {\displaystyle \varphi =g\,dx^{I}=g\,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}} over ℝn is defined as d φ = d g ∧ d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k = ∂ g ∂ x j d x j ∧ d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k {\displaystyle d{\varphi }=dg\wedge dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}={\frac {\partial g}{\partial x^{j}}}\,dx^{j}\wedge \,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}} (using the Einstein summation convention). The definition of the exterior derivative is extended linearly to a general k-form (which is expressible as a linear combination of basic simple k {\displaystyle k} -forms) ω = f I d x I , {\displaystyle \omega =f_{I}\,dx^{I},} where each of the components of the multi-index I run over all the values in {1, ..., n}. Note that whenever j equals one of the components of the multi-index I then dxj ∧ dxI = 0 (see Exterior product). The definition of the exterior derivative in local coordinates follows from the preceding definition in terms of axioms. 
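The coordinate formula dω = (∂fI/∂xi) dxi ∧ dxI lends itself to a direct implementation. In the sketch below (a toy representation, not a library API) a k-form is stored as a dictionary from sorted index tuples to sympy coefficients, with the sign accounting for sorting dxi into position past the smaller indices of I:

```python
import sympy as sp

coords = sp.symbols('x1 x2 x3')

def ext_d(form):
    """Exterior derivative: d(f_I dx^I) = sum_i (∂f_I/∂x^i) dx^i ∧ dx^I."""
    result = {}
    for idx, coeff in form.items():
        for i, xi in enumerate(coords):
            if i in idx:                 # dx^i ∧ dx^I = 0 whenever i ∈ I
                continue
            new = tuple(sorted(idx + (i,)))
            sign = (-1) ** new.index(i)  # transpositions needed to sort dx^i in
            result[new] = result.get(new, 0) + sign * sp.diff(coeff, xi)
    return {k: sp.simplify(v) for k, v in result.items() if sp.simplify(v) != 0}

x1, x2, x3 = coords
omega = {(0,): x2, (1,): -x1}            # ω = x2 dx1 − x1 dx2
print(ext_d(omega))                      # dω = −2 dx1 ∧ dx2
print(ext_d(ext_d(omega)))               # empty dictionary: d² = 0
```

Applying ext_d twice returns the empty dictionary because mixed second partial derivatives cancel in pairs, illustrating d² = 0.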
Indeed, with the k-form φ as defined above, d φ = d ( g d x i 1 ∧ ⋯ ∧ d x i k ) = d g ∧ ( d x i 1 ∧ ⋯ ∧ d x i k ) + g d ( d x i 1 ∧ ⋯ ∧ d x i k ) = d g ∧ d x i 1 ∧ ⋯ ∧ d x i k + g ∑ p = 1 k ( − 1 ) p − 1 d x i 1 ∧ ⋯ ∧ d x i p − 1 ∧ d 2 x i p ∧ d x i p + 1 ∧ ⋯ ∧ d x i k = d g ∧ d x i 1 ∧ ⋯ ∧ d x i k = ∂ g ∂ x i d x i ∧ d x i 1 ∧ ⋯ ∧ d x i k {\displaystyle {\begin{aligned}d{\varphi }&=d\left(g\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\right)\\&=dg\wedge \left(dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\right)+g\,d\left(dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\right)\\&=dg\wedge dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}+g\sum _{p=1}^{k}(-1)^{p-1}\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{p-1}}\wedge d^{2}x^{i_{p}}\wedge dx^{i_{p+1}}\wedge \cdots \wedge dx^{i_{k}}\\&=dg\wedge dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\\&={\frac {\partial g}{\partial x^{i}}}\,dx^{i}\wedge dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\\\end{aligned}}} Here, we have interpreted g as a 0-form, and then applied the properties of the exterior derivative. This result extends directly to the general k-form ω as d ω = ∂ f I ∂ x i d x i ∧ d x I . {\displaystyle d\omega ={\frac {\partial f_{I}}{\partial x^{i}}}\,dx^{i}\wedge dx^{I}.} In particular, for a 1-form ω, the components of dω in local coordinates are ( d ω ) i j = ∂ i ω j − ∂ j ω i . {\displaystyle (d\omega )_{ij}=\partial _{i}\omega _{j}-\partial _{j}\omega _{i}.} Caution: There are two conventions regarding the meaning of d x i 1 ∧ ⋯ ∧ d x i k {\displaystyle dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}} . Most current authors have the convention that ( d x i 1 ∧ ⋯ ∧ d x i k ) ( ∂ ∂ x i 1 , … , ∂ ∂ x i k ) = 1. {\displaystyle \left(dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\right)\left({\frac {\partial }{\partial x^{i_{1}}}},\ldots ,{\frac {\partial }{\partial x^{i_{k}}}}\right)=1.} while in older texts like Kobayashi and Nomizu or Helgason ( d x i 1 ∧ ⋯ ∧ d x i k ) ( ∂ ∂ x i 1 , … , ∂ ∂ x i k ) = 1 k ! . 
{\displaystyle \left(dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\right)\left({\frac {\partial }{\partial x^{i_{1}}}},\ldots ,{\frac {\partial }{\partial x^{i_{k}}}}\right)={\frac {1}{k!}}.} === In terms of invariant formula === Alternatively, an explicit formula can be given for the exterior derivative of a k-form ω, when paired with k + 1 arbitrary smooth vector fields V0, V1, ..., Vk: d ω ( V 0 , … , V k ) = ∑ i ( − 1 ) i V i ( ω ( V 0 , … , V ^ i , … , V k ) ) + ∑ i < j ( − 1 ) i + j ω ( [ V i , V j ] , V 0 , … , V ^ i , … , V ^ j , … , V k ) {\displaystyle d\omega (V_{0},\ldots ,V_{k})=\sum _{i}(-1)^{i}V_{i}(\omega (V_{0},\ldots ,{\widehat {V}}_{i},\ldots ,V_{k}))+\sum _{i<j}(-1)^{i+j}\omega ([V_{i},V_{j}],V_{0},\ldots ,{\widehat {V}}_{i},\ldots ,{\widehat {V}}_{j},\ldots ,V_{k})} where [Vi, Vj] denotes the Lie bracket and a hat denotes the omission of that element: ω ( V 0 , … , V ^ i , … , V k ) = ω ( V 0 , … , V i − 1 , V i + 1 , … , V k ) . {\displaystyle \omega (V_{0},\ldots ,{\widehat {V}}_{i},\ldots ,V_{k})=\omega (V_{0},\ldots ,V_{i-1},V_{i+1},\ldots ,V_{k}).} In particular, when ω is a 1-form we have that dω(X, Y) = dX(ω(Y)) − dY(ω(X)) − ω([X, Y]). Note: With the conventions of e.g., Kobayashi–Nomizu and Helgason the formula differs by a factor of ⁠1/k + 1⁠: d ω ( V 0 , … , V k ) = 1 k + 1 ∑ i ( − 1 ) i V i ( ω ( V 0 , … , V ^ i , … , V k ) ) + 1 k + 1 ∑ i < j ( − 1 ) i + j ω ( [ V i , V j ] , V 0 , … , V ^ i , … , V ^ j , … , V k ) . {\displaystyle {\begin{aligned}d\omega (V_{0},\ldots ,V_{k})={}&{1 \over k+1}\sum _{i}(-1)^{i}\,V_{i}(\omega (V_{0},\ldots ,{\widehat {V}}_{i},\ldots ,V_{k}))\\&{}+{1 \over k+1}\sum _{i<j}(-1)^{i+j}\omega ([V_{i},V_{j}],V_{0},\ldots ,{\widehat {V}}_{i},\ldots ,{\widehat {V}}_{j},\ldots ,V_{k}).\end{aligned}}} == Examples == Example 1. Consider σ = u dx1 ∧ dx2 over a 1-form basis dx1, ..., dxn for a scalar field u. 
The exterior derivative is: d σ = d u ∧ d x 1 ∧ d x 2 = ( ∑ i = 1 n ∂ u ∂ x i d x i ) ∧ d x 1 ∧ d x 2 = ∑ i = 3 n ( ∂ u ∂ x i d x i ∧ d x 1 ∧ d x 2 ) {\displaystyle {\begin{aligned}d\sigma &=du\wedge dx^{1}\wedge dx^{2}\\&=\left(\sum _{i=1}^{n}{\frac {\partial u}{\partial x^{i}}}\,dx^{i}\right)\wedge dx^{1}\wedge dx^{2}\\&=\sum _{i=3}^{n}\left({\frac {\partial u}{\partial x^{i}}}\,dx^{i}\wedge dx^{1}\wedge dx^{2}\right)\end{aligned}}} The last formula, where summation starts at i = 3, follows easily from the properties of the exterior product. Namely, dxi ∧ dxi = 0. Example 2. Let σ = u dx + v dy be a 1-form defined over ℝ2. By applying the above formula to each term (consider x1 = x and x2 = y) we have the sum d σ = ( ∑ i = 1 2 ∂ u ∂ x i d x i ∧ d x ) + ( ∑ i = 1 2 ∂ v ∂ x i d x i ∧ d y ) = ( ∂ u ∂ x d x ∧ d x + ∂ u ∂ y d y ∧ d x ) + ( ∂ v ∂ x d x ∧ d y + ∂ v ∂ y d y ∧ d y ) = 0 − ∂ u ∂ y d x ∧ d y + ∂ v ∂ x d x ∧ d y + 0 = ( ∂ v ∂ x − ∂ u ∂ y ) d x ∧ d y {\displaystyle {\begin{aligned}d\sigma &=\left(\sum _{i=1}^{2}{\frac {\partial u}{\partial x^{i}}}dx^{i}\wedge dx\right)+\left(\sum _{i=1}^{2}{\frac {\partial v}{\partial x^{i}}}\,dx^{i}\wedge dy\right)\\&=\left({\frac {\partial {u}}{\partial {x}}}\,dx\wedge dx+{\frac {\partial {u}}{\partial {y}}}\,dy\wedge dx\right)+\left({\frac {\partial {v}}{\partial {x}}}\,dx\wedge dy+{\frac {\partial {v}}{\partial {y}}}\,dy\wedge dy\right)\\&=0-{\frac {\partial {u}}{\partial {y}}}\,dx\wedge dy+{\frac {\partial {v}}{\partial {x}}}\,dx\wedge dy+0\\&=\left({\frac {\partial {v}}{\partial {x}}}-{\frac {\partial {u}}{\partial {y}}}\right)\,dx\wedge dy\end{aligned}}} == Stokes' theorem on manifolds == If M is a compact smooth orientable n-dimensional manifold with boundary, and ω is an (n − 1)-form on M, then the generalized form of Stokes' theorem states that ∫ M d ω = ∫ ∂ M ω {\displaystyle \int _{M}d\omega =\int _{\partial {M}}\omega } Intuitively, if one thinks of M as being divided into infinitesimal regions, and one adds the 
flux through the boundaries of all the regions, the interior boundaries all cancel out, leaving the total flux through the boundary of M. == Further properties == === Closed and exact forms === A k-form ω is called closed if dω = 0; closed forms are the kernel of d. ω is called exact if ω = dα for some (k − 1)-form α; exact forms are the image of d. Because d2 = 0, every exact form is closed. The Poincaré lemma states that in a contractible region, the converse is true. === de Rham cohomology === Because the exterior derivative d has the property that d2 = 0, it can be used as the differential (coboundary) to define de Rham cohomology on a manifold. The k-th de Rham cohomology (group) is the vector space of closed k-forms modulo the exact k-forms; as noted in the previous section, the Poincaré lemma states that these vector spaces are trivial for a contractible region, for k > 0. For smooth manifolds, integration of forms gives a natural homomorphism from the de Rham cohomology to the singular cohomology over ℝ. The theorem of de Rham shows that this map is actually an isomorphism, a far-reaching generalization of the Poincaré lemma. As suggested by the generalized Stokes' theorem, the exterior derivative is the "dual" of the boundary map on singular simplices. === Naturality === The exterior derivative is natural in the technical sense: if f : M → N is a smooth map and Ωk is the contravariant smooth functor that assigns to each manifold the space of k-forms on the manifold, then the following diagram commutes, so that d( f∗ω) = f∗dω, where f∗ denotes the pullback of f. This follows from the fact that f∗ω(·) is, by definition, ω( f∗(·)), where f∗ here denotes the pushforward of f. Thus d is a natural transformation from Ωk to Ωk+1. == Exterior derivative in vector calculus == Most vector calculus operators are special cases of, or have close relationships to, the notion of exterior differentiation. === Gradient === A smooth function f : M → ℝ on a real differentiable manifold M is a 0-form. 
The exterior derivative of this 0-form is the 1-form df. When an inner product ⟨·,·⟩ is defined, the gradient ∇f of a function f is defined as the unique vector in V such that its inner product with any element of V is the directional derivative of f along the vector, that is such that ⟨ ∇ f , ⋅ ⟩ = d f = ∑ i = 1 n ∂ f ∂ x i d x i . {\displaystyle \langle \nabla f,\cdot \rangle =df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}.} That is, ∇ f = ( d f ) ♯ = ∑ i = 1 n ∂ f ∂ x i ( d x i ) ♯ , {\displaystyle \nabla f=(df)^{\sharp }=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,\left(dx^{i}\right)^{\sharp },} where ♯ denotes the musical isomorphism ♯ : V∗ → V mentioned earlier that is induced by the inner product. The 1-form df is a section of the cotangent bundle, that gives a local linear approximation to f in the cotangent space at each point. === Divergence === A vector field V = (v1, v2, ..., vn) on ℝn has a corresponding (n − 1)-form ω V = v 1 ( d x 2 ∧ ⋯ ∧ d x n ) − v 2 ( d x 1 ∧ d x 3 ∧ ⋯ ∧ d x n ) + ⋯ + ( − 1 ) n − 1 v n ( d x 1 ∧ ⋯ ∧ d x n − 1 ) = ∑ i = 1 n ( − 1 ) ( i − 1 ) v i ( d x 1 ∧ ⋯ ∧ d x i − 1 ∧ d x i ^ ∧ d x i + 1 ∧ ⋯ ∧ d x n ) {\displaystyle {\begin{aligned}\omega _{V}&=v_{1}\left(dx^{2}\wedge \cdots \wedge dx^{n}\right)-v_{2}\left(dx^{1}\wedge dx^{3}\wedge \cdots \wedge dx^{n}\right)+\cdots +(-1)^{n-1}v_{n}\left(dx^{1}\wedge \cdots \wedge dx^{n-1}\right)\\&=\sum _{i=1}^{n}(-1)^{(i-1)}v_{i}\left(dx^{1}\wedge \cdots \wedge dx^{i-1}\wedge {\widehat {dx^{i}}}\wedge dx^{i+1}\wedge \cdots \wedge dx^{n}\right)\end{aligned}}} where d x i ^ {\displaystyle {\widehat {dx^{i}}}} denotes the omission of that element. (For instance, when n = 3, i.e. in three-dimensional space, the 2-form ωV is locally the scalar triple product with V.) The integral of ωV over a hypersurface is the flux of V over that hypersurface. The exterior derivative of this (n − 1)-form is the n-form d ω V = div ⁡ V ( d x 1 ∧ d x 2 ∧ ⋯ ∧ d x n ) . 
{\displaystyle d\omega _{V}=\operatorname {div} V\left(dx^{1}\wedge dx^{2}\wedge \cdots \wedge dx^{n}\right).} === Curl === A vector field V on ℝn also has a corresponding 1-form η V = v 1 d x 1 + v 2 d x 2 + ⋯ + v n d x n . {\displaystyle \eta _{V}=v_{1}\,dx^{1}+v_{2}\,dx^{2}+\cdots +v_{n}\,dx^{n}.} Locally, ηV is the dot product with V. The integral of ηV along a path is the work done against −V along that path. When n = 3, in three-dimensional space, the exterior derivative of the 1-form ηV is the 2-form d η V = ω curl ⁡ V . {\displaystyle d\eta _{V}=\omega _{\operatorname {curl} V}.} === Invariant formulations of operators in vector calculus === The standard vector calculus operators can be generalized for any pseudo-Riemannian manifold, and written in coordinate-free notation as follows: grad ⁡ f ≡ ∇ f = ( d f ) ♯ div ⁡ F ≡ ∇ ⋅ F = ⋆ d ⋆ ( F ♭ ) curl ⁡ F ≡ ∇ × F = ( ⋆ d ( F ♭ ) ) ♯ Δ f ≡ ∇ 2 f = ⋆ d ⋆ d f ∇ 2 F = ( d ⋆ d ⋆ ( F ♭ ) − ⋆ d ⋆ d ( F ♭ ) ) ♯ , {\displaystyle {\begin{array}{rcccl}\operatorname {grad} f&\equiv &\nabla f&=&\left(df\right)^{\sharp }\\\operatorname {div} F&\equiv &\nabla \cdot F&=&{\star d{\star }{\mathord {\left(F^{\flat }\right)}}}\\\operatorname {curl} F&\equiv &\nabla \times F&=&\left({\star }d{\mathord {\left(F^{\flat }\right)}}\right)^{\sharp }\\\Delta f&\equiv &\nabla ^{2}f&=&{\star }d{\star }df\\&&\nabla ^{2}F&=&\left(d{\star }d{\star }{\mathord {\left(F^{\flat }\right)}}-{\star }d{\star }d{\mathord {\left(F^{\flat }\right)}}\right)^{\sharp },\\\end{array}}} where ⋆ is the Hodge star operator, ♭ and ♯ are the musical isomorphisms, f is a scalar field and F is a vector field. Note that the expression for curl requires ♯ to act on ⋆d(F♭), which is a form of degree n − 2. A natural generalization of ♯ to k-forms of arbitrary degree allows this expression to make sense for any n. == See also == == Notes == == References == Cartan, Élie (1899). "Sur certaines expressions différentielles et le problème de Pfaff". 
Annales Scientifiques de l'École Normale Supérieure. Série 3 (in French). 16. Paris: Gauthier-Villars: 239–332. doi:10.24033/asens.467. ISSN 0012-9593. JFM 30.0313.04. Retrieved 2 Feb 2016. Conlon, Lawrence (2001). Differentiable manifolds. Basel, Switzerland: Birkhäuser. p. 239. ISBN 0-8176-4134-3. Darling, R. W. R. (1994). Differential forms and connections. Cambridge, UK: Cambridge University Press. p. 35. ISBN 0-521-46800-0. Flanders, Harley (1989). Differential forms with applications to the physical sciences. New York: Dover Publications. p. 20. ISBN 0-486-66169-5. Loomis, Lynn H.; Sternberg, Shlomo (1989). Advanced Calculus. Boston: Jones and Bartlett. pp. 304–473 (ch. 7–11). ISBN 0-486-66169-5. Ramanan, S. (2005). Global calculus. Providence, Rhode Island: American Mathematical Society. p. 54. ISBN 0-8218-3702-8. Spivak, Michael (1971). Calculus on Manifolds. Boulder, Colorado: Westview Press. ISBN 9780805390216. Spivak, Michael (1970), A Comprehensive Introduction to Differential Geometry, vol. 1, Boston, MA: Publish or Perish, Inc., ISBN 0-914098-00-4. Warner, Frank W. (1983), Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, vol. 94, Springer, ISBN 0-387-90894-3. == External links == Archived at Ghostarchive and the Wayback Machine: "The derivative isn't what you think it is". Aleph Zero. November 3, 2020 – via YouTube.
Wikipedia:Exterior dimension#0
In geometry, exterior dimension is a type of dimension that can be used to characterize the scaling behavior of "fat fractals". A fat fractal is defined to be a subset of Euclidean space such that, for every point p {\displaystyle p} of the set and every sufficiently small number ϵ {\displaystyle \epsilon } , the ball of radius ϵ {\displaystyle \epsilon } centered at p {\displaystyle p} contains both a nonzero Lebesgue measure of points belonging to the fractal, and a nonzero Lebesgue measure of points that do not belong to the fractal. For such a set, the Hausdorff dimension is the same as that of the ambient space. The Hausdorff dimension of a set S {\displaystyle S} can be computed by "fattening" S {\displaystyle S} (taking its Minkowski sum with a ball of radius ϵ {\displaystyle \epsilon } ), and examining how the volume of the resulting fattened set scales with ϵ {\displaystyle \epsilon } , in the limit as ϵ {\displaystyle \epsilon } tends to zero. The exterior dimension is computed in the same way but looking at the volume of the difference set obtained by subtracting the original set S {\displaystyle S} from the fattened set. In the paper introducing exterior dimension, it was claimed that it would be applicable to networks of blood vessels. However, inconsistent behavior of these vessels in different parts of the body, the relatively low number of levels of branching, and the slow convergence of methods based on exterior dimension cast into doubt the practical applicability of this parameter. == References ==
Wikipedia:External ray#0
An external ray is a curve that runs from infinity toward a Julia or Mandelbrot set. Although this curve is only rarely a half-line (ray), it is called a ray because it is an image of a ray. External rays are used in complex analysis, particularly in complex dynamics and geometric function theory. == History == External rays were introduced in Douady and Hubbard's study of the Mandelbrot set. == Types == Criteria for classification: plane (parameter or dynamic), bifurcation of dynamic rays, stretching, landing. === plane === External rays of (connected) Julia sets on the dynamical plane are often called dynamic rays. External rays of the Mandelbrot set (and similar one-dimensional connectedness loci) on the parameter plane are called parameter rays. === bifurcation === A dynamic ray can be: bifurcated = branched = broken, or smooth = unbranched = unbroken. When the filled Julia set is connected, there are no branching external rays. When the Julia set is not connected, some external rays branch. === stretching === Stretching rays were introduced by Branner and Hubbard: "The notion of stretching rays is a generalization of that of external rays for the Mandelbrot set to higher degree polynomials." === landing === Every rational parameter ray of the Mandelbrot set lands at a single parameter. == Maps == === Polynomials === ==== Dynamical plane = z-plane ==== External rays are associated to a compact, full, connected subset K {\displaystyle K\,} of the complex plane as: the images of radial rays under the Riemann map of the complement of K {\displaystyle K\,} ; the gradient lines of the Green's function of K {\displaystyle K\,} ; field lines of the Douady–Hubbard potential; an integral curve of the gradient vector field of the Green's function on a neighborhood of infinity. External rays together with equipotential lines of the Douady–Hubbard potential (level sets) form a new polar coordinate system for the exterior (complement) of K {\displaystyle K\,} . 
In other words the external rays define vertical foliation which is orthogonal to horizontal foliation defined by the level sets of potential. ===== Uniformization ===== Let Ψ c {\displaystyle \Psi _{c}\,} be the conformal isomorphism from the complement (exterior) of the closed unit disk D ¯ {\displaystyle {\overline {\mathbb {D} }}} to the complement of the filled Julia set K c {\displaystyle \ K_{c}} . Ψ c : C ^ ∖ D ¯ → C ^ ∖ K c {\displaystyle \Psi _{c}:{\hat {\mathbb {C} }}\setminus {\overline {\mathbb {D} }}\to {\hat {\mathbb {C} }}\setminus K_{c}} where C ^ {\displaystyle {\hat {\mathbb {C} }}} denotes the extended complex plane. Let Φ c = Ψ c − 1 {\displaystyle \Phi _{c}=\Psi _{c}^{-1}\,} denote the Boettcher map. Φ c {\displaystyle \Phi _{c}\,} is a uniformizing map of the basin of attraction of infinity, because it conjugates f c {\displaystyle f_{c}} on the complement of the filled Julia set K c {\displaystyle K_{c}} to f 0 ( z ) = z 2 {\displaystyle f_{0}(z)=z^{2}} on the complement of the unit disk: Φ c : C ^ ∖ K c → C ^ ∖ D ¯ z ↦ lim n → ∞ ( f c n ( z ) ) 2 − n {\displaystyle {\begin{aligned}\Phi _{c}:{\hat {\mathbb {C} }}\setminus K_{c}&\to {\hat {\mathbb {C} }}\setminus {\overline {\mathbb {D} }}\\z&\mapsto \lim _{n\to \infty }(f_{c}^{n}(z))^{2^{-n}}\end{aligned}}} and Φ c ∘ f c ∘ Φ c − 1 = f 0 {\displaystyle \Phi _{c}\circ f_{c}\circ \Phi _{c}^{-1}=f_{0}} A value w = Φ c ( z ) {\displaystyle w=\Phi _{c}(z)} is called the Boettcher coordinate for a point z ∈ C ^ ∖ K c {\displaystyle z\in {\hat {\mathbb {C} }}\setminus K_{c}} . 
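In practice the modulus of the Boettcher coordinate is reached through the Green's function Gc(z) = limn 2−n log|fcn(z)| = log|Φc(z)|, which stabilizes quickly once the orbit escapes. A minimal Python sketch (the escape radius and iteration cap are arbitrary numerical choices):

```python
import math

def green(z, c, max_iter=200, escape=1e10):
    """Green's function G_c(z) = lim 2^(-n) log|f_c^n(z)| for f_c(z) = z**2 + c."""
    for n in range(max_iter):
        if abs(z) > escape:
            return math.log(abs(z)) / 2**n
        z = z*z + c
    return 0.0  # orbit stayed bounded: z is (numerically) in the filled Julia set

c = -0.123 + 0.745j          # near the Douady-rabbit parameter; K_c is connected
print(green(3.0 + 0.0j, c))  # approximately log 3, since G_c(z) ~ log|z| far from K_c
print(green(0.0 + 0.0j, c))  # 0.0: the critical orbit is bounded
```

Far from Kc the potential approaches log|z|, while points of the filled Julia set never escape and get potential 0, consistent with Gc being the Green's function of the complement of Kc.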
===== Formal definition of dynamic ray ===== The external ray of angle θ {\displaystyle \theta \,} noted as R θ K {\displaystyle {\mathcal {R}}_{\theta }^{K}} is: the image under Ψ c {\displaystyle \Psi _{c}\,} of straight lines R θ = { ( r ⋅ e 2 π i θ ) : r > 1 } {\displaystyle {\mathcal {R}}_{\theta }=\{\left(r\cdot e^{2\pi i\theta }\right):\ r>1\}} R θ K = Ψ c ( R θ ) {\displaystyle {\mathcal {R}}_{\theta }^{K}=\Psi _{c}({\mathcal {R}}_{\theta })} set of points of exterior of filled-in Julia set with the same external angle θ {\displaystyle \theta } R θ K = { z ∈ C ^ ∖ K c : arg ⁡ ( Φ c ( z ) ) = θ } {\displaystyle {\mathcal {R}}_{\theta }^{K}=\{z\in {\hat {\mathbb {C} }}\setminus K_{c}:\arg(\Phi _{c}(z))=\theta \}} ====== Properties ====== The external ray for a periodic angle θ {\displaystyle \theta \,} satisfies: f ( R θ K ) = R 2 θ K {\displaystyle f({\mathcal {R}}_{\theta }^{K})={\mathcal {R}}_{2\theta }^{K}} and its landing point γ f ( θ ) {\displaystyle \gamma _{f}(\theta )} satisfies: f ( γ f ( θ ) ) = γ f ( 2 θ ) {\displaystyle f(\gamma _{f}(\theta ))=\gamma _{f}(2\theta )} ==== Parameter plane = c-plane ==== "Parameter rays are simply the curves that run perpendicular to the equipotential curves of the M-set." ===== Uniformization ===== Let Ψ M {\displaystyle \Psi _{M}\,} be the mapping from the complement (exterior) of the closed unit disk D ¯ {\displaystyle {\overline {\mathbb {D} }}} to the complement of the Mandelbrot set M {\displaystyle \ M} . 
Ψ M : C ^ ∖ D ¯ → C ^ ∖ M {\displaystyle \Psi _{M}:\mathbb {\hat {C}} \setminus {\overline {\mathbb {D} }}\to \mathbb {\hat {C}} \setminus M} and Boettcher map (function) Φ M {\displaystyle \Phi _{M}\,} , which is uniformizing map of complement of Mandelbrot set, because it conjugates complement of the Mandelbrot set M {\displaystyle \ M} and the complement (exterior) of the closed unit disk Φ M : C ^ ∖ M → C ^ ∖ D ¯ {\displaystyle \Phi _{M}:\mathbb {\hat {C}} \setminus M\to \mathbb {\hat {C}} \setminus {\overline {\mathbb {D} }}} it can be normalized so that : Φ M ( c ) c → 1 a s c → ∞ {\displaystyle {\frac {\Phi _{M}(c)}{c}}\to 1\ as\ c\to \infty \,} where : C ^ {\displaystyle \mathbb {\hat {C}} } denotes the extended complex plane Jungreis function Ψ M {\displaystyle \Psi _{M}\,} is the inverse of uniformizing map : Ψ M = Φ M − 1 {\displaystyle \Psi _{M}=\Phi _{M}^{-1}\,} In the case of complex quadratic polynomial one can compute this map using Laurent series about infinity c = Ψ M ( w ) = w + ∑ m = 0 ∞ b m w − m = w − 1 2 + 1 8 w − 1 4 w 2 + 15 128 w 3 + . . . 
{\displaystyle c=\Psi _{M}(w)=w+\sum _{m=0}^{\infty }b_{m}w^{-m}=w-{\frac {1}{2}}+{\frac {1}{8w}}-{\frac {1}{4w^{2}}}+{\frac {15}{128w^{3}}}+...\,} where c ∈ C ^ ∖ M {\displaystyle c\in \mathbb {\hat {C}} \setminus M} w ∈ C ^ ∖ D ¯ {\displaystyle w\in \mathbb {\hat {C}} \setminus {\overline {\mathbb {D} }}} ===== Formal definition of parameter ray ===== The external ray of angle θ {\displaystyle \theta \,} is: the image under Ψ c {\displaystyle \Psi _{c}\,} of straight lines R θ = { ( r ∗ e 2 π i θ ) : r > 1 } {\displaystyle {\mathcal {R}}_{\theta }=\{\left(r*e^{2\pi i\theta }\right):\ r>1\}} R θ M = Ψ M ( R θ ) {\displaystyle {\mathcal {R}}_{\theta }^{M}=\Psi _{M}({\mathcal {R}}_{\theta })} set of points of exterior of Mandelbrot set with the same external angle θ {\displaystyle \theta } R θ M = { c ∈ C ^ ∖ M : arg ⁡ ( Φ M ( c ) ) = θ } {\displaystyle {\mathcal {R}}_{\theta }^{M}=\{c\in \mathbb {\hat {C}} \setminus M:\arg(\Phi _{M}(c))=\theta \}} ===== Definition of the Boettcher map ===== Douady and Hubbard define: Φ M ( c ) = d e f Φ c ( z = c ) {\displaystyle \Phi _{M}(c)\ {\overset {\underset {\mathrm {def} }{}}{=}}\ \Phi _{c}(z=c)\,} so external angle of point c {\displaystyle c\,} of parameter plane is equal to external angle of point z = c {\displaystyle z=c\,} of dynamical plane ==== External angle ==== Angle θ is named external angle ( argument ). 
Principal values of external angles are measured in turns modulo 1; 1 turn = 360 degrees = 2π radians. Compare different types of angles: external (point of the set's exterior), internal (point of a component's interior), plain (argument of a complex number). ===== Computation of external argument ===== argument of the Böttcher coordinate as an external argument: arg M ( c ) = arg ( Φ M ( c ) ) {\displaystyle \arg _{M}(c)=\arg(\Phi _{M}(c))} arg c ( z ) = arg ( Φ c ( z ) ) {\displaystyle \arg _{c}(z)=\arg(\Phi _{c}(z))} kneading sequence as a binary expansion of the external argument === Transcendental maps === For transcendental maps (for example, the exponential), infinity is not a fixed point but an essential singularity, and there is no Boettcher isomorphism. Here a dynamic ray is defined as a curve that connects a point in an escaping set to infinity and lies in an escaping set. == Images == === Dynamic rays === unbranched branched === Parameter rays === Mandelbrot set for complex quadratic polynomial with parameter rays of root points Parameter space of the complex exponential family f(z)=exp(z)+c. Eight parameter rays landing at this parameter are drawn in black. == Programs that can draw external rays == Mandel - program by Wolf Jung written in C++ using Qt, with source code available under the GNU General Public License Java applets by Evgeny Demidov (the code of the mndlbrot::turn function by Wolf Jung has been ported to Java), with free source code ezfract by Michael Sargent, uses the code by Wolf Jung OTIS by Tomoki KAWAHIRA - Java applet without source code Spider XView program by Yuval Fisher YABMP by Prof. Eugene Zaustinsky Archived 2006-06-15 at the Wayback Machine, for DOS, without source code DH_Drawer Archived 2008-10-21 at the Wayback Machine by Arnaud Chéritat, written for Windows 95, without source code Linas Vepstas C programs for Linux console, with source code Program Julia by Curtis T. 
McMullen written in C and Linux commands for C shell console with source code mjwinq program by Matjaz Erat written in delphi/windows without source code ( For the external rays it uses the methods from quad.c in julia.tar by Curtis T McMullen) RatioField by Gert Buschmann, for windows with Pascal source code for Dev-Pascal 1.9.2 (with Free Pascal compiler ) Mandelbrot program by Milan Va, written in Delphi with source code Power MANDELZOOM by Robert Munafo ruff by Claude Heiland-Allen == See also == external rays of Misiurewicz point Orbit portrait Periodic points of complex quadratic mappings Prouhet-Thue-Morse constant Carathéodory's theorem Field lines of Julia sets == References == Lennart Carleson and Theodore W. Gamelin, Complex Dynamics, Springer 1993 Adrien Douady and John H. Hubbard, Etude dynamique des polynômes complexes, Prépublications mathémathiques d'Orsay 2/4 (1984 / 1985) John W. Milnor, Periodic Orbits, External Rays and the Mandelbrot Set: An Expository Account; Géométrie complexe et systèmes dynamiques (Orsay, 1995), Astérisque No. 261 (2000), 277–333. (First appeared as a Stony Brook IMS Preprint in 1999, available as arXiV:math.DS/9905169.) John Milnor, Dynamics in One Complex Variable, Third Edition, Princeton University Press, 2006, ISBN 0-691-12488-4 Wolf Jung : Homeomorphisms on Edges of the Mandelbrot Set. Ph.D. thesis of 2002 == External links == Hubbard Douady Potential, Field Lines by Inigo Quilez Intertwined Internal Rays in Julia Sets of Rational Maps by Robert L. Devaney Extending External Rays Throughout the Julia Sets of Rational Maps by Robert L. Devaney With Figen Cilingir and Elizabeth D. Russell John Hubbard's presentation, The Beauty and Complexity of the Mandelbrot Set, part 3.1 Archived 2008-02-26 at the Wayback Machine videos by ImpoliteFruit Milan Va. "Mandelbrot set drawing". Archived from the original on February 10, 2013. Retrieved 2009-06-15.
Wikipedia:Extraneous and missing solutions#0
In mathematics, an extraneous solution (or spurious solution) is one which emerges from the process of solving a problem but is not a valid solution to it. A missing solution is a valid one which is lost during the solution process. Both situations frequently result from performing operations that are not invertible for some or all values of the variables involved, which prevents the chain of logical implications from being bidirectional. == Extraneous solutions: multiplication == One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation's solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following equation: x + 2 = 0. {\displaystyle x+2=0.} If we multiply both sides by zero, we get 0 = 0. {\displaystyle 0=0.} This is true for all values of x {\displaystyle x} , so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not invertible: if we multiply by any nonzero value, we can reverse the step by dividing by the same value, but division by zero is not defined, so multiplication by zero cannot be reversed. More subtly, suppose we take the same equation and multiply both sides by x {\displaystyle x} . We get x ( x + 2 ) = ( 0 ) x , {\displaystyle x(x+2)=(0)x,} x 2 + 2 x = 0. {\displaystyle x^{2}+2x=0.} This quadratic equation has two solutions: x = − 2 {\displaystyle x=-2} and x = 0. {\displaystyle x=0.} But if 0 {\displaystyle 0} is substituted for x {\displaystyle x} in the original equation, the result is the invalid equation 2 = 0 {\displaystyle 2=0} . 
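The substitution check implied above is mechanical and easy to automate. A minimal sketch in Python (the helper name `satisfies_original` is illustrative, not standard): filter the candidate roots of the multiplied equation through the original equation.

```python
# Candidate roots of x*(x + 2) = 0, the equation obtained after
# multiplying both sides of x + 2 = 0 by x.
candidates = [-2, 0]

def satisfies_original(x):
    # Substitute back into the original equation x + 2 = 0.
    return x + 2 == 0

valid = [x for x in candidates if satisfies_original(x)]
print(valid)  # [-2]
```

The candidate 0 appears only because the multiplication by x is not invertible at x = 0, and it fails the substitution test.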
This counterintuitive result occurs because in the case where x = 0 {\displaystyle x=0} , multiplying both sides by x {\displaystyle x} multiplies both sides by zero, and so necessarily produces a true equation just as in the first example. In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it is not sufficient to exclude these values, because they may have been legitimate solutions to the original equation. For example, suppose we multiply both sides of our original equation x + 2 = 0 {\displaystyle x+2=0} by x + 2. {\displaystyle x+2.} We get ( x + 2 ) ( x + 2 ) = 0 ( x + 2 ) , {\displaystyle (x+2)(x+2)=0(x+2),} x 2 + 4 x + 4 = 0 , {\displaystyle x^{2}+4x+4=0,} which has only one real solution: x = − 2 {\displaystyle x=-2} . This is a solution to the original equation so cannot be excluded, even though x + 2 = 0 {\displaystyle x+2=0} for this value of x {\displaystyle x} . == Extraneous solutions: rational == Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For example, consider this equation: 1 x − 2 = 3 x + 2 − 6 x ( x − 2 ) ( x + 2 ) . {\displaystyle {\frac {1}{x-2}}={\frac {3}{x+2}}-{\frac {6x}{(x-2)(x+2)}}\,.} To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the least common denominator is ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} . After performing these operations, the fractions are eliminated, and the equation becomes: x + 2 = 3 ( x − 2 ) − 6 x . {\displaystyle x+2=3(x-2)-6x\,.} Solving this yields the single solution x = − 2. {\displaystyle x=-2.} However, when we substitute the solution back into the original equation, we obtain: 1 − 2 − 2 = 3 − 2 + 2 − 6 ( − 2 ) ( − 2 − 2 ) ( − 2 + 2 ) . 
{\displaystyle {\frac {1}{-2-2}}={\frac {3}{-2+2}}-{\frac {6(-2)}{(-2-2)(-2+2)}}\,.} The equation then becomes: 1 − 4 = 3 0 + 12 0 . {\displaystyle {\frac {1}{-4}}={\frac {3}{0}}+{\frac {12}{0}}\,.} This equation is not valid, since one cannot divide by zero. Therefore, the solution x = − 2 {\displaystyle x=-2} is extraneous and not valid, and the original equation has no solution. For this specific example, it could be recognized that (for the value x = − 2 {\displaystyle x=-2} ), the operation of multiplying by ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} would be a multiplication by zero. However, it is not always simple to evaluate whether each operation already performed was allowed by the final answer. Because of this, often, the only simple effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. In some cases, as in the above example, all solutions may be discarded, in which case the original equation has no solution. == Missing solutions: division == Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving the following equation, the correct solution is obtained by subtracting 4 {\displaystyle 4} from both sides, then dividing both sides by 2 {\displaystyle 2} : 2 x + 4 = 0 , {\displaystyle 2x+4=0,} 2 x = − 4 , {\displaystyle 2x=-4,} x = − 2. 
{\displaystyle x=-2.} By analogy, we might suppose we can solve the following equation by subtracting 2 x {\displaystyle 2x} from both sides, then dividing by x {\displaystyle x} : x 2 + 2 x = 0 , {\displaystyle x^{2}+2x=0,} x 2 = − 2 x , {\displaystyle x^{2}=-2x,} x = − 2. {\displaystyle x=-2.} The solution x = − 2 {\displaystyle x=-2} is in fact a valid solution to the original equation; but the other solution, x = 0 {\displaystyle x=0} , has disappeared. The problem is that we divided both sides by x {\displaystyle x} , which involves the indeterminate operation of dividing by zero when x = 0. {\displaystyle x=0.} It is generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it is sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation: x + 2 = 0. {\displaystyle x+2=0.} It is valid to divide both sides by x − 2 {\displaystyle x-2} , obtaining the following equation: x + 2 x − 2 = 0. {\displaystyle {\frac {x+2}{x-2}}=0.} This is valid because the only value of x {\displaystyle x} that makes x − 2 {\displaystyle x-2} equal to zero is x = 2 , {\displaystyle x=2,} which is not a solution to the original equation. In some cases we are not interested in certain solutions; for example, we may only want solutions where x {\displaystyle x} is positive. In this case it is okay to divide by an expression that is only zero when x {\displaystyle x} is zero or negative, because this can only remove solutions we do not care about. == Other operations == Multiplication and division are not the only operations that can modify the solution set. For example, take the problem: x 2 = 4. {\displaystyle x^{2}=4.} If we take the positive square root of both sides, we get: x = 2. 
{\displaystyle x=2.} We are not taking the square root of any negative values here, since both x 2 {\displaystyle x^{2}} and 4 {\displaystyle 4} are necessarily positive. But we have lost the solution x = − 2. {\displaystyle x=-2.} The reason is that x {\displaystyle x} is actually not in general the positive square root of x 2 . {\displaystyle x^{2}.} If x {\displaystyle x} is negative, the positive square root of x 2 {\displaystyle x^{2}} is − x . {\displaystyle -x.} If the step is taken correctly, it leads instead to the equation: x 2 = 4 . {\displaystyle {\sqrt {x^{2}}}={\sqrt {4}}.} | x | = 2. {\displaystyle |x|=2.} x = ± 2. {\displaystyle x=\pm 2.} This equation has the same two solutions as the original one: x = 2 {\displaystyle x=2} and x = − 2. {\displaystyle x=-2.} We can also modify the solution set by squaring both sides: since squaring makes negative values positive, any value for which one side equals the negative of the other becomes a solution of the squared equation, which introduces extraneous solutions. == See also == Invalid proof == References ==
Wikipedia:Extreme point#0
In mathematics, an extreme point of a convex set S {\displaystyle S} in a real or complex vector space is a point in S {\displaystyle S} that does not lie in any open line segment joining two points of S . {\displaystyle S.} The extreme points of a line segment are called its endpoints. In linear programming problems, an extreme point is also called a vertex or corner point of S . {\displaystyle S.} == Definition == Throughout, it is assumed that X {\displaystyle X} is a real or complex vector space. For any p , x , y ∈ X , {\displaystyle p,x,y\in X,} say that p {\displaystyle p} lies between x {\displaystyle x} and y {\displaystyle y} if x ≠ y {\displaystyle x\neq y} and there exists some 0 < t < 1 {\displaystyle 0<t<1} such that p = t x + ( 1 − t ) y . {\displaystyle p=tx+(1-t)y.} If K {\displaystyle K} is a subset of X {\displaystyle X} and p ∈ K , {\displaystyle p\in K,} then p {\displaystyle p} is called an extreme point of K {\displaystyle K} if it does not lie between any two distinct points of K . {\displaystyle K.} That is, if there do not exist x , y ∈ K {\displaystyle x,y\in K} and 0 < t < 1 {\displaystyle 0<t<1} such that x ≠ y {\displaystyle x\neq y} and p = t x + ( 1 − t ) y . {\displaystyle p=tx+(1-t)y.} The set of all extreme points of K {\displaystyle K} is denoted by extreme ⁡ ( K ) . {\displaystyle \operatorname {extreme} (K).} Generalizations If S {\displaystyle S} is a subset of a vector space then a linear sub-variety (that is, an affine subspace) A {\displaystyle A} of the vector space is called a support variety if A {\displaystyle A} meets S {\displaystyle S} (that is, A ∩ S {\displaystyle A\cap S} is not empty) and every open segment I ⊆ S {\displaystyle I\subseteq S} whose interior meets A {\displaystyle A} is necessarily a subset of A . {\displaystyle A.} A 0-dimensional support variety is called an extreme point of S . 
{\displaystyle S.} === Characterizations === The midpoint of two elements x {\displaystyle x} and y {\displaystyle y} in a vector space is the vector 1 2 ( x + y ) . {\displaystyle {\tfrac {1}{2}}(x+y).} For any elements x {\displaystyle x} and y {\displaystyle y} in a vector space, the set [ x , y ] = { t x + ( 1 − t ) y : 0 ≤ t ≤ 1 } {\displaystyle [x,y]=\{tx+(1-t)y:0\leq t\leq 1\}} is called the closed line segment or closed interval between x {\displaystyle x} and y . {\displaystyle y.} The open line segment or open interval between x {\displaystyle x} and y {\displaystyle y} is ( x , x ) = ∅ {\displaystyle (x,x)=\varnothing } when x = y {\displaystyle x=y} while it is ( x , y ) = { t x + ( 1 − t ) y : 0 < t < 1 } {\displaystyle (x,y)=\{tx+(1-t)y:0<t<1\}} when x ≠ y . {\displaystyle x\neq y.} The points x {\displaystyle x} and y {\displaystyle y} are called the endpoints of these intervals. An interval is said to be a non-degenerate interval or a proper interval if its endpoints are distinct. The midpoint of an interval is the midpoint of its endpoints. The closed interval [ x , y ] {\displaystyle [x,y]} is equal to the convex hull of ( x , y ) {\displaystyle (x,y)} if (and only if) x ≠ y . {\displaystyle x\neq y.} So if K {\displaystyle K} is convex and x , y ∈ K , {\displaystyle x,y\in K,} then [ x , y ] ⊆ K . {\displaystyle [x,y]\subseteq K.} If K {\displaystyle K} is a nonempty subset of X {\displaystyle X} and F {\displaystyle F} is a nonempty subset of K , {\displaystyle K,} then F {\displaystyle F} is called a face of K {\displaystyle K} if whenever a point p ∈ F {\displaystyle p\in F} lies between two points of K , {\displaystyle K,} then those two points necessarily belong to F . {\displaystyle F.} == Examples == If a < b {\displaystyle a<b} are two real numbers then a {\displaystyle a} and b {\displaystyle b} are extreme points of the interval [ a , b ] . 
{\displaystyle [a,b].} However, the open interval ( a , b ) {\displaystyle (a,b)} has no extreme points. Any open interval in R {\displaystyle \mathbb {R} } has no extreme points while any non-degenerate closed interval not equal to R {\displaystyle \mathbb {R} } does have extreme points (that is, the closed interval's endpoint(s)). More generally, any open subset of finite-dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} has no extreme points. The set of extreme points of the closed unit disk in R 2 {\displaystyle \mathbb {R} ^{2}} is the unit circle. The perimeter of any convex polygon in the plane is a face of that polygon. The vertices of any convex polygon in the plane R 2 {\displaystyle \mathbb {R} ^{2}} are the extreme points of that polygon. An injective linear map F : X → Y {\displaystyle F:X\to Y} sends the extreme points of a convex set C ⊆ X {\displaystyle C\subseteq X} to the extreme points of the convex set F ( C ) . {\displaystyle F(C).} This is also true for injective affine maps. == Properties == The extreme points of a compact convex set form a Baire space (with the subspace topology) but this set may fail to be closed in X . {\displaystyle X.} == Theorems == === Krein–Milman theorem === The Krein–Milman theorem is arguably one of the most well-known theorems about extreme points. === For Banach spaces === These theorems are for Banach spaces with the Radon–Nikodym property. A theorem of Joram Lindenstrauss states that, in a Banach space with the Radon–Nikodym property, a nonempty closed and bounded set has an extreme point. (In infinite-dimensional spaces, the property of compactness is stronger than the joint properties of being closed and being bounded.) Edgar's theorem implies Lindenstrauss's theorem. == Related notions == A closed convex subset of a topological vector space is called strictly convex if every one of its (topological) boundary points is an extreme point. The unit ball of any Hilbert space is a strictly convex set. 
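The definition of an extreme point can be tested directly on a small finite example. The sketch below (helper names are illustrative, not from the article) checks the square with corners (0, 0), (2, 0), (0, 2), (2, 2) together with its centre (1, 1); for this particular finite set a pairwise betweenness test suffices, though for general point sets a point can fail to be extreme of the convex hull without lying between two of the listed points.

```python
from fractions import Fraction

def between(p, x, y):
    """True if p = t*x + (1 - t)*y for some 0 < t < 1 with x != y."""
    if x == y:
        return False
    ts = set()
    for pi, xi, yi in zip(p, x, y):
        if xi == yi:
            if pi != xi:
                return False          # p is off the line through x and y
        else:
            ts.add(Fraction(pi - yi, xi - yi))
    return len(ts) == 1 and 0 < ts.pop() < 1

def is_extreme(p, pts):
    """p is extreme here if it lies strictly between no pair of points."""
    return not any(between(p, x, y) for x in pts for y in pts)

# Corners of a square plus its centre (integer coordinates keep Fractions exact).
K = [(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)]
print([p for p in K if is_extreme(p, K)])  # the four corners; (1, 1) is not extreme
```

The centre is the midpoint of (0, 0) and (2, 2), so it is discarded, while no corner lies in an open segment between two points of the set.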
=== k-extreme points === More generally, a point in a convex set S {\displaystyle S} is k {\displaystyle k} -extreme if it lies in the interior of a k {\displaystyle k} -dimensional convex set within S , {\displaystyle S,} but not a k + 1 {\displaystyle k+1} -dimensional convex set within S . {\displaystyle S.} Thus, an extreme point is also a 0 {\displaystyle 0} -extreme point. If S {\displaystyle S} is a polytope, then the k {\displaystyle k} -extreme points are exactly the interior points of the k {\displaystyle k} -dimensional faces of S . {\displaystyle S.} More generally, for any convex set S , {\displaystyle S,} the k {\displaystyle k} -extreme points are partitioned into k {\displaystyle k} -dimensional open faces. The finite-dimensional Krein–Milman theorem, which is due to Minkowski, can be quickly proved using the concept of k {\displaystyle k} -extreme points. If S {\displaystyle S} is closed, bounded, and n {\displaystyle n} -dimensional, and if p {\displaystyle p} is a point in S , {\displaystyle S,} then p {\displaystyle p} is k {\displaystyle k} -extreme for some k ≤ n . {\displaystyle k\leq n.} The theorem asserts that p {\displaystyle p} is a convex combination of extreme points. If k = 0 {\displaystyle k=0} then it is immediate. Otherwise p {\displaystyle p} lies on a line segment in S {\displaystyle S} which can be maximally extended (because S {\displaystyle S} is closed and bounded). If the endpoints of the segment are q {\displaystyle q} and r , {\displaystyle r,} then their extreme rank must be less than that of p , {\displaystyle p,} and the theorem follows by induction. == See also == Extreme set Exposed point Choquet theory – Area of functional analysis and convex analysis Bang–bang control == Citations == == Bibliography == Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. 
OCLC 297140003. Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. Paul E. Black, ed. (2004-12-17). "extreme point". Dictionary of algorithms and data structures. US National institute of standards and technology. Retrieved 2011-03-24. Borowski, Ephraim J.; Borwein, Jonathan M. (1989). "extreme point". Dictionary of mathematics. Collins dictionary. HarperCollins. ISBN 0-00-434347-6. Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Halmos, Paul R. (8 November 1982). A Hilbert Space Problem Book. Graduate Texts in Mathematics. Vol. 19 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90685-0. OCLC 8169781. Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704. Köthe, Gottfried (1979). Topological Vector Spaces II. Grundlehren der mathematischen Wissenschaften. Vol. 237. New York: Springer Science & Business Media. ISBN 978-0-387-90400-9. OCLC 180577972. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). 
New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Wikipedia:Extreme set#0
In mathematics, most commonly in convex geometry, an extreme set or face of a set C ⊆ V {\displaystyle C\subseteq V} in a vector space V {\displaystyle V} is a subset F ⊆ C {\displaystyle F\subseteq C} with the property that if for any two points x , y ∈ C {\displaystyle x,y\in C} some in-between point z = θ x + ( 1 − θ ) y , θ ∈ ( 0 , 1 ) {\displaystyle z=\theta x+(1-\theta )y,\theta \in (0,1)} lies in F {\displaystyle F} , then we must have had x , y ∈ F {\displaystyle x,y\in F} . An extreme point of C {\displaystyle C} is a point p ∈ C {\displaystyle p\in C} for which { p } {\displaystyle \{p\}} is a face. An exposed face of C {\displaystyle C} is the subset of points of C {\displaystyle C} where a linear functional achieves its minimum on C {\displaystyle C} . Thus, if f {\displaystyle f} is a linear functional on V {\displaystyle V} and α = inf { f ( c ) : c ∈ C } > − ∞ {\displaystyle \alpha =\inf\{f(c)\ \colon c\in C\}>-\infty } , then { c ∈ C : f ( c ) = α } {\displaystyle \{c\in C\ \colon f(c)=\alpha \}} is an exposed face of C {\displaystyle C} . An exposed point of C {\displaystyle C} is a point p ∈ C {\displaystyle p\in C} such that { p } {\displaystyle \{p\}} is an exposed face. That is, f ( p ) < f ( c ) {\displaystyle f(p)<f(c)} for all c ∈ C ∖ { p } {\displaystyle c\in C\setminus \{p\}} . An exposed face is a face, but the converse is not true. An exposed face of C {\displaystyle C} is convex if C {\displaystyle C} is convex. If F {\displaystyle F} is a face of C ⊆ V {\displaystyle C\subseteq V} , then E ⊆ F {\displaystyle E\subseteq F} is a face of F {\displaystyle F} if and only if E {\displaystyle E} is a face of C {\displaystyle C} . == Competing definitions == Some authors do not include C {\displaystyle C} and/or ∅ {\displaystyle \varnothing } among the (exposed) faces. 
Some authors require F {\displaystyle F} and/or C {\displaystyle C} to be convex (else the boundary of a disc is a face of the disc, as well as any subset of the boundary) or closed. Some authors require the functional f {\displaystyle f} to be continuous in a given vector topology. == See also == Face (geometry) == References == == Bibliography == Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. == External links == TOPOLOGICAL VECTOR SPACES AND CONTINUOUS LINEAR FUNCTIONALS, Chapter III of FUNCTIONAL ANALYSIS, Lawrence Baggett, University of Colorado Boulder. Functional Analysis, Peter Philip, Ludwig-Maximilians-universität München, 2024
Wikipedia:E∞-operad#0
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O {\displaystyle O} , one defines an algebra over O {\displaystyle O} to be a set together with concrete operations on this set which behave just like the abstract operations of O {\displaystyle O} . For instance, there is a Lie operad L {\displaystyle L} such that the algebras over L {\displaystyle L} are precisely the Lie algebras; in a sense L {\displaystyle L} abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations. == History == Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972. Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads: "The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898." The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. 
Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher. == Intuition == Suppose X {\displaystyle X} is a set and for n ∈ N {\displaystyle n\in \mathbb {N} } we define P ( n ) := { f : X n → X } {\displaystyle P(n):=\{f\colon X^{n}\to X\}} , the set of all functions from the cartesian product of n {\displaystyle n} copies of X {\displaystyle X} to X {\displaystyle X} . We can compose these functions: given f ∈ P ( n ) {\displaystyle f\in P(n)} , f 1 ∈ P ( k 1 ) , … , f n ∈ P ( k n ) {\displaystyle f_{1}\in P(k_{1}),\ldots ,f_{n}\in P(k_{n})} , the function f ∘ ( f 1 , … , f n ) ∈ P ( k 1 + ⋯ + k n ) {\displaystyle f\circ (f_{1},\ldots ,f_{n})\in P(k_{1}+\cdots +k_{n})} is defined as follows: given k 1 + ⋯ + k n {\displaystyle k_{1}+\cdots +k_{n}} arguments from X {\displaystyle X} , we divide them into n {\displaystyle n} blocks, the first one having k 1 {\displaystyle k_{1}} arguments, the second one k 2 {\displaystyle k_{2}} arguments, etc., and then apply f 1 {\displaystyle f_{1}} to the first block, f 2 {\displaystyle f_{2}} to the second block, etc. We then apply f {\displaystyle f} to the list of n {\displaystyle n} values obtained from X {\displaystyle X} in such a way. We can also permute arguments, i.e. we have a right action ∗ {\displaystyle *} of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} , defined by ( f ∗ s ) ( x 1 , … , x n ) = f ( x s − 1 ( 1 ) , … , x s − 1 ( n ) ) {\displaystyle (f*s)(x_{1},\ldots ,x_{n})=f(x_{s^{-1}(1)},\ldots ,x_{s^{-1}(n)})} for f ∈ P ( n ) {\displaystyle f\in P(n)} , s ∈ S n {\displaystyle s\in S_{n}} and x 1 , … , x n ∈ X {\displaystyle x_{1},\ldots ,x_{n}\in X} . The definition of a symmetric operad given below captures the essential properties of these two operations ∘ {\displaystyle \circ } and ∗ {\displaystyle *} . 
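The block-splitting composition just described can be sketched in code. Assuming each operation records its own number of inputs (an `arity` attribute here, purely illustrative), operadic composition in the endomorphism operad of a set looks like:

```python
def compose(f, *gs):
    """Operadic composition in End_X: split the arguments into blocks,
    feed block i to g_i, then apply f to the list of results."""
    def h(*args):
        vals, i = [], 0
        for g in gs:
            vals.append(g(*args[i:i + g.arity]))
            i += g.arity
        return f(*vals)
    h.arity = sum(g.arity for g in gs)
    return h

def op(n):
    """Decorator recording the arity n of an operation."""
    def wrap(f):
        f.arity = n
        return f
    return wrap

@op(2)
def add(x, y): return x + y

@op(2)
def mul(x, y): return x * y

@op(1)
def neg(x): return -x

# add in P(2) composed with (mul, neg) in P(2) x P(1) gives an
# operation in P(2 + 1) = P(3): (a, b, c) -> a*b + (-c).
h = compose(add, mul, neg)
print(h(2, 3, 4))  # 2*3 + (-4) = 2
```

The arities of the inner operations determine how the argument list is partitioned, exactly as in the block description above.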
== Definition == === Non-symmetric operad === A non-symmetric operad (sometimes called an operad without permutations, or a non- Σ {\displaystyle \Sigma } or plain operad) consists of the following: a sequence ( P ( n ) ) n ∈ N {\displaystyle (P(n))_{n\in \mathbb {N} }} of sets, whose elements are called n {\displaystyle n} -ary operations, an element 1 {\displaystyle 1} in P ( 1 ) {\displaystyle P(1)} called the identity, for all positive integers n {\displaystyle n} , k 1 , … , k n {\textstyle k_{1},\ldots ,k_{n}} , a composition function ∘ : P ( n ) × P ( k 1 ) × ⋯ × P ( k n ) → P ( k 1 + ⋯ + k n ) ( θ , θ 1 , … , θ n ) ↦ θ ∘ ( θ 1 , … , θ n ) , {\displaystyle {\begin{aligned}\circ :P(n)\times P(k_{1})\times \cdots \times P(k_{n})&\to P(k_{1}+\cdots +k_{n})\\(\theta ,\theta _{1},\ldots ,\theta _{n})&\mapsto \theta \circ (\theta _{1},\ldots ,\theta _{n}),\end{aligned}}} satisfying the following coherence axioms: identity: θ ∘ ( 1 , … , 1 ) = θ = 1 ∘ θ {\displaystyle \theta \circ (1,\ldots ,1)=\theta =1\circ \theta } associativity: θ ∘ ( θ 1 ∘ ( θ 1 , 1 , … , θ 1 , k 1 ) , … , θ n ∘ ( θ n , 1 , … , θ n , k n ) ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∘ ( θ 1 , 1 , … , θ 1 , k 1 , … , θ n , 1 , … , θ n , k n ) {\displaystyle {\begin{aligned}&\theta \circ {\Big (}\theta _{1}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}}),\ldots ,\theta _{n}\circ (\theta _{n,1},\ldots ,\theta _{n,k_{n}}){\Big )}\\={}&{\Big (}\theta \circ (\theta _{1},\ldots ,\theta _{n}){\Big )}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}},\ldots ,\theta _{n,1},\ldots ,\theta _{n,k_{n}})\end{aligned}}} === Symmetric operad === A symmetric operad (often just called operad) is a non-symmetric operad P {\displaystyle P} as above, together with a right action of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} for n ∈ N {\displaystyle n\in \mathbb {N} } , denoted by ∗ {\displaystyle *} and satisfying equivariance: given a permutation t ∈ S n {\displaystyle t\in S_{n}} , ( θ ∗ t ) ∘ ( θ 
1 , … , θ n ) = ( θ ∘ ( θ t − 1 ( 1 ) , … , θ t − 1 ( n ) ) ) ∗ t ′ {\displaystyle (\theta *t)\circ (\theta _{1},\ldots ,\theta _{n})=(\theta \circ (\theta _{t^{-1}(1)},\ldots ,\theta _{t^{-1}(n)}))*t'} (where t ′ {\displaystyle t'} on the right hand side refers to the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that acts on the set { 1 , 2 , … , k 1 + ⋯ + k n } {\displaystyle \{1,2,\dots ,k_{1}+\dots +k_{n}\}} by breaking it into n {\displaystyle n} blocks, the first of size k 1 {\displaystyle k_{1}} , the second of size k 2 {\displaystyle k_{2}} , through the n {\displaystyle n} th block of size k n {\displaystyle k_{n}} , and then permutes these n {\displaystyle n} blocks by t {\displaystyle t} , keeping each block intact) and given n {\displaystyle n} permutations s i ∈ S k i {\displaystyle s_{i}\in S_{k_{i}}} , θ ∘ ( θ 1 ∗ s 1 , … , θ n ∗ s n ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∗ ( s 1 , … , s n ) {\displaystyle \theta \circ (\theta _{1}*s_{1},\ldots ,\theta _{n}*s_{n})=(\theta \circ (\theta _{1},\ldots ,\theta _{n}))*(s_{1},\ldots ,s_{n})} (where ( s 1 , … , s n ) {\displaystyle (s_{1},\ldots ,s_{n})} denotes the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that permutes the first of these blocks by s 1 {\displaystyle s_{1}} , the second by s 2 {\displaystyle s_{2}} , etc., and keeps their overall order intact). The permutation actions in this definition are vital to most applications, including the original application to loop spaces. 
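The right action of the symmetric group from the "Intuition" section, whose compatibility with composition the equivariance axioms express, can be sketched as follows (permutations are 0-indexed tuples; the names are illustrative):

```python
def act(f, s):
    """Right S_n-action on an n-ary operation f:
    (f * s)(x_1, ..., x_n) = f(x_{s^{-1}(1)}, ..., x_{s^{-1}(n)}).
    The permutation s is 0-indexed: s[i] is the image of position i."""
    n = len(s)
    inv = [0] * n                 # the inverse permutation s^{-1}
    for i, si in enumerate(s):
        inv[si] = i
    def g(*xs):
        return f(*(xs[inv[j]] for j in range(n)))
    return g

def sub(x, y):
    return x - y                  # a non-commutative binary operation

swap = (1, 0)                     # the transposition in S_2
print(sub(5, 3), act(sub, swap)(5, 3))  # 2 -2

# Spot-check the right-action law (f * s) * t = f * (s . t),
# where (s . t)(i) = s(t(i)).
s, t = (1, 2, 0), (2, 0, 1)
st = tuple(s[t[i]] for i in range(3))
f3 = lambda x, y, z: (x, y, z)
assert act(act(f3, s), t)(1, 2, 3) == act(f3, st)(1, 2, 3)
```

Using the inverse permutation on the argument positions is what makes this a right action, matching the formula in the "Intuition" section.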
=== Morphisms === A morphism of operads f : P → Q {\displaystyle f:P\to Q} consists of a sequence ( f n : P ( n ) → Q ( n ) ) n ∈ N {\displaystyle (f_{n}:P(n)\to Q(n))_{n\in \mathbb {N} }} that: preserves the identity: f ( 1 ) = 1 {\displaystyle f(1)=1} preserves composition: for every n-ary operation θ {\displaystyle \theta } and operations θ 1 , … , θ n {\displaystyle \theta _{1},\ldots ,\theta _{n}} , f ( θ ∘ ( θ 1 , … , θ n ) ) = f ( θ ) ∘ ( f ( θ 1 ) , … , f ( θ n ) ) {\displaystyle f(\theta \circ (\theta _{1},\ldots ,\theta _{n}))=f(\theta )\circ (f(\theta _{1}),\ldots ,f(\theta _{n}))} preserves the permutation actions: f ( x ∗ s ) = f ( x ) ∗ s {\displaystyle f(x*s)=f(x)*s} . Operads therefore form a category denoted by O p e r {\displaystyle {\mathsf {Oper}}} . === In other categories === So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C . In that case, each P ( n ) {\displaystyle P(n)} is an object of C, the composition ∘ {\displaystyle \circ } is a morphism P ( n ) ⊗ P ( k 1 ) ⊗ ⋯ ⊗ P ( k n ) → P ( k 1 + ⋯ + k n ) {\displaystyle P(n)\otimes P(k_{1})\otimes \cdots \otimes P(k_{n})\to P(k_{1}+\cdots +k_{n})} in C (where ⊗ {\displaystyle \otimes } denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C. A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product. In this case, an operad is given by a sequence of spaces (instead of sets) { P ( n ) } n ≥ 0 {\displaystyle \{P(n)\}_{n\geq 0}} . The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad. Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous. 
Other common settings to define operads include, for example, modules over a commutative ring, chain complexes, groupoids (or even the category of categories itself), coalgebras, etc. === Algebraist definition === Given a commutative ring R we consider the category R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} of modules over R. An operad over R can be defined as a monoid object ( T , γ , η ) {\displaystyle (T,\gamma ,\eta )} in the monoidal category of endofunctors on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} (it is a monad) satisfying some finiteness condition. For example, a monoid object in the category of "polynomial endofunctors" on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} is an operad. Similarly, a symmetric operad can be defined as a monoid object in the category of S {\displaystyle \mathbb {S} } -objects, where S {\displaystyle \mathbb {S} } means a symmetric group. A monoid object in the category of combinatorial species is an operad in finite sets. An operad in the above sense is sometimes thought of as a generalized ring. For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on Set {\displaystyle {\textbf {Set}}} that commute with filtered colimits. This is a generalization of a ring since each ordinary ring R defines a monad Σ R : Set → Set {\displaystyle \Sigma _{R}:{\textbf {Set}}\to {\textbf {Set}}} that sends a set X to the underlying set of the free R-module R ( X ) {\displaystyle R^{(X)}} generated by X. == Understanding the axioms == === Associativity axiom === "Associativity" means that composition of operations is associative (the function ∘ {\displaystyle \circ } is associative), analogous to the axiom in category theory that f ∘ ( g ∘ h ) = ( f ∘ g ) ∘ h {\displaystyle f\circ (g\circ h)=(f\circ g)\circ h} ; it does not mean that the operations themselves are associative as operations. Compare with the associative operad, below. 
Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses. For instance, suppose θ {\displaystyle \theta } is a binary operation, written as θ ( a , b ) {\displaystyle \theta (a,b)} or ( a b ) {\displaystyle (ab)} ; θ {\displaystyle \theta } may or may not be associative. Then what is commonly written ( ( a b ) c ) {\displaystyle ((ab)c)} is unambiguously written operadically as θ ∘ ( θ , 1 ) {\displaystyle \theta \circ (\theta ,1)} . This sends ( a , b , c ) {\displaystyle (a,b,c)} to ( a b , c ) {\displaystyle (ab,c)} (apply θ {\displaystyle \theta } on the first two, and the identity on the third), and then the θ {\displaystyle \theta } on the left "multiplies" a b {\displaystyle ab} by c {\displaystyle c} . This is clearer when depicted as a tree: which yields a 3-ary operation: However, the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is a priori ambiguous: it could mean θ ∘ ( ( θ , 1 ) ∘ ( ( θ , 1 ) , 1 ) ) {\displaystyle \theta \circ ((\theta ,1)\circ ((\theta ,1),1))} , if the inner compositions are performed first, or it could mean ( θ ∘ ( θ , 1 ) ) ∘ ( ( θ , 1 ) , 1 ) {\displaystyle (\theta \circ (\theta ,1))\circ ((\theta ,1),1)} , if the outer compositions are performed first (operations are read from right to left). Writing x = θ , y = ( θ , 1 ) , z = ( ( θ , 1 ) , 1 ) {\displaystyle x=\theta ,y=(\theta ,1),z=((\theta ,1),1)} , this is x ∘ ( y ∘ z ) {\displaystyle x\circ (y\circ z)} versus ( x ∘ y ) ∘ z {\displaystyle (x\circ y)\circ z} . 
That is, the tree is missing "vertical parentheses": If the top two rows of operations are composed first (puts an upward parenthesis at the ( a b ) c d {\displaystyle (ab)c\ \ d} line; does the inner composition first), the following results: which then evaluates unambiguously to yield a 4-ary operation. As an annotated expression: θ ( a b ) c ⋅ d ∘ ( ( θ a b ⋅ c , 1 d ) ∘ ( ( θ a ⋅ b , 1 c ) , 1 d ) ) {\displaystyle \theta _{(ab)c\cdot d}\circ ((\theta _{ab\cdot c},1_{d})\circ ((\theta _{a\cdot b},1_{c}),1_{d}))} If the bottom two rows of operations are composed first (puts a downward parenthesis at the a b c d {\displaystyle ab\quad c\ \ d} line; does the outer composition first), the following results: which then evaluates unambiguously to yield a 4-ary operation: The operad axiom of associativity is that these yield the same result, and thus that the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is unambiguous. === Identity axiom === The identity axiom (for a binary operation) can be visualized in a tree as: meaning that the three operations obtained are equal: pre- or post- composing with the identity makes no difference. As for categories, 1 ∘ 1 = 1 {\displaystyle 1\circ 1=1} is a corollary of the identity axiom. == Examples == === Endomorphism operad in sets and operad algebras === The most basic operads are the ones given in the section on "Intuition", above. For any set X {\displaystyle X} , we obtain the endomorphism operad E n d X {\displaystyle {\mathcal {End}}_{X}} consisting of all functions X n → X {\displaystyle X^{n}\to X} . These operads are important because they serve to define operad algebras. If O {\displaystyle {\mathcal {O}}} is an operad, an operad algebra over O {\displaystyle {\mathcal {O}}} is given by a set X {\displaystyle X} and an operad morphism O → E n d X {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{X}} . 
Intuitively, such a morphism turns each "abstract" operation of O ( n ) {\displaystyle {\mathcal {O}}(n)} into a "concrete" n {\displaystyle n} -ary operation on the set X {\displaystyle X} . An operad algebra over O {\displaystyle {\mathcal {O}}} thus consists of a set X {\displaystyle X} together with concrete operations on X {\displaystyle X} that follow the rules abstractly specified by the operad O {\displaystyle {\mathcal {O}}} . === Endomorphism operad in vector spaces and operad algebras === If k is a field, we can consider the category of finite-dimensional vector spaces over k; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space. The endomorphism operad E n d V = { E n d V ( n ) } {\displaystyle {\mathcal {End}}_{V}=\{{\mathcal {End}}_{V}(n)\}} of V consists of E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} = the space of linear maps V ⊗ n → V {\displaystyle V^{\otimes n}\to V} , (composition) given f ∈ E n d V ( n ) {\displaystyle f\in {\mathcal {End}}_{V}(n)} , g 1 ∈ E n d V ( k 1 ) {\displaystyle g_{1}\in {\mathcal {End}}_{V}(k_{1})} , ..., g n ∈ E n d V ( k n ) {\displaystyle g_{n}\in {\mathcal {End}}_{V}(k_{n})} , their composition is given by the map V ⊗ k 1 ⊗ ⋯ ⊗ V ⊗ k n ⟶ g 1 ⊗ ⋯ ⊗ g n V ⊗ n → f V {\displaystyle V^{\otimes k_{1}}\otimes \cdots \otimes V^{\otimes k_{n}}\ {\overset {g_{1}\otimes \cdots \otimes g_{n}}{\longrightarrow }}\ V^{\otimes n}\ {\overset {f}{\to }}\ V} , (identity) The identity element in E n d V ( 1 ) {\displaystyle {\mathcal {End}}_{V}(1)} is the identity map id V {\displaystyle \operatorname {id} _{V}} , (symmetric group action) S n {\displaystyle S_{n}} operates on E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} by permuting the components of the tensors in V ⊗ n {\displaystyle V^{\otimes n}} . 
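The composition law of an endomorphism operad, in sets or, mutatis mutandis, in vector spaces, is easy to make concrete, and doing so also illustrates the associativity axiom from earlier in the article. In this hypothetical Python sketch (the (arity, function) representation is an ad-hoc choice, not a standard convention), a deliberately non-associative binary operation θ is composed in the two bracketings discussed under the associativity axiom, and both yield the same 4-ary operation:

```python
def compose(f, gs):
    """Operadic composition f ∘ (g_1, ..., g_n) in an endomorphism operad.

    Operations are (arity, function) pairs.  Each g_i consumes its own
    block of the inputs and f is applied to the n intermediate results,
    so the composite has arity k_1 + ... + k_n.
    """
    n, func = f
    assert n == len(gs), "f must be n-ary with n = len(gs)"

    def composite(*args):
        blocks, i = [], 0
        for k, g in gs:
            blocks.append(g(*args[i:i + k]))
            i += k
        return func(*blocks)

    return (sum(k for k, _ in gs), composite)


theta = (2, lambda a, b: a - b)   # a non-associative binary operation
one = (1, lambda a: a)            # the identity operation

# The 4-ary operation ((ab)c)d, built with the inner composition done
# first and with the outer composition done first:
inner_first = compose(theta, [compose(theta, [theta, one]), one])
outer_first = compose(compose(theta, [theta, one]), [theta, one, one])
```

Both `inner_first` and `outer_first` are 4-ary and send (a, b, c, d) to ((a − b) − c) − d: the associativity of composition holds even though θ itself is not associative.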
If O {\displaystyle {\mathcal {O}}} is an operad, a k-linear operad algebra over O {\displaystyle {\mathcal {O}}} is given by a finite-dimensional vector space V over k and an operad morphism O → E n d V {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{V}} ; this amounts to specifying concrete multilinear operations on V that behave like the operations of O {\displaystyle {\mathcal {O}}} . (Notice the analogy between operads&operad algebras and rings&modules: a module over a ring R is given by an abelian group M together with a ring homomorphism R → End ⁡ ( M ) {\displaystyle R\to \operatorname {End} (M)} .) Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them. === "Little something" operads === The little 2-disks operad is a topological operad where P ( n ) {\displaystyle P(n)} consists of ordered lists of n disjoint disks inside the unit disk of R 2 {\displaystyle \mathbb {R} ^{2}} centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element θ ∈ P ( 3 ) {\displaystyle \theta \in P(3)} is composed with an element ( θ 1 , θ 2 , θ 3 ) ∈ P ( 2 ) × P ( 3 ) × P ( 4 ) {\displaystyle (\theta _{1},\theta _{2},\theta _{3})\in P(2)\times P(3)\times P(4)} to yield the element θ ∘ ( θ 1 , θ 2 , θ 3 ) ∈ P ( 9 ) {\displaystyle \theta \circ (\theta _{1},\theta _{2},\theta _{3})\in P(9)} obtained by shrinking the configuration of θ i {\displaystyle \theta _{i}} and inserting it into the i-th disk of θ {\displaystyle \theta } , for i = 1 , 2 , 3 {\displaystyle i=1,2,3} . Analogously, one can define the little n-disks operad by considering configurations of disjoint n-balls inside the unit ball of R n {\displaystyle \mathbb {R} ^{n}} . 
Originally the little n-cubes operad or the little intervals operad (initially called little n-cubes PROPs) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n-dimensional hypercubes (n-dimensional intervals) inside the unit hypercube. Later it was generalized by May to the little convex bodies operad, and "little disks" is a case of "folklore" derived from the "little convex bodies". === Rooted trees === In graph theory, rooted trees form a natural operad. Here, P ( n ) {\displaystyle P(n)} is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group S n {\displaystyle S_{n}} operates on this set by permuting the leaf labels. Operadic composition T ∘ ( S 1 , … , S n ) {\displaystyle T\circ (S_{1},\ldots ,S_{n})} is given by replacing the i-th leaf of T {\displaystyle T} by the root of the i-th tree S i {\displaystyle S_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} , thus attaching the n trees to T {\displaystyle T} and forming a larger tree, whose root is taken to be the same as the root of T {\displaystyle T} and whose leaves are numbered in order. === Swiss-cheese operad === The Swiss-cheese operad is a two-colored topological operad defined in terms of configurations of disjoint n-dimensional disks inside a unit n-semidisk and n-dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk. The Swiss-cheese operad was defined by Alexander A. Voronov. It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. Kontsevich's conjecture was proven partly by Po Hu, Igor Kriz, and Alexander A. 
Voronov and then fully by Justin Thomas. === Associative operad === Another class of examples consists of operads capturing the structure of familiar algebraic structures, such as associative algebras, commutative algebras and Lie algebras. Each of these can be exhibited as a finitely presented operad, in each of the three cases generated by binary operations. For example, the associative operad is a symmetric operad generated by a binary operation ψ {\displaystyle \psi } , subject only to the condition that ψ ∘ ( ψ , 1 ) = ψ ∘ ( 1 , ψ ) . {\displaystyle \psi \circ (\psi ,1)=\psi \circ (1,\psi ).} This condition corresponds to associativity of the binary operation ψ {\displaystyle \psi } ; writing ψ ( a , b ) {\displaystyle \psi (a,b)} multiplicatively, the above condition is ( a b ) c = a ( b c ) {\displaystyle (ab)c=a(bc)} . This associativity of the operation should not be confused with associativity of composition which holds in any operad; see the axiom of associativity, above. In the associative operad, each P ( n ) {\displaystyle P(n)} is given by the symmetric group S n {\displaystyle S_{n}} , on which S n {\displaystyle S_{n}} acts by right multiplication. The composite σ ∘ ( τ 1 , … , τ n ) {\displaystyle \sigma \circ (\tau _{1},\dots ,\tau _{n})} permutes its inputs in blocks according to σ {\displaystyle \sigma } , and within blocks according to the appropriate τ i {\displaystyle \tau _{i}} . The algebras over the associative operad are precisely the semigroups: sets together with a single binary associative operation. The k-linear algebras over the associative operad are precisely the associative k-algebras. === Terminal symmetric operad === The terminal symmetric operad is the operad which has a single n-ary operation for each n, with each S n {\displaystyle S_{n}} acting trivially. The algebras over this operad are the commutative semigroups; the k-linear algebras are the commutative associative k-algebras. 
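The block rule for the composite σ ∘ (τ₁, …, τₙ) in the associative operad can be sketched concretely. The following hypothetical Python illustration fixes one consistent convention (not the only possible one): permutations are 0-indexed tuples with p[i] the image of position i, and σ[i] is the output position of input block i.

```python
def operad_compose(sigma, taus):
    """Composite sigma ∘ (tau_1, ..., tau_n) in the associative operad.

    sigma: permutation of n blocks (sigma[i] = output position of block i).
    taus: list of n permutations, tau_i acting within block i of size len(tau_i).
    Returns a permutation of sum of the block sizes: blocks move as units
    according to sigma, and entries move within blocks according to tau_i.
    """
    n = len(sigma)
    sizes = [len(t) for t in taus]
    # starting offset of each input block
    off = [sum(sizes[:i]) for i in range(n)]
    # offset of block i in the output: total size of blocks placed before it
    newoff = [sum(sizes[j] for j in range(n) if sigma[j] < sigma[i])
              for i in range(n)]
    out = [None] * sum(sizes)
    for i in range(n):
        for a in range(sizes[i]):
            out[off[i] + a] = newoff[i] + taus[i][a]
    return tuple(out)
```

For example, composing the transposition in S₂ with the identities of S₁ and S₂ moves the size-1 block past the size-2 block: `operad_compose((1, 0), [(0,), (0, 1)])` gives `(2, 0, 1)`.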
=== Operads from the braid groups === Similarly, there is a non- Σ {\displaystyle \Sigma } operad for which each P ( n ) {\displaystyle P(n)} is given by the Artin braid group B n {\displaystyle B_{n}} . Moreover, this non- Σ {\displaystyle \Sigma } operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups. === Linear algebra === In linear algebra, real vector spaces can be considered to be algebras over the operad R ∞ {\displaystyle \mathbb {R} ^{\infty }} of all linear combinations. This operad is defined by R ∞ ( n ) = R n {\displaystyle \mathbb {R} ^{\infty }(n)=\mathbb {R} ^{n}} for n ∈ N {\displaystyle n\in \mathbb {N} } , with the obvious action of S n {\displaystyle S_{n}} permuting components, and composition x → ∘ ( y 1 → , … , y n → ) {\displaystyle {\vec {x}}\circ ({\vec {y_{1}}},\ldots ,{\vec {y_{n}}})} given by the concatenation of the vectors x ( 1 ) y 1 → , … , x ( n ) y n → {\displaystyle x^{(1)}{\vec {y_{1}}},\ldots ,x^{(n)}{\vec {y_{n}}}} , where x → = ( x ( 1 ) , … , x ( n ) ) ∈ R n {\displaystyle {\vec {x}}=(x^{(1)},\ldots ,x^{(n)})\in \mathbb {R} ^{n}} . The vector x → = ( 2 , 3 , − 5 , 0 , … ) {\displaystyle {\vec {x}}=(2,3,-5,0,\dots )} for instance represents the operation of forming a linear combination with coefficients 2,3,-5,0,... This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space. 
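The composition rule just described, concatenating each yᵢ scaled by the coefficient x⁽ⁱ⁾, can be sketched directly. In this hypothetical Python illustration (coefficient lists stand in for the vectors of the operad), the final check confirms that applying the composed combination agrees with nesting the combinations:

```python
def compose(x, ys):
    """Operadic composition in the linear-combinations operad:
    the composite of x with (y_1, ..., y_n) concatenates each y_i
    scaled by the corresponding coefficient x[i]."""
    out = []
    for c, y in zip(x, ys):
        out.extend(c * t for t in y)
    return out

def apply_comb(coeffs, values):
    """Apply a linear-combination operation (here to scalars, i.e. V = R)."""
    return sum(c * v for c, v in zip(coeffs, values))

x = [2, 3]                 # 2-ary operation: 2*u + 3*v
y1 = [1, -1]               # 2-ary operation: u - v
y2 = [4]                   # 1-ary operation: 4*u
z = compose(x, [y1, y2])   # 3-ary: 2*(a - b) + 3*(4*c), i.e. [2, -2, 12]
```

Applying `z` to (1, 2, 3) gives 34, the same value as first applying `y1` to (1, 2) and `y2` to (3) and then applying `x` to the results.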
Similarly, affine combinations, conical combinations, and convex combinations can be considered to correspond to the sub-operads where the terms of the vector x → {\displaystyle {\vec {x}}} sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by saying that R n {\displaystyle \mathbb {R} ^{n}} or the standard simplex are model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. === Commutative-ring operad and Lie operad === The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by P ( n ) = Z [ x 1 , … , x n ] {\displaystyle P(n)=\mathbb {Z} [x_{1},\ldots ,x_{n}]} , with the obvious action of S n {\displaystyle S_{n}} and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa. == Free Operads == Typical algebraic constructions (e.g., the free algebra construction) can be extended to operads. Let S e t S n {\displaystyle \mathbf {Set} ^{S_{n}}} denote the category whose objects are sets on which the group S n {\displaystyle S_{n}} acts. Then there is a forgetful functor O p e r → ∏ n ∈ N S e t S n {\displaystyle {\mathsf {Oper}}\to \prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}} , which simply forgets the operadic composition. It is possible to construct a left adjoint Γ : ∏ n ∈ N S e t S n → O p e r {\displaystyle \Gamma :\prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}\to {\mathsf {Oper}}} to this forgetful functor (this is the usual definition of a free functor). 
Given a collection of operations E, Γ ( E ) {\displaystyle \Gamma (E)} is the free operad on E. As with groups and rings, the free construction allows one to express an operad in terms of generators and relations. By a free presentation of an operad O {\displaystyle {\mathcal {O}}} , we mean writing O {\displaystyle {\mathcal {O}}} as a quotient of a free operad F = Γ ( E ) {\displaystyle {\mathcal {F}}=\Gamma (E)} where E describes generators of O {\displaystyle {\mathcal {O}}} and the kernel of the epimorphism F → O {\displaystyle {\mathcal {F}}\to {\mathcal {O}}} describes the relations. A (symmetric) operad O = { O ( n ) } {\displaystyle {\mathcal {O}}=\{{\mathcal {O}}(n)\}} is called quadratic if it has a free presentation such that E = O ( 2 ) {\displaystyle E={\mathcal {O}}(2)} is the set of generators and the relations are contained in Γ ( E ) ( 3 ) {\displaystyle \Gamma (E)(3)} . == Clones == Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also a minion (or clonoid). == Operads in homotopy theory == In Stasheff (2004), Stasheff writes: Operads are particularly important and useful in categories with a good notion of "homotopy", where they play a key role in organizing hierarchies of higher homotopies. == See also == PRO (category theory) Algebra over an operad Higher-order operad E∞-operad Pseudoalgebra Multicategory == Notes == === Citations === == References == Tom Leinster (2004). Higher Operads, Higher Categories. Cambridge University Press. arXiv:math/0305049. Bibcode:2004hohc.book.....L. ISBN 978-0-521-53215-0. Martin Markl, Steve Shnider, Jim Stasheff (2002). Operads in Algebra, Topology and Physics. American Mathematical Society. ISBN 978-0-8218-4362-8. Markl, Martin (June 2006). "Operads and PROPs". arXiv:math/0601129. Stasheff, Jim (June–July 2004). "What Is...an Operad?" 
(PDF). Notices of the American Mathematical Society. 51 (6): 630–631. Retrieved 17 January 2008. Loday, Jean-Louis; Vallette, Bruno (2012), Algebraic Operads (PDF), Grundlehren der Mathematischen Wissenschaften, vol. 346, Berlin, New York: Springer-Verlag, ISBN 978-3-642-30361-6 Zinbiel, Guillaume W. (2012), "Encyclopedia of types of algebras 2010", in Bai, Chengming; Guo, Li; Loday, Jean-Louis (eds.), Operads and universal algebra, Nankai Series in Pure, Applied Mathematics and Theoretical Physics, vol. 9, pp. 217–298, arXiv:1101.0267, Bibcode:2011arXiv1101.0267Z, ISBN 9789814365116 Fresse, Benoit (17 May 2017), Homotopy of Operads and Grothendieck-Teichmüller Groups, Mathematical Surveys and Monographs, American Mathematical Society, ISBN 978-1-4704-3480-9, MR 3643404, Zbl 1373.55014 Miguel A. Mendéz (2015). Set Operads in Combinatorics and Computer Science. SpringerBriefs in Mathematics. ISBN 978-3-319-11712-6. Samuele Giraudo (2018). Nonsymmetric Operads in Combinatorics. Springer International Publishing. ISBN 978-3-030-02073-6. == External links == operad at the nLab https://golem.ph.utexas.edu/category/2011/05/an_operadic_introduction_to_en.html
Wikipedia:F. D. C. Willard#0
F. D. C. Willard (1968–1982) was the pen name of Chester, a Siamese cat, used on several papers written by his owner, J. H. Hetherington, in physics journals. On one occasion, he was listed as the sole author. == Background == In 1975, the American physicist and mathematician Jack H. Hetherington of Michigan State University wanted to publish some of his research results in the field of low-temperature physics in the scientific journal Physical Review Letters. A colleague, to whom he had given his paper for review, pointed out that Hetherington had used the first person plural, "we", in his text, and that the journal would reject this form on submissions with a sole author. Rather than take the time to retype the article to use the singular form, or to bring in a co-author, Hetherington decided to invent one. == Publications == Hetherington had a Siamese cat named Chester, who had been sired by a Siamese named Willard. Fearing that colleagues might recognize his pet's name, he thought it better to use the pet's initial. Aware that most Americans have at least two given names, he invented two more given names based on the scientific name for a house cat, Felis domesticus, and abbreviated them accordingly as F. D. C. His article, entitled "Two-, Three-, and Four-Atom Exchange Effects in bcc ³He" and written by J. H. Hetherington and F. D. C. Willard, was accepted by Physical Review Letters and published in volume 35 (November 1975). At the 15th International Conference on Low Temperature Physics in 1978 in Grenoble, Hetherington's co-author was exposed: Hetherington had sent some signed copies of his article to friends and colleagues and included the "signature" (paw prints) of his co-author in them. Later, another article appeared, this time solely authored by F. D. C. Willard, entitled "L'hélium 3 solide. Un antiferromagnétique nucléaire", published (in French) in September 1980 in the French popular science magazine La Recherche. 
Subsequently, Willard disappeared as an author from the professional world. == Reception == The unmasking of Hetherington's co-author on the frequently cited Physical Review Letters paper made the co-authorship world-famous. The story goes that when inquiries were made to Hetherington's office at Michigan State University, and Hetherington was absent, the callers would ask to speak to the co-author instead. F. D. C. Willard henceforth appeared repeatedly in footnotes, where he was thanked for "useful contributions to the discussion" or for oral communications, and he was even offered a position as a professor. F. D. C. Willard is sometimes included in lists of "Famous Cats" or "Historical Cats". As an April Fool's joke, in 2014 the American Physical Society announced that cat-authored papers, including the Hetherington/Willard paper, would henceforth be open-access (papers of the APS usually require subscription or membership for web access). == See also == List of animals awarded human credentials List of individual cats Polly Matzinger (an immunologist who listed her Afghan Hound, Galadriel Mirkwood, as a co-author) == References == == Further reading == Sam Stall (January 2007). 100 Cats Who Changed Civilization: History's Most Influential Felines. Quirk Books. p. 22. ISBN 978-1-59474-163-0.
Wikipedia:FGLM algorithm#0
FGLM is one of the main algorithms in computer algebra, named after its designers, Faugère, Gianni, Lazard and Mora. They introduced their algorithm in 1993. The input of the algorithm is a Gröbner basis of a zero-dimensional ideal in the ring of polynomials over a field with respect to a monomial order and a second monomial order. As its output, it returns a Gröbner basis of the ideal with respect to the second ordering. The algorithm is a fundamental tool in computer algebra and has been implemented in most of the computer algebra systems. The complexity of FGLM is O(nD³), where n is the number of variables of the polynomials and D is the degree of the ideal. There are several generalizations of FGLM, as well as various applications. == References ==
Wikipedia:FL (complexity)#0
In computational complexity theory, the complexity class FL is the set of function problems which can be solved by a deterministic Turing machine in a logarithmic amount of memory space. As in the definition of L, the machine reads its input from a read-only tape and writes its output to a write-only tape; the logarithmic space restriction applies only to the read/write working tape. Loosely speaking, a function problem takes a complicated input and produces a (perhaps equally) complicated output. Function problems are distinguished from decision problems, which produce only Yes or No answers and correspond to the class L of decision problems which can be solved in deterministic logspace. FL is a subset of FP, the set of function problems which can be solved in deterministic polynomial time. FL is known to contain several natural problems, including arithmetic on numbers. Addition, subtraction and multiplication of two numbers are fairly simple, but division is a far deeper problem, which was open for decades. Similarly, one may define FNL, which has the same relation with NL as FNP has with NP. == References ==
Wikipedia:FOIL method#0
In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials—hence the method may be referred to as the FOIL method. The word FOIL is an acronym for the four terms of the product: First ("first" terms of each binomial are multiplied together) Outer ("outside" terms are multiplied—that is, the first term of the first binomial and the second term of the second) Inner ("inside" terms are multiplied—second term of the first binomial and first term of the second) Last ("last" terms of each binomial are multiplied) The general form is ( a + b ) ( c + d ) = a c ⏟ first + a d ⏟ outside + b c ⏟ inside + b d ⏟ last . {\displaystyle (a+b)(c+d)=\underbrace {ac} _{\text{first}}+\underbrace {ad} _{\text{outside}}+\underbrace {bc} _{\text{inside}}+\underbrace {bd} _{\text{last}}.} Note that a is both a "first" term and an "outer" term; b is both a "last" and "inner" term, and so forth. The order of the four terms in the sum is not important and need not match the order of the letters in the word FOIL. == History == The FOIL method is a special case of a more general method for multiplying algebraic expressions using the distributive law. The word FOIL was originally intended solely as a mnemonic for high-school students learning algebra. The term appears in William Betz's 1929 text Algebra for Today, where he states: ... first terms, outer terms, inner terms, last terms. (The rule stated above may also be remembered by the word FOIL, suggested by the first letters of the words first, outer, inner, last.) William Betz was active in the movement to reform mathematics in the United States at that time, had written many texts on elementary mathematics topics and had "devoted his life to the improvement of mathematics education". Many students and educators in the US now use the word "FOIL" as a verb meaning "to expand the product of two binomials". == Examples == The method is most commonly used to multiply linear binomials. 
For example, ( x + 3 ) ( x + 5 ) = x ⋅ x + x ⋅ 5 + 3 ⋅ x + 3 ⋅ 5 = x 2 + 5 x + 3 x + 15 = x 2 + 8 x + 15. {\displaystyle {\begin{aligned}(x+3)(x+5)&=x\cdot x+x\cdot 5+3\cdot x+3\cdot 5\\&=x^{2}+5x+3x+15\\&=x^{2}+8x+15.\end{aligned}}} If either binomial involves subtraction, the corresponding terms must be negated. For example, ( 2 x − 3 ) ( 3 x − 4 ) = ( 2 x ) ( 3 x ) + ( 2 x ) ( − 4 ) + ( − 3 ) ( 3 x ) + ( − 3 ) ( − 4 ) = 6 x 2 − 8 x − 9 x + 12 = 6 x 2 − 17 x + 12. {\displaystyle {\begin{aligned}(2x-3)(3x-4)&=(2x)(3x)+(2x)(-4)+(-3)(3x)+(-3)(-4)\\&=6x^{2}-8x-9x+12\\&=6x^{2}-17x+12.\end{aligned}}} == The distributive law == The FOIL method is equivalent to a two-step process involving the distributive law: ( a + b ) ( c + d ) = a ( c + d ) + b ( c + d ) = a c + a d + b c + b d . {\displaystyle {\begin{aligned}(a+b)(c+d)&=a(c+d)+b(c+d)\\&=ac+ad+bc+bd.\end{aligned}}} In the first step, the (c + d) is distributed over the addition in the first binomial. In the second step, the distributive law is used to simplify each of the two terms. Note that this process involves a total of three applications of the distributive property. In contrast to the FOIL method, the method using distributivity can be applied easily to products with more terms such as trinomials and higher. == Reverse FOIL == The FOIL rule converts a product of two binomials into a sum of four (or fewer, if like terms are then combined) monomials. The reverse process is called factoring or factorization. In particular, if the proof above is read in reverse it illustrates the technique called factoring by grouping. == Table as an alternative to FOIL == A visual memory tool can replace the FOIL mnemonic for a pair of polynomials with any number of terms. Make a table with the terms of the first polynomial on the left edge and the terms of the second on the top edge, then fill in the table with products of multiplication. 
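The expansions above can be carried out mechanically. In the following sketch, a hypothetical Python illustration, a polynomial is a list of coefficients indexed by degree, and each pairwise product of a term from the first factor with a term from the second (exactly the four FOIL products in the binomial case) is accumulated at the degree given by the sum of the exponents:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree).

    Every pairwise product p[i]*q[j] -- "first", "outer", "inner", "last"
    when both factors are binomials -- contributes to the degree-(i+j) term.
    """
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 3)(x + 5) = x^2 + 8x + 15
print(poly_mul([3, 1], [5, 1]))    # [15, 8, 1]
```

This accumulation by total degree is precisely the antidiagonal summation of the table method, and it applies unchanged to trinomials and longer factors, such as (2x − 3)(3x − 4) = 6x² − 17x + 12 via `poly_mul([-3, 2], [-4, 3])`.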
The table equivalent to the FOIL rule looks like this: × c d a a c a d b b c b d {\displaystyle {\begin{array}{c|cc}\times &c&d\\\hline a&ac&ad\\b&bc&bd\end{array}}} In the case that these are polynomials, (ax + b)(cx + d), the terms of a given degree are found by adding along the antidiagonals: × c x d a x a c x 2 a d x b b c x b d {\displaystyle {\begin{array}{c|cc}\times &cx&d\\\hline ax&acx^{2}&adx\\b&bcx&bd\end{array}}} so ( a x + b ) ( c x + d ) = a c x 2 + ( a d + b c ) x + b d . {\displaystyle (ax+b)(cx+d)=acx^{2}+(ad+bc)x+bd.} To multiply (a + b + c)(w + x + y + z), the table would be as follows: × w x y z a a w a x a y a z b b w b x b y b z c c w c x c y c z {\displaystyle {\begin{array}{c|cccc}\times &w&x&y&z\\\hline a&aw&ax&ay&az\\b&bw&bx&by&bz\\c&cw&cx&cy&cz\end{array}}} The sum of the table entries is the product of the polynomials. Thus: ( a + b + c ) ( w + x + y + z ) = ( a w + a x + a y + a z ) + ( b w + b x + b y + b z ) + ( c w + c x + c y + c z ) . {\displaystyle {\begin{aligned}(a+b+c)(w+x+y+z)&=(aw+ax+ay+az)\\&+(bw+bx+by+bz)\\&+(cw+cx+cy+cz).\end{aligned}}} Similarly, to multiply (ax2 + bx + c)(dx3 + ex2 + fx + g), one writes the same table: × d e f g a a d a e a f a g b b d b e b f b g c c d c e c f c g {\displaystyle {\begin{array}{c|cccc}\times &d&e&f&g\\\hline a&ad&ae&af&ag\\b&bd&be&bf&bg\\c&cd&ce&cf&cg\end{array}}} and sums along antidiagonals: ( a x 2 + b x + c ) ( d x 3 + e x 2 + f x + g ) = a d x 5 + ( a e + b d ) x 4 + ( a f + b e + c d ) x 3 + ( a g + b f + c e ) x 2 + ( b g + c f ) x + c g . {\displaystyle {\begin{aligned}(ax^{2}&+bx+c)(dx^{3}+ex^{2}+fx+g)\\&=adx^{5}+(ae+bd)x^{4}+(af+be+cd)x^{3}+(ag+bf+ce)x^{2}+(bg+cf)x+cg.\end{aligned}}} == Generalizations == The FOIL rule cannot be directly applied to expanding products with more than two multiplicands or multiplicands with more than two summands. However, applying the associative law and recursive foiling allows one to expand such products. 
For instance: ( a + b + c + d ) ( x + y + z + w ) = ( ( a + b ) + ( c + d ) ) ( ( x + y ) + ( z + w ) ) = ( a + b ) ( x + y ) + ( a + b ) ( z + w ) + ( c + d ) ( x + y ) + ( c + d ) ( z + w ) = a x + a y + b x + b y + a z + a w + b z + b w + c x + c y + d x + d y + c z + c w + d z + d w . {\displaystyle {\begin{aligned}(a+b+c+d)(x+y+z+w)&=((a+b)+(c+d))((x+y)+(z+w))\\&=(a+b)(x+y)+(a+b)(z+w)\\&+(c+d)(x+y)+(c+d)(z+w)\\&=ax+ay+bx+by+az+aw+bz+bw\\&+cx+cy+dx+dy+cz+cw+dz+dw.\end{aligned}}} Alternate methods based on distributing forgo the use of the FOIL rule, but may be easier to remember and apply. For example: ( a + b + c + d ) ( x + y + z + w ) = ( a + ( b + c + d ) ) ( x + y + z + w ) = a ( x + y + z + w ) + ( b + c + d ) ( x + y + z + w ) = a ( x + y + z + w ) + ( b + ( c + d ) ) ( x + y + z + w ) = a ( x + y + z + w ) + b ( x + y + z + w ) + ( c + d ) ( x + y + z + w ) = a ( x + y + z + w ) + b ( x + y + z + w ) + c ( x + y + z + w ) + d ( x + y + z + w ) = a x + a y + a z + a w + b x + b y + b z + b w + c x + c y + c z + c w + d x + d y + d z + d w . {\displaystyle {\begin{aligned}(a+b+c+d)(x+y+z+w)&=(a+(b+c+d))(x+y+z+w)\\&=a(x+y+z+w)+(b+c+d)(x+y+z+w)\\&=a(x+y+z+w)+(b+(c+d))(x+y+z+w)\\&=a(x+y+z+w)+b(x+y+z+w)\\&\qquad +(c+d)(x+y+z+w)\\&=a(x+y+z+w)+b(x+y+z+w)\\&\qquad +c(x+y+z+w)+d(x+y+z+w)\\&=ax+ay+az+aw+bx+by+bz+bw\\&\qquad +cx+cy+cz+cw+dx+dy+dz+dw.\end{aligned}}} == See also == Binomial theorem Factorization == References == == Further reading == Steege, Ray; Bailey, Kerry (1997). Schaum's Outline of Theory and Problems of Intermediate Algebra. Schaum's Outline Series. New York: McGraw–Hill. ISBN 978-0-07-060839-9.
Wikipedia:FP (complexity)#0
In computational complexity theory, the complexity class FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. It is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization. The difference between FP and P is that problems in P have one-bit, yes/no answers, while problems in FP can have any output that can be computed in polynomial time. For example, adding two numbers is an FP problem, while determining if their sum is odd is in P. Polynomial-time function problems are fundamental in defining polynomial-time reductions, which are used in turn to define the class of NP-complete problems. == Formal definition == FP is formally defined as follows: A binary relation P ( x , y ) {\displaystyle P(x,y)} is in FP if and only if there is a deterministic polynomial time algorithm that, given x {\displaystyle x} , either finds some y {\displaystyle y} such that P ( x , y ) {\displaystyle P(x,y)} holds, or signals that no such y {\displaystyle y} exists. == Related complexity classes == FNP is the set of binary relations for which there is a polynomial time algorithm that, given x and y, checks whether P(x,y) holds. Just as P and FP are closely related, NP is closely related to FNP. FP = FNP if and only if P = NP. Because a machine that uses logarithmic space has at most polynomially many configurations, FL, the set of function problems which can be calculated in logspace, is contained in FP. It is not known whether FL = FP; this is analogous to the problem of determining whether the decision classes P and L are equal. == References == == External links == Complexity Zoo: FP
Wikipedia:Fa-Yueh Wu#0
Fa-Yueh Wu (January 5, 1932 – January 21, 2020) was a Chinese-born theoretical physicist, mathematical physicist, and mathematician who studied and contributed to solid-state physics and statistical mechanics. == Life == === Early stage === He was born on January 5, 1932, in Shimen County, Hunan Province, Republic of China, the fourth child of his father, a member of the Legislature. The Chiang Kai-shek administration of the Nationalist government placed its temporary capital in Chongqing in December 1938, but already in 1937 he had evacuated to Chongqing with his father and stepmother and entered an elementary school there. However, because of the repeated bombing of Chongqing, he was unable to settle in one place. In 1943, he enrolled in Nankai Junior High School, which had been evacuated to Chongqing at the time. He transferred to a high school in Nanjing, which again became the capital of the Chiang Kai-shek administration in 1946, after the collapse of the Wang Jingwei regime. In 1948 he moved to Changsha and transferred to another junior high school. In 1949, he fled to Taiwan with his father and stepmother because of the Chinese Civil War, but was separated from his four siblings, who remained on the mainland. His parents died without ever being reunited with those children. === Republic of China Navy === He enrolled in the Republic of China Navy Mechanical School in 1949, entered its Department of Electrical Engineering a year later, earned a bachelor's degree in 1954, served in the Republic of China Navy from 1954 to 1956, and became a lieutenant in the Navy. In 1955, he was selected to study radar engineering for half a year in San Francisco, USA. He was an expert in radar and sonar. He was also a master of Xiangqi. === Physics === He entered the National Tsing Hua University in 1957 and received a master's degree from the Institute of Atomic Sciences Physics Group in June 1959.
He then went to the United States on a scholarship to study many-body problems under Eugene Feenberg at Washington University in St. Louis, and received his PhD in 1963. He became an assistant professor at Virginia Tech in 1963 and moved to Northeastern University in 1967, where he was promoted to associate professor in 1969 and professor in 1975. He was named a university professor in 1989 and Matthews professor in 1992, and published numerous academic papers. After retiring in 2006, he became an emeritus professor. He died at his home in Newton, Massachusetts, on January 21, 2020. == Notes == == Books == Lieb, Wu: Two dimensional ferroelectric models. In: Domb, Green (eds.): Phase transitions and critical phenomena. Vol. 1. Academic Press, 1972, pp. 331–490 (vertex models) The Potts Model. In: Reviews of Modern Physics, Vol. 54, 1982, pp. 235–268 Knot theory and statistical mechanics. In: Reviews of Modern Physics, Vol. 64, 1992, pp. 1099–1131 Knot invariants and statistical mechanics – a physicist's perspective. In: M. Ge, C.-N. Yang (eds.): Braid group, knot theory and statistical mechanics. World Scientific, 1993 Exactly solvable models – a journey in statistical mechanics. Selected Papers with commentaries. World Scientific, 2009 (In: Chinese Journal of Physics, Vol. 40, 2002, No. 4) == External links == Maillard: A challenge in enumerative combinatorics: the graph of contributions of Prof. Fa-Yueh Wu. 2002, arXiv:cond-mat/0205063
Wikipedia:Fabio Toninelli#0
Fabio Toninelli (born 1975) is an Italian mathematician who works in probability theory, stochastic processes, and probabilistic aspects of mathematical physics. == Education == He obtained his PhD in physics in 2003 from the Scuola Normale Superiore di Pisa. == Career == Between 2004 and 2020 he was a senior researcher at the Centre National de la Recherche Scientifique (CNRS) in Lyon. Since 2020 he has been Professor of Mathematics at Technische Universität Wien. He is currently (2021–2024) co-editor-in-chief (jointly with Bálint Tóth) of the journal Probability Theory and Related Fields. == Research == Toninelli has contributed substantially to the mathematical theory of disordered statistical mechanical systems, the mixing of Markov chains, and dimer models. His most significant contributions concern the theory of mean-field spin glasses, of polymers in random environments, and of stochastic interface dynamics. == Recognition == Toninelli was an invited speaker of the International Congress of Mathematicians ICM-2018 (Rio de Janeiro), an invited plenary speaker of the 9th European Congress of Mathematics (Sevilla, 2024, https://www.ecm2024sevilla.com/), an invited plenary speaker of the International Congress on Mathematical Physics ICMP-2018 (Montreal), and an invited plenary speaker of the Conference on Stochastic Processes and their Applications SPA-2014 (Buenos Aires). == References == == External links == Fabio Toninelli publications indexed by Google Scholar Personal website https://sites.google.com/view/fabio-toninelli/home
Wikipedia:Factorization#0
In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 × 5 is an integer factorization of 15, and (x − 2)(x + 2) is a polynomial factorization of x2 − 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any x {\displaystyle x} can be trivially written as ( x y ) × ( 1 / y ) {\displaystyle (xy)\times (1/y)} whenever y {\displaystyle y} is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography. Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. 
In this case, the factorization can be done with root-finding algorithms. The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials). A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals. Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination. == Integers == By the fundamental theorem of arithmetic, every integer greater than 1 has a unique (up to the order of the factors) factorization into prime numbers, which are those integers that cannot be further factorized into the product of integers greater than one. For computing the factorization of an integer n, one needs an algorithm for finding a divisor q of n or deciding that n is prime. When such a divisor is found, the repeated application of this algorithm to the factors q and n / q eventually gives the complete factorization of n. For finding a divisor q of n, if any, it suffices to test all values of q such that 1 < q and q2 ≤ n.
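This divisor search can be sketched in Python (an illustrative trial-division routine, not an optimized factoring algorithm):

```python
def trial_division(n: int) -> list[int]:
    """Factor n into primes by testing divisors q with q*q <= n.

    The first divisor found at each stage is necessarily prime, and the
    search for divisors of the cofactor can resume at the same q."""
    factors = []
    q = 2
    while q * q <= n:
        while n % q == 0:   # q divides n: record it, continue with n // q
            factors.append(q)
            n //= q
        q += 1
    if n > 1:               # the remaining cofactor is prime
        factors.append(n)
    return factors

factors = trial_division(1386)  # the worked example below: [2, 3, 3, 7, 11]
```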
In fact, if r is a divisor of n such that r2 > n, then q = n / r is a divisor of n such that q2 ≤ n. If one tests the values of q in increasing order, the first divisor that is found is necessarily a prime number, and the cofactor r = n / q cannot have any divisor smaller than q. To obtain the complete factorization, it thus suffices to continue the algorithm by searching for a divisor of r that is not smaller than q and not greater than √r. There is no need to test all values of q when applying the method. In principle, it suffices to test only prime divisors. This requires a table of prime numbers, which may be generated, for example, with the sieve of Eratosthenes. As the method of factorization does essentially the same work as the sieve of Eratosthenes, it is generally more efficient to test for a divisor only those numbers for which it is not immediately clear whether they are prime or not. Typically, one may proceed by testing 2, 3, 5, and the numbers greater than 5 whose last digit is 1, 3, 7, or 9 and whose digit sum is not a multiple of 3. This method works well for factoring small integers, but is inefficient for larger integers. For example, Pierre de Fermat was unable to discover that the sixth Fermat number 1 + 2 2 5 = 1 + 2 32 = 4 294 967 297 {\displaystyle 1+2^{2^{5}}=1+2^{32}=4\,294\,967\,297} is not a prime number. In fact, applying the above method would require more than 10,000 divisions for a number that has 10 decimal digits. There are more efficient factoring algorithms. However, they remain relatively inefficient: with the present state of the art, one cannot factorize, even with the most powerful computers, a number of 500 decimal digits that is the product of two randomly chosen prime numbers. This ensures the security of the RSA cryptosystem, which is widely used for secure internet communication. === Example === For factoring n = 1386 into primes: Start with division by 2: the number is even, and n = 2 · 693.
Continue with 693, and 2 as a first divisor candidate. 693 is odd (2 is not a divisor), but is a multiple of 3: one has 693 = 3 · 231 and n = 2 · 3 · 231. Continue with 231, and 3 as a first divisor candidate. 231 is also a multiple of 3: one has 231 = 3 · 77, and thus n = 2 · 32 · 77. Continue with 77, and 3 as a first divisor candidate. 77 is not a multiple of 3, since the sum of its digits is 14, not a multiple of 3. It is also not a multiple of 5 because its last digit is 7. The next odd divisor to be tested is 7. One has 77 = 7 · 11, and thus n = 2 · 32 · 7 · 11. This shows that 7 is prime (easy to test directly). Continue with 11, and 7 as a first divisor candidate. As 72 > 11, one has finished. Thus 11 is prime, and the prime factorization is 1386 = 2 · 32 · 7 · 11. == Expressions == Manipulating expressions is the basis of algebra. Factorization is one of the most important methods for expression manipulation for several reasons. If one can put an equation in a factored form E⋅F = 0, then the problem of solving the equation splits into two independent (and generally easier) problems E = 0 and F = 0. When an expression can be factored, the factors are often much simpler, and may thus offer some insight on the problem. For example, x 3 − a x 2 − b x 2 − c x 2 + a b x + a c x + b c x − a b c {\displaystyle x^{3}-ax^{2}-bx^{2}-cx^{2}+abx+acx+bcx-abc} having 16 multiplications, 4 subtractions and 3 additions, may be factored into the much simpler expression ( x − a ) ( x − b ) ( x − c ) , {\displaystyle (x-a)(x-b)(x-c),} with only two multiplications and three subtractions. Moreover, the factored form immediately gives roots x = a,b,c as the roots of the polynomial. On the other hand, factorization is not always possible, and when it is possible, the factors are not always simpler. 
For example, x 10 − 1 {\displaystyle x^{10}-1} can be factored into two irreducible factors x − 1 {\displaystyle x-1} and x 9 + x 8 + ⋯ + x 2 + x + 1 {\displaystyle x^{9}+x^{8}+\cdots +x^{2}+x+1} . Various methods have been developed for finding factorizations; some are described below. Solving algebraic equations may be viewed as a problem of polynomial factorization. In fact, the fundamental theorem of algebra can be stated as follows: every polynomial in x of degree n with complex coefficients may be factorized into n linear factors x − a i , {\displaystyle x-a_{i},} for i = 1, ..., n, where the ais are the roots of the polynomial. Even though the structure of the factorization is known in these cases, the ais generally cannot be computed in terms of radicals (nth roots), by the Abel–Ruffini theorem. In most cases, the best that can be done is computing approximate values of the roots with a root-finding algorithm. === History of factorization of expressions === The systematic use of algebraic manipulations for simplifying expressions (more specifically equations) may be dated to 9th century, with al-Khwarizmi's book The Compendious Book on Calculation by Completion and Balancing, which is titled with two such types of manipulation. However, even for solving quadratic equations, the factoring method was not used before Harriot's work published in 1631, ten years after his death. In his book Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas, Harriot drew tables for addition, subtraction, multiplication and division of monomials, binomials, and trinomials. Then, in a second section, he set up the equation aa − ba + ca = + bc, and showed that this matches the form of multiplication he had previously provided, giving the factorization (a − b)(a + c). === General methods === The following methods apply to any expression that is a sum, or that may be transformed into a sum. 
Therefore, they are most often applied to polynomials, though they also may be applied when the terms of the sum are not monomials, that is, the terms of the sum are a product of variables and constants. ==== Common factor ==== It may occur that all terms of a sum are products and that some factors are common to all terms. In this case, the distributive law allows factoring out this common factor. If there are several such common factors, it is preferable to divide out the greatest such common factor. Also, if there are integer coefficients, one may factor out the greatest common divisor of these coefficients. For example, 6 x 3 y 2 + 8 x 4 y 3 − 10 x 5 y 3 = 2 x 3 y 2 ( 3 + 4 x y − 5 x 2 y ) , {\displaystyle 6x^{3}y^{2}+8x^{4}y^{3}-10x^{5}y^{3}=2x^{3}y^{2}(3+4xy-5x^{2}y),} since 2 is the greatest common divisor of 6, 8, and 10, and x 3 y 2 {\displaystyle x^{3}y^{2}} divides all terms. ==== Grouping ==== Grouping terms may allow using other methods for getting a factorization. For example, to factor 4 x 2 + 20 x + 3 x y + 15 y , {\displaystyle 4x^{2}+20x+3xy+15y,} one may remark that the first two terms have a common factor x, and the last two terms have the common factor y. Thus 4 x 2 + 20 x + 3 x y + 15 y = ( 4 x 2 + 20 x ) + ( 3 x y + 15 y ) = 4 x ( x + 5 ) + 3 y ( x + 5 ) . {\displaystyle 4x^{2}+20x+3xy+15y=(4x^{2}+20x)+(3xy+15y)=4x(x+5)+3y(x+5).} Then a simple inspection shows the common factor x + 5, leading to the factorization 4 x 2 + 20 x + 3 x y + 15 y = ( 4 x + 3 y ) ( x + 5 ) . {\displaystyle 4x^{2}+20x+3xy+15y=(4x+3y)(x+5).} In general, this works for sums of 4 terms that have been obtained as the product of two binomials. Although not frequently, this may work also for more complicated examples. ==== Adding and subtracting terms ==== Sometimes, some term grouping reveals part of a recognizable pattern. It is then useful to add and subtract terms to complete the pattern. 
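The grouping factorization above can be spot-checked numerically at sample points (a sketch; a full symbolic check would require a computer algebra system):

```python
def lhs(x, y):
    # the original four-term sum
    return 4*x**2 + 20*x + 3*x*y + 15*y

def rhs(x, y):
    # the factored form obtained by grouping: (4x + 3y)(x + 5)
    return (4*x + 3*y) * (x + 5)

# Agreement at enough sample points strongly suggests the identity
# (two polynomials of bounded degree that agree on a grid are equal).
for x, y in [(0, 0), (1, 2), (-3, 7), (10, -4)]:
    assert lhs(x, y) == rhs(x, y)
```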
A typical use of this is the completing the square method for getting the quadratic formula. Another example is the factorization of x 4 + 1. {\displaystyle x^{4}+1.} If one introduces the non-real square root of –1, commonly denoted i, then one has a difference of squares x 4 + 1 = ( x 2 + i ) ( x 2 − i ) . {\displaystyle x^{4}+1=(x^{2}+i)(x^{2}-i).} However, one may also want a factorization with real number coefficients. By adding and subtracting 2 x 2 , {\displaystyle 2x^{2},} and grouping three terms together, one may recognize the square of a binomial: x 4 + 1 = ( x 4 + 2 x 2 + 1 ) − 2 x 2 = ( x 2 + 1 ) 2 − ( x 2 ) 2 = ( x 2 + x 2 + 1 ) ( x 2 − x 2 + 1 ) . {\displaystyle x^{4}+1=(x^{4}+2x^{2}+1)-2x^{2}=(x^{2}+1)^{2}-\left(x{\sqrt {2}}\right)^{2}=\left(x^{2}+x{\sqrt {2}}+1\right)\left(x^{2}-x{\sqrt {2}}+1\right).} Subtracting and adding 2 x 2 {\displaystyle 2x^{2}} also yields the factorization: x 4 + 1 = ( x 4 − 2 x 2 + 1 ) + 2 x 2 = ( x 2 − 1 ) 2 + ( x 2 ) 2 = ( x 2 + x − 2 − 1 ) ( x 2 − x − 2 − 1 ) . {\displaystyle x^{4}+1=(x^{4}-2x^{2}+1)+2x^{2}=(x^{2}-1)^{2}+\left(x{\sqrt {2}}\right)^{2}=\left(x^{2}+x{\sqrt {-2}}-1\right)\left(x^{2}-x{\sqrt {-2}}-1\right).} These factorizations work not only over the complex numbers, but also over any field, where either –1, 2 or –2 is a square. In a finite field, the product of two non-squares is a square; this implies that the polynomial x 4 + 1 , {\displaystyle x^{4}+1,} which is irreducible over the integers, is reducible modulo every prime number. 
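The reducibility of x⁴ + 1 modulo small primes can be verified directly by multiplying polynomials with coefficients reduced mod p. A minimal sketch, with polynomials represented as coefficient lists (lowest degree first; `polymul` is a hypothetical helper):

```python
def polymul(p, q, mod):
    """Multiply two polynomials given as coefficient lists (lowest
    degree first), reducing each coefficient modulo `mod`."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % mod
    return r

x4_plus_1 = [1, 0, 0, 0, 1]          # x^4 + 1
# (x^2 + x - 1)(x^2 - x - 1) = x^4 + 1 (mod 3)
assert polymul([-1, 1, 1], [-1, -1, 1], 3) == x4_plus_1
# (x^2 + 2)(x^2 - 2) = x^4 + 1 (mod 5)
assert polymul([2, 0, 1], [-2, 0, 1], 5) == x4_plus_1
```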
For example, x 4 + 1 ≡ ( x + 1 ) 4 ( mod 2 ) ; {\displaystyle x^{4}+1\equiv (x+1)^{4}{\pmod {2}};} x 4 + 1 ≡ ( x 2 + x − 1 ) ( x 2 − x − 1 ) ( mod 3 ) , {\displaystyle x^{4}+1\equiv (x^{2}+x-1)(x^{2}-x-1){\pmod {3}},} since 1 2 ≡ − 2 ( mod 3 ) ; {\displaystyle 1^{2}\equiv -2{\pmod {3}};} x 4 + 1 ≡ ( x 2 + 2 ) ( x 2 − 2 ) ( mod 5 ) , {\displaystyle x^{4}+1\equiv (x^{2}+2)(x^{2}-2){\pmod {5}},} since 2 2 ≡ − 1 ( mod 5 ) ; {\displaystyle 2^{2}\equiv -1{\pmod {5}};} x 4 + 1 ≡ ( x 2 + 3 x + 1 ) ( x 2 − 3 x + 1 ) ( mod 7 ) , {\displaystyle x^{4}+1\equiv (x^{2}+3x+1)(x^{2}-3x+1){\pmod {7}},} since 3 2 ≡ 2 ( mod 7 ) . {\displaystyle 3^{2}\equiv 2{\pmod {7}}.} === Recognizable patterns === Many identities provide an equality between a sum and a product. The above methods may be used for letting the sum side of some identity appear in an expression, which may therefore be replaced by a product. Below are identities whose left-hand sides are commonly used as patterns (this means that the variables E and F that appear in these identities may represent any subexpression of the expression that has to be factorized). Difference of two squares E 2 − F 2 = ( E + F ) ( E − F ) {\displaystyle E^{2}-F^{2}=(E+F)(E-F)} For example, a 2 + 2 a b + b 2 − x 2 + 2 x y − y 2 = ( a 2 + 2 a b + b 2 ) − ( x 2 − 2 x y + y 2 ) = ( a + b ) 2 − ( x − y ) 2 = ( a + b + x − y ) ( a + b − x + y ) . 
{\displaystyle {\begin{aligned}a^{2}+&2ab+b^{2}-x^{2}+2xy-y^{2}\\&=(a^{2}+2ab+b^{2})-(x^{2}-2xy+y^{2})\\&=(a+b)^{2}-(x-y)^{2}\\&=(a+b+x-y)(a+b-x+y).\end{aligned}}} Sum/difference of two cubes E 3 + F 3 = ( E + F ) ( E 2 − E F + F 2 ) {\displaystyle E^{3}+F^{3}=(E+F)(E^{2}-EF+F^{2})} E 3 − F 3 = ( E − F ) ( E 2 + E F + F 2 ) {\displaystyle E^{3}-F^{3}=(E-F)(E^{2}+EF+F^{2})} Cauchy identity a 3 + b 3 + 3 a b ( a + b ) = ( a + b ) 3 {\displaystyle a^{3}+b^{3}+3ab(a+b)=(a+b)^{3}} a 3 − b 3 − 3 a b ( a − b ) = ( a − b ) 3 {\displaystyle a^{3}-b^{3}-3ab(a-b)=(a-b)^{3}} Difference of two fourth powers E 4 − F 4 = ( E 2 + F 2 ) ( E 2 − F 2 ) = ( E 2 + F 2 ) ( E + F ) ( E − F ) {\displaystyle {\begin{aligned}E^{4}-F^{4}&=(E^{2}+F^{2})(E^{2}-F^{2})\\&=(E^{2}+F^{2})(E+F)(E-F)\end{aligned}}} Sum/difference of two nth powers In the following identities, the factors may often be further factorized: Difference, even exponent E 2 n − F 2 n = ( E n + F n ) ( E n − F n ) {\displaystyle E^{2n}-F^{2n}=(E^{n}+F^{n})(E^{n}-F^{n})} Difference, even or odd exponent E n − F n = ( E − F ) ( E n − 1 + E n − 2 F + E n − 3 F 2 + ⋯ + E F n − 2 + F n − 1 ) {\displaystyle E^{n}-F^{n}=(E-F)(E^{n-1}+E^{n-2}F+E^{n-3}F^{2}+\cdots +EF^{n-2}+F^{n-1})} This is an example showing that the factors may be much larger than the sum that is factorized. Sum, odd exponent E n + F n = ( E + F ) ( E n − 1 − E n − 2 F + E n − 3 F 2 − ⋯ − E F n − 2 + F n − 1 ) {\displaystyle E^{n}+F^{n}=(E+F)(E^{n-1}-E^{n-2}F+E^{n-3}F^{2}-\cdots -EF^{n-2}+F^{n-1})} (obtained by changing F into –F in the preceding formula) Sum, even exponent If the exponent is a power of two then the expression cannot, in general, be factorized without introducing complex numbers (if E and F contain complex numbers, this may not be the case). If n has an odd divisor, that is if n = pq with p odd, one may use the preceding formula (in "Sum, odd exponent") applied to ( E q ) p + ( F q ) p .
{\displaystyle (E^{q})^{p}+(F^{q})^{p}.} Trinomials and cubic formulas x 2 + y 2 + z 2 + 2 ( x y + y z + x z ) = ( x + y + z ) 2 x 3 + y 3 + z 3 − 3 x y z = ( x + y + z ) ( x 2 + y 2 + z 2 − x y − x z − y z ) x 3 + y 3 + z 3 + 3 x 2 ( y + z ) + 3 y 2 ( x + z ) + 3 z 2 ( x + y ) + 6 x y z = ( x + y + z ) 3 x 3 + y 3 + z 3 + 3 ( x + y ) ( y + z ) ( x + z ) = ( x + y + z ) 3 {\displaystyle {\begin{aligned}&x^{2}+y^{2}+z^{2}+2(xy+yz+xz)=(x+y+z)^{2}\\&x^{3}+y^{3}+z^{3}-3xyz=(x+y+z)(x^{2}+y^{2}+z^{2}-xy-xz-yz)\\&x^{3}+y^{3}+z^{3}+3x^{2}(y+z)+3y^{2}(x+z)+3z^{2}(x+y)+6xyz=(x+y+z)^{3}\\&x^{3}+y^{3}+z^{3}+3(x+y)(y+z)(x+z)=(x+y+z)^{3}\\\end{aligned}}} Argand identity x 4 + x 2 y 2 + y 4 = ( x 2 + x y + y 2 ) ( x 2 − x y + y 2 ) {\displaystyle x^{4}+x^{2}y^{2}+y^{4}=(x^{2}+xy+y^{2})(x^{2}-xy+y^{2})} x 4 + x 2 + 1 = ( x 2 + x + 1 ) ( x 2 − x + 1 ) {\displaystyle x^{4}+x^{2}+1=(x^{2}+x+1)(x^{2}-x+1)} Binomial expansions The binomial theorem supplies patterns that can easily be recognized from the integers that appear in them. In low degree: a 2 + 2 a b + b 2 = ( a + b ) 2 {\displaystyle a^{2}+2ab+b^{2}=(a+b)^{2}} a 2 − 2 a b + b 2 = ( a − b ) 2 {\displaystyle a^{2}-2ab+b^{2}=(a-b)^{2}} a 3 + 3 a 2 b + 3 a b 2 + b 3 = ( a + b ) 3 {\displaystyle a^{3}+3a^{2}b+3ab^{2}+b^{3}=(a+b)^{3}} a 3 − 3 a 2 b + 3 a b 2 − b 3 = ( a − b ) 3 {\displaystyle a^{3}-3a^{2}b+3ab^{2}-b^{3}=(a-b)^{3}} More generally, the coefficients of the expanded forms of ( a + b ) n {\displaystyle (a+b)^{n}} and ( a − b ) n {\displaystyle (a-b)^{n}} are the binomial coefficients, which appear in the nth row of Pascal's triangle. ==== Roots of unity ==== The nth roots of unity are the complex numbers each of which is a root of the polynomial x n − 1. {\displaystyle x^{n}-1.} They are thus the numbers e 2 i k π / n = cos ⁡ 2 π k n + i sin ⁡ 2 π k n {\displaystyle e^{2ik\pi /n}=\cos {\tfrac {2\pi k}{n}}+i\sin {\tfrac {2\pi k}{n}}} for k = 0 , … , n − 1.
{\displaystyle k=0,\ldots ,n-1.} It follows that for any two expressions E and F, one has: E n − F n = ( E − F ) ∏ k = 1 n − 1 ( E − F e 2 i k π / n ) {\displaystyle E^{n}-F^{n}=(E-F)\prod _{k=1}^{n-1}\left(E-Fe^{2ik\pi /n}\right)} E n + F n = ∏ k = 0 n − 1 ( E − F e ( 2 k + 1 ) i π / n ) if n is even {\displaystyle E^{n}+F^{n}=\prod _{k=0}^{n-1}\left(E-Fe^{(2k+1)i\pi /n}\right)\qquad {\text{if }}n{\text{ is even}}} E n + F n = ( E + F ) ∏ k = 1 n − 1 ( E + F e 2 i k π / n ) if n is odd {\displaystyle E^{n}+F^{n}=(E+F)\prod _{k=1}^{n-1}\left(E+Fe^{2ik\pi /n}\right)\qquad {\text{if }}n{\text{ is odd}}} If E and F are real expressions, and one wants real factors, one has to replace every pair of complex conjugate factors by its product. As the complex conjugate of e i α {\displaystyle e^{i\alpha }} is e − i α , {\displaystyle e^{-i\alpha },} and ( a − b e i α ) ( a − b e − i α ) = a 2 − a b ( e i α + e − i α ) + b 2 e i α e − i α = a 2 − 2 a b cos α + b 2 , {\displaystyle \left(a-be^{i\alpha }\right)\left(a-be^{-i\alpha }\right)=a^{2}-ab\left(e^{i\alpha }+e^{-i\alpha }\right)+b^{2}e^{i\alpha }e^{-i\alpha }=a^{2}-2ab\cos \,\alpha +b^{2},} one has the following real factorizations (one passes from one to the other by changing k into n − k or n + 1 − k, and applying the usual trigonometric formulas): E 2 n − F 2 n = ( E − F ) ( E + F ) ∏ k = 1 n − 1 ( E 2 − 2 E F cos k π n + F 2 ) = ( E − F ) ( E + F ) ∏ k = 1 n − 1 ( E 2 + 2 E F cos k π n + F 2 ) {\displaystyle {\begin{aligned}E^{2n}-F^{2n}&=(E-F)(E+F)\prod _{k=1}^{n-1}\left(E^{2}-2EF\cos \,{\tfrac {k\pi }{n}}+F^{2}\right)\\&=(E-F)(E+F)\prod _{k=1}^{n-1}\left(E^{2}+2EF\cos \,{\tfrac {k\pi }{n}}+F^{2}\right)\end{aligned}}} E 2 n + F 2 n = ∏ k = 1 n ( E 2 + 2 E F cos ( 2 k − 1 ) π 2 n + F 2 ) = ∏ k = 1 n ( E 2 − 2 E F cos ( 2 k − 1 ) π 2 n + F 2 ) {\displaystyle {\begin{aligned}E^{2n}+F^{2n}&=\prod _{k=1}^{n}\left(E^{2}+2EF\cos \,{\tfrac {(2k-1)\pi }{2n}}+F^{2}\right)\\&=\prod _{k=1}^{n}\left(E^{2}-2EF\cos \,{\tfrac
{(2k-1)\pi }{2n}}+F^{2}\right)\end{aligned}}} The cosines that appear in these factorizations are algebraic numbers, and may be expressed in terms of radicals (this is possible because their Galois group is cyclic); however, these radical expressions are too complicated to be used, except for low values of n. For example, a 4 + b 4 = ( a 2 − 2 a b + b 2 ) ( a 2 + 2 a b + b 2 ) . {\displaystyle a^{4}+b^{4}=(a^{2}-{\sqrt {2}}ab+b^{2})(a^{2}+{\sqrt {2}}ab+b^{2}).} a 5 − b 5 = ( a − b ) ( a 2 + 1 − 5 2 a b + b 2 ) ( a 2 + 1 + 5 2 a b + b 2 ) , {\displaystyle a^{5}-b^{5}=(a-b)\left(a^{2}+{\frac {1-{\sqrt {5}}}{2}}ab+b^{2}\right)\left(a^{2}+{\frac {1+{\sqrt {5}}}{2}}ab+b^{2}\right),} a 5 + b 5 = ( a + b ) ( a 2 − 1 − 5 2 a b + b 2 ) ( a 2 − 1 + 5 2 a b + b 2 ) . {\displaystyle a^{5}+b^{5}=(a+b)\left(a^{2}-{\frac {1-{\sqrt {5}}}{2}}ab+b^{2}\right)\left(a^{2}-{\frac {1+{\sqrt {5}}}{2}}ab+b^{2}\right).} Often one wants a factorization with rational coefficients. Such a factorization involves cyclotomic polynomials. To express rational factorizations of sums and differences of powers, we need a notation for the homogenization of a polynomial: if P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n , {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n},} its homogenization is the bivariate polynomial P ¯ ( x , y ) = a 0 x n + a 1 x n − 1 y + ⋯ + a n y n . {\displaystyle {\overline {P}}(x,y)=a_{0}x^{n}+a_{1}x^{n-1}y+\cdots +a_{n}y^{n}.} Then, one has E n − F n = ∏ k ∣ n Q ¯ k ( E , F ) , {\displaystyle E^{n}-F^{n}=\prod _{k\mid n}{\overline {Q}}_{k}(E,F),} E n + F n = ∏ k ∣ 2 n , k ∤ n Q ¯ k ( E , F ) , {\displaystyle E^{n}+F^{n}=\prod _{k\mid 2n,k\not \mid n}{\overline {Q}}_{k}(E,F),} where the products are taken over all divisors of n, or all divisors of 2n that do not divide n, and Q k ( x ) {\displaystyle Q_{k}(x)} is the kth cyclotomic polynomial.
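The radical factorization of a⁴ + b⁴, for instance, can be verified numerically at sample points (a floating-point spot check, intended only as an illustration):

```python
from math import sqrt, isclose

def factored(a, b):
    # (a^2 - sqrt(2) a b + b^2)(a^2 + sqrt(2) a b + b^2)
    r2 = sqrt(2)
    return (a*a - r2*a*b + b*b) * (a*a + r2*a*b + b*b)

# Compare with a^4 + b^4 at a few sample points
for a, b in [(1.0, 1.0), (2.0, 3.0), (-1.5, 0.5)]:
    assert isclose(a**4 + b**4, factored(a, b))
```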
For example, a 6 − b 6 = Q ¯ 1 ( a , b ) Q ¯ 2 ( a , b ) Q ¯ 3 ( a , b ) Q ¯ 6 ( a , b ) = ( a − b ) ( a + b ) ( a 2 − a b + b 2 ) ( a 2 + a b + b 2 ) , {\displaystyle a^{6}-b^{6}={\overline {Q}}_{1}(a,b){\overline {Q}}_{2}(a,b){\overline {Q}}_{3}(a,b){\overline {Q}}_{6}(a,b)=(a-b)(a+b)(a^{2}-ab+b^{2})(a^{2}+ab+b^{2}),} a 6 + b 6 = Q ¯ 4 ( a , b ) Q ¯ 12 ( a , b ) = ( a 2 + b 2 ) ( a 4 − a 2 b 2 + b 4 ) , {\displaystyle a^{6}+b^{6}={\overline {Q}}_{4}(a,b){\overline {Q}}_{12}(a,b)=(a^{2}+b^{2})(a^{4}-a^{2}b^{2}+b^{4}),} since the divisors of 6 are 1, 2, 3, 6, and the divisors of 12 that do not divide 6 are 4 and 12. == Polynomials == For polynomials, factorization is strongly related with the problem of solving algebraic equations. An algebraic equation has the form P ( x ) = def a 0 x n + a 1 x n − 1 + ⋯ + a n = 0 , {\displaystyle P(x)\ \,{\stackrel {\text{def}}{=}}\ \,a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}=0,} where P(x) is a polynomial in x with a 0 ≠ 0. {\displaystyle a_{0}\neq 0.} A solution of this equation (also called a root of the polynomial) is a value r of x such that P ( r ) = 0. {\displaystyle P(r)=0.} If P ( x ) = Q ( x ) R ( x ) {\displaystyle P(x)=Q(x)R(x)} is a factorization of P(x) = 0 as a product of two polynomials, then the roots of P(x) are the union of the roots of Q(x) and the roots of R(x). Thus solving P(x) = 0 is reduced to the simpler problems of solving Q(x) = 0 and R(x) = 0. Conversely, the factor theorem asserts that, if r is a root of P(x) = 0, then P(x) may be factored as P ( x ) = ( x − r ) Q ( x ) , {\displaystyle P(x)=(x-r)Q(x),} where Q(x) is the quotient of Euclidean division of P(x) = 0 by the linear (degree one) factor x − r. If the coefficients of P(x) are real or complex numbers, the fundamental theorem of algebra asserts that P(x) has a real or complex root. 
Using the factor theorem recursively, it results that P ( x ) = a 0 ( x − r 1 ) ⋯ ( x − r n ) , {\displaystyle P(x)=a_{0}(x-r_{1})\cdots (x-r_{n}),} where r 1 , … , r n {\displaystyle r_{1},\ldots ,r_{n}} are the real or complex roots of P, with some of them possibly repeated. This complete factorization is unique up to the order of the factors. If the coefficients of P(x) are real, one generally wants a factorization where factors have real coefficients. In this case, the complete factorization may have some quadratic (degree two) factors. This factorization may easily be deduced from the above complete factorization. In fact, if r = a + ib is a non-real root of P(x), then its complex conjugate s = a − ib is also a root of P(x). So, the product ( x − r ) ( x − s ) = x 2 − ( r + s ) x + r s = x 2 − 2 a x + a 2 + b 2 {\displaystyle (x-r)(x-s)=x^{2}-(r+s)x+rs=x^{2}-2ax+a^{2}+b^{2}} is a factor of P(x) with real coefficients. Repeating this for all non-real factors gives a factorization with linear or quadratic real factors. For computing these real or complex factorizations, one needs the roots of the polynomial, which may not be computed exactly, and only approximated using root-finding algorithms. In practice, most algebraic equations of interest have integer or rational coefficients, and one may want a factorization with factors of the same kind. The fundamental theorem of arithmetic may be generalized to this case, stating that polynomials with integer or rational coefficients have the unique factorization property. 
More precisely, every polynomial with rational coefficients may be factorized into a product P ( x ) = q P 1 ( x ) ⋯ P k ( x ) , {\displaystyle P(x)=q\,P_{1}(x)\cdots P_{k}(x),} where q is a rational number and P 1 ( x ) , … , P k ( x ) {\displaystyle P_{1}(x),\ldots ,P_{k}(x)} are non-constant polynomials with integer coefficients that are irreducible and primitive; this means that none of the P i ( x ) {\displaystyle P_{i}(x)} may be written as the product of two polynomials (with integer coefficients) that are neither 1 nor −1 (integers are considered as polynomials of degree zero). Moreover, this factorization is unique up to the order of the factors and the signs of the factors. There are efficient algorithms for computing this factorization, which are implemented in most computer algebra systems. See Factorization of polynomials. Unfortunately, these algorithms are too complicated to use for paper-and-pencil computations. Besides the heuristics above, only a few methods are suitable for hand computations, which generally work only for polynomials of low degree, with few nonzero coefficients. The main such methods are described in the next subsections. === Primitive-part & content factorization === Every polynomial with rational coefficients may be factorized, in a unique way, as the product of a rational number and a polynomial with integer coefficients, which is primitive (that is, the greatest common divisor of the coefficients is 1), and has a positive leading coefficient (coefficient of the term of the highest degree). For example: − 10 x 2 + 5 x + 5 = ( − 5 ) ⋅ ( 2 x 2 − x − 1 ) {\displaystyle -10x^{2}+5x+5=(-5)\cdot (2x^{2}-x-1)} 1 3 x 5 + 7 2 x 2 + 2 x + 1 = 1 6 ( 2 x 5 + 21 x 2 + 12 x + 6 ) {\displaystyle {\frac {1}{3}}x^{5}+{\frac {7}{2}}x^{2}+2x+1={\frac {1}{6}}(2x^{5}+21x^{2}+12x+6)} In this factorization, the rational number is called the content, and the primitive polynomial is the primitive part.
The computation of this factorization may be done as follows: first, put all coefficients over a common denominator, writing the polynomial as the quotient by an integer q of a polynomial with integer coefficients. Then one divides out the greatest common divisor p of the coefficients of this polynomial to get the primitive part, the content being p / q . {\displaystyle p/q.} Finally, if needed, one changes the signs of the content and of all coefficients of the primitive part. This factorization may produce a result that is larger than the original polynomial (typically when there are many coprime denominators), but, even when this is the case, the primitive part is generally easier to manipulate for further factorization. === Using the factor theorem === The factor theorem states that, if r is a root of a polynomial P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n − 1 x + a n , {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n-1}x+a_{n},} meaning P(r) = 0, then there is a factorization P ( x ) = ( x − r ) Q ( x ) , {\displaystyle P(x)=(x-r)Q(x),} where Q ( x ) = b 0 x n − 1 + ⋯ + b n − 2 x + b n − 1 , {\displaystyle Q(x)=b_{0}x^{n-1}+\cdots +b_{n-2}x+b_{n-1},} with a 0 = b 0 {\displaystyle a_{0}=b_{0}} . Then polynomial long division or synthetic division gives: b i = a 0 r i + ⋯ + a i − 1 r + a i for i = 1 , … , n − 1. {\displaystyle b_{i}=a_{0}r^{i}+\cdots +a_{i-1}r+a_{i}\ {\text{ for }}\ i=1,\ldots ,n{-}1.} This may be useful when one knows or can guess a root of the polynomial. For example, for P ( x ) = x 3 − 3 x + 2 , {\displaystyle P(x)=x^{3}-3x+2,} one may easily see that the sum of its coefficients is 0, so r = 1 is a root. As r + 0 = 1, and r 2 + 0 r − 3 = − 2 , {\displaystyle r^{2}+0r-3=-2,} one has x 3 − 3 x + 2 = ( x − 1 ) ( x 2 + x − 2 ) . {\displaystyle x^{3}-3x+2=(x-1)(x^{2}+x-2).} === Rational roots === For polynomials with rational coefficients, one may search for roots that are rational numbers. 
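The recurrence for the coefficients b_i above is exactly Horner's scheme, so synthetic division takes only a few lines. A sketch (the function name is ours, not a standard library routine):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients listed from the highest degree down)
    by (x - r). Returns (quotient_coefficients, remainder); the remainder
    equals P(r), so it is 0 exactly when r is a root (factor theorem)."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(b[-1] * r + a)   # b_i = b_{i-1} * r + a_i
    return b[:-1], b[-1]

# P(x) = x^3 - 3x + 2 with the root r = 1 from the text:
q, rem = synthetic_division([1, 0, -3, 2], 1)
print(q, rem)  # [1, 1, -2] 0, i.e. quotient x^2 + x - 2, remainder 0
```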
Primitive part-content factorization (see above) reduces the problem of searching for rational roots to the case of polynomials with integer coefficients having no non-trivial common divisor. If x = p q {\displaystyle x={\tfrac {p}{q}}} is a rational root of such a polynomial P ( x ) = a 0 x n + a 1 x n − 1 + ⋯ + a n − 1 x + a n , {\displaystyle P(x)=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n-1}x+a_{n},} the factor theorem shows that one has a factorization P ( x ) = ( q x − p ) Q ( x ) , {\displaystyle P(x)=(qx-p)Q(x),} where both factors have integer coefficients (the fact that Q has integer coefficients results from the above formula for the quotient of P(x) by x − p / q {\displaystyle x-p/q} ). Comparing the coefficients of degree n and the constant coefficients in the above equality shows that, if p q {\displaystyle {\tfrac {p}{q}}} is a rational root in reduced form, then q is a divisor of a 0 , {\displaystyle a_{0},} and p is a divisor of a n . {\displaystyle a_{n}.} Therefore, there is a finite number of possibilities for p and q, which can be systematically examined. For example, if the polynomial P ( x ) = 2 x 3 − 7 x 2 + 10 x − 6 {\displaystyle P(x)=2x^{3}-7x^{2}+10x-6} has a rational root p q {\displaystyle {\tfrac {p}{q}}} with q > 0, then p must divide 6; that is p ∈ { ± 1 , ± 2 , ± 3 , ± 6 } , {\displaystyle p\in \{\pm 1,\pm 2,\pm 3,\pm 6\},} and q must divide 2, that is q ∈ { 1 , 2 } . {\displaystyle q\in \{1,2\}.} Moreover, if x < 0, all terms of the polynomial are negative, and, therefore, a root cannot be negative. That is, one must have p q ∈ { 1 , 2 , 3 , 6 , 1 2 , 3 2 } . {\displaystyle {\tfrac {p}{q}}\in \{1,2,3,6,{\tfrac {1}{2}},{\tfrac {3}{2}}\}.} A direct computation shows that only 3 2 {\displaystyle {\tfrac {3}{2}}} is a root, so there can be no other rational root. Applying the factor theorem leads finally to the factorization 2 x 3 − 7 x 2 + 10 x − 6 = ( 2 x − 3 ) ( x 2 − 2 x + 2 ) . 
{\displaystyle 2x^{3}-7x^{2}+10x-6=(2x-3)(x^{2}-2x+2).} ==== Quadratic ac method ==== The above method may be adapted for quadratic polynomials, leading to the ac method of factorization. Consider the quadratic polynomial P ( x ) = a x 2 + b x + c {\displaystyle P(x)=ax^{2}+bx+c} with integer coefficients. If it has a rational root, its denominator must divide a and it may be written as a possibly reducible fraction r 1 = r a . {\displaystyle r_{1}={\tfrac {r}{a}}.} By Vieta's formulas, the other root r 2 {\displaystyle r_{2}} is r 2 = − b a − r 1 = − b a − r a = − b + r a = s a , {\displaystyle r_{2}=-{\frac {b}{a}}-r_{1}=-{\frac {b}{a}}-{\frac {r}{a}}=-{\frac {b+r}{a}}={\frac {s}{a}},} with s = − ( b + r ) . {\displaystyle s=-(b+r).} Thus the second root is also rational, and Vieta's second formula r 1 r 2 = c a {\displaystyle r_{1}r_{2}={\frac {c}{a}}} gives s a r a = c a , {\displaystyle {\frac {s}{a}}{\frac {r}{a}}={\frac {c}{a}},} that is r s = a c and r + s = − b . {\displaystyle rs=ac\quad {\text{and}}\quad r+s=-b.} Checking all pairs of integers whose product is ac gives the rational roots, if any. In summary, if a x 2 + b x + c {\displaystyle ax^{2}+bx+c} has rational roots, there are integers r and s such that r s = a c {\displaystyle rs=ac} and r + s = − b {\displaystyle r+s=-b} (a finite number of cases to test), and the roots are r a {\displaystyle {\tfrac {r}{a}}} and s a . {\displaystyle {\tfrac {s}{a}}.} In other words, one has the factorization a ( a x 2 + b x + c ) = ( a x − r ) ( a x − s ) . {\displaystyle a(ax^{2}+bx+c)=(ax-r)(ax-s).} For example, let us consider the quadratic polynomial 6 x 2 + 13 x + 6. 
{\displaystyle 6x^{2}+13x+6.} Inspection of the factors of ac = 36 leads to 4 + 9 = 13 = b, giving the two roots r 1 = − 4 6 = − 2 3 and r 2 = − 9 6 = − 3 2 , {\displaystyle r_{1}=-{\frac {4}{6}}=-{\frac {2}{3}}\quad {\text{and}}\quad r_{2}=-{\frac {9}{6}}=-{\frac {3}{2}},} and the factorization 6 x 2 + 13 x + 6 = 6 ( x + 2 3 ) ( x + 3 2 ) = ( 3 x + 2 ) ( 2 x + 3 ) . {\displaystyle 6x^{2}+13x+6=6(x+{\tfrac {2}{3}})(x+{\tfrac {3}{2}})=(3x+2)(2x+3).} === Using formulas for polynomial roots === Any univariate quadratic polynomial a x 2 + b x + c {\displaystyle ax^{2}+bx+c} can be factored using the quadratic formula: a x 2 + b x + c = a ( x − α ) ( x − β ) = a ( x − − b + b 2 − 4 a c 2 a ) ( x − − b − b 2 − 4 a c 2 a ) , {\displaystyle ax^{2}+bx+c=a(x-\alpha )(x-\beta )=a\left(x-{\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\right)\left(x-{\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}\right),} where α {\displaystyle \alpha } and β {\displaystyle \beta } are the two roots of the polynomial. If a, b, c are all real, the factors are real if and only if the discriminant b 2 − 4 a c {\displaystyle b^{2}-4ac} is non-negative. Otherwise, the quadratic polynomial cannot be factorized into non-constant real factors. The quadratic formula is valid when the coefficients belong to any field of characteristic different from two, and, in particular, for coefficients in a finite field with an odd number of elements. There are also formulas for roots of cubic and quartic polynomials, which are, in general, too complicated for practical use. The Abel–Ruffini theorem shows that there are no general root formulas in terms of radicals for polynomials of degree five or higher. === Using relations between roots === It may occur that one knows some relationship between the roots of a polynomial and its coefficients. Using this knowledge may help factoring the polynomial and finding its roots. 
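Returning to the ac method above, the search over integer pairs with rs = ac and r + s = −b can be sketched as a short brute-force loop (the function name is ours):

```python
def ac_factor(a, b, c):
    """Search for integers r, s with r*s == a*c and r + s == -b (the ac
    method). Returns the first pair found, or None; a sketch intended for
    small integer coefficients only."""
    ac = a * c
    for r in range(-abs(ac), abs(ac) + 1):
        if r != 0 and ac % r == 0:
            s = ac // r
            if r + s == -b:
                return r, s
    return None

# 6x^2 + 13x + 6 from the text: ac = 36 and r + s = -13.
print(ac_factor(6, 13, 6))  # (-9, -4): the roots are -9/6 = -3/2 and -4/6 = -2/3
```

With r = −9 and s = −4, the identity a(ax² + bx + c) = (ax − r)(ax − s) gives 36x² + 78x + 36 = (6x + 9)(6x + 4), i.e. 6x² + 13x + 6 = (2x + 3)(3x + 2) after removing the common factors.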
Galois theory is based on a systematic study of the relations between roots and coefficients, which include Vieta's formulas. Here, we consider the simpler case where two roots x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} of a polynomial P ( x ) {\displaystyle P(x)} satisfy the relation x 2 = Q ( x 1 ) , {\displaystyle x_{2}=Q(x_{1}),} where Q is a polynomial. This implies that x 1 {\displaystyle x_{1}} is a common root of P ( Q ( x ) ) {\displaystyle P(Q(x))} and P ( x ) . {\displaystyle P(x).} It is therefore a root of the greatest common divisor of these two polynomials. It follows that this greatest common divisor is a non-constant factor of P ( x ) . {\displaystyle P(x).} The Euclidean algorithm for polynomials allows computing this greatest common divisor. For example, if one knows or guesses that: P ( x ) = x 3 − 5 x 2 − 16 x + 80 {\displaystyle P(x)=x^{3}-5x^{2}-16x+80} has two roots that sum to zero, one may apply the Euclidean algorithm to P ( x ) {\displaystyle P(x)} and P ( − x ) . {\displaystyle P(-x).} The first division step consists of adding P ( x ) {\displaystyle P(x)} to P ( − x ) , {\displaystyle P(-x),} giving the remainder − 10 ( x 2 − 16 ) . {\displaystyle -10(x^{2}-16).} Then, dividing P ( x ) {\displaystyle P(x)} by x 2 − 16 {\displaystyle x^{2}-16} gives zero as a new remainder, and x − 5 as a quotient, leading to the complete factorization x 3 − 5 x 2 − 16 x + 80 = ( x − 5 ) ( x − 4 ) ( x + 4 ) . {\displaystyle x^{3}-5x^{2}-16x+80=(x-5)(x-4)(x+4).} == Unique factorization domains == The integers and the polynomials over a field share the property of unique factorization, that is, every nonzero element may be factored into a product of an invertible element (a unit, ±1 in the case of integers) and a product of irreducible elements (prime numbers, in the case of integers), and this factorization is unique up to rearranging the factors and shifting units among the factors. 
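The example above with P(x) and P(−x) can be replayed numerically: since both have degree 3 with opposite leading coefficients, the first division step amounts to adding them. A sketch (`polydiv` is our helper, not a library routine):

```python
from fractions import Fraction

def polydiv(num, den):
    """Long division of polynomials given as coefficient lists (highest
    degree first). Returns (quotient, remainder)."""
    num = [Fraction(c) for c in num]
    quot = []
    while len(num) >= len(den):
        f = num[0] / Fraction(den[0])
        quot.append(f)
        num = [a - f * b for a, b in
               zip(num, list(den) + [0] * (len(num) - len(den)))][1:]
    return quot, num

# P(x) = x^3 - 5x^2 - 16x + 80 and P(-x) (signs of odd-degree terms flipped):
P = [1, -5, -16, 80]
P_neg = [-1, -5, 16, 80]
first_remainder = [a + b for a, b in zip(P, P_neg)]
print(first_remainder)           # [0, -10, 0, 160], i.e. -10(x^2 - 16)

q, r = polydiv(P, [1, 0, -16])   # divide P(x) by x^2 - 16
print(q, r)                      # quotient x - 5, remainder zero
```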
Integral domains which share this property are called unique factorization domains (UFD). Greatest common divisors exist in UFDs, but not every integral domain in which greatest common divisors exist (known as a GCD domain) is a UFD. Every principal ideal domain is a UFD. A Euclidean domain is an integral domain on which is defined a Euclidean division similar to that of integers. Every Euclidean domain is a principal ideal domain, and thus a UFD. In a Euclidean domain, Euclidean division allows defining a Euclidean algorithm for computing greatest common divisors. However, this does not imply the existence of a factorization algorithm. There is an explicit example of a field F such that there cannot exist any factorization algorithm in the Euclidean domain F[x] of the univariate polynomials over F. == Ideals == In algebraic number theory, the study of Diophantine equations led mathematicians, during the 19th century, to introduce generalizations of the integers called algebraic integers. The first rings of algebraic integers that were considered were the Gaussian integers and the Eisenstein integers, which share with the usual integers the property of being principal ideal domains, and thus have the unique factorization property. Unfortunately, it soon appeared that most rings of algebraic integers are not principal and do not have unique factorization. The simplest example is Z [ − 5 ] , {\displaystyle \mathbb {Z} [{\sqrt {-5}}],} in which 9 = 3 ⋅ 3 = ( 2 + − 5 ) ( 2 − − 5 ) , {\displaystyle 9=3\cdot 3=(2+{\sqrt {-5}})(2-{\sqrt {-5}}),} and all these factors are irreducible. This lack of unique factorization is a major difficulty for solving Diophantine equations. For example, many wrong proofs of Fermat's Last Theorem (probably including Fermat's "truly marvelous proof of this, which this margin is too narrow to contain") were based on the implicit supposition of unique factorization. 
This difficulty was resolved by Dedekind, who proved that the rings of algebraic integers have unique factorization of ideals: in these rings, every ideal is a product of prime ideals, and this factorization is unique up to the order of the factors. The integral domains that have this unique factorization property are now called Dedekind domains. They have many nice properties that make them fundamental in algebraic number theory. == Matrices == Matrix rings are non-commutative and have no unique factorization: there are, in general, many ways of writing a matrix as a product of matrices. Thus, the factorization problem consists of finding factors of specified types. For example, the LU decomposition gives a matrix as the product of a lower triangular matrix and an upper triangular matrix. As this is not always possible, one generally considers the "LUP decomposition" having a permutation matrix as its third factor. See Matrix decomposition for the most common types of matrix factorizations. A logical matrix represents a binary relation, and matrix multiplication corresponds to composition of relations. Decomposition of a relation through factorization serves to profile the nature of the relation, such as a difunctional relation. == See also == Euler's factorization method for integers Fermat's factorization method for integers Monoid factorisation Multiplicative partition Table of Gaussian integer factorizations == Notes == == References == Burnside, William Snow; Panton, Arthur William (1960) [1912], The Theory of Equations with an introduction to the theory of binary algebraic forms (Volume one), Dover Dickson, Leonard Eugene (1922), "First Course in the Theory of Equations", Nature, 109 (2746), New York: John Wiley & Sons: 773, Bibcode:1922Natur.109R.773., doi:10.1038/109773c0 Fite, William Benjamin (1921), College Algebra (Revised), Boston: D. C. Heath & Co. 
Klein, Felix (1925), Elementary Mathematics from an Advanced Standpoint; Arithmetic, Algebra, Analysis, Dover Selby, Samuel M. (1970), CRC Standard Mathematical Tables (18th ed.), The Chemical Rubber Co. == External links == Wolfram Alpha can factorize too.
Factorization algebra
In mathematics and mathematical physics, a factorization algebra is an algebraic structure first introduced by Beilinson and Drinfel'd in an algebro-geometric setting as a reformulation of chiral algebras, and also studied in a more general setting by Costello and Gwilliam to study quantum field theory. == Definition == === Prefactorization algebras === A factorization algebra is a prefactorization algebra satisfying some properties, similar to how a sheaf is a presheaf with extra conditions. If M {\displaystyle M} is a topological space, a prefactorization algebra F {\displaystyle {\mathcal {F}}} of vector spaces on M {\displaystyle M} is an assignment of vector spaces F ( U ) {\displaystyle {\mathcal {F}}(U)} to open sets U {\displaystyle U} of M {\displaystyle M} , along with the following conditions on the assignment: For each inclusion U ⊂ V {\displaystyle U\subset V} , there is a linear map m V U : F ( U ) → F ( V ) {\displaystyle m_{V}^{U}:{\mathcal {F}}(U)\rightarrow {\mathcal {F}}(V)} There is a linear map m V U 1 , ⋯ , U n : F ( U 1 ) ⊗ ⋯ ⊗ F ( U n ) → F ( V ) {\displaystyle m_{V}^{U_{1},\cdots ,U_{n}}:{\mathcal {F}}(U_{1})\otimes \cdots \otimes {\mathcal {F}}(U_{n})\rightarrow {\mathcal {F}}(V)} for each finite collection of open sets with each U i ⊂ V {\displaystyle U_{i}\subset V} and the U i {\displaystyle U_{i}} pairwise disjoint. The maps compose in the obvious way: for collections of opens U i , j {\displaystyle U_{i,j}} , V i {\displaystyle V_{i}} and an open W {\displaystyle W} satisfying U i , 1 ⊔ ⋯ ⊔ U i , n i ⊂ V i {\displaystyle U_{i,1}\sqcup \cdots \sqcup U_{i,n_{i}}\subset V_{i}} and V 1 ⊔ ⋯ ⊔ V n ⊂ W {\displaystyle V_{1}\sqcup \cdots \sqcup V_{n}\subset W} , the following diagram commutes. 
⨂ i ⨂ j F ( U i , j ) → ⨂ i F ( V i ) ↓ ↙ F ( W ) {\displaystyle {\begin{array}{lcl}&\bigotimes _{i}\bigotimes _{j}{\mathcal {F}}(U_{i,j})&\rightarrow &\bigotimes _{i}{\mathcal {F}}(V_{i})&\\&\downarrow &\swarrow &\\&{\mathcal {F}}(W)&&&\\\end{array}}} So F {\displaystyle {\mathcal {F}}} resembles a precosheaf, except the vector spaces are tensored rather than (direct-)summed. The category of vector spaces can be replaced with any symmetric monoidal category. === Factorization algebras === To define factorization algebras, it is necessary to define a Weiss cover. For U {\displaystyle U} an open set, a collection of opens U = { U i | i ∈ I } {\displaystyle {\mathfrak {U}}=\{U_{i}|i\in I\}} is a Weiss cover of U {\displaystyle U} if for any finite collection of points { x 1 , ⋯ , x k } {\displaystyle \{x_{1},\cdots ,x_{k}\}} in U {\displaystyle U} , there is an open set U i ∈ U {\displaystyle U_{i}\in {\mathfrak {U}}} such that { x 1 , ⋯ , x k } ⊂ U i {\displaystyle \{x_{1},\cdots ,x_{k}\}\subset U_{i}} . Then a factorization algebra of vector spaces on M {\displaystyle M} is a prefactorization algebra of vector spaces on M {\displaystyle M} so that for every open U {\displaystyle U} and every Weiss cover { U i | i ∈ I } {\displaystyle \{U_{i}|i\in I\}} of U {\displaystyle U} , the sequence ⨁ i , j F ( U i ∩ U j ) → ⨁ k F ( U k ) → F ( U ) → 0 {\displaystyle \bigoplus _{i,j}{\mathcal {F}}(U_{i}\cap U_{j})\rightarrow \bigoplus _{k}{\mathcal {F}}(U_{k})\rightarrow {\mathcal {F}}(U)\rightarrow 0} is exact. That is, F {\displaystyle {\mathcal {F}}} is a factorization algebra if it is a cosheaf with respect to the Weiss topology. A factorization algebra is multiplicative if, in addition, for each pair of disjoint opens U , V ⊂ M {\displaystyle U,V\subset M} , the structure map m U ⊔ V U , V : F ( U ) ⊗ F ( V ) → F ( U ⊔ V ) {\displaystyle m_{U\sqcup V}^{U,V}:{\mathcal {F}}(U)\otimes {\mathcal {F}}(V)\rightarrow {\mathcal {F}}(U\sqcup V)} is an isomorphism. 
=== Algebro-geometric formulation === While this formulation is related to the one given above, the relation is not immediate. Let X {\displaystyle X} be a smooth complex curve. A factorization algebra on X {\displaystyle X} consists of A quasicoherent sheaf V X , I {\displaystyle {\mathcal {V}}_{X,I}} over X I {\displaystyle X^{I}} for any finite set I {\displaystyle I} , with no non-zero local section supported at the union of all partial diagonals Functorial isomorphisms of quasicoherent sheaves Δ J / I ∗ V X , J → V X , I {\displaystyle \Delta _{J/I}^{*}{\mathcal {V}}_{X,J}\rightarrow {\mathcal {V}}_{X,I}} over X I {\displaystyle X^{I}} for surjections J → I {\displaystyle J\rightarrow I} . (Factorization) Functorial isomorphisms of quasicoherent sheaves j J / I ∗ V X , J → j J / I ∗ ( ⊠ i ∈ I V X , p − 1 ( i ) ) {\displaystyle j_{J/I}^{*}{\mathcal {V}}_{X,J}\rightarrow j_{J/I}^{*}(\boxtimes _{i\in I}{\mathcal {V}}_{X,p^{-1}(i)})} over U J / I {\displaystyle U^{J/I}} . (Unit) Let V = V X , { 1 } {\displaystyle {\mathcal {V}}={\mathcal {V}}_{X,\{1\}}} and V 2 = V X , { 1 , 2 } {\displaystyle {\mathcal {V}}_{2}={\mathcal {V}}_{X,\{1,2\}}} . A global section (the unit) 1 ∈ V ( X ) {\displaystyle 1\in {\mathcal {V}}(X)} with the property that for every local section f ∈ V ( U ) {\displaystyle f\in {\mathcal {V}}(U)} ( U ⊂ X {\displaystyle U\subset X} ), the section 1 ⊠ f {\displaystyle 1\boxtimes f} of V 2 | U 2 ∖ Δ {\displaystyle {\mathcal {V}}_{2}|_{U^{2}\setminus \Delta }} extends across the diagonal, and restricts to f ∈ V ≅ V 2 | Δ {\displaystyle f\in {\mathcal {V}}\cong {\mathcal {V}}_{2}|_{\Delta }} . == Example == === Associative algebra === Any associative algebra A {\displaystyle A} can be realized as a prefactorization algebra A f {\displaystyle A^{f}} on R {\displaystyle \mathbb {R} } . To each open interval ( a , b ) {\displaystyle (a,b)} , assign A f ( ( a , b ) ) = A {\displaystyle A^{f}((a,b))=A} . 
An arbitrary open is a disjoint union of countably many open intervals, U = ⨆ i I i {\displaystyle U=\bigsqcup _{i}I_{i}} , and then set A f ( U ) = ⨂ i A {\displaystyle A^{f}(U)=\bigotimes _{i}A} . The structure maps simply come from the multiplication map on A {\displaystyle A} . Some care is needed for infinite tensor products, but for finitely many open intervals the picture is straightforward. == See also == Vertex algebra == References ==
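The example can be made concrete in a few lines. As a stand-in for a (possibly noncommutative) associative algebra we use strings under concatenation, and we represent an element of A^f(U) as a list of (interval, element) pairs; these representations and the function name are illustrative choices of ours, not standard notation.

```python
def structure_map(pieces, target):
    """Sketch of the structure map A^f(I_1) x ... x A^f(I_n) -> A^f(V) for
    pairwise disjoint intervals I_k contained in a single interval V:
    multiply the elements in the left-to-right order of their intervals.
    Strings under concatenation stand in for the associative algebra A."""
    a, b = target
    # every interval must be included in the target interval
    assert all(a <= lo and hi <= b for (lo, hi), _ in pieces)
    ordered = sorted(pieces, key=lambda p: p[0][0])  # order on the real line
    out = ""
    for _, elem in ordered:
        out += elem  # multiplication in the algebra
    return out

# Elements "x" on (0, 1) and "y" on (2, 3), included into (0, 4):
print(structure_map([((2, 3), "y"), ((0, 1), "x")], (0, 4)))  # xy
```

The sort step is where the orientation of the real line enters: the product depends on the order of the intervals, which is why a noncommutative algebra gives a genuinely noncommutative prefactorization algebra.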
Factorization of polynomials
In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems. The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems: When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982) Modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field. == Formulation of the question == Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants. Factorization depends on the base field. 
For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots, implies that a polynomial with integer coefficients can be factored (with root-finding algorithms) into linear factors over the complex field C. Similarly, over the field of reals, the irreducible factors have degree at most two, while there are polynomials of any degree that are irreducible over the field of rationals Q. The question of polynomial factorization makes sense only for coefficients in a computable field whose every element may be represented in a computer and for which there are algorithms for the arithmetic operations. However, this is not a sufficient condition: Fröhlich and Shepherdson give examples of such fields for which no factorization algorithm can exist. The fields of coefficients for which factorization algorithms are known include prime fields (that is, the field of the rational numbers and the fields of the integers modulo a prime number) and their finitely generated field extensions. Integer coefficients are also tractable. Kronecker's classical method is interesting only from a historical point of view; modern algorithms proceed by a succession of: Square-free factorization Factorization over finite fields and reductions: From the multivariate case to the univariate case. From coefficients in a purely transcendental extension to the multivariate case over the ground field (see below). From coefficients in an algebraic extension to coefficients in the ground field (see below). From rational coefficients to integer coefficients (see below). From integer coefficients to coefficients in a prime field with p elements, for a well chosen p (see below). == Primitive part–content factorization == In this section, we show that factoring over Q (the rational numbers) and over Z (the integers) is essentially the same problem. 
The content of a polynomial p ∈ Z[X], denoted "cont(p)", is, up to its sign, the greatest common divisor of its coefficients. The primitive part of p is primpart(p) = p/cont(p), which is a primitive polynomial with integer coefficients. This defines a factorization of p into the product of an integer and a primitive polynomial. This factorization is unique up to the sign of the content. It is a usual convention to choose the sign of the content such that the leading coefficient of the primitive part is positive. For example, − 10 x 2 + 5 x + 5 = ( − 5 ) ( 2 x 2 − x − 1 ) {\displaystyle -10x^{2}+5x+5=(-5)(2x^{2}-x-1)\,} is a factorization into content and primitive part. Every polynomial q with rational coefficients may be written q = p c , {\displaystyle q={\frac {p}{c}},} where p ∈ Z[X] and c ∈ Z: it suffices to take for c a multiple of all denominators of the coefficients of q (for example their product) and p = cq. The content of q is defined as: cont ( q ) = cont ( p ) c , {\displaystyle {\text{cont}}(q)={\frac {{\text{cont}}(p)}{c}},} and the primitive part of q is that of p. As for the polynomials with integer coefficients, this defines a factorization into a rational number and a primitive polynomial with integer coefficients. This factorization is also unique up to the choice of a sign. For example, x 5 3 + 7 x 2 2 + 2 x + 1 = 2 x 5 + 21 x 2 + 12 x + 6 6 {\displaystyle {\frac {x^{5}}{3}}+{\frac {7x^{2}}{2}}+2x+1={\frac {2x^{5}+21x^{2}+12x+6}{6}}} is a factorization into content and primitive part. Gauss proved that the product of two primitive polynomials is also primitive (Gauss's lemma). This implies that a primitive polynomial is irreducible over the rationals if and only if it is irreducible over the integers. This implies also that the factorization over the rationals of a polynomial with rational coefficients is the same as the factorization over the integers of its primitive part. 
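The content and primitive part can be computed with Python's `fractions` module. A sketch (the function name is ours); it reproduces the two examples from the text:

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm  # math.lcm requires Python 3.9+

def content_primitive(coeffs):
    """Split a polynomial with rational coefficients (listed from the highest
    degree down) into its content and its primitive part, the latter having
    a positive leading coefficient."""
    coeffs = [Fraction(c) for c in coeffs]
    den = reduce(lcm, (c.denominator for c in coeffs))  # common denominator
    g = reduce(gcd, (int(c * den) for c in coeffs))     # gcd of the integer coefficients
    content = Fraction(g, den)
    if coeffs[0] < 0:                # make the leading coefficient positive
        content = -content
    return content, [int(c / content) for c in coeffs]

print(content_primitive([-10, 5, 5]))
# content -5, primitive part 2x^2 - x - 1
print(content_primitive([Fraction(1, 3), 0, 0, Fraction(7, 2), 2, 1]))
# content 1/6, primitive part 2x^5 + 21x^2 + 12x + 6
```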
Similarly, the factorization over the integers of a polynomial with integer coefficients is the product of the factorization of its primitive part by the factorization of its content. In other words, an integer GCD computation reduces the factorization of a polynomial over the rationals to the factorization of a primitive polynomial with integer coefficients, and the factorization over the integers to the factorization of an integer and a primitive polynomial. Everything that precedes remains true if Z is replaced by a polynomial ring over a field F and Q is replaced by a field of rational functions over F in the same variables, with the only difference that "up to a sign" must be replaced by "up to the multiplication by an invertible constant in F". This reduces the factorization over a purely transcendental field extension of F to the factorization of multivariate polynomials over F. == Square-free factorization == If two or more factors of a polynomial are identical, then the polynomial is a multiple of the square of this factor. The multiple factor is also a factor of the polynomial's derivative (with respect to any of the variables, if several). For univariate polynomials, multiple factors are equivalent to multiple roots (over a suitable extension field). For univariate polynomials over the rationals (or more generally over a field of characteristic zero), Yun's algorithm exploits this to efficiently factorize the polynomial into square-free factors, that is, factors that are not a multiple of a square, performing a sequence of GCD computations starting with gcd(f(x), f '(x)). To factorize the initial polynomial, it suffices to factorize each square-free factor. Square-free factorization is therefore the first step in most polynomial factorization algorithms. Yun's algorithm extends this to the multivariate case by considering a multivariate polynomial as a univariate polynomial over a polynomial ring. 
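The basic step behind square-free factorization, dividing f by gcd(f, f′), can be sketched with exact rational arithmetic (all helper names are ours; Yun's full algorithm refines this into a sequence of such GCDs):

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Polynomial long division; coefficient lists, highest degree first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        f = num[0] / den[0]
        quot.append(f)
        num = [a - f * b for a, b in
               zip(num, den + [Fraction(0)] * (len(num) - len(den)))][1:]
    return quot, num

def strip_zeros(p):
    while p and p[0] == 0:
        p = p[1:]
    return p

def poly_gcd(a, b):
    """Euclidean algorithm for polynomials; the result is made monic."""
    a, b = strip_zeros(a), strip_zeros(b)
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, strip_zeros(r)
    return [Fraction(c) / a[0] for c in a]

def derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

# f = (x - 1)^2 (x + 2) = x^3 - 3x + 2 has the repeated factor x - 1:
f = [1, 0, -3, 2]
g = poly_gcd(f, derivative(f))   # gcd(f, f') is x - 1
sf, _ = poly_divmod(f, g)        # the square-free part f / gcd(f, f')
print(g, sf)                     # x - 1 and x^2 + x - 2
```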
In the case of a polynomial over a finite field, Yun's algorithm applies only if the degree is smaller than the characteristic, because, otherwise, the derivative of a non-zero polynomial may be zero (over the field with p elements, the derivative of a polynomial in xp is always zero). Nevertheless, a succession of GCD computations, starting from the polynomial and its derivative, allows one to compute the square-free decomposition; see Polynomial factorization over finite fields#Square-free factorization. == Classical methods == This section describes textbook methods that can be convenient when computing by hand. These methods are not used for computer computations because they use integer factorization, which is currently slower than polynomial factorization. The two methods that follow start from a univariate polynomial with integer coefficients for finding factors that are also polynomials with integer coefficients. === Obtaining linear factors === All linear factors with rational coefficients can be found using the rational root test. If the polynomial to be factored is a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}} , then all possible linear factors are of the form b 1 x − b 0 {\displaystyle b_{1}x-b_{0}} , where b 1 {\displaystyle b_{1}} is an integer factor of a n {\displaystyle a_{n}} and b 0 {\displaystyle b_{0}} is an integer factor of a 0 {\displaystyle a_{0}} . All possible combinations of integer factors can be tested for validity, and each valid one can be factored out using polynomial long division. If the original polynomial is the product of factors at least two of which are of degree 2 or higher, this technique only provides a partial factorization; otherwise the factorization is complete. In particular, if there is exactly one non-linear factor, it will be the polynomial left after all linear factors have been factorized out. 
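The rational root test just described translates directly into a short search (a sketch; the function name is ours, and a nonzero constant term is assumed):

```python
from fractions import Fraction

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients (highest
    degree first), by the rational root test; assumes a nonzero constant term."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    lead, const = coeffs[0], coeffs[-1]
    deg = len(coeffs) - 1
    roots = set()
    for p in divisors(const):        # numerator divides the constant term
        for q in divisors(lead):     # denominator divides the leading coefficient
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand ** (deg - i) for i, c in enumerate(coeffs)) == 0:
                    roots.add(cand)
    return roots

# 2x^3 - 7x^2 + 10x - 6 from the earlier example: the only rational root is 3/2.
print(rational_roots([2, -7, 10, -6]))  # {Fraction(3, 2)}
```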
In the case of a cubic polynomial, if the cubic is factorizable at all, the rational root test gives a complete factorization, either into a linear factor and an irreducible quadratic factor, or into three linear factors. === Kronecker's method === Kronecker's method aims to factor univariate polynomials with integer coefficients into polynomials with integer coefficients. The method uses the fact that evaluating integer polynomials at integer values must produce integers. That is, if f ( x ) {\displaystyle f(x)} is a polynomial with integer coefficients, then f ( a ) {\displaystyle f(a)} is an integer as soon as a is an integer. Every nonzero integer has only a finite number of divisors. So, if g ( x ) {\displaystyle g(x)} is a factor of f ( x ) , {\displaystyle f(x),} the value of g ( a ) {\displaystyle g(a)} must be one of the divisors of f ( a ) . {\displaystyle f(a).} If one searches for all factors of a given degree d, one can consider d + 1 {\displaystyle d+1} values, a 0 , … , a d {\displaystyle a_{0},\ldots ,a_{d}} for a, which give a finite number of possibilities for the tuple ( f ( a 0 ) , … , f ( a d ) ) . {\displaystyle (f(a_{0}),\ldots ,f(a_{d})).} Each f ( a i ) {\displaystyle f(a_{i})} has a finite number of divisors b i , 0 , … , b i , k i {\displaystyle b_{i,0},\ldots ,b_{i,k_{i}}} , and each ( d + 1 ) {\displaystyle (d+1)} -tuple where the i th {\displaystyle i^{\text{th}}} entry is a divisor of f ( a i ) {\displaystyle f(a_{i})} , that is, a tuple of the form ( b 0 , j 1 , … , b d , j d ) {\displaystyle (b_{0,j_{1}},\ldots ,b_{d,j_{d}})} , produces a unique polynomial of degree at most d {\displaystyle d} , which can be computed by polynomial interpolation. Each of these polynomials can be tested for being a factor by polynomial division. Since there were finitely many a i {\displaystyle a_{i}} and each f ( a i ) {\displaystyle f(a_{i})} has finitely many divisors, there are finitely many such tuples. 
So, an exhaustive search allows finding all factors of degree at most d. For example, consider f ( x ) = x 5 + x 4 + x 2 + x + 2 {\displaystyle f(x)=x^{5}+x^{4}+x^{2}+x+2} . If this polynomial factors over Z, then at least one of its factors p ( x ) {\displaystyle p(x)} must be of degree two or less, so p ( x ) {\displaystyle p(x)} is uniquely determined by three values. Thus, we compute three values f ( 0 ) = 2 {\displaystyle f(0)=2} , f ( 1 ) = 6 {\displaystyle f(1)=6} and f ( − 1 ) = 2 {\displaystyle f(-1)=2} . If one of these values is 0, we have a linear factor. If the values are nonzero, we can list the possible factorizations for each. Now, 2 can only factor as 1×2, 2×1, (−1)×(−2), or (−2)×(−1). Therefore, if a second degree integer polynomial factor exists, it must take one of the values p(0) = 1, 2, −1, or −2 and likewise for p(−1). There are eight factorizations of 6 (four each for 1×6 and 2×3), making a total of 4×4×8 = 128 possible triples (p(0), p(1), p(−1)), of which half can be discarded as the negatives of the other half. Thus, we must check 64 explicit integer polynomials p ( x ) = a x 2 + b x + c {\displaystyle p(x)=ax^{2}+bx+c} as possible factors of f ( x ) {\displaystyle f(x)} . Testing them exhaustively reveals that p ( x ) = x 2 + x + 1 {\displaystyle p(x)=x^{2}+x+1} constructed from (g(0), g(1), g(−1)) = (1,3,1) factors f ( x ) {\displaystyle f(x)} . Dividing f(x) by p(x) gives the other factor q ( x ) = x 3 − x + 2 {\displaystyle q(x)=x^{3}-x+2} , so that f ( x ) = p ( x ) q ( x ) {\displaystyle f(x)=p(x)q(x)} . Now one can test recursively to find factors of p(x) and q(x), in this case using the rational root test. It turns out they are both irreducible, so the irreducible factorization of f(x) is: f ( x ) = p ( x ) q ( x ) = ( x 2 + x + 1 ) ( x 3 − x + 2 ) . 
{\displaystyle f(x)=p(x)q(x)=(x^{2}+x+1)(x^{3}-x+2).} == Modern methods == === Factoring over finite fields === === Factoring univariate polynomials over the integers === If f ( x ) {\displaystyle f(x)} is a univariate polynomial over the integers, assumed to be content-free and square-free, one starts by computing a bound B {\displaystyle B} such that any factor g ( x ) {\displaystyle g(x)} has coefficients of absolute value bounded by B {\displaystyle B} . This way, if m {\displaystyle m} is an integer larger than 2 B {\displaystyle 2B} , and if g ( x ) {\displaystyle g(x)} is known modulo m {\displaystyle m} , then g ( x ) {\displaystyle g(x)} can be reconstructed from its image mod m {\displaystyle m} . The Zassenhaus algorithm proceeds as follows. First, choose a prime number p {\displaystyle p} such that the image of f ( x ) mod p {\displaystyle f(x){\bmod {p}}} remains square-free, and of the same degree as f ( x ) {\displaystyle f(x)} . A random choice will almost always satisfy these constraints, since only a finite number of prime numbers do not satisfy them, namely the prime divisors of the product of the discriminant and the leading coefficient of the polynomial. Then factor f ( x ) mod p {\displaystyle f(x){\bmod {p}}} . This produces integer polynomials f 1 ( x ) , … , f r ( x ) {\displaystyle f_{1}(x),\ldots ,f_{r}(x)} whose product matches f ( x ) mod p {\displaystyle f(x){\bmod {p}}} . Next, apply Hensel lifting; this updates the f i ( x ) {\displaystyle f_{i}(x)} in such a way that their product matches f ( x ) mod p a {\displaystyle f(x){\bmod {p}}^{a}} , where a {\displaystyle a} is large enough that p a {\displaystyle p^{a}} exceeds 2 B {\displaystyle 2B} : thus each f i ( x ) {\displaystyle f_{i}(x)} corresponds to a well-defined integer polynomial. 
Modulo p a {\displaystyle p^{a}} , the polynomial f ( x ) {\displaystyle f(x)} has 2 r {\displaystyle 2^{r}} factors (up to units): the products of all subsets of { f 1 ( x ) , … , f r ( x ) } mod p a {\displaystyle \{f_{1}(x),\ldots ,f_{r}(x)\}{\bmod {p}}^{a}} . These factors modulo p a {\displaystyle p^{a}} need not correspond to "true" factors of f ( x ) {\displaystyle f(x)} in Z [ x ] {\displaystyle \mathbb {Z} [x]} , but we can easily test them by division in Z [ x ] {\displaystyle \mathbb {Z} [x]} . This way, all irreducible true factors can be found by checking at most 2 r {\displaystyle 2^{r}} cases, reduced to 2 r − 1 {\displaystyle 2^{r-1}} cases by skipping complements. If f ( x ) {\displaystyle f(x)} is reducible, the number of cases is reduced further by removing those f i ( x ) {\displaystyle f_{i}(x)} that appear in an already found true factor. The Zassenhaus algorithm processes each case (each subset) quickly; however, in the worst case, it considers an exponential number of cases. The first polynomial time algorithm for factoring rational polynomials was discovered by Lenstra, Lenstra and Lovász and is an application of the Lenstra–Lenstra–Lovász lattice basis reduction (LLL) algorithm (Lenstra, Lenstra & Lovász 1982). A simplified version of the LLL factorization algorithm is as follows: calculate a complex (or p-adic) root α of the polynomial f ( x ) {\displaystyle f(x)} to high precision, then use the Lenstra–Lenstra–Lovász lattice basis reduction algorithm to find an approximate linear relation between 1, α, α2, α3, . . . with integer coefficients, which might be an exact linear relation and a polynomial factor of f ( x ) {\displaystyle f(x)} . One can determine a bound for the precision that guarantees that this method produces either a factor, or an irreducibility proof. Although this method finishes in polynomial time, it is not used in practice because the lattice has high dimension and huge entries, which makes the computation slow. 
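The factor-modulo-p and recombination steps can be illustrated with a simplified "big prime" variant that skips Hensel lifting entirely: if the prime p itself already exceeds 2B, the lifting stage is unnecessary. The sketch below uses SymPy; the choice p = 101 is an assumption that happens to work for this small example (a real implementation would compute a Mignotte-type bound and pick p from it).

```python
from itertools import combinations
from sympy import symbols, Poly, factor_list, div

x = symbols('x')
f = Poly(x**5 + x**4 + x**2 + x + 2, x)

# Assumption: p exceeds twice a coefficient bound for the factors of f, and f
# stays square-free mod p; 101 satisfies both for this small example.
p = 101

# Factor f modulo p (sympy factors over GF(p) when a modulus is given).
_, modular = factor_list(f.as_expr(), x, modulus=p)
mod_factors = [Poly(g, x) for g, mult in modular for _ in range(mult)]

def centered(poly):
    """Map integer coefficients into the symmetric range (-p/2, p/2]."""
    cs = [c % p for c in poly.all_coeffs()]
    return Poly([c - p if c > p // 2 else c for c in cs], x)

# Try every proper nonempty subset of the modular factors as a candidate
# true factor, testing by exact division over Z.
true_factors = []
for r in range(1, len(mod_factors)):
    for subset in combinations(mod_factors, r):
        cand = Poly(1, x)
        for g in subset:
            cand = cand * g
        cand = centered(cand)
        if div(f, cand)[1].is_zero:
            true_factors.append(cand)
```

For this polynomial the search recovers exactly the two irreducible factors found by Kronecker's method above.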
The exponential complexity in the Zassenhaus algorithm comes from a combinatorial problem: how to select the right subsets of f 1 ( x ) , … , f r ( x ) {\displaystyle f_{1}(x),\ldots ,f_{r}(x)} . State-of-the-art factoring implementations work in a manner similar to Zassenhaus, except that the combinatorial problem is translated to a lattice problem that is then solved by LLL. In this approach, LLL is not used to compute coefficients of factors, but rather to compute vectors with r {\displaystyle r} entries in {0,1} that encode the subsets of f 1 ( x ) , … , f r ( x ) {\displaystyle f_{1}(x),\ldots ,f_{r}(x)} corresponding to the irreducible true factors. === Factoring over algebraic extensions (Trager's method) === We can factor a polynomial p ( x ) ∈ K [ x ] {\displaystyle p(x)\in K[x]} , where the field K {\displaystyle K} is a finite extension of Q {\displaystyle \mathbb {Q} } . First, using square-free factorization, we may suppose that the polynomial is square-free. Next we define the quotient ring L = K [ x ] / p ( x ) {\displaystyle L=K[x]/p(x)} of degree n = [ L : Q ] = deg ⁡ p ( x ) [ K : Q ] {\displaystyle n=[L:\mathbb {Q} ]=\deg p(x)\,[K:\mathbb {Q} ]} ; this is not a field unless p ( x ) {\displaystyle p(x)} is irreducible, but it is a reduced ring since p ( x ) {\displaystyle p(x)} is square-free. Indeed, if p ( x ) = ∏ i = 1 m p i ( x ) {\displaystyle p(x)=\prod _{i=1}^{m}p_{i}(x)} is the desired factorization of p(x), the ring decomposes uniquely into fields as: L = K [ x ] / p ( x ) ≅ ∏ i = 1 m K [ x ] / p i ( x ) . {\displaystyle L=K[x]/p(x)\cong \prod _{i=1}^{m}K[x]/p_{i}(x).} We will find this decomposition without knowing the factorization. First, we write L explicitly as an algebra over Q {\displaystyle \mathbb {Q} } : we pick a random element α ∈ L {\displaystyle \alpha \in L} , which generates L {\displaystyle L} over Q {\displaystyle \mathbb {Q} } with high probability by the primitive element theorem. 
If this is the case, we can compute the minimal polynomial q ( y ) ∈ Q [ y ] {\displaystyle q(y)\in \mathbb {Q} [y]} of α {\displaystyle \alpha } over Q {\displaystyle \mathbb {Q} } , by finding a Q {\displaystyle \mathbb {Q} } -linear relation among 1, α, . . . , αn. Using a factoring algorithm for rational polynomials, we factor into irreducibles in Q [ y ] {\displaystyle \mathbb {Q} [y]} : q ( y ) = ∏ i = 1 n q i ( y ) . {\displaystyle q(y)=\prod _{i=1}^{n}q_{i}(y).} Thus we have: L ≅ Q [ y ] / q ( y ) ≅ ∏ i = 1 n Q [ y ] / q i ( y ) , {\displaystyle L\cong \mathbb {Q} [y]/q(y)\cong \prod _{i=1}^{n}\mathbb {Q} [y]/q_{i}(y),} where α {\displaystyle \alpha } corresponds to y ↔ ( y , y , … , y ) {\displaystyle y\leftrightarrow (y,y,\ldots ,y)} . This must be isomorphic to the previous decomposition of L {\displaystyle L} . The generators of L are x along with the generators of K {\displaystyle K} over Q {\displaystyle \mathbb {Q} } ; writing these as polynomials in α {\displaystyle \alpha } , we can determine the embeddings of x {\displaystyle x} and K {\displaystyle K} into each component Q [ y ] / q i ( y ) = K [ x ] / p i ( x ) {\displaystyle \mathbb {Q} [y]/q_{i}(y)=K[x]/p_{i}(x)} . By finding the minimal polynomial of x {\displaystyle x} in Q [ y ] / q i ( y ) {\displaystyle \mathbb {Q} [y]/q_{i}(y)} , we compute p i ( x ) {\displaystyle p_{i}(x)} , and thus factor p ( x ) {\displaystyle p(x)} over K . {\displaystyle K.} == Numerical factorization == "Numerical factorization" refers commonly to the factorization of polynomials with real or complex coefficients, whose coefficients are only approximately known, generally because they are represented as floating point numbers. For univariate polynomials with complex coefficients, factorization can easily be reduced to numerical computation of polynomial roots and multiplicities. 
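As a minimal illustration of this reduction, one can compute all roots numerically and then cluster nearby roots to estimate multiplicities. The tolerance below is a heuristic: it must exceed the rootfinding error (which can be large at multiple roots) while staying below the separation of distinct roots.

```python
import numpy as np

def numerical_factor(coeffs, tol=1e-4):
    """Roots of a polynomial (numpy convention: highest degree first), clustered
    so that roots closer than tol are merged into one root with a multiplicity.
    Returns a list of (root, multiplicity) pairs."""
    clusters = []  # items are [running-mean root, count]
    for r in sorted(np.roots(coeffs), key=lambda z: (z.real, z.imag)):
        for cl in clusters:
            if abs(r - cl[0]) < tol:
                # fold the new root into the cluster's running mean
                cl[0] = (cl[0] * cl[1] + r) / (cl[1] + 1)
                cl[1] += 1
                break
        else:
            clusters.append([r, 1])
    return [(z, m) for z, m in clusters]

# (x - 1)^2 (x + 1)^2 = x^4 - 2x^2 + 1: two double roots
factors = numerical_factor([1, 0, -2, 0, 1])
```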
In the multivariate case, a random infinitesimal perturbation of the coefficients produces with probability one an irreducible polynomial, even when starting from a polynomial with many factors. So, the very meaning of numerical factorization needs to be clarified precisely. Let p {\displaystyle p} be a polynomial with complex coefficients with an irreducible factorization p = α p 1 m 1 ⋯ p k m k {\displaystyle p=\alpha p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}} where α ∈ C {\displaystyle \alpha \in C} and the factors p 1 , … , p k {\displaystyle p_{1},\ldots ,p_{k}} are irreducible polynomials with complex coefficients. Assume that p {\displaystyle p} is approximated through a polynomial p ~ {\displaystyle {\tilde {p}}} whose coefficients are close to those of p {\displaystyle p} . The exact factorization of p ~ {\displaystyle {\tilde {p}}} is pointless, since it is generally irreducible. There are several possible definitions of what can be called a numerical factorization of p ~ . {\displaystyle {\tilde {p}}.} If k {\displaystyle k} and m i {\displaystyle m_{i}} 's are known, an approximate factorization consists of finding a polynomial close to p ~ {\displaystyle {\tilde {p}}} that factors as above. If one does not know the factorization scheme, identifying m 1 , … , m k {\displaystyle m_{1},\ldots ,m_{k}} becomes necessary. For example, the number of irreducible factors of a polynomial is the nullity of its Ruppert matrix. Thus the multiplicities m 1 , … , m k {\displaystyle m_{1},\ldots ,m_{k}} can be identified by square-free factorization via numerical GCD computation and rank-revealing on Ruppert matrices. Numerical factorization remains an ongoing subject of research; several algorithms for it have been developed and implemented. 
== See also == Factorization § Polynomials, for elementary heuristic methods and explicit formulas Swinnerton-Dyer polynomials, a family of polynomials having worst-case runtime for the Zassenhaus method == Bibliography == Fröhlich, A.; Shepherson, J. C. (1955), "On the factorisation of polynomials in a finite number of steps", Mathematische Zeitschrift, 62 (1): 331–334, doi:10.1007/BF01180640, ISSN 0025-5874, S2CID 119955899 Trager, B.M. (1976). "Algebraic factoring and rational function integration". Proceedings of the third ACM symposium on Symbolic and algebraic computation - SYMSAC '76. pp. 219–226. doi:10.1145/800205.806338. ISBN 9781450377904. S2CID 16567619. Bernard Beauzamy, Per Enflo, Paul Wang (October 1994). "Quantitative Estimates for Polynomials in One or Several Variables: From Analysis and Number Theory to Symbolic and Massively Parallel Computation". Mathematics Magazine. 67 (4): 243–257. doi:10.2307/2690843. JSTOR 2690843.{{cite journal}}: CS1 maint: multiple names: authors list (link) (accessible to readers with undergraduate mathematics) Cohen, Henri (1993). A course in computational algebraic number theory. Graduate Texts in Mathematics. Vol. 138. Berlin, New York: Springer-Verlag. ISBN 978-3-540-55640-4. MR 1228206. Kaltofen, Erich (1982), "Factorization of polynomials", in B. Buchberger; R. Loos; G. Collins (eds.), Computer Algebra, Springer Verlag, pp. 95–113, CiteSeerX 10.1.1.39.7916 Knuth, Donald E (1997). "4.6.2 Factorization of Polynomials". Seminumerical Algorithms. The Art of Computer Programming. Vol. 2 (Third ed.). Reading, Massachusetts: Addison-Wesley. pp. 439–461, 678–691. ISBN 978-0-201-89684-8. Lenstra, A. K.; Lenstra, H. W.; Lovász, László (1982). "Factoring polynomials with rational coefficients". Mathematische Annalen. 261 (4): 515–534. CiteSeerX 10.1.1.310.318. doi:10.1007/BF01457454. ISSN 0025-5831. MR 0682664. S2CID 5701340. Van der Waerden, Algebra (1970), trans. Blum and Schulenberger, Frederick Ungar. 
== Further reading == Kaltofen, Erich (1990), "Polynomial Factorization 1982-1986", in D. V. Chudnovsky; R. D. Jenks (eds.), Computers in Mathematics, Lecture Notes in Pure and Applied Mathematics, vol. 125, Marcel Dekker, Inc., CiteSeerX 10.1.1.68.7461 Kaltofen, Erich (1992), "Polynomial Factorization 1987–1991" (PDF), Proceedings of Latin '92, Springer Lect. Notes Comput. Sci., vol. 583, Springer, retrieved October 14, 2012 Ivanyos, Gabor; Marek, Karpinski; Saxena, Nitin (2009). "Schemes for deterministic polynomial factoring". Proceedings of the 2009 international symposium on Symbolic and algebraic computation. pp. 191–198. arXiv:0804.1974. doi:10.1145/1576702.1576730. ISBN 9781605586090. S2CID 15895636.
Wikipedia:Factorization of polynomials over finite fields#0
In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them. All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. It is also used for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by the means of elliptic curves), and computational number theory. As the reduction of the factorization of multivariate polynomials to that of univariate polynomials does not have any specificity in the case of coefficients in a finite field, only polynomials with one variable are considered in this article. == Background == === Finite field === The theory of finite fields, whose origins can be traced back to the works of Gauss and Galois, has played a part in various branches of mathematics. There has been a resurgence of interest in finite fields, due in part to their applicability in computer science and other areas, and in particular to important applications in coding theory and cryptography. A finite field or Galois field is a field with a finite order (number of elements). The order of a finite field is always a prime or a prime power. 
For each prime power q = pr, there exists exactly one finite field with q elements, up to isomorphism. This field is denoted GF(q) or Fq. If p is prime, GF(p) is the prime field of order p; it is the field of residue classes modulo p, and its p elements are denoted 0, 1, ..., p−1. Thus a = b in GF(p) means the same as a ≡ b (mod p). === Irreducible polynomials === Let F be a finite field. As for general fields, a non-constant polynomial f in F[x] is said to be irreducible over F if it is not the product of two polynomials of positive degree. A polynomial of positive degree that is not irreducible over F is called reducible over F. Irreducible polynomials allow us to construct the finite fields of non-prime order. In fact, for a prime power q, let Fq be the finite field with q elements, unique up to isomorphism. A polynomial f of degree n greater than one, which is irreducible over Fq, defines a field extension of degree n which is isomorphic to the field with qn elements: the elements of this extension are the polynomials of degree lower than n; addition, subtraction and multiplication by an element of Fq are those of the polynomials; the product of two elements is the remainder of the division by f of their product as polynomials; the inverse of an element may be computed by the extended GCD algorithm (see Arithmetic of algebraic extensions). It follows that, to compute in a finite field of non-prime order, one needs to generate an irreducible polynomial. For this, the common method is to take a polynomial at random and test it for irreducibility. For the sake of efficiency of the multiplication in the field, it is usual to search for polynomials of the shape xn + ax + b. Irreducible polynomials over finite fields are also useful for pseudorandom number generators using feedback shift registers and for discrete logarithms over F2n. 
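The construction just described ("the product of two elements is the remainder of the division by f of their product") can be made concrete. The sketch below implements multiplication and inversion in GF(2^8) = F2[x]/(x^8 + x^4 + x^3 + x + 1); this particular modulus is the one used by AES, chosen here only as a familiar irreducible polynomial of degree 8.

```python
# A field element is a byte whose bit i is the coefficient of x^i.
MOD = 0x11B  # binary 1_0001_1011, i.e. x^8 + x^4 + x^3 + x + 1

def gf256_mul(a, b):
    """Polynomial product over F2 followed by reduction modulo the modulus."""
    result = 0
    while b:
        if b & 1:
            result ^= a      # adding coefficients in F2 is XOR
        b >>= 1
        a <<= 1              # multiply a by x
        if a & 0x100:
            a ^= MOD         # degree reached 8: subtract the modulus once
    return result

def gf256_inv(a):
    """Inverse of a nonzero element as a^254 (the group of units has order 255)."""
    r = 1
    for _ in range(254):
        r = gf256_mul(r, a)
    return r
```

For example, x · x^7 = x^8 reduces to x^4 + x^3 + x + 1, i.e. gf256_mul(0x80, 0x02) returns 0x1B.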
The number of irreducible monic polynomials of degree n over Fq is the number of aperiodic necklaces, given by Moreau's necklace-counting function Mq(n). The closely related necklace function Nq(n) counts monic polynomials of degree n which are primary (a power of an irreducible); or alternatively irreducible polynomials of all degrees d which divide n. === Example === The polynomial P = x4 + 1 is irreducible over Q but not over any finite field. On any field extension of F2, P = (x + 1)4. On every other finite field, at least one of −1, 2 and −2 is a square, because the product of two non-squares is a square and so we have If − 1 = a 2 , {\displaystyle -1=a^{2},} then P = ( x 2 + a ) ( x 2 − a ) . {\displaystyle P=(x^{2}+a)(x^{2}-a).} If 2 = b 2 , {\displaystyle 2=b^{2},} then P = ( x 2 + b x + 1 ) ( x 2 − b x + 1 ) . {\displaystyle P=(x^{2}+bx+1)(x^{2}-bx+1).} If − 2 = c 2 , {\displaystyle -2=c^{2},} then P = ( x 2 + c x − 1 ) ( x 2 − c x − 1 ) . {\displaystyle P=(x^{2}+cx-1)(x^{2}-cx-1).} === Complexity === Polynomial factoring algorithms use basic polynomial operations such as products, divisions, gcd, powers of one polynomial modulo another, etc. A multiplication of two polynomials of degree at most n can be done in O(n2) operations in Fq using "classical" arithmetic, or in O(nlog(n) log(log(n)) ) operations in Fq using "fast" arithmetic. A Euclidean division (division with remainder) can be performed within the same time bounds. The cost of a polynomial greatest common divisor between two polynomials of degree at most n can be taken as O(n2) operations in Fq using classical methods, or as O(nlog2(n) log(log(n)) ) operations in Fq using fast methods. For polynomials h, g of degree at most n, the exponentiation hq mod g can be done with O(log(q)) polynomial products, using exponentiation by squaring method, that is O(n2log(q)) operations in Fq using classical methods, or O(nlog(q)log(n) log(log(n))) operations in Fq using fast methods. 
In the algorithms that follow, the complexities are expressed in terms of number of arithmetic operations in Fq, using classical algorithms for the arithmetic of polynomials. == Factoring algorithms == Many algorithms for factoring polynomials over finite fields include the following three stages: Square-free factorization Distinct-degree factorization Equal-degree factorization An important exception is Berlekamp's algorithm, which combines stages 2 and 3. === Berlekamp's algorithm === Berlekamp's algorithm is historically important as being the first factorization algorithm which works well in practice. However, it contains a loop on the elements of the ground field, which implies that it is practicable only over small finite fields. For a fixed ground field, its time complexity is polynomial, but, for general ground fields, the complexity is exponential in the size of the ground field. === Square-free factorization === The algorithm determines a square-free factorization for polynomials whose coefficients come from the finite field Fq of order q = pm with p a prime. This algorithm firstly determines the derivative and then computes the gcd of the polynomial and its derivative. If it is not one then the gcd is again divided into the original polynomial, provided that the derivative is not zero (a case that exists for non-constant polynomials defined over finite fields). This algorithm uses the fact that, if the derivative of a polynomial is zero, then it is a polynomial in xp, which is, if the coefficients belong to Fp, the pth power of the polynomial obtained by substituting x by x1/p. If the coefficients do not belong to Fp, the pth root of a polynomial with zero derivative is obtained by the same substitution on x, completed by applying the inverse of the Frobenius automorphism to the coefficients. 
This algorithm works also over a field of characteristic zero, with the only difference that it never enters the blocks of instructions where pth roots are computed. However, in this case, Yun's algorithm is much more efficient because it computes the greatest common divisors of polynomials of lower degrees. A consequence is that, when factoring a polynomial over the integers, the algorithm which follows is not used: one first computes the square-free factorization over the integers, and to factor the resulting polynomials, one chooses a p such that they remain square-free modulo p.
Algorithm: SFF (Square-Free Factorization)
Input: A monic polynomial f in Fq[x] where q = p^m
Output: Square-free factorization of f
  R ← 1
  # Make w be the product (without multiplicity) of all factors of f that have
  # multiplicity not divisible by p
  c ← gcd(f, f′)
  w ← f/c
  # Step 1: Identify all factors in w
  i ← 1
  while w ≠ 1 do
    y ← gcd(w, c)
    fac ← w/y
    R ← R · fac^i
    w ← y; c ← c/y; i ← i + 1
  end while
  # c is now the product (with multiplicity) of the remaining factors of f
  # Step 2: Identify all remaining factors using recursion
  # Note that these are the factors of f that have multiplicity divisible by p
  if c ≠ 1 then
    c ← c^(1/p)
    R ← R · SFF(c)^p
  end if
  Output(R)
The idea is to identify the product of all irreducible factors of f with the same multiplicity. This is done in two steps. The first step uses the formal derivative of f to find all the factors with multiplicity not divisible by p. The second step identifies the remaining factors. As all of the remaining factors have multiplicity divisible by p, their product is a pth power, so one can simply take the pth root and apply recursion. ==== Example of a square-free factorization ==== Let f = x 11 + 2 x 9 + 2 x 8 + x 6 + x 5 + 2 x 3 + 2 x 2 + 1 ∈ F 3 [ x ] , {\displaystyle f=x^{11}+2x^{9}+2x^{8}+x^{6}+x^{5}+2x^{3}+2x^{2}+1\in \mathbf {F} _{3}[x],} to be factored over the field with three elements. 
The algorithm computes first c = gcd ( f , f ′ ) = x 9 + 2 x 6 + x 3 + 2. {\displaystyle c=\gcd(f,f')=x^{9}+2x^{6}+x^{3}+2.} Since the derivative is non-zero we have w = f/c = x2 + 2 and we enter the while loop. After one loop we have y = x + 2, fac = x + 1 and R = x + 1 with updates i = 2, w = x + 2 and c = x8 + x7 + x6 + x2+x+1. The second time through the loop gives y = x + 2, fac = 1, R = x + 1, with updates i = 3, w = x + 2 and c = x7 + 2x6 + x + 2. The third time through the loop also does not change R. For the fourth time through the loop we get y = 1, fac = x + 2, R = (x + 1)(x + 2)4, with updates i = 5, w = 1 and c = x6 + 1. Since w = 1, we exit the while loop. Since c ≠ 1, it must be a perfect cube. The cube root of c, obtained by replacing x3 by x is x2 + 1, and calling the square-free procedure recursively determines that it is square-free. Therefore, cubing it and combining it with the value of R to that point gives the square-free decomposition f = ( x + 1 ) ( x 2 + 1 ) 3 ( x + 2 ) 4 . {\displaystyle f=(x+1)(x^{2}+1)^{3}(x+2)^{4}.} === Distinct-degree factorization === This algorithm splits a square-free polynomial into a product of polynomials whose irreducible factors all have the same degree. Let f ∈ Fq[x] of degree n be the polynomial to be factored. Algorithm Distinct-degree factorization(DDF) Input: A monic square-free polynomial f ∈ Fq[x] Output: The set of all pairs (g, d), such that f has an irreducible factor of degree d and g is the product of all monic irreducible factors of f of degree d. 
Begin i := 1 ; S := ∅ , f ∗ := f ; {\displaystyle i:=1;\qquad S:=\emptyset ,\qquad f^{*}:=f;} while deg ⁡ f ∗ ≥ 2 i {\displaystyle \deg f^{*}\geq 2i} do g = gcd ( f ∗ , x q i − x ) {\displaystyle g=\gcd(f^{*},x^{q^{i}}-x)} if g ≠ 1, then S := S ∪ { ( g , i ) } {\displaystyle S:=S\cup \{(g,i)\}} ; f ∗ := f ∗ / g {\displaystyle f^{*}:=f^{*}/g} end if i := i + 1; end while; if f ∗ ≠ 1 {\displaystyle f^{*}\neq 1} , then S := S ∪ { ( f ∗ , deg ⁡ f ∗ ) } {\displaystyle S:=S\cup \{(f^{*},\deg f^{*})\}} ; if S = ∅ {\displaystyle S=\emptyset } , then return {(f, 1)}, else return S End The correctness of the algorithm is based on the following: Lemma. For i ≥ 1 the polynomial x q i − x ∈ F q [ x ] {\displaystyle x^{q^{i}}-x\in \mathbf {F} _{q}[x]} is the product of all monic irreducible polynomials in Fq[x] whose degree divides i. At first glance, this is not efficient since it involves computing the GCD of polynomials of a degree which is exponential in the degree of the input polynomial. However, g = gcd ( f ∗ , x q i − x ) {\displaystyle g=\gcd \left(f^{*},x^{q^{i}}-x\right)} may be replaced by g = gcd ( f ∗ , ( x q i − x mod f ∗ ) ) . {\displaystyle g=\gcd \left(f^{*},\left(x^{q^{i}}-x\mod f^{*}\right)\right).} Therefore, we have to compute: x q i − x mod f ∗ , {\displaystyle x^{q^{i}}-x\mod f^{*},} there are two methods: Method I. Start from the value of x q i − 1 mod f ∗ {\displaystyle x^{q^{i-1}}\mod f^{*}} computed at the preceding step and to compute its qth power modulo the new f*, using exponentiation by squaring method. This needs O ( log ⁡ ( q ) deg ⁡ ( f ) 2 ) {\displaystyle O\left(\log(q)\deg(f)^{2}\right)} arithmetic operations in Fq at each step, and thus O ( log ⁡ ( q ) deg ⁡ ( f ) 3 ) {\displaystyle O\left(\log(q)\deg(f)^{3}\right)} arithmetic operations for the whole algorithm. Method II. 
Using the fact that the qth power is a linear map over Fq we may compute its matrix with O ( deg ⁡ ( f ) 2 ( log ⁡ ( q ) + deg ⁡ ( f ) ) ) {\displaystyle O\left(\deg(f)^{2}(\log(q)+\deg(f))\right)} operations. Then at each iteration of the loop, compute the product of a matrix by a vector (with O(deg(f)2) operations). This induces a total number of operations in Fq which is O ( deg ⁡ ( f ) 2 ( log ⁡ ( q ) + deg ⁡ ( f ) ) ) . {\displaystyle O\left(\deg(f)^{2}(\log(q)+\deg(f))\right).} Thus this second method is more efficient and is usually preferred. Moreover, the matrix that is computed in this method is used, by most algorithms, for equal-degree factorization (see below); thus using it for the distinct-degree factorization saves further computing time. === Equal-degree factorization === ==== Cantor–Zassenhaus algorithm ==== In this section, we consider the factorization of a monic squarefree univariate polynomial f, of degree n, over a finite field Fq, which has r ≥ 2 pairwise distinct irreducible factors f 1 , … , f r {\displaystyle f_{1},\ldots ,f_{r}} each of degree d. We first describe an algorithm by Cantor and Zassenhaus (1981) and then a variant that has a slightly better complexity. Both are probabilistic algorithms whose running time depends on random choices (Las Vegas algorithms), and have a good average running time. In the next section we describe an algorithm by Shoup (1990), which is also an equal-degree factorization algorithm, but is deterministic. All these algorithms require an odd order q for the field of coefficients. For more factorization algorithms see e.g. Knuth's book The Art of Computer Programming volume 2. Algorithm Cantor–Zassenhaus algorithm.
Input: A finite field Fq of odd order q. A monic square-free polynomial f in Fq[x] of degree n = rd, which has r ≥ 2 irreducible factors each of degree d
Output: The set of monic irreducible factors of f. 
Factors := {f}; while Size(Factors) < r do, Choose h in Fq[x] with deg(h) < n at random; g := h q d − 1 2 − 1 ( mod f ) {\displaystyle g:=h^{\frac {q^{d}-1}{2}}-1{\pmod {f}}} for each u in Factors with deg(u) > d do if gcd(g, u) ≠ 1 and gcd(g, u) ≠ u, then Factors:= Factors ∖ { u } ∪ { ( gcd ( g , u ) , u / gcd ( g , u ) ) } {\displaystyle \,\setminus \,\{u\}\cup \{(\gcd(g,u),u/\gcd(g,u))\}} ; endif endwhile return Factors The correctness of this algorithm relies on the fact that the ring Fq[x]/f is a direct product of the fields Fq[x]/fi where fi runs on the irreducible factors of f. As all these fields have qd elements, the component of g in any of these fields is zero with probability q d − 1 2 q d ∼ 1 2 . {\displaystyle {\frac {q^{d}-1}{2q^{d}}}\sim {\tfrac {1}{2}}.} This implies that the polynomial gcd(g, u) is the product of the factors of g for which the component of g is zero. It has been shown that the average number of iterations of the while loop of the algorithm is less than 2.5 log 2 ⁡ r {\displaystyle 2.5\log _{2}r} , giving an average number of arithmetic operations in Fq which is O ( d n 2 log ⁡ ( r ) log ⁡ ( q ) ) {\displaystyle O(dn^{2}\log(r)\log(q))} . In the typical case where dlog(q) > n, this complexity may be reduced to O ( n 2 ( log ⁡ ( r ) log ⁡ ( q ) + n ) ) {\displaystyle O(n^{2}(\log(r)\log(q)+n))} by choosing h in the kernel of the linear map v → v q − v ( mod f ) {\displaystyle v\to v^{q}-v{\pmod {f}}} and replacing the instruction g := h q d − 1 2 − 1 ( mod f ) {\displaystyle g:=h^{\frac {q^{d}-1}{2}}-1{\pmod {f}}} by g := h q − 1 2 − 1 ( mod f ) . {\displaystyle g:=h^{\frac {q-1}{2}}-1{\pmod {f}}.} The proof of validity is the same as above, replacing the direct product of the fields Fq[x]/fi by the direct product of their subfields with q elements. 
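A minimal implementation of the basic Cantor–Zassenhaus loop (the plain version, not the improved kernel-based variant) over a small prime field may clarify the structure. Polynomials are coefficient lists with the lowest degree first, and all arithmetic here is naive and unoptimized; the field F7 and the test polynomial are illustrative choices.

```python
import random

P = 7  # a small odd prime; the coefficient field is F_7

def trim(f):
    """Drop trailing zero coefficients (lists are lowest degree first)."""
    f = f[:]
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return trim(out)

def pdivmod(f, g):
    """Quotient and remainder of f by g over F_P."""
    f, g = trim(f), trim(g)
    if len(f) < len(g):
        return [0], f
    q = [0] * (len(f) - len(g) + 1)
    inv = pow(g[-1], -1, P)
    for shift in range(len(f) - len(g), -1, -1):
        c = f[shift + len(g) - 1] * inv % P
        q[shift] = c
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % P
    return trim(q), trim(f[:len(g) - 1] or [0])

def pgcd(f, g):
    f, g = trim(f), trim(g)
    while g != [0]:
        f, g = g, pdivmod(f, g)[1]
    inv = pow(f[-1], -1, P)
    return [c * inv % P for c in f]  # normalized monic

def ppowmod(h, e, f):
    """h^e mod f by square-and-multiply."""
    r, h = [1], pdivmod(h, f)[1]
    while e:
        if e & 1:
            r = pdivmod(pmul(r, h), f)[1]
        h = pdivmod(pmul(h, h), f)[1]
        e >>= 1
    return r

def cantor_zassenhaus(f, d):
    """Factor a monic square-free f whose irreducible factors all have degree d."""
    f = trim(f)
    if len(f) - 1 == d:
        return [f]
    while True:
        h = [random.randrange(P) for _ in range(len(f) - 1)]  # random, deg < deg f
        g = ppowmod(h, (P**d - 1) // 2, f)
        g = g[:]
        g[0] = (g[0] - 1) % P            # g := h^((q^d - 1)/2) - 1 mod f
        u = pgcd(f, g)
        if 1 < len(u) < len(f):          # proper splitting found
            v = pdivmod(f, u)[0]
            return cantor_zassenhaus(u, d) + cantor_zassenhaus(v, d)
```

Each random trial splits f with probability roughly 1/2, so the loop terminates quickly in expectation.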
The complexity is decomposed in O ( n 2 log ⁡ ( r ) log ⁡ ( q ) ) {\displaystyle O(n^{2}\log(r)\log(q))} for the algorithm itself, O ( n 2 ( log ⁡ ( q ) + n ) ) {\displaystyle O(n^{2}(\log(q)+n))} for the computation of the matrix of the linear map (which may be already computed in the square-free factorization) and O(n3) for computing its kernel. It may be noted that this algorithm works also if the factors do not all have the same degree (in this case the number r of factors, needed for stopping the while loop, is found as the dimension of the kernel). Nevertheless, the complexity is slightly better if square-free factorization is done before using this algorithm (as n may decrease with square-free factorization, this reduces the complexity of the critical steps). ==== Victor Shoup's algorithm ==== Like the algorithms of the preceding section, Victor Shoup's algorithm is an equal-degree factorization algorithm. Unlike them, it is a deterministic algorithm. However, it is less efficient, in practice, than the algorithms of the preceding section. For Shoup's algorithm, the input is restricted to polynomials over prime fields Fp. The worst case time complexity of Shoup's algorithm has a factor p . {\displaystyle {\sqrt {p}}.} Although exponential, this complexity is much better than previous deterministic algorithms (Berlekamp's algorithm) which have p as a factor. However, there are very few polynomials for which the computing time is exponential, and the average time complexity of the algorithm is polynomial in d log ⁡ ( p ) , {\displaystyle d\log(p),} where d is the degree of the polynomial, and p is the number of elements of the ground field. Let g = g1 ... gk be the desired factorization, where the gi are distinct monic irreducible polynomials of degree d. Let n = deg(g) = kd. We consider the ring R = Fq[x]/g and denote also by x the image of x in R. 
The ring R is the direct product of the fields Ri = Fq[x]/gi, and we denote by pi the natural homomorphism from R onto Ri. The Galois group of Ri over Fq is cyclic of order d, generated by the field automorphism u → up. It follows that the roots of gi in Ri are p i ( x ) , p i ( x q ) , p i ( x q 2 ) , … , p i ( x q d − 1 ) . {\displaystyle p_{i}(x),p_{i}(x^{q}),p_{i}\left(x^{q^{2}}\right),\ldots ,p_{i}\left(x^{q^{d-1}}\right).} As in the preceding algorithm, this algorithm uses the same subalgebra B of R as Berlekamp's algorithm, sometimes called the "Berlekamp subalgebra" and defined as B = { α ∈ R : p 1 ( α ) , ⋯ , p k ( α ) ∈ F q } = { u ∈ R : u q = u } {\displaystyle {\begin{aligned}B&=\left\{\alpha \in R\ :\ p_{1}(\alpha ),\cdots ,p_{k}(\alpha )\in \mathbf {F} _{q}\right\}\\&=\{u\in R\ :\ u^{q}=u\}\end{aligned}}} A subset S of B is called a separating set if, for every 1 ≤ i < j ≤ k there exists s ∈ S such that p i ( s ) ≠ p j ( s ) {\displaystyle p_{i}(s)\neq p_{j}(s)} . In the preceding algorithm, a separating set is constructed by choosing at random the elements of S. In Shoup's algorithm, the separating set is constructed in the following way. Let s in R[Y] be such that s = ( Y − x ) ( Y − x q ) ⋯ ( Y − x q d − 1 ) = s 0 + ⋯ + s d − 1 Y d − 1 + Y d {\displaystyle {\begin{aligned}s&=(Y-x)\left(Y-x^{q}\right)\cdots \left(Y-x^{q^{d-1}}\right)\\&=s_{0}+\cdots +s_{d-1}Y^{d-1}+Y^{d}\end{aligned}}} Then { s 0 , … , s d − 1 } {\displaystyle \{s_{0},\dots ,s_{d-1}\}} is a separating set because p i ( s ) = g i {\displaystyle p_{i}(s)=g_{i}} for i =1, ..., k (the two monic polynomials have the same roots). As the gi are pairwise distinct, for every pair of distinct indexes (i, j), at least one of the coefficients sh will satisfy p i ( s h ) ≠ p j ( s h ) . 
{\displaystyle p_{i}(s_{h})\neq p_{j}(s_{h}).} Having a separating set, Shoup's algorithm proceeds as the last algorithm of the preceding section, simply by replacing the instruction "choose at random h in the kernel of the linear map v → v q − v ( mod f ) {\displaystyle v\to v^{q}-v{\pmod {f}}} " by "choose h + i with h in S and i in {1, ..., k−1}". == Time complexity == As described in the previous sections, for factorization over finite fields there are randomized algorithms of polynomial time complexity (for example the Cantor–Zassenhaus algorithm). There are also deterministic algorithms with a polynomial average complexity (for example Shoup's algorithm). The existence of a deterministic algorithm with a polynomial worst-case complexity is still an open problem. == Rabin's test of irreducibility == Like the distinct-degree factorization algorithm, Rabin's algorithm is based on the lemma stated above. The distinct-degree factorization algorithm tests every d not greater than half the degree of the input polynomial. Rabin's algorithm takes advantage of the fact that the factors themselves are not needed, which allows it to consider fewer values of d. Otherwise, it is similar to the distinct-degree factorization algorithm. It is based on the following fact. Let p1, ..., pk be all the prime divisors of n, and denote n / p i = n i {\displaystyle n/p_{i}=n_{i}} , for 1 ≤ i ≤ k. A polynomial f in Fq[x] of degree n is irreducible in Fq[x] if and only if gcd ( f , x q n i − x ) = 1 {\displaystyle \gcd \left(f,x^{q^{n_{i}}}-x\right)=1} , for 1 ≤ i ≤ k, and f divides x q n − x {\displaystyle x^{q^{n}}-x} . In fact, if f has a factor of degree not dividing n, then f does not divide x q n − x {\displaystyle x^{q^{n}}-x} ; if f has a factor of degree dividing n, then this factor divides at least one of the x q n i − x . {\displaystyle x^{q^{n_{i}}}-x.} Algorithm Rabin Irreducibility Test Input: A monic polynomial f in Fq[x] of degree n, and p1, ..., pk, all the distinct prime divisors of n.
Output: Either "f is irreducible" or "f is reducible". for j = 1 to k do n j = n / p j {\displaystyle n_{j}=n/p_{j}} ; for i = 1 to k do h := x q n i − x mod f {\displaystyle h:=x^{q^{n_{i}}}-x{\bmod {f}}} ; g := gcd(f, h); if g ≠ 1, then return "f is reducible" and STOP; end for; g := x q n − x mod f {\displaystyle g:=x^{q^{n}}-x{\bmod {f}}} ; if g = 0, then return "f is irreducible", else return "f is reducible" The basic idea of this algorithm is to compute x q n i mod f {\displaystyle x^{q^{n_{i}}}{\bmod {f}}} starting from the smallest n 1 , … , n k {\displaystyle n_{1},\ldots ,n_{k}} by repeated squaring or using the Frobenius automorphism, and then to take the correspondent gcd. Using the elementary polynomial arithmetic, the computation of the matrix of the Frobenius automorphism needs O ( n 2 ( n + log ⁡ q ) ) {\displaystyle O(n^{2}(n+\log q))} operations in Fq, the computation of x q n i − x ( mod f ) {\displaystyle x^{q^{n_{i}}}-x{\pmod {f}}} needs O(n3) further operations, and the algorithm itself needs O(kn2) operations, giving a total of O ( n 2 ( n + log ⁡ q ) ) {\displaystyle O(n^{2}(n+\log q))} operations in Fq. Using fast arithmetic (complexity O ( n log ⁡ n ) {\displaystyle O(n\log n)} for multiplication and division, and O ( n ( log ⁡ n ) 2 ) {\displaystyle O(n(\log n)^{2})} for GCD computation), the computation of the x q n i − x mod f {\displaystyle x^{q^{n_{i}}}-x{\bmod {f}}} by repeated squaring is O ( n 2 log ⁡ n log ⁡ q ) {\displaystyle O(n^{2}\log n\log q)} , and the algorithm itself is O ( k n ( log ⁡ n ) 2 ) {\displaystyle O(kn(\log n)^{2})} , giving a total of O ( n 2 log ⁡ n log ⁡ q ) {\displaystyle O(n^{2}\log n\log q)} operations in Fq. 
== See also == Berlekamp's algorithm Cantor–Zassenhaus algorithm Polynomial factorization == References == == Notes == == External links == Some irreducible polynomials http://www.math.umn.edu/~garrett/m/algebra/notes/07.pdf Field and Galois Theory :http://www.jmilne.org/math/CourseNotes/FT.pdf Galois Field:http://designtheory.org/library/encyc/topics/gf.pdf Factoring polynomials over finite fields: http://www.science.unitn.it/~degraaf/compalg/polfact.pdf
Wikipedia:Faddeev–LeVerrier algorithm#0
In mathematics (linear algebra), the Faddeev–LeVerrier algorithm is a recursive method to calculate the coefficients of the characteristic polynomial p A ( λ ) = det ( λ I n − A ) {\displaystyle p_{A}(\lambda )=\det(\lambda I_{n}-A)} of a square matrix, A, named after Dmitry Konstantinovich Faddeev and Urbain Le Verrier. Calculation of this polynomial yields the eigenvalues of A as its roots; as a matrix polynomial in the matrix A itself, it vanishes by the Cayley–Hamilton theorem. Computing the characteristic polynomial directly from the definition of the determinant is computationally cumbersome insofar as it introduces a new symbolic quantity λ {\displaystyle \lambda } ; by contrast, the Faddeev-Le Verrier algorithm works directly with coefficients of matrix A {\displaystyle A} . The algorithm has been independently rediscovered several times in different forms. It was first published in 1840 by Urbain Le Verrier, subsequently redeveloped by P. Horst, Jean-Marie Souriau, in its present form here by Faddeev and Sominsky, and further by J. S. Frame, and others. (For historical points, see Householder. An elegant shortcut to the proof, bypassing Newton polynomials, was introduced by Hou. The bulk of the presentation here follows Gantmacher, p. 88.) == The Algorithm == The objective is to calculate the coefficients ck of the characteristic polynomial of the n×n matrix A, p A ( λ ) ≡ det ( λ I n − A ) = ∑ k = 0 n c k λ k , {\displaystyle p_{A}(\lambda )\equiv \det(\lambda I_{n}-A)=\sum _{k=0}^{n}c_{k}\lambda ^{k}~,} where, evidently, cn = 1 and c0 = (−1)n det A. The coefficients cn-i are determined by induction on i, using an auxiliary sequence of matrices M 0 ≡ 0 c n = 1 ( k = 0 ) M k ≡ A M k − 1 + c n − k + 1 I c n − k = − 1 k t r ( A M k ) k = 1 , … , n . 
{\displaystyle {\begin{aligned}M_{0}&\equiv 0&c_{n}&=1\qquad &(k=0)\\M_{k}&\equiv AM_{k-1}+c_{n-k+1}I\qquad \qquad &c_{n-k}&=-{\frac {1}{k}}\mathrm {tr} (AM_{k})\qquad &k=1,\ldots ,n~.\end{aligned}}} Thus, M 1 = I , c n − 1 = − t r A = − c n t r A ; {\displaystyle M_{1}=I~,\quad c_{n-1}=-\mathrm {tr} A=-c_{n}\mathrm {tr} A;} M 2 = A − I t r A , c n − 2 = − 1 2 ( t r A 2 − ( t r A ) 2 ) = − 1 2 ( c n t r A 2 + c n − 1 t r A ) ; {\displaystyle M_{2}=A-I\mathrm {tr} A,\quad c_{n-2}=-{\frac {1}{2}}{\Bigl (}\mathrm {tr} A^{2}-(\mathrm {tr} A)^{2}{\Bigr )}=-{\frac {1}{2}}(c_{n}\mathrm {tr} A^{2}+c_{n-1}\mathrm {tr} A);} M 3 = A 2 − A t r A − 1 2 ( t r A 2 − ( t r A ) 2 ) I , {\displaystyle M_{3}=A^{2}-A\mathrm {tr} A-{\frac {1}{2}}{\Bigl (}\mathrm {tr} A^{2}-(\mathrm {tr} A)^{2}{\Bigr )}I,} c n − 3 = − 1 6 ( ( tr ⁡ A ) 3 − 3 tr ⁡ ( A 2 ) ( tr ⁡ A ) + 2 tr ⁡ ( A 3 ) ) = − 1 3 ( c n t r A 3 + c n − 1 t r A 2 + c n − 2 t r A ) ; {\displaystyle c_{n-3}=-{\tfrac {1}{6}}{\Bigl (}(\operatorname {tr} A)^{3}-3\operatorname {tr} (A^{2})(\operatorname {tr} A)+2\operatorname {tr} (A^{3}){\Bigr )}=-{\frac {1}{3}}(c_{n}\mathrm {tr} A^{3}+c_{n-1}\mathrm {tr} A^{2}+c_{n-2}\mathrm {tr} A);} etc., ...; M m = ∑ k = 1 m c n − m + k A k − 1 , {\displaystyle M_{m}=\sum _{k=1}^{m}c_{n-m+k}A^{k-1}~,} c n − m = − 1 m ( c n t r A m + c n − 1 t r A m − 1 + . . . + c n − m + 1 t r A ) = − 1 m ∑ k = 1 m c n − m + k t r A k ; . . . {\displaystyle c_{n-m}=-{\frac {1}{m}}(c_{n}\mathrm {tr} A^{m}+c_{n-1}\mathrm {tr} A^{m-1}+...+c_{n-m+1}\mathrm {tr} A)=-{\frac {1}{m}}\sum _{k=1}^{m}c_{n-m+k}\mathrm {tr} A^{k}~;...} Observe A−1 = − Mn /c0 = (−1)n−1Mn/detA terminates the recursion at λ. This could be used to obtain the inverse or the determinant of A. == Derivation == The proof relies on the modes of the adjugate matrix, Bk ≡ Mn−k, the auxiliary matrices encountered. 
This matrix is defined by ( λ I − A ) B = I p A ( λ ) {\displaystyle (\lambda I-A)B=I~p_{A}(\lambda )} and is thus proportional to the resolvent B = ( λ I − A ) − 1 I p A ( λ ) . {\displaystyle B=(\lambda I-A)^{-1}I~p_{A}(\lambda )~.} It is evidently a matrix polynomial in λ of degree n−1. Thus, B ≡ ∑ k = 0 n − 1 λ k B k = ∑ k = 0 n λ k M n − k , {\displaystyle B\equiv \sum _{k=0}^{n-1}\lambda ^{k}~B_{k}=\sum _{k=0}^{n}\lambda ^{k}~M_{n-k},} where one may define the harmless M0≡0. Inserting the explicit polynomial forms into the defining equation for the adjugate, above, ∑ k = 0 n λ k + 1 M n − k − λ k ( A M n − k + c k I ) = 0 . {\displaystyle \sum _{k=0}^{n}\lambda ^{k+1}M_{n-k}-\lambda ^{k}(AM_{n-k}+c_{k}I)=0~.} Now, at the highest order, the first term vanishes by M0=0; whereas at the bottom order (constant in λ, from the defining equation of the adjugate, above), M n A = B 0 A = − c 0 I , {\displaystyle M_{n}A=B_{0}A=-c_{0}I~,} so that shifting the dummy indices of the first term yields ∑ k = 1 n λ k ( M 1 + n − k − A M n − k − c k I ) = 0 , {\displaystyle \sum _{k=1}^{n}\lambda ^{k}{\Big (}M_{1+n-k}-AM_{n-k}-c_{k}I{\Big )}=0~,} which thus dictates the recursion ∴ M m = A M m − 1 + c n − m + 1 I , {\displaystyle \therefore \qquad M_{m}=AM_{m-1}+c_{n-m+1}I~,} for m=1,...,n. Note that ascending index amounts to descending in powers of λ, but the polynomial coefficients c are yet to be determined in terms of the Ms and A. This is most easily achieved through the following auxiliary equation (Hou, 1998), λ ∂ p A ( λ ) ∂ λ − n p = tr ⁡ A B . {\displaystyle \lambda {\frac {\partial p_{A}(\lambda )}{\partial \lambda }}-np=\operatorname {tr} AB~.} This is but the trace of the defining equation for B by dint of Jacobi's formula, ∂ p A ( λ ) ∂ λ = p A ( λ ) ∑ m = 0 ∞ λ − ( m + 1 ) tr ⁡ A m = p A ( λ ) tr ⁡ I λ I − A ≡ tr ⁡ B .
{\displaystyle {\frac {\partial p_{A}(\lambda )}{\partial \lambda }}=p_{A}(\lambda )\sum _{m=0}^{\infty }\lambda ^{-(m+1)}\operatorname {tr} A^{m}=p_{A}(\lambda )~\operatorname {tr} {\frac {I}{\lambda I-A}}\equiv \operatorname {tr} B~.} Inserting the polynomial mode forms in this auxiliary equation yields ∑ k = 1 n λ k ( k c k − n c k − tr ⁡ A M n − k ) = 0 , {\displaystyle \sum _{k=1}^{n}\lambda ^{k}{\Big (}kc_{k}-nc_{k}-\operatorname {tr} AM_{n-k}{\Big )}=0~,} so that ∑ m = 1 n − 1 λ n − m ( m c n − m + tr ⁡ A M m ) = 0 , {\displaystyle \sum _{m=1}^{n-1}\lambda ^{n-m}{\Big (}mc_{n-m}+\operatorname {tr} AM_{m}{\Big )}=0~,} and finally ∴ c n − m = − 1 m tr ⁡ A M m . {\displaystyle \therefore \qquad c_{n-m}=-{\frac {1}{m}}\operatorname {tr} AM_{m}~.} This completes the recursion of the previous section, unfolding in descending powers of λ. Further note in the algorithm that, more directly, M m = A M m − 1 − 1 m − 1 ( tr ⁡ A M m − 1 ) I , {\displaystyle M_{m}=AM_{m-1}-{\frac {1}{m-1}}(\operatorname {tr} AM_{m-1})I~,} and, in accordance with the Cayley–Hamilton theorem, adj ⁡ ( A ) = ( − 1 ) n − 1 M n = ( − 1 ) n − 1 ( A n − 1 + c n − 1 A n − 2 + . . . + c 2 A + c 1 I ) = ( − 1 ) n − 1 ∑ k = 1 n c k A k − 1 . {\displaystyle \operatorname {adj} (A)=(-1)^{n-1}M_{n}=(-1)^{n-1}(A^{n-1}+c_{n-1}A^{n-2}+...+c_{2}A+c_{1}I)=(-1)^{n-1}\sum _{k=1}^{n}c_{k}A^{k-1}~.} The final solution might be more conveniently expressed in terms of complete exponential Bell polynomials as c n − k = ( − 1 ) n − k k ! B k ( tr ⁡ A , − 1 ! tr ⁡ A 2 , 2 ! tr ⁡ A 3 , … , ( − 1 ) k − 1 ( k − 1 ) ! tr ⁡ A k ) .
{\displaystyle c_{n-k}={\frac {(-1)^{n-k}}{k!}}{\mathcal {B}}_{k}{\Bigl (}\operatorname {tr} A,-1!~\operatorname {tr} A^{2},2!~\operatorname {tr} A^{3},\ldots ,(-1)^{k-1}(k-1)!~\operatorname {tr} A^{k}{\Bigr )}.} == Example == A = [ 3 1 5 3 3 1 4 6 4 ] {\displaystyle {\displaystyle A=\left[{\begin{array}{rrr}3&1&5\\3&3&1\\4&6&4\end{array}}\right]}} M 0 = [ 0 0 0 0 0 0 0 0 0 ] c 3 = 1 M 1 = [ 1 0 0 0 1 0 0 0 1 ] A M 1 = [ 3 1 5 3 3 1 4 6 4 ] c 2 = − 1 1 10 = − 10 M 2 = [ − 7 1 5 3 − 7 1 4 6 − 6 ] A M 2 = [ 2 26 − 14 − 8 − 12 12 6 − 14 2 ] c 1 = − 1 2 ( − 8 ) = 4 M 3 = [ 6 26 − 14 − 8 − 8 12 6 − 14 6 ] A M 3 = [ 40 0 0 0 40 0 0 0 40 ] c 0 = − 1 3 120 = − 40 {\displaystyle {\displaystyle {\begin{aligned}M_{0}&=\left[{\begin{array}{rrr}0&0&0\\0&0&0\\0&0&0\end{array}}\right]\quad &&&c_{3}&&&&&=&1\\M_{\mathbf {\color {blue}1} }&=\left[{\begin{array}{rrr}1&0&0\\0&1&0\\0&0&1\end{array}}\right]&A~M_{1}&=\left[{\begin{array}{rrr}\mathbf {\color {red}3} &1&5\\3&\mathbf {\color {red}3} &1\\4&6&\mathbf {\color {red}4} \end{array}}\right]&c_{2}&&&=-{\frac {1}{\mathbf {\color {blue}1} }}\mathbf {\color {red}10} &&=&-10\\M_{\mathbf {\color {blue}2} }&=\left[{\begin{array}{rrr}-7&1&5\\3&-7&1\\4&6&-6\end{array}}\right]\qquad &A~M_{2}&=\left[{\begin{array}{rrr}\mathbf {\color {red}2} &26&-14\\-8&\mathbf {\color {red}-12} &12\\6&-14&\mathbf {\color {red}2} \end{array}}\right]\qquad &c_{1}&&&=-{\frac {1}{\mathbf {\color {blue}2} }}\mathbf {\color {red}(-8)} &&=&4\\M_{\mathbf {\color {blue}3} }&=\left[{\begin{array}{rrr}6&26&-14\\-8&-8&12\\6&-14&6\end{array}}\right]\qquad &A~M_{3}&=\left[{\begin{array}{rrr}\mathbf {\color {red}40} &0&0\\0&\mathbf {\color {red}40} &0\\0&0&\mathbf {\color {red}40} \end{array}}\right]\qquad &c_{0}&&&=-{\frac {1}{\mathbf {\color {blue}3} }}\mathbf {\color {red}120} &&=&-40\end{aligned}}}} Furthermore, M 4 = A M 3 + c 0 I = 0 {\displaystyle {\displaystyle M_{4}=A~M_{3}+c_{0}~I=0}} , which confirms the above calculations. 
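The worked example can be reproduced with a short pure-Python sketch of the recursion, using exact rational arithmetic from the standard fractions module (the function name is an illustrative choice):

```python
# Sketch of the Faddeev-LeVerrier recursion with exact rational arithmetic.
from fractions import Fraction

def faddeev_leverrier(A):
    """Return [c_n, ..., c_0], the coefficients of det(lambda*I - A)."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]

    def mul(X, Y):  # plain matrix product
        return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
                for i in range(n)]

    M = [[Fraction(0)] * n for _ in range(n)]   # M_0 = 0
    coeffs = [Fraction(1)]                      # c_n = 1
    for k in range(1, n + 1):
        M = mul(A, M)                           # M_k = A M_{k-1} + c_{n-k+1} I
        for i in range(n):
            M[i][i] += coeffs[-1]
        AM = mul(A, M)
        trace = sum(AM[i][i] for i in range(n))
        coeffs.append(-trace / k)               # c_{n-k} = -(1/k) tr(A M_k)
    return coeffs

print(faddeev_leverrier([[3, 1, 5], [3, 3, 1], [4, 6, 4]]))
# [Fraction(1, 1), Fraction(-10, 1), Fraction(4, 1), Fraction(-40, 1)]
```

The output matches the coefficients computed by hand above: p(λ) = λ3 − 10λ2 + 4λ − 40.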
The characteristic polynomial of matrix A is thus p A ( λ ) = λ 3 − 10 λ 2 + 4 λ − 40 {\displaystyle {\displaystyle p_{A}(\lambda )=\lambda ^{3}-10\lambda ^{2}+4\lambda -40}} ; the determinant of A is det ( A ) = ( − 1 ) 3 c 0 = 40 {\displaystyle {\displaystyle \det(A)=(-1)^{3}c_{0}=40}} ; the trace is 10=−c2; and the inverse of A is A − 1 = − 1 c 0 M 3 = 1 40 [ 6 26 − 14 − 8 − 8 12 6 − 14 6 ] = [ 0 . 15 0 . 65 − 0 . 35 − 0 . 20 − 0 . 20 0 . 30 0 . 15 − 0 . 35 0 . 15 ] {\displaystyle {\displaystyle A^{-1}=-{\frac {1}{c_{0}}}~M_{3}={\frac {1}{40}}\left[{\begin{array}{rrr}6&26&-14\\-8&-8&12\\6&-14&6\end{array}}\right]=\left[{\begin{array}{rrr}0{.}15&0{.}65&-0{.}35\\-0{.}20&-0{.}20&0{.}30\\0{.}15&-0{.}35&0{.}15\end{array}}\right]}} . == An equivalent but distinct expression == A compact determinant of an m×m-matrix solution for the above Jacobi's formula may alternatively determine the coefficients c, c n − m = ( − 1 ) m m ! | tr ⁡ A m − 1 0 ⋯ 0 tr ⁡ A 2 tr ⁡ A m − 2 ⋯ 0 ⋮ ⋮ ⋮ tr ⁡ A m − 1 tr ⁡ A m − 2 ⋯ ⋯ 1 tr ⁡ A m tr ⁡ A m − 1 ⋯ ⋯ tr ⁡ A | . {\displaystyle c_{n-m}={\frac {(-1)^{m}}{m!}}{\begin{vmatrix}\operatorname {tr} A&m-1&0&\cdots &0\\\operatorname {tr} A^{2}&\operatorname {tr} A&m-2&\cdots &0\\\vdots &\vdots &&&\vdots \\\operatorname {tr} A^{m-1}&\operatorname {tr} A^{m-2}&\cdots &\cdots &1\\\operatorname {tr} A^{m}&\operatorname {tr} A^{m-1}&\cdots &\cdots &\operatorname {tr} A\end{vmatrix}}~.} == See also == Characteristic polynomial Horner's method Fredholm determinant == References == Barbaresco F. (2019) Souriau Exponential Map Algorithm for Machine Learning on Matrix Lie Groups. In: Nielsen F., Barbaresco F. (eds) Geometric Science of Information. GSI 2019. Lecture Notes in Computer Science, vol 11712. Springer, Cham. https://doi.org/10.1007/978-3-030-26980-7_10
Wikipedia:Faithful representation#0
In mathematics, especially in an area of abstract algebra known as representation theory, a faithful representation ρ of a group G on a vector space V is a linear representation in which different elements g of G are represented by distinct linear mappings ρ(g). In more abstract language, this means that the group homomorphism ρ : G → G L ( V ) {\displaystyle \rho :G\to GL(V)} is injective (or one-to-one). == Caveat == While representations of G over a field K are de facto the same as K[G]-modules (with K[G] denoting the group algebra of the group G), a faithful representation of G is not necessarily a faithful module for the group algebra. In fact each faithful K[G]-module is a faithful representation of G, but the converse does not hold. Consider for example the natural representation of the symmetric group Sn in n dimensions by permutation matrices, which is certainly faithful. Here the order of the group is n! while the n × n matrices form a vector space of dimension n2. As soon as n is at least 4, dimension counting means that some linear dependence must occur between permutation matrices (since 24 > 16); this relation means that the module for the group algebra is not faithful. == Properties == A representation V of a finite group G over an algebraically closed field K of characteristic zero is faithful (as a representation) if and only if every irreducible representation of G occurs as a subrepresentation of SnV (the n-th symmetric power of the representation V) for a sufficiently high n. Also, V is faithful (as a representation) if and only if every irreducible representation of G occurs as a subrepresentation of V ⊗ n = V ⊗ V ⊗ ⋯ ⊗ V ⏟ n times {\displaystyle V^{\otimes n}=\underbrace {V\otimes V\otimes \cdots \otimes V} _{n{\text{ times}}}} (the n-th tensor power of the representation V) for a sufficiently high n. == References ==
Wikipedia:Falling and rising factorials#0
In mathematics, the falling factorial (sometimes called the descending factorial, falling sequential product, or lower factorial) is defined as the polynomial ( x ) n = x n _ = x ( x − 1 ) ( x − 2 ) ⋯ ( x − n + 1 ) ⏞ n factors = ∏ k = 1 n ( x − k + 1 ) = ∏ k = 0 n − 1 ( x − k ) . {\displaystyle {\begin{aligned}(x)_{n}=x^{\underline {n}}&=\overbrace {x(x-1)(x-2)\cdots (x-n+1)} ^{n{\text{ factors}}}\\&=\prod _{k=1}^{n}(x-k+1)=\prod _{k=0}^{n-1}(x-k).\end{aligned}}} The rising factorial (sometimes called the Pochhammer function, Pochhammer polynomial, ascending factorial, rising sequential product, or upper factorial) is defined as x ( n ) = x n ¯ = x ( x + 1 ) ( x + 2 ) ⋯ ( x + n − 1 ) ⏞ n factors = ∏ k = 1 n ( x + k − 1 ) = ∏ k = 0 n − 1 ( x + k ) . {\displaystyle {\begin{aligned}x^{(n)}=x^{\overline {n}}&=\overbrace {x(x+1)(x+2)\cdots (x+n-1)} ^{n{\text{ factors}}}\\&=\prod _{k=1}^{n}(x+k-1)=\prod _{k=0}^{n-1}(x+k).\end{aligned}}} The value of each is taken to be 1 (an empty product) when n = 0 {\displaystyle n=0} . These symbols are collectively called factorial powers. The Pochhammer symbol, introduced by Leo August Pochhammer, is the notation ( x ) n {\displaystyle (x)_{n}} , where n is a non-negative integer. It may represent either the rising or the falling factorial, with different articles and authors using different conventions. Pochhammer himself actually used ( x ) n {\displaystyle (x)_{n}} with yet another meaning, namely to denote the binomial coefficient ( x n ) {\displaystyle {\tbinom {x}{n}}} . In this article, the symbol ( x ) n {\displaystyle (x)_{n}} is used to represent the falling factorial, and the symbol x ( n ) {\displaystyle x^{(n)}} is used for the rising factorial. These conventions are used in combinatorics, although Knuth's underline and overline notations x n _ {\displaystyle x^{\underline {n}}} and x n ¯ {\displaystyle x^{\overline {n}}} are increasingly popular. 
In the theory of special functions (in particular the hypergeometric function) and in the standard reference work Abramowitz and Stegun, the Pochhammer symbol ( x ) n {\displaystyle (x)_{n}} is used to represent the rising factorial. When x {\displaystyle x} is a positive integer, ( x ) n {\displaystyle (x)_{n}} gives the number of n-permutations (sequences of distinct elements) from an x-element set, or equivalently the number of injective functions from a set of size n {\displaystyle n} to a set of size x {\displaystyle x} . The rising factorial x ( n ) {\displaystyle x^{(n)}} gives the number of partitions of an n {\displaystyle n} -element set into x {\displaystyle x} ordered sequences (possibly empty). == Examples and combinatorial interpretation == The first few falling factorials are as follows: ( x ) 0 = 1 ( x ) 1 = x ( x ) 2 = x ( x − 1 ) = x 2 − x ( x ) 3 = x ( x − 1 ) ( x − 2 ) = x 3 − 3 x 2 + 2 x ( x ) 4 = x ( x − 1 ) ( x − 2 ) ( x − 3 ) = x 4 − 6 x 3 + 11 x 2 − 6 x {\displaystyle {\begin{alignedat}{2}(x)_{0}&&&=1\\(x)_{1}&&&=x\\(x)_{2}&=x(x-1)&&=x^{2}-x\\(x)_{3}&=x(x-1)(x-2)&&=x^{3}-3x^{2}+2x\\(x)_{4}&=x(x-1)(x-2)(x-3)&&=x^{4}-6x^{3}+11x^{2}-6x\end{alignedat}}} The first few rising factorials are as follows: x ( 0 ) = 1 x ( 1 ) = x x ( 2 ) = x ( x + 1 ) = x 2 + x x ( 3 ) = x ( x + 1 ) ( x + 2 ) = x 3 + 3 x 2 + 2 x x ( 4 ) = x ( x + 1 ) ( x + 2 ) ( x + 3 ) = x 4 + 6 x 3 + 11 x 2 + 6 x {\displaystyle {\begin{alignedat}{2}x^{(0)}&&&=1\\x^{(1)}&&&=x\\x^{(2)}&=x(x+1)&&=x^{2}+x\\x^{(3)}&=x(x+1)(x+2)&&=x^{3}+3x^{2}+2x\\x^{(4)}&=x(x+1)(x+2)(x+3)&&=x^{4}+6x^{3}+11x^{2}+6x\end{alignedat}}} The coefficients that appear in the expansions are Stirling numbers of the first kind (see below). 
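The expansions above can be generated mechanically by multiplying out the linear factors one at a time; a small illustrative Python sketch (the helper names are not from any library):

```python
# Generate the coefficient expansions of (x)_n and x^(n).
# Polynomials are coefficient lists, lowest degree first.

def mul_linear(poly, a):
    """Multiply a coefficient list by (x + a)."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] += a * c       # contribution of the constant a
        out[i + 1] += c       # contribution of x
    return out

def falling_coeffs(n):
    poly = [1]
    for k in range(n):        # (x - 0)(x - 1)...(x - n + 1)
        poly = mul_linear(poly, -k)
    return poly

def rising_coeffs(n):
    poly = [1]
    for k in range(n):        # (x + 0)(x + 1)...(x + n - 1)
        poly = mul_linear(poly, k)
    return poly

print(falling_coeffs(4))  # [0, -6, 11, -6, 1], i.e. x^4 - 6x^3 + 11x^2 - 6x
print(rising_coeffs(4))   # [0, 6, 11, 6, 1],   i.e. x^4 + 6x^3 + 11x^2 + 6x
```

The coefficients produced this way are the signed (falling) and unsigned (rising) Stirling numbers of the first kind discussed in the connection-coefficients section.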
When the variable x {\displaystyle x} is a positive integer, the number ( x ) n {\displaystyle (x)_{n}} is equal to the number of n-permutations from a set of x items, that is, the number of ways of choosing an ordered list of length n consisting of distinct elements drawn from a collection of size x {\displaystyle x} . For example, ( 8 ) 3 = 8 × 7 × 6 = 336 {\displaystyle (8)_{3}=8\times 7\times 6=336} is the number of different podiums—assignments of gold, silver, and bronze medals—possible in an eight-person race. On the other hand, x ( n ) {\displaystyle x^{(n)}} is "the number of ways to arrange n {\displaystyle n} flags on x {\displaystyle x} flagpoles", where all flags must be used and each flagpole can have any number of flags. Equivalently, this is the number of ways to partition a set of size n {\displaystyle n} (the flags) into x {\displaystyle x} distinguishable parts (the poles), with a linear order on the elements assigned to each part (the order of the flags on a given pole). == Properties == The rising and falling factorials are simply related to one another: ( x ) n = ( x − n + 1 ) ( n ) = ( − 1 ) n ( − x ) ( n ) , x ( n ) = ( x + n − 1 ) n = ( − 1 ) n ( − x ) n . {\displaystyle {\begin{alignedat}{2}{(x)}_{n}&={(x-n+1)}^{(n)}&&=(-1)^{n}(-x)^{(n)},\\x^{(n)}&={(x+n-1)}_{n}&&=(-1)^{n}(-x)_{n}.\end{alignedat}}} Falling and rising factorials of integers are directly related to the ordinary factorial: n ! = 1 ( n ) = ( n ) n , ( m ) n = m ! ( m − n ) ! , m ( n ) = ( m + n − 1 ) ! ( m − 1 ) ! . {\displaystyle {\begin{aligned}n!&=1^{(n)}=(n)_{n},\\[6pt](m)_{n}&={\frac {m!}{(m-n)!}},\\[6pt]m^{(n)}&={\frac {(m+n-1)!}{(m-1)!}}.\end{aligned}}} Rising factorials of half integers are directly related to the double factorial: [ 1 2 ] ( n ) = ( 2 n − 1 ) ! ! 2 n , [ 2 m + 1 2 ] ( n ) = ( 2 ( n + m ) − 1 ) ! ! 2 n ( 2 m − 1 ) ! ! . 
{\displaystyle {\begin{aligned}\left[{\frac {1}{2}}\right]^{(n)}={\frac {(2n-1)!!}{2^{n}}},\quad \left[{\frac {2m+1}{2}}\right]^{(n)}={\frac {(2(n+m)-1)!!}{2^{n}(2m-1)!!}}.\end{aligned}}} The falling and rising factorials can be used to express a binomial coefficient: ( x ) n n ! = ( x n ) , x ( n ) n ! = ( x + n − 1 n ) . {\displaystyle {\begin{aligned}{\frac {(x)_{n}}{n!}}&={\binom {x}{n}},\\[6pt]{\frac {x^{(n)}}{n!}}&={\binom {x+n-1}{n}}.\end{aligned}}} Thus many identities on binomial coefficients carry over to the falling and rising factorials. The rising and falling factorials are well defined in any unital ring, and therefore x {\displaystyle x} can be taken to be, for example, a complex number, including negative integers, or a polynomial with complex coefficients, or any complex-valued function. === Real numbers and negative n === The falling factorial can be extended to real values of x {\displaystyle x} using the gamma function provided x {\displaystyle x} and x + n {\displaystyle x+n} are real numbers that are not negative integers: ( x ) n = Γ ( x + 1 ) Γ ( x − n + 1 ) , {\displaystyle (x)_{n}={\frac {\Gamma (x+1)}{\Gamma (x-n+1)}}\ ,} and so can the rising factorial: x ( n ) = Γ ( x + n ) Γ ( x ) . {\displaystyle x^{(n)}={\frac {\Gamma (x+n)}{\Gamma (x)}}\ .} === Calculus === Falling factorials appear in multiple differentiation of simple power functions: ( d d x ) n x a = ( a ) n ⋅ x a − n . {\displaystyle \left({\frac {\mathrm {d} }{\mathrm {d} x}}\right)^{n}x^{a}=(a)_{n}\cdot x^{a-n}.} The rising factorial is also integral to the definition of the hypergeometric function: The hypergeometric function is defined for | z | < 1 {\displaystyle |z|<1} by the power series 2 F 1 ( a , b ; c ; z ) = ∑ n = 0 ∞ a ( n ) b ( n ) c ( n ) z n n ! {\displaystyle {}_{2}F_{1}(a,b;c;z)=\sum _{n=0}^{\infty }{\frac {a^{(n)}b^{(n)}}{c^{(n)}}}{\frac {z^{n}}{n!}}} provided that c ≠ 0 , − 1 , − 2 , … {\displaystyle c\neq 0,-1,-2,\ldots } . 
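As an illustration of this definition, the Gauss series can be summed directly from rising factorials; for instance, 2F1(1, 1; 2; z) = −ln(1 − z)/z for |z| < 1. A rough sketch with a truncated series (function names are illustrative, not from a library):

```python
# Rough sketch: partial sum of the Gauss hypergeometric series built from
# rising factorials, truncated at a fixed number of terms (valid for |z| < 1).
import math

def rising(x, n):
    out = 1
    for k in range(n):
        out *= x + k
    return out

def hyp2f1(a, b, c, z, terms=80):
    """Partial sum of sum_n (a)^(n) (b)^(n) / (c)^(n) * z^n / n!."""
    return sum(rising(a, n) * rising(b, n) / rising(c, n) * z ** n / math.factorial(n)
               for n in range(terms))

# Check against the closed form 2F1(1, 1; 2; z) = -ln(1 - z)/z at z = 0.5.
z = 0.5
print(abs(hyp2f1(1, 1, 2, z) + math.log(1 - z) / z) < 1e-12)  # True
```

A fixed truncation is crude but sufficient here, since for |z| well inside the unit disc the tail of the series decays geometrically.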
Note, however, that the hypergeometric function literature typically uses the notation ( a ) n {\displaystyle (a)_{n}} for rising factorials. == Connection coefficients and identities == Falling and rising factorials are closely related to Stirling numbers. Indeed, expanding the product reveals Stirling numbers of the first kind ( x ) n = ∑ k = 0 n s ( n , k ) x k = ∑ k = 0 n [ n k ] ( − 1 ) n − k x k x ( n ) = ∑ k = 0 n [ n k ] x k {\displaystyle {\begin{aligned}(x)_{n}&=\sum _{k=0}^{n}s(n,k)x^{k}\\&=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}(-1)^{n-k}x^{k}\\x^{(n)}&=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}x^{k}\\\end{aligned}}} The inverse relations use Stirling numbers of the second kind x n = ∑ k = 0 n { n k } ( x ) k = ∑ k = 0 n { n k } ( − 1 ) n − k x ( k ) . {\displaystyle {\begin{aligned}x^{n}&=\sum _{k=0}^{n}{\begin{Bmatrix}n\\k\end{Bmatrix}}(x)_{k}\\&=\sum _{k=0}^{n}{\begin{Bmatrix}n\\k\end{Bmatrix}}(-1)^{n-k}x^{(k)}.\end{aligned}}} The falling and rising factorials are related to one another through the Lah numbers L ( n , k ) = ( n − 1 k − 1 ) n ! k ! {\textstyle L(n,k)={\binom {n-1}{k-1}}{\frac {n!}{k!}}} : x ( n ) = ∑ k = 0 n L ( n , k ) ( x ) k ( x ) n = ∑ k = 0 n L ( n , k ) ( − 1 ) n − k x ( k ) {\displaystyle {\begin{aligned}x^{(n)}&=\sum _{k=0}^{n}L(n,k)(x)_{k}\\(x)_{n}&=\sum _{k=0}^{n}L(n,k)(-1)^{n-k}x^{(k)}\end{aligned}}} Since the falling factorials are a basis for the polynomial ring, one can express the product of two of them as a linear combination of falling factorials: ( x ) m ( x ) n = ∑ k = 0 m ( m k ) ( n k ) k ! ⋅ ( x ) m + n − k . {\displaystyle (x)_{m}(x)_{n}=\sum _{k=0}^{m}{\binom {m}{k}}{\binom {n}{k}}k!\cdot (x)_{m+n-k}\ .} The coefficients ( m k ) ( n k ) k ! {\displaystyle {\tbinom {m}{k}}{\tbinom {n}{k}}k!} are called connection coefficients, and have a combinatorial interpretation as the number of ways to identify (or "glue together") k elements each from a set of size m and a set of size n.
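For integer arguments the connection-coefficient formula can be checked directly; a small illustrative sketch:

```python
# Illustrative check of the product formula
#   (x)_m (x)_n = sum_k C(m,k) C(n,k) k! (x)_{m+n-k}
# at integer arguments.
from math import comb, factorial

def falling(x, n):
    out = 1
    for k in range(n):
        out *= x - k
    return out

def product_via_connection(x, m, n):
    """Expand (x)_m (x)_n over falling factorials with connection coefficients."""
    return sum(comb(m, k) * comb(n, k) * factorial(k) * falling(x, m + n - k)
               for k in range(m + 1))

for x in range(-3, 8):
    for m in range(4):
        for n in range(4):
            assert falling(x, m) * falling(x, n) == product_via_connection(x, m, n)
print("connection-coefficient identity verified on a grid")
```

Since both sides are polynomials in x, agreement on enough integer points implies the identity as polynomials.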
There is also a connection formula for the ratio of two rising factorials given by x ( n ) x ( i ) = ( x + i ) ( n − i ) , for n ≥ i . {\displaystyle {\frac {x^{(n)}}{x^{(i)}}}=(x+i)^{(n-i)},\quad {\text{for }}n\geq i.} Additionally, we can expand generalized exponent laws and negative rising and falling powers through the following identities:(p 52) ( x ) m + n = ( x ) m ( x − m ) n = ( x ) n ( x − n ) m x ( m + n ) = x ( m ) ( x + m ) ( n ) = x ( n ) ( x + n ) ( m ) x ( − n ) = Γ ( x − n ) Γ ( x ) = ( x − n − 1 ) ! ( x − 1 ) ! = 1 ( x − n ) ( n ) = 1 ( x − 1 ) n = 1 ( x − 1 ) ( x − 2 ) ⋯ ( x − n ) ( x ) − n = Γ ( x + 1 ) Γ ( x + n + 1 ) = x ! ( x + n ) ! = 1 ( x + n ) n = 1 ( x + 1 ) ( n ) = 1 ( x + 1 ) ( x + 2 ) ⋯ ( x + n ) {\displaystyle {\begin{aligned}(x)_{m+n}&=(x)_{m}(x-m)_{n}=(x)_{n}(x-n)_{m}\\[6pt]x^{(m+n)}&=x^{(m)}(x+m)^{(n)}=x^{(n)}(x+n)^{(m)}\\[6pt]x^{(-n)}&={\frac {\Gamma (x-n)}{\Gamma (x)}}={\frac {(x-n-1)!}{(x-1)!}}={\frac {1}{(x-n)^{(n)}}}={\frac {1}{(x-1)_{n}}}={\frac {1}{(x-1)(x-2)\cdots (x-n)}}\\[6pt](x)_{-n}&={\frac {\Gamma (x+1)}{\Gamma (x+n+1)}}={\frac {x!}{(x+n)!}}={\frac {1}{(x+n)_{n}}}={\frac {1}{(x+1)^{(n)}}}={\frac {1}{(x+1)(x+2)\cdots (x+n)}}\end{aligned}}} Finally, duplication and multiplication formulas for the falling and rising factorials provide the next relations: ( x ) k + m n = x ( k ) m m n ∏ j = 0 m − 1 ( x − k − j m ) n , for m ∈ N x ( k + m n ) = x ( k ) m m n ∏ j = 0 m − 1 ( x + k + j m ) ( n ) , for m ∈ N ( a x + b ) ( n ) = x n ∏ j = 0 n − 1 ( a + b + j x ) , for x ∈ Z + ( 2 x ) ( 2 n ) = 2 2 n x ( n ) ( x + 1 2 ) ( n ) . 
{\displaystyle {\begin{aligned}(x)_{k+mn}&=x^{(k)}m^{mn}\prod _{j=0}^{m-1}\left({\frac {x-k-j}{m}}\right)_{n}\,,&{\text{for }}m&\in \mathbb {N} \\[6pt]x^{(k+mn)}&=x^{(k)}m^{mn}\prod _{j=0}^{m-1}\left({\frac {x+k+j}{m}}\right)^{(n)},&{\text{for }}m&\in \mathbb {N} \\[6pt](ax+b)^{(n)}&=x^{n}\prod _{j=0}^{n-1}\left(a+{\frac {b+j}{x}}\right),&{\text{for }}x&\in \mathbb {Z} ^{+}\\[6pt](2x)^{(2n)}&=2^{2n}x^{(n)}\left(x+{\frac {1}{2}}\right)^{(n)}.\end{aligned}}} == Relation to umbral calculus == The falling factorial occurs in a formula which represents polynomials using the forward difference operator Δ ⁡ f ( x ) = d e f f ( x + 1 ) − f ( x ) , {\displaystyle \ \operatorname {\Delta } f(x)~{\stackrel {\mathrm {def} }{=}}~f(x{+}1)-f(x)\ ,} which in form is an exact analogue to Taylor's theorem: Compare the series expansion from umbral calculus f ( t ) = ∑ n = 0 ∞ 1 n ! Δ x n ⁡ f ( x ) | x = 0 ( t ) n {\displaystyle \qquad f(t)=\sum _{n=0}^{\infty }\ {\frac {1}{\ n!}}\operatorname {\Delta } _{x}^{n}f(x){\bigg \vert }_{x=0}\ (t)_{n}\qquad } with the corresponding series from differential calculus f ( t ) = ∑ n = 0 ∞ 1 n ! [ d d ⁡ x ] n f ( x ) | x = 0 t n . {\displaystyle \qquad f(t)=\sum _{n=0}^{\infty }\ {\frac {1}{\ n!}}\left[{\frac {\ \operatorname {d} }{\operatorname {d} x}}\right]^{n}f(x)\ {\bigg \vert }_{x=0}\ t^{n}~.} In this formula and in many other places, the falling factorial ( x ) n {\displaystyle \ (x)_{n}\ } in the calculus of finite differences plays the role of x n {\displaystyle \ x^{n}\ } in differential calculus. For another example, note the similarity of Δ ⁡ ( x ) n = n ( x ) n − 1 {\displaystyle ~\operatorname {\Delta } (x)_{n}=n\ (x)_{n-1}~} to d d ⁡ x x n = n x n − 1 . {\displaystyle ~{\frac {\ \operatorname {d} }{\operatorname {d} x}}\ x^{n}=n\ x^{n-1}~.} A corresponding relation holds for the rising factorial and the backward difference operator. The study of analogies of this type is known as umbral calculus. 
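The analogy Δ(x)n = n (x)n−1 is easy to verify at integer points; a quick illustrative sketch:

```python
# Quick check of Delta (x)_n = n (x)_{n-1} at integer points,
# where Delta f(x) = f(x+1) - f(x) is the forward difference operator.

def falling(x, n):
    out = 1
    for k in range(n):
        out *= x - k
    return out

for x in range(-5, 6):
    for n in range(1, 6):
        assert falling(x + 1, n) - falling(x, n) == n * falling(x, n - 1)
print("Delta (x)_n = n (x)_{n-1} holds on the grid")
```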
A general theory covering such relations, including the falling and rising factorial functions, is given by the theory of polynomial sequences of binomial type and Sheffer sequences. Falling and rising factorials are Sheffer sequences of binomial type, as shown by the relations: ( a + b ) n = ∑ j = 0 n ( n j ) ( a ) n − j ( b ) j ( a + b ) ( n ) = ∑ j = 0 n ( n j ) a ( n − j ) b ( j ) {\displaystyle \ {\begin{aligned}(a+b)_{n}&=\sum _{j=0}^{n}\ {\binom {n}{j}}\ (a)_{n-j}\ (b)_{j}\ \\[6pt](a+b)^{(n)}&=\sum _{j=0}^{n}\ {\binom {n}{j}}\ a^{(n-j)}\ b^{(j)}\ \end{aligned}}\ } where the coefficients are the same as those in the binomial theorem. Similarly, the generating function of Pochhammer polynomials then amounts to the umbral exponential, ∑ n = 0 ∞ ( x ) n t n n ! = ( 1 + t ) x , {\displaystyle \ \sum _{n=0}^{\infty }\ (x)_{n}\ {\frac {~t^{n}\ }{\ n!}}\ =\ \left(\ 1+t\ \right)^{x}\ ,} since Δ x ⁡ ( 1 + t ) x = t ⋅ ( 1 + t ) x . {\displaystyle \ \operatorname {\Delta } _{x}\left(\ 1+t\ \right)^{x}\ =\ t\cdot \left(\ 1+t\ \right)^{x}~.} == Alternative notations == An alternative notation for the rising factorial x m ¯ ≡ ( x ) + m ≡ ( x ) m = x ( x + 1 ) … ( x + m − 1 ) ⏞ m factors for integer m ≥ 0 {\displaystyle x^{\overline {m}}\equiv (x)_{+m}\equiv (x)_{m}=\overbrace {x(x+1)\ldots (x+m-1)} ^{m{\text{ factors}}}\quad {\text{for integer }}m\geq 0} and for the falling factorial x m _ ≡ ( x ) − m = x ( x − 1 ) … ( x − m + 1 ) ⏞ m factors for integer m ≥ 0 {\displaystyle x^{\underline {m}}\equiv (x)_{-m}=\overbrace {x(x-1)\ldots (x-m+1)} ^{m{\text{ factors}}}\quad {\text{for integer }}m\geq 0} goes back to A. Capelli (1893) and L. Toscano (1939), respectively. Graham, Knuth, and Patashnik(pp 47, 48) propose to pronounce these expressions as " x {\displaystyle x} to the m {\displaystyle m} rising" and " x {\displaystyle x} to the m {\displaystyle m} falling", respectively. 
An alternative notation for the rising factorial x ( n ) {\displaystyle x^{(n)}} is the less common ( x ) n + {\displaystyle (x)_{n}^{+}} . When ( x ) n + {\displaystyle (x)_{n}^{+}} is used to denote the rising factorial, the notation ( x ) n − {\displaystyle (x)_{n}^{-}} is typically used for the ordinary falling factorial, to avoid confusion. == Generalizations == The Pochhammer symbol has a generalized version called the generalized Pochhammer symbol, used in multivariate analysis. There is also a q-analogue, the q-Pochhammer symbol. For any fixed arithmetic function f : N → C {\displaystyle f:\mathbb {N} \rightarrow \mathbb {C} } and symbolic parameters x, t, related generalized factorial products of the form ( x ) n , f , t := ∏ k = 0 n − 1 ( x + f ( k ) t k ) {\displaystyle (x)_{n,f,t}:=\prod _{k=0}^{n-1}\left(x+{\frac {f(k)}{t^{k}}}\right)} may be studied from the point of view of the classes of generalized Stirling numbers of the first kind defined by the following coefficients of the powers of x in the expansions of (x)n,f,t and then by the next corresponding triangular recurrence relation: [ n k ] f , t = [ x k − 1 ] ( x ) n , f , t = f ( n − 1 ) t 1 − n [ n − 1 k ] f , t + [ n − 1 k − 1 ] f , t + δ n , 0 δ k , 0 . {\displaystyle {\begin{aligned}\left[{\begin{matrix}n\\k\end{matrix}}\right]_{f,t}&=\left[x^{k-1}\right](x)_{n,f,t}\\&=f(n-1)t^{1-n}\left[{\begin{matrix}n-1\\k\end{matrix}}\right]_{f,t}+\left[{\begin{matrix}n-1\\k-1\end{matrix}}\right]_{f,t}+\delta _{n,0}\delta _{k,0}.\end{aligned}}} These coefficients satisfy a number of analogous properties to those for the Stirling numbers of the first kind as well as recurrence relations and functional equations related to the f-harmonic numbers, F n ( r ) ( t ) := ∑ k ≤ n t k f ( k ) r . {\displaystyle F_{n}^{(r)}(t):=\sum _{k\leq n}{\frac {t^{k}}{f(k)^{r}}}\,.} == See also == Pochhammer k-symbol Vandermonde identity == References == == External links == Weisstein, Eric W. "Pochhammer Symbol". MathWorld.
Wikipedia:Faltings' annihilator theorem#0
In abstract algebra (specifically commutative ring theory), Faltings' annihilator theorem states: given a finitely generated module M over a Noetherian commutative ring A and ideals I, J, the following are equivalent: depth ⁡ M p + ht ⁡ ( I + p ) / p ≥ n {\displaystyle \operatorname {depth} M_{\mathfrak {p}}+\operatorname {ht} (I+{\mathfrak {p}})/{\mathfrak {p}}\geq n} for any prime ideal p ∈ Spec ⁡ ( A ) − V ( J ) {\displaystyle {\mathfrak {p}}\in \operatorname {Spec} (A)-V(J)} , there is an ideal b {\displaystyle {\mathfrak {b}}} in A such that b ⊃ J {\displaystyle {\mathfrak {b}}\supset J} and b {\displaystyle {\mathfrak {b}}} annihilates the local cohomologies H I i ⁡ ( M ) , 0 ≤ i ≤ n − 1 {\displaystyle \operatorname {H} _{I}^{i}(M),0\leq i\leq n-1} , provided either A has a dualizing complex or is a quotient of a regular ring. The theorem was first proved by Faltings in (Faltings 1981). == References == Faltings, Gerd (1981). "Der Endlichkeitssatz in der lokalen Kohomologie". Mathematische Annalen. 255: 45–56.
Wikipedia:Fang Liu (statistician)#0
Fang Liu is a Chinese-American statistician and data scientist whose research topics include differential privacy, data synthesis, trustworthy statistical learning, Bayesian statistics, regularization, missing data, and applications in biostatistics. She is a Notre Dame Collegiate professor in the Department of Applied and Computational Mathematics and Statistics at the University of Notre Dame. == Education and career == Liu was talented in mathematics as a child, competed in mathematics competitions, and wanted to become a mathematician, but was discouraged from doing so by her parents, who wanted her to become a physician. As a compromise, she studied biology at Peking University, where she earned a bachelor's degree in 1997. She began her graduate studies at Iowa State University intending to study genetics, but quickly switched to a program in statistics, and earned a master's degree there in 1999, and a Ph.D. from the University of Michigan in 2003. Her dissertation, Bayesian Methods for Statistical Disclosure Control in Microdata, involved both data privacy and Bayesian statistics, and was supervised by Roderick J. A. Little. After completing her doctorate, she became a researcher at the Merck Research Laboratories. She returned to academia, joining the Notre Dame faculty, in 2011. Her doctoral students at Notre Dame have included Claire McKay Bowen. == Recognition == Liu was named a Fellow of the American Statistical Association in 2021, "for novel contributions to differentially private synthetic data and Bayesian modeling; for outstanding interdisciplinary research in clinical and public health studies; for leadership in education and training; and for service to the profession". Liu became an Elected Member of the International Statistical Institute in 2024. == References == == External links == Home page Fang Liu publications indexed by Google Scholar
Wikipedia:Fangcheng (mathematics)#0
Fangcheng (sometimes written as fang-cheng or fang cheng) (Chinese: 方程; pinyin: fāngchéng) is the title of the eighth chapter of the Chinese mathematical classic Jiuzhang suanshu (The Nine Chapters on the Mathematical Art) composed by several generations of scholars who flourished during the period from the 10th to the 2nd century BC. This text is one of the earliest surviving mathematical texts from China. Several historians of Chinese mathematics have observed that the term fangcheng is not easy to translate exactly. However, as a first approximation it has been translated as "rectangular arrays" or "square arrays". The term is also used to refer to a particular procedure for solving a certain class of problems discussed in Chapter 8 of The Nine Chapters book. The procedure referred to by the term fangcheng, explained in the eighth chapter of The Nine Chapters, is essentially a procedure to find the solution of systems of n equations in n unknowns and is equivalent to certain similar procedures in modern linear algebra. The earliest recorded fangcheng procedure is similar to what we now call Gaussian elimination. The fangcheng procedure was popular in ancient China and was transmitted to Japan. It is possible that this procedure was also transmitted to Europe and served as a precursor to the modern theory of matrices, Gaussian elimination, and determinants. It is well known that there was not much work on linear algebra in Greece or Europe prior to Gottfried Leibniz's studies of elimination and determinants, beginning in 1678. Moreover, Leibniz was a Sinophile and was interested in the translations of such Chinese texts as were available to him. However, according to Grcar, the solution of linear equations by elimination was invented independently in several cultures across Eurasia, beginning in antiquity, and in Europe definite examples of the procedure were published as early as the late Renaissance (in the 1550s). 
It is quite possible that by then the procedure was considered elementary by mathematicians and in no need of explanation for professionals, so we may never learn its detailed history beyond the fact that it was by that time practiced in several places in Europe. == On the meaning of fangcheng == There is no ambiguity in the meaning of the first character fang. It means "rectangle" or "square." But different interpretations are given to the second character cheng: The earliest extant commentary, by Liu Hui, dated 263 CE, defines cheng as "measures," citing the non-mathematical term kecheng, which means "collecting taxes according to tax rates." Liu then defines fangcheng as a "rectangle of measures." The term kecheng, however, is not a mathematical term and it appears nowhere else in the Nine Chapters. Outside of mathematics, kecheng is a term most commonly used for collecting taxes. Li Ji's "Nine Chapters on the Mathematical Arts: Pronunciations and Meanings" also glosses cheng as "measure," again using a nonmathematical term, kelü, commonly used for taxation. This is how Li Ji defines fangcheng: "Fang means [on the] left and right. Cheng means terms of a ratio. Terms of a ratio [on the] left and right, combining together numerous objects, therefore [it] is called a "rectangular array"." Yang Hui's "Nine Chapters on the Mathematical Arts with Detailed Explanations" defines cheng as a general term for measuring weight, height, and length. Detailed Explanations states: What is called "rectangular" (fang) is the shape of the numbers; "measure" (cheng) is the general term for [all forms of] measurement, also a method for equating weights, lengths, and volumes, especially referring to measuring clearly and distinctly the greater and lesser. Since the end of the 19th century, in Chinese mathematical literature the term fangcheng has been used to denote an "equation." However, as already noted, the traditional meaning of the term is very different from "equation." 
== Contents of the chapter titled Fangcheng == The eighth chapter titled Fangcheng of the Nine Chapters book contains 18 problems. (There are a total of 288 problems in the whole book.) Each of these 18 problems reduces to a problem of solving a system of simultaneous linear equations. Except for one problem, namely Problem 13, all the problems are determinate in the sense that the number of unknowns is the same as the number of equations. There are problems involving 2, 3, 4 and 5 unknowns. The table below shows how many unknowns there are in the various problems: The presentations of all the 18 problems (except Problem 1 and Problem 3) follow a common pattern: First the problem is stated. Then the answer to the problem is given. Finally the method of obtaining the answer is indicated. === On Problem 1 === Problem: 3 bundles of high-quality rice straws, 2 bundles of mid-quality rice straws and 1 bundle of low-quality rice straw produce 39 units of rice 2 bundles of high-quality rice straws, 3 bundles of mid-quality rice straws and 1 bundle of low-quality rice straw produce 34 units of rice 1 bundle of high-quality rice straw, 2 bundles of mid-quality rice straws and 3 bundles of low-quality rice straws produce 26 units of rice Question: how many units of rice can high, mid and low quality rice straw produce respectively? Solution: High-quality rice straw each produces ⁠9+1/4⁠ units of rice Mid-quality rice straw each produces ⁠4+1/4⁠ units of rice Low-quality rice straw each produces ⁠2+3/4⁠ units of rice The presentation of Problem 1 contains a description (not a crisp indication) of the procedure for obtaining the solution. The procedure has been referred to as fangcheng shu, which means "fangcheng procedure." The remaining problems all give the instruction "follow the fangcheng procedure", sometimes followed by the instruction to use the "procedure for positive and negative numbers". 
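In modern notation, Problem 1 is the system 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26. A short Python sketch (plain Gaussian elimination with exact fractions, equivalent in effect to, but not a reconstruction of, the Nine Chapters' column-based procedure) recovers the stated answer:

```python
from fractions import Fraction

# Problem 1 of the Fangcheng chapter as an augmented matrix [A | b]:
#   3x + 2y + z = 39,  2x + 3y + z = 34,  x + 2y + 3z = 26
rows = [[Fraction(v) for v in r] for r in
        [[3, 2, 1, 39], [2, 3, 1, 34], [1, 2, 3, 26]]]

n = 3
# Forward elimination: clear each column below its pivot.
for col in range(n):
    pivot = rows[col][col]
    for r in range(col + 1, n):
        factor = rows[r][col] / pivot
        rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]

# Back substitution.
x = [Fraction(0)] * n
for i in reversed(range(n)):
    x[i] = (rows[i][n] - sum(rows[i][j] * x[j]
                             for j in range(i + 1, n))) / rows[i][i]

# x = 37/4, 17/4, 11/4: i.e. 9+1/4, 4+1/4 and 2+3/4 units of rice per
# bundle of high-, mid- and low-quality straw, matching the text.
print(x)
```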
=== On Problem 3 === There is also a special procedure, called "procedure for positive and negative numbers" (zheng fu shu) for handling negative numbers. This procedure is explained as part of the method for solving Problem 3. === On Problem 13 === In the collection of these 18 problems Problem 13 is very special. In it there are 6 unknowns but only 5 equations and so Problem 13 is indeterminate and does not have a unique solution. This is the earliest known reference to a system of linear equations in which the number of unknowns exceeds the number of equations. As per a suggestion of Jean-Claude Martzloff, a historian of Chinese mathematics, Roger Hart has named this problem "the well problem." == References == == Further reading == Christine Andrews-Larson (2015). "Roots of Linear Algebra: An Historical Exploration of Linear Systems". PRIMUS. 25 (6): 507–528. doi:10.1080/10511970.2015.1027975. S2CID 122250602. Kangshen Shen; John N. Crossley; Anthony Wah-Cheung Lun, Hui Liu (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. pp. 386–440. ISBN 978-0-19-853936-0. Retrieved 7 December 2016.
Wikipedia:Fatma Moalla#0
Fatma Moalla (born January 14, 1939) is a Tunisian mathematician who has published research on Finsler spaces and geometry and worked as an assistant at Faculté des Sciences Mathématique, Physiques et Naturelles. The International Fatma Moalla Award for the Popularization of Mathematics is now given in her honor. == Biography == Fatma Moalla was born in Tunis, Tunisia on January 14, 1939. Her father's name was Mohamed Moalla and he worked selling books. She attended secondary school at Lycée de la Rue du Pacha. In 1956, Moalla switched schools and began attending Lycée Carnot of Tunis where she chose to specialize in mathematics. Moalla then attended university at "Institut des Hautes Études de Tunis...[and] She graduated with her mathematics degree in June 1960." Moalla is the first Tunisian to have been awarded the Agrégation in Mathematics in France in 1961 and the first Tunisian woman to be awarded a doctorate in Mathematics in France in 1965. Later she was placed into the National Union of Tunisian Women. == Awards and achievements == The International Fatma Moalla Award for the Popularization of Mathematics is now given each year in honor of Fatma Moalla. == References ==
Wikipedia:Fatos Kongoli#0
Fatos Kongoli (born January 12, 1944) is an Albanian novelist. == Biography == Kongoli was born and raised in Elbasan and studied at the Qemal Stafa High School, in Tirana, Albania. He studied mathematics at university in China during the Sino-Albanian split. During the communist era in Albania, he was employed as a mathematician and did not publish any major works. == Works == Fatos Kongoli's first major novel, The Loser (I humburi, Tirana 1992; English edition, 2007), is set in March 1991, featuring a former university student, Thesar Kumi, who reflects on his life in Hoxhaist Albania and contemplates the futility of struggle and ambition under totalitarian communism. It was first published in 1992, in 10,000 copies, a relatively high number, and found success among the Albanian reading public. Among Kongoli's subsequent novels are: Kufoma, 1994 (The Corpse), which elaborated on his first novel's themes; Dragoi i fildishtë, 1999 (The Ivory Dragon), which focuses primarily on the life of an Albanian student in China in the 1960s; and Lëkura e qenit, 2003, a love story highlighting forgotten affection. Kongoli's novels have been translated into French, German, Italian, Greek, Esperanto, Spanish and Slovak. == Sources == Albanian literature from Robert Elsie == Books by Fatos Kongoli == gjemite e mbytura == References ==
Wikipedia:Faugère's F4 and F5 algorithms#0
In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel. The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis: If Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev then we will construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev. This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences. == Implementations == The Faugère F4 algorithm is implemented in FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple; in the Maple computer algebra system, as the option method=fgb of the function Groebner[gbasis]; in the Magma computer algebra system; and in the SageMath computer algebra system. Study versions of the Faugère F5 algorithm are implemented in the SINGULAR computer algebra system, in the SageMath computer algebra system, and in the SymPy Python package. == Applications == The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*. == References == Faugère, J.-C. (June 1999). 
"A new efficient algorithm for computing Gröbner bases (F4)" (PDF). Journal of Pure and Applied Algebra. 139 (1): 61–88. doi:10.1016/S0022-4049(99)00005-5. ISSN 0022-4049. Faugère, J.-C. (July 2002). "A new efficient algorithm for computing Gröbner bases without reduction to zero ( F 5 )". Proceedings of the 2002 international symposium on Symbolic and algebraic computation (PDF). ACM Press. pp. 75–83. CiteSeerX 10.1.1.188.651. doi:10.1145/780506.780516. ISBN 978-1-58113-484-1. S2CID 15833106. Till Stegers Faugère's F5 Algorithm Revisited (alternative link). Diplom-Mathematiker Thesis, advisor Johannes Buchmann, Technische Universität Darmstadt, September 2005 (revised April 27, 2007). Many references, including links to available implementations. == External links == Faugère's home page (includes pdf reprints of additional papers) An introduction to the F4 algorithm.
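As a small illustration of the SymPy implementation mentioned under Implementations, SymPy's groebner function accepts method='f5b' (its study version of F5) alongside the default Buchberger method; a sketch on a toy system:

```python
from sympy import groebner, symbols

x, y = symbols('x y')
F = [x*y - 1, y**2 - x]

# Compute a reduced lexicographic Groebner basis with SymPy's F5B
# variant, and compare against the default Buchberger algorithm.
g_f5b = groebner(F, x, y, order='lex', method='f5b')
g_buch = groebner(F, x, y, order='lex', method='buchberger')

# Both methods return the same reduced basis for the ideal.
assert list(g_f5b.exprs) == list(g_buch.exprs)
print(list(g_f5b.exprs))
```

For this ideal the reduced lex basis is [x - y**2, y**3 - 1]: substituting x = y² into xy − 1 gives y³ − 1.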
Wikipedia:Faulhaber's formula#0
In mathematics, Faulhaber's formula, named after the early 17th century mathematician Johann Faulhaber, expresses the sum of the p-th powers of the first n positive integers ∑ k = 1 n k p = 1 p + 2 p + 3 p + ⋯ + n p {\displaystyle \sum _{k=1}^{n}k^{p}=1^{p}+2^{p}+3^{p}+\cdots +n^{p}} as a polynomial in n. In modern notation, Faulhaber's formula is ∑ k = 1 n k p = 1 p + 1 ∑ r = 0 p ( p + 1 r ) B r n p + 1 − r . {\displaystyle \sum _{k=1}^{n}k^{p}={\frac {1}{p+1}}\sum _{r=0}^{p}{\binom {p+1}{r}}B_{r}n^{p+1-r}.} Here, ( p + 1 r ) {\textstyle {\binom {p+1}{r}}} is the binomial coefficient "p + 1 choose r", and the Bj are the Bernoulli numbers with the convention that B 1 = + 1 2 {\textstyle B_{1}=+{\frac {1}{2}}} . == The result: Faulhaber's formula == Faulhaber's formula concerns expressing the sum of the p-th powers of the first n positive integers ∑ k = 1 n k p = 1 p + 2 p + 3 p + ⋯ + n p {\displaystyle \sum _{k=1}^{n}k^{p}=1^{p}+2^{p}+3^{p}+\cdots +n^{p}} as a (p + 1)th-degree polynomial function of n. The first few examples are well known. For p = 0, we have ∑ k = 1 n k 0 = ∑ k = 1 n 1 = n . {\displaystyle \sum _{k=1}^{n}k^{0}=\sum _{k=1}^{n}1=n.} For p = 1, we have the triangular numbers ∑ k = 1 n k 1 = ∑ k = 1 n k = n ( n + 1 ) 2 = 1 2 ( n 2 + n ) . {\displaystyle \sum _{k=1}^{n}k^{1}=\sum _{k=1}^{n}k={\frac {n(n+1)}{2}}={\frac {1}{2}}(n^{2}+n).} For p = 2, we have the square pyramidal numbers ∑ k = 1 n k 2 = n ( n + 1 ) ( 2 n + 1 ) 6 = 1 3 ( n 3 + 3 2 n 2 + 1 2 n ) . {\displaystyle \sum _{k=1}^{n}k^{2}={\frac {n(n+1)(2n+1)}{6}}={\frac {1}{3}}(n^{3}+{\tfrac {3}{2}}n^{2}+{\tfrac {1}{2}}n).} The coefficients of Faulhaber's formula in its general form involve the Bernoulli numbers Bj. 
The Bernoulli numbers begin B 0 = 1 B 1 = 1 2 B 2 = 1 6 B 3 = 0 B 4 = − 1 30 B 5 = 0 B 6 = 1 42 B 7 = 0 , {\displaystyle {\begin{aligned}B_{0}&=1&B_{1}&={\tfrac {1}{2}}&B_{2}&={\tfrac {1}{6}}&B_{3}&=0\\B_{4}&=-{\tfrac {1}{30}}&B_{5}&=0&B_{6}&={\tfrac {1}{42}}&B_{7}&=0,\end{aligned}}} where here we use the convention that B 1 = + 1 2 {\textstyle B_{1}=+{\frac {1}{2}}} . The Bernoulli numbers have various definitions (see Bernoulli number#Definitions), such as that they are the coefficients of the exponential generating function t 1 − e − t = t 2 ( coth ⁡ t 2 + 1 ) = ∑ k = 0 ∞ B k t k k ! . {\displaystyle {\frac {t}{1-\mathrm {e} ^{-t}}}={\frac {t}{2}}\left(\operatorname {coth} {\frac {t}{2}}+1\right)=\sum _{k=0}^{\infty }B_{k}{\frac {t^{k}}{k!}}.} Then Faulhaber's formula is that ∑ k = 1 n k p = 1 p + 1 ∑ k = 0 p ( p + 1 k ) B k n p − k + 1 . {\displaystyle \sum _{k=1}^{n}k^{p}={\frac {1}{p+1}}\sum _{k=0}^{p}{\binom {p+1}{k}}B_{k}n^{p-k+1}.} Here, the Bj are the Bernoulli numbers as above, and ( p + 1 k ) = ( p + 1 ) ! ( p + 1 − k ) ! k ! = ( p + 1 ) p ( p − 1 ) ⋯ ( p − k + 3 ) ( p − k + 2 ) k ( k − 1 ) ( k − 2 ) ⋯ 2 ⋅ 1 {\displaystyle {\binom {p+1}{k}}={\frac {(p+1)!}{(p+1-k)!\,k!}}={\frac {(p+1)p(p-1)\cdots (p-k+3)(p-k+2)}{k(k-1)(k-2)\cdots 2\cdot 1}}} is the binomial coefficient "p + 1 choose k". == Examples == So, for example, one has for p = 4, 1 4 + 2 4 + 3 4 + ⋯ + n 4 = 1 5 ∑ j = 0 4 ( 5 j ) B j n 5 − j = 1 5 ( B 0 n 5 + 5 B 1 n 4 + 10 B 2 n 3 + 10 B 3 n 2 + 5 B 4 n ) = 1 5 ( n 5 + 5 2 n 4 + 5 3 n 3 − 1 6 n ) . 
{\displaystyle {\begin{aligned}1^{4}+2^{4}+3^{4}+\cdots +n^{4}&={\frac {1}{5}}\sum _{j=0}^{4}{5 \choose j}B_{j}n^{5-j}\\&={\frac {1}{5}}\left(B_{0}n^{5}+5B_{1}n^{4}+10B_{2}n^{3}+10B_{3}n^{2}+5B_{4}n\right)\\&={\frac {1}{5}}\left(n^{5}+{\tfrac {5}{2}}n^{4}+{\tfrac {5}{3}}n^{3}-{\tfrac {1}{6}}n\right).\end{aligned}}} The first seven examples of Faulhaber's formula are ∑ k = 1 n k 0 = 1 1 ( n ) ∑ k = 1 n k 1 = 1 2 ( n 2 + 2 2 n ) ∑ k = 1 n k 2 = 1 3 ( n 3 + 3 2 n 2 + 3 6 n ) ∑ k = 1 n k 3 = 1 4 ( n 4 + 4 2 n 3 + 6 6 n 2 + 0 n ) ∑ k = 1 n k 4 = 1 5 ( n 5 + 5 2 n 4 + 10 6 n 3 + 0 n 2 − 5 30 n ) ∑ k = 1 n k 5 = 1 6 ( n 6 + 6 2 n 5 + 15 6 n 4 + 0 n 3 − 15 30 n 2 + 0 n ) ∑ k = 1 n k 6 = 1 7 ( n 7 + 7 2 n 6 + 21 6 n 5 + 0 n 4 − 35 30 n 3 + 0 n 2 + 7 42 n ) . {\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{0}&={\frac {1}{1}}\,{\big (}n{\big )}\\\sum _{k=1}^{n}k^{1}&={\frac {1}{2}}\,{\big (}n^{2}+{\tfrac {2}{2}}n{\big )}\\\sum _{k=1}^{n}k^{2}&={\frac {1}{3}}\,{\big (}n^{3}+{\tfrac {3}{2}}n^{2}+{\tfrac {3}{6}}n{\big )}\\\sum _{k=1}^{n}k^{3}&={\frac {1}{4}}\,{\big (}n^{4}+{\tfrac {4}{2}}n^{3}+{\tfrac {6}{6}}n^{2}+0n{\big )}\\\sum _{k=1}^{n}k^{4}&={\frac {1}{5}}\,{\big (}n^{5}+{\tfrac {5}{2}}n^{4}+{\tfrac {10}{6}}n^{3}+0n^{2}-{\tfrac {5}{30}}n{\big )}\\\sum _{k=1}^{n}k^{5}&={\frac {1}{6}}\,{\big (}n^{6}+{\tfrac {6}{2}}n^{5}+{\tfrac {15}{6}}n^{4}+0n^{3}-{\tfrac {15}{30}}n^{2}+0n{\big )}\\\sum _{k=1}^{n}k^{6}&={\frac {1}{7}}\,{\big (}n^{7}+{\tfrac {7}{2}}n^{6}+{\tfrac {21}{6}}n^{5}+0n^{4}-{\tfrac {35}{30}}n^{3}+0n^{2}+{\tfrac {7}{42}}n{\big )}.\end{aligned}}} == History == === Ancient period === The history of the problem begins in antiquity and coincides with that of some of its special cases. The case p = 1 {\displaystyle p=1} coincides with that of the calculation of the arithmetic series, the sum of the first n {\displaystyle n} values of an arithmetic progression. 
This problem is quite simple but the case already known by the Pythagorean school for its connection with triangular numbers is historically interesting: 1 + 2 + ⋯ + n = 1 2 n 2 + 1 2 n , {\displaystyle 1+2+\dots +n={\frac {1}{2}}n^{2}+{\frac {1}{2}}n,} polynomial S 1 , 1 1 ( n ) {\displaystyle S_{1,1}^{1}(n)} calculating the sum of the first n {\displaystyle n} natural numbers. For m > 1 , {\displaystyle m>1,} the first cases encountered in the history of mathematics are: 1 + 3 + ⋯ + 2 n − 1 = n 2 , {\displaystyle 1+3+\dots +2n-1=n^{2},} polynomial S 1 , 2 1 ( n ) {\displaystyle S_{1,2}^{1}(n)} calculating the sum of the first n {\displaystyle n} successive odds forming a square. This property was probably well known to the Pythagoreans themselves, who, in constructing their figured numbers, had to add each time a gnomon consisting of an odd number of points to obtain the next perfect square. 1 2 + 2 2 + … + n 2 = 1 3 n 3 + 1 2 n 2 + 1 6 n , {\displaystyle 1^{2}+2^{2}+\ldots +n^{2}={\frac {1}{3}}n^{3}+{\frac {1}{2}}n^{2}+{\frac {1}{6}}n,} polynomial S 1 , 1 2 ( n ) {\displaystyle S_{1,1}^{2}(n)} calculating the sum of the squares of the successive integers. This property is demonstrated in On Spirals, a work of Archimedes. 1 3 + 2 3 + … + n 3 = 1 4 n 4 + 1 2 n 3 + 1 4 n 2 , {\displaystyle 1^{3}+2^{3}+\ldots +n^{3}={\frac {1}{4}}n^{4}+{\frac {1}{2}}n^{3}+{\frac {1}{4}}n^{2},} polynomial S 1 , 1 3 ( n ) {\displaystyle S_{1,1}^{3}(n)} calculating the sum of the cubes of the successive integers. This is a corollary of a theorem of Nicomachus of Gerasa. The set S 1 , 1 m ( n ) {\displaystyle S_{1,1}^{m}(n)} of the cases, to which the two preceding polynomials belong, constitutes the classical problem of powers of successive integers. === Middle period === Over time, many other mathematicians became interested in the problem and made various contributions to its solution. 
These include Aryabhata, Al-Karaji, Ibn al-Haytham, Thomas Harriot, Johann Faulhaber, Pierre de Fermat and Blaise Pascal, who recursively solved the problem of the sum of powers of successive integers by considering an identity that allowed one to obtain a polynomial of degree m + 1 {\displaystyle m+1} from knowledge of the previous ones. Faulhaber's formula is also called Bernoulli's formula. Faulhaber did not know the properties of the coefficients later discovered by Bernoulli. Rather, he knew at least the first 17 cases, as well as the existence of the Faulhaber polynomials for odd powers described below. In 1713, Jacob Bernoulli published under the title Summae Potestatum an expression of the sum of the p powers of the n first integers as a (p + 1)th-degree polynomial function of n, with coefficients involving numbers Bj, now called Bernoulli numbers: ∑ k = 1 n k p = n p + 1 p + 1 + 1 2 n p + 1 p + 1 ∑ j = 2 p ( p + 1 j ) B j n p + 1 − j . {\displaystyle \sum _{k=1}^{n}k^{p}={\frac {n^{p+1}}{p+1}}+{\frac {1}{2}}n^{p}+{1 \over p+1}\sum _{j=2}^{p}{p+1 \choose j}B_{j}n^{p+1-j}.} Introducing also the first two Bernoulli numbers (which Bernoulli did not), the previous formula becomes ∑ k = 1 n k p = 1 p + 1 ∑ j = 0 p ( p + 1 j ) B j n p + 1 − j , {\displaystyle \sum _{k=1}^{n}k^{p}={1 \over p+1}\sum _{j=0}^{p}{p+1 \choose j}B_{j}n^{p+1-j},} using the Bernoulli number of the second kind for which B 1 = 1 2 {\textstyle B_{1}={\frac {1}{2}}} , or ∑ k = 1 n k p = 1 p + 1 ∑ j = 0 p ( − 1 ) j ( p + 1 j ) B j − n p + 1 − j , {\displaystyle \sum _{k=1}^{n}k^{p}={1 \over p+1}\sum _{j=0}^{p}(-1)^{j}{p+1 \choose j}B_{j}^{-}n^{p+1-j},} using the Bernoulli number of the first kind for which B 1 − = − 1 2 . {\textstyle B_{1}^{-}=-{\frac {1}{2}}.} A rigorous proof of these formulas and Faulhaber's assertion that such formulas would exist for all odd powers took until Carl Jacobi (1834), two centuries later. 
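Bernoulli's formula can be checked directly with exact rational arithmetic. A minimal Python sketch using the convention B1 = +1/2 (the function names are our own):

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """Bernoulli numbers B_0..B_m with the convention B_1 = +1/2.

    Computed from the standard recurrence for the first-kind numbers
    (B_1 = -1/2) followed by the sign flip B_j^+ = (-1)^j B_j^-.
    """
    B = [Fraction(1)] + [Fraction(0)] * m
    for n in range(1, m + 1):
        B[n] = -sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1)
    return [(-1) ** j * B[j] for j in range(m + 1)]

def power_sum(p, n):
    """sum_{k=1}^{n} k^p evaluated via Faulhaber's formula."""
    B = bernoulli_plus(p)
    return sum(comb(p + 1, j) * B[j] * n ** (p + 1 - j)
               for j in range(p + 1)) / (p + 1)

# The closed form agrees with the brute-force sum:
assert power_sum(4, 10) == sum(k**4 for k in range(1, 11))  # 25333
```

The recurrence used is the defining identity for the first-kind numbers (B₀ = 1, B₁⁻ = −1/2, B₂ = 1/6, …); only the odd index 1 changes sign between the two conventions.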
Jacobi benefited from the progress of mathematical analysis, using the expansion in infinite series of the exponential function that generates the Bernoulli numbers. === Modern period === In 1982 A.W.F. Edwards published an article in which he showed that Pascal's identity can be expressed by means of triangular matrices containing Pascal's triangle deprived of the last element of each line: ( n n 2 n 3 n 4 n 5 ) = ( 1 0 0 0 0 1 2 0 0 0 1 3 3 0 0 1 4 6 4 0 1 5 10 10 5 ) ( n ∑ k = 0 n − 1 k 1 ∑ k = 0 n − 1 k 2 ∑ k = 0 n − 1 k 3 ∑ k = 0 n − 1 k 4 ) {\displaystyle {\begin{pmatrix}n\\n^{2}\\n^{3}\\n^{4}\\n^{5}\\\end{pmatrix}}={\begin{pmatrix}1&0&0&0&0\\1&2&0&0&0\\1&3&3&0&0\\1&4&6&4&0\\1&5&10&10&5\end{pmatrix}}{\begin{pmatrix}n\\\sum _{k=0}^{n-1}k^{1}\\\sum _{k=0}^{n-1}k^{2}\\\sum _{k=0}^{n-1}k^{3}\\\sum _{k=0}^{n-1}k^{4}\\\end{pmatrix}}} The example is limited to a fifth-order matrix but is easily extended to higher orders. The equation can be written as: N → = A S → {\displaystyle {\vec {N}}=A{\vec {S}}} and multiplying the two sides of the equation on the left by A − 1 {\displaystyle A^{-1}} , the inverse of the matrix A, we obtain A − 1 N → = S → {\displaystyle A^{-1}{\vec {N}}={\vec {S}}} which allows one to arrive directly at the polynomial coefficients without using the Bernoulli numbers. Other authors after Edwards, dealing with various aspects of the power sum problem, have taken the matrix path, developing in their articles useful tools such as the Vandermonde vector. Other researchers continue to explore the traditional analytic route and generalize the problem of the sum of successive integers to any geometric progression. == Proof with exponential generating function == Let S p ( n ) = ∑ k = 1 n k p , {\displaystyle S_{p}(n)=\sum _{k=1}^{n}k^{p},} denote the sum under consideration for integer p ≥ 0. 
{\displaystyle p\geq 0.} Define the following exponential generating function with (initially) indeterminate z {\displaystyle z} G ( z , n ) = ∑ p = 0 ∞ S p ( n ) 1 p ! z p . {\displaystyle G(z,n)=\sum _{p=0}^{\infty }S_{p}(n){\frac {1}{p!}}z^{p}.} We find G ( z , n ) = ∑ p = 0 ∞ ∑ k = 1 n 1 p ! ( k z ) p = ∑ k = 1 n e k z = e z ⋅ 1 − e n z 1 − e z , = 1 − e n z e − z − 1 . {\displaystyle {\begin{aligned}G(z,n)=&\sum _{p=0}^{\infty }\sum _{k=1}^{n}{\frac {1}{p!}}(kz)^{p}=\sum _{k=1}^{n}e^{kz}=e^{z}\cdot {\frac {1-e^{nz}}{1-e^{z}}},\\=&{\frac {1-e^{nz}}{e^{-z}-1}}.\end{aligned}}} This is an entire function in z {\displaystyle z} so that z {\displaystyle z} can be taken to be any complex number. We next recall the exponential generating function for the Bernoulli polynomials B j ( x ) {\displaystyle B_{j}(x)} z e z x e z − 1 = ∑ j = 0 ∞ B j ( x ) z j j ! , {\displaystyle {\frac {ze^{zx}}{e^{z}-1}}=\sum _{j=0}^{\infty }B_{j}(x){\frac {z^{j}}{j!}},} where B j = B j ( 0 ) {\displaystyle B_{j}=B_{j}(0)} denotes the Bernoulli number with the convention B 1 = − 1 2 {\displaystyle B_{1}=-{\frac {1}{2}}} . This may be converted to a generating function with the convention B 1 + = 1 2 {\displaystyle B_{1}^{+}={\frac {1}{2}}} by the addition of j {\displaystyle j} to the coefficient of x j − 1 {\displaystyle x^{j-1}} in each B j ( x ) {\displaystyle B_{j}(x)} , see Bernoulli polynomials#Explicit formula for example. B 0 {\displaystyle B_{0}} does not need to be changed. ∑ j = 0 ∞ B j + ( x ) z j j ! = z e z x e z − 1 + ∑ j = 1 ∞ j x j − 1 z j j ! = z e z x e z − 1 + ∑ j = 1 ∞ x j − 1 z j ( j − 1 ) ! 
= z e z x e z − 1 + z e z x = z e z x + z e z e z x − z e z x e z − 1 = z e z x 1 − e − z {\displaystyle {\begin{aligned}\sum _{j=0}^{\infty }B_{j}^{+}(x){\frac {z^{j}}{j!}}\\=&{\frac {ze^{zx}}{e^{z}-1}}+\sum _{j=1}^{\infty }jx^{j-1}{\frac {z^{j}}{j!}}\\=&{\frac {ze^{zx}}{e^{z}-1}}+\sum _{j=1}^{\infty }x^{j-1}{\frac {z^{j}}{(j-1)!}}\\=&{\frac {ze^{zx}}{e^{z}-1}}+ze^{zx}\\=&{\frac {ze^{zx}+ze^{z}e^{zx}-ze^{zx}}{e^{z}-1}}\\=&{\frac {ze^{zx}}{1-e^{-z}}}\end{aligned}}} so that ∑ j = 0 ∞ B j + ( x ) z j j ! − ∑ j = 0 ∞ B j + ( 0 ) z j j ! = z e z x 1 − e − z − z 1 − e − z = z G ( z , n ) {\displaystyle \sum _{j=0}^{\infty }B_{j}^{+}(x){\frac {z^{j}}{j!}}-\sum _{j=0}^{\infty }B_{j}^{+}(0){\frac {z^{j}}{j!}}={\frac {ze^{zx}}{1-e^{-z}}}-{\frac {z}{1-e^{-z}}}=zG(z,n)} It follows that S p ( n ) = B p + 1 + ( n ) − B p + 1 + ( 0 ) p + 1 {\displaystyle S_{p}(n)={\frac {B_{p+1}^{+}(n)-B_{p+1}^{+}(0)}{p+1}}} for all p {\displaystyle p} . == Faulhaber polynomials == The term Faulhaber polynomials is used by some authors to refer to another polynomial sequence related to that given above. Write a = ∑ k = 1 n k = n ( n + 1 ) 2 . {\displaystyle a=\sum _{k=1}^{n}k={\frac {n(n+1)}{2}}.} Faulhaber observed that if p is odd then ∑ k = 1 n k p {\textstyle \sum _{k=1}^{n}k^{p}} is a polynomial function of a. For p = 1, it is clear that ∑ k = 1 n k 1 = ∑ k = 1 n k = n ( n + 1 ) 2 = a . {\displaystyle \sum _{k=1}^{n}k^{1}=\sum _{k=1}^{n}k={\frac {n(n+1)}{2}}=a.} For p = 3, the result that ∑ k = 1 n k 3 = n 2 ( n + 1 ) 2 4 = a 2 {\displaystyle \sum _{k=1}^{n}k^{3}={\frac {n^{2}(n+1)^{2}}{4}}=a^{2}} is known as Nicomachus's theorem. 
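Nicomachus's theorem, and the analogous odd-power identities in a that follow, can be spot-checked numerically; a brief Python sketch:

```python
# Spot-check two Faulhaber-polynomial identities in a = n(n+1)/2:
#   sum_{k<=n} k^3 = a^2            (Nicomachus's theorem)
#   sum_{k<=n} k^5 = (4a^3 - a^2)/3
for n in range(1, 200):
    a = n * (n + 1) // 2
    assert sum(k**3 for k in range(1, n + 1)) == a**2
    assert 3 * sum(k**5 for k in range(1, n + 1)) == 4 * a**3 - a**2
print("verified for n = 1..199")
```

Both sides are integer-valued, so the second identity is checked after clearing the denominator 3.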
Further, we have ∑ k = 1 n k 5 = 4 a 3 − a 2 3 ∑ k = 1 n k 7 = 6 a 4 − 4 a 3 + a 2 3 ∑ k = 1 n k 9 = 16 a 5 − 20 a 4 + 12 a 3 − 3 a 2 5 ∑ k = 1 n k 11 = 16 a 6 − 32 a 5 + 34 a 4 − 20 a 3 + 5 a 2 3 {\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{5}&={\frac {4a^{3}-a^{2}}{3}}\\\sum _{k=1}^{n}k^{7}&={\frac {6a^{4}-4a^{3}+a^{2}}{3}}\\\sum _{k=1}^{n}k^{9}&={\frac {16a^{5}-20a^{4}+12a^{3}-3a^{2}}{5}}\\\sum _{k=1}^{n}k^{11}&={\frac {16a^{6}-32a^{5}+34a^{4}-20a^{3}+5a^{2}}{3}}\end{aligned}}} (see OEIS: A000537, OEIS: A000539, OEIS: A000541, OEIS: A007487, OEIS: A123095). More generally, ∑ k = 1 n k 2 m + 1 = 1 2 2 m + 2 ( 2 m + 2 ) ∑ q = 0 m ( 2 m + 2 2 q ) ( 2 − 2 2 q ) B 2 q [ ( 8 a + 1 ) m + 1 − q − 1 ] . {\displaystyle \sum _{k=1}^{n}k^{2m+1}={\frac {1}{2^{2m+2}(2m+2)}}\sum _{q=0}^{m}{\binom {2m+2}{2q}}(2-2^{2q})~B_{2q}~\left[(8a+1)^{m+1-q}-1\right].} Some authors call the polynomials in a on the right-hand sides of these identities Faulhaber polynomials. These polynomials are divisible by a2 because the Bernoulli number Bj is 0 for odd j > 1. Inversely, writing for simplicity s j := ∑ k = 1 n k j {\displaystyle s_{j}:=\sum _{k=1}^{n}k^{j}} , we have 4 a 3 = 3 s 5 + s 3 8 a 4 = 4 s 7 + 4 s 5 16 a 5 = 5 s 9 + 10 s 7 + s 5 {\displaystyle {\begin{aligned}4a^{3}&=3s_{5}+s_{3}\\8a^{4}&=4s_{7}+4s_{5}\\16a^{5}&=5s_{9}+10s_{7}+s_{5}\end{aligned}}} and generally 2 m − 1 a m = ∑ j > 0 ( m 2 j − 1 ) s 2 m − 2 j + 1 . {\displaystyle 2^{m-1}a^{m}=\sum _{j>0}{\binom {m}{2j-1}}s_{2m-2j+1}.} Faulhaber also knew that if a sum for an odd power is given by ∑ k = 1 n k 2 m + 1 = c 1 a 2 + c 2 a 3 + ⋯ + c m a m + 1 {\displaystyle \sum _{k=1}^{n}k^{2m+1}=c_{1}a^{2}+c_{2}a^{3}+\cdots +c_{m}a^{m+1}} then the sum for the even power just below is given by ∑ k = 1 n k 2 m = n + 1 2 2 m + 1 ( 2 c 1 a + 3 c 2 a 2 + ⋯ + ( m + 1 ) c m a m ) . 
{\displaystyle \sum _{k=1}^{n}k^{2m}={\frac {n+{\frac {1}{2}}}{2m+1}}(2c_{1}a+3c_{2}a^{2}+\cdots +(m+1)c_{m}a^{m}).} Note that the polynomial in parentheses is the derivative of the polynomial above with respect to a. Since a = n(n + 1)/2, these formulae show that for an odd power (greater than 1), the sum is a polynomial in n having factors n2 and (n + 1)2, while for an even power the polynomial has factors n, n + 1/2 and n + 1. == Expressing products of power sums as linear combinations of power sums == Products of two (and thus by iteration, several) power sums s j r := ∑ k = 1 n k j r {\displaystyle s_{j_{r}}:=\sum _{k=1}^{n}k^{j_{r}}} can be written as linear combinations of power sums with either all degrees even or all degrees odd, depending on the total degree of the product as a polynomial in n {\displaystyle n} , e.g. 30 s 2 s 4 = − s 3 + 15 s 5 + 16 s 7 {\displaystyle 30s_{2}s_{4}=-s_{3}+15s_{5}+16s_{7}} . Note that the sums of coefficients must be equal on both sides, as can be seen by putting n = 1 {\displaystyle n=1} , which makes all the s j {\displaystyle s_{j}} equal to 1. Some general formulae include: ( m + 1 ) s m 2 = 2 ∑ j = 0 ⌊ m 2 ⌋ ( m + 1 2 j ) ( 2 m + 1 − 2 j ) B 2 j s 2 m + 1 − 2 j . m ( m + 1 ) s m s m − 1 = m ( m + 1 ) B m s m + ∑ j = 0 ⌊ m − 1 2 ⌋ ( m + 1 2 j ) ( 2 m + 1 − 2 j ) B 2 j s 2 m − 2 j . 2 m − 1 s 1 m = ∑ j = 1 ⌊ m + 1 2 ⌋ ( m 2 j − 1 ) s 2 m + 1 − 2 j . 
{\displaystyle {\begin{aligned}(m+1)s_{m}^{2}&=2\sum _{j=0}^{\lfloor {\frac {m}{2}}\rfloor }{\binom {m+1}{2j}}(2m+1-2j)B_{2j}s_{2m+1-2j}.\\m(m+1)s_{m}s_{m-1}&=m(m+1)B_{m}s_{m}+\sum _{j=0}^{\lfloor {\frac {m-1}{2}}\rfloor }{\binom {m+1}{2j}}(2m+1-2j)B_{2j}s_{2m-2j}.\\2^{m-1}s_{1}^{m}&=\sum _{j=1}^{\lfloor {\frac {m+1}{2}}\rfloor }{\binom {m}{2j-1}}s_{2m+1-2j}.\end{aligned}}} Note that in the second formula, for even m {\displaystyle m} the term corresponding to j = m 2 {\displaystyle j={\dfrac {m}{2}}} is different from the other terms in the sum, while for odd m {\displaystyle m} , this additional term vanishes because of B m = 0 {\displaystyle B_{m}=0} . == Matrix form == Faulhaber's formula can also be written in a form using matrix multiplication. Take the first seven examples ∑ k = 1 n k 0 = − 1 n ∑ k = 1 n k 1 = − 1 2 n + 1 2 n 2 ∑ k = 1 n k 2 = − 1 6 n + 1 2 n 2 + 1 3 n 3 ∑ k = 1 n k 3 = − 0 n + 1 4 n 2 + 1 2 n 3 + 1 4 n 4 ∑ k = 1 n k 4 = − 1 30 n + 0 n 2 + 1 3 n 3 + 1 2 n 4 + 1 5 n 5 ∑ k = 1 n k 5 = − 0 n − 1 12 n 2 + 0 n 3 + 5 12 n 4 + 1 2 n 5 + 1 6 n 6 ∑ k = 1 n k 6 = − 1 42 n + 0 n 2 − 1 6 n 3 + 0 n 4 + 1 2 n 5 + 1 2 n 6 + 1 7 n 7 . 
{\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{0}&={\phantom {-}}1n\\\sum _{k=1}^{n}k^{1}&={\phantom {-}}{\tfrac {1}{2}}n+{\tfrac {1}{2}}n^{2}\\\sum _{k=1}^{n}k^{2}&={\phantom {-}}{\tfrac {1}{6}}n+{\tfrac {1}{2}}n^{2}+{\tfrac {1}{3}}n^{3}\\\sum _{k=1}^{n}k^{3}&={\phantom {-}}0n+{\tfrac {1}{4}}n^{2}+{\tfrac {1}{2}}n^{3}+{\tfrac {1}{4}}n^{4}\\\sum _{k=1}^{n}k^{4}&=-{\tfrac {1}{30}}n+0n^{2}+{\tfrac {1}{3}}n^{3}+{\tfrac {1}{2}}n^{4}+{\tfrac {1}{5}}n^{5}\\\sum _{k=1}^{n}k^{5}&={\phantom {-}}0n-{\tfrac {1}{12}}n^{2}+0n^{3}+{\tfrac {5}{12}}n^{4}+{\tfrac {1}{2}}n^{5}+{\tfrac {1}{6}}n^{6}\\\sum _{k=1}^{n}k^{6}&={\phantom {-}}{\tfrac {1}{42}}n+0n^{2}-{\tfrac {1}{6}}n^{3}+0n^{4}+{\tfrac {1}{2}}n^{5}+{\tfrac {1}{2}}n^{6}+{\tfrac {1}{7}}n^{7}.\end{aligned}}} Writing these polynomials as a product between matrices gives ( ∑ k 0 ∑ k 1 ∑ k 2 ∑ k 3 ∑ k 4 ∑ k 5 ∑ k 6 ) = G 7 ( n n 2 n 3 n 4 n 5 n 6 n 7 ) , {\displaystyle {\begin{pmatrix}\sum k^{0}\\\sum k^{1}\\\sum k^{2}\\\sum k^{3}\\\sum k^{4}\\\sum k^{5}\\\sum k^{6}\end{pmatrix}}=G_{7}{\begin{pmatrix}n\\n^{2}\\n^{3}\\n^{4}\\n^{5}\\n^{6}\\n^{7}\end{pmatrix}},} where G 7 = ( 1 0 0 0 0 0 0 1 2 1 2 0 0 0 0 0 1 6 1 2 1 3 0 0 0 0 0 1 4 1 2 1 4 0 0 0 − 1 30 0 1 3 1 2 1 5 0 0 0 − 1 12 0 5 12 1 2 1 6 0 1 42 0 − 1 6 0 1 2 1 2 1 7 ) . 
{\displaystyle G_{7}={\begin{pmatrix}1&0&0&0&0&0&0\\{1 \over 2}&{1 \over 2}&0&0&0&0&0\\{1 \over 6}&{1 \over 2}&{1 \over 3}&0&0&0&0\\0&{1 \over 4}&{1 \over 2}&{1 \over 4}&0&0&0\\-{1 \over 30}&0&{1 \over 3}&{1 \over 2}&{1 \over 5}&0&0\\0&-{1 \over 12}&0&{5 \over 12}&{1 \over 2}&{1 \over 6}&0\\{1 \over 42}&0&-{1 \over 6}&0&{1 \over 2}&{1 \over 2}&{1 \over 7}\end{pmatrix}}.} Surprisingly, inverting the matrix of polynomial coefficients yields something more familiar: G 7 − 1 = ( 1 0 0 0 0 0 0 − 1 2 0 0 0 0 0 1 − 3 3 0 0 0 0 − 1 4 − 6 4 0 0 0 1 − 5 10 − 10 5 0 0 − 1 6 − 15 20 − 15 6 0 1 − 7 21 − 35 35 − 21 7 ) = A ¯ 7 {\displaystyle G_{7}^{-1}={\begin{pmatrix}1&0&0&0&0&0&0\\-1&2&0&0&0&0&0\\1&-3&3&0&0&0&0\\-1&4&-6&4&0&0&0\\1&-5&10&-10&5&0&0\\-1&6&-15&20&-15&6&0\\1&-7&21&-35&35&-21&7\\\end{pmatrix}}={\overline {A}}_{7}} In the inverted matrix, Pascal's triangle can be recognized, without the last element of each row, and with alternating signs. Let A 7 {\displaystyle A_{7}} be the matrix obtained from A ¯ 7 {\displaystyle {\overline {A}}_{7}} by changing the signs of the entries in odd diagonals, that is by replacing a i , j {\displaystyle a_{i,j}} by ( − 1 ) i + j a i , j {\displaystyle (-1)^{i+j}a_{i,j}} , let G ¯ 7 {\displaystyle {\overline {G}}_{7}} be the matrix obtained from G 7 {\displaystyle G_{7}} with a similar transformation, then A 7 = ( 1 0 0 0 0 0 0 1 2 0 0 0 0 0 1 3 3 0 0 0 0 1 4 6 4 0 0 0 1 5 10 10 5 0 0 1 6 15 20 15 6 0 1 7 21 35 35 21 7 ) {\displaystyle A_{7}={\begin{pmatrix}1&0&0&0&0&0&0\\1&2&0&0&0&0&0\\1&3&3&0&0&0&0\\1&4&6&4&0&0&0\\1&5&10&10&5&0&0\\1&6&15&20&15&6&0\\1&7&21&35&35&21&7\\\end{pmatrix}}} and A 7 − 1 = ( 1 0 0 0 0 0 0 − 1 2 1 2 0 0 0 0 0 1 6 − 1 2 1 3 0 0 0 0 0 1 4 − 1 2 1 4 0 0 0 − 1 30 0 1 3 − 1 2 1 5 0 0 0 − 1 12 0 5 12 − 1 2 1 6 0 1 42 0 − 1 6 0 1 2 − 1 2 1 7 ) = G ¯ 7 . 
{\displaystyle A_{7}^{-1}={\begin{pmatrix}1&0&0&0&0&0&0\\-{1 \over 2}&{1 \over 2}&0&0&0&0&0\\{1 \over 6}&-{1 \over 2}&{1 \over 3}&0&0&0&0\\0&{1 \over 4}&-{1 \over 2}&{1 \over 4}&0&0&0\\-{1 \over 30}&0&{1 \over 3}&-{1 \over 2}&{1 \over 5}&0&0\\0&-{1 \over 12}&0&{5 \over 12}&-{1 \over 2}&{1 \over 6}&0\\{1 \over 42}&0&-{1 \over 6}&0&{1 \over 2}&-{1 \over 2}&{1 \over 7}\end{pmatrix}}={\overline {G}}_{7}.} Also ( ∑ k = 0 n − 1 k 0 ∑ k = 0 n − 1 k 1 ∑ k = 0 n − 1 k 2 ∑ k = 0 n − 1 k 3 ∑ k = 0 n − 1 k 4 ∑ k = 0 n − 1 k 5 ∑ k = 0 n − 1 k 6 ) = G ¯ 7 ( n n 2 n 3 n 4 n 5 n 6 n 7 ) {\displaystyle {\begin{pmatrix}\sum _{k=0}^{n-1}k^{0}\\\sum _{k=0}^{n-1}k^{1}\\\sum _{k=0}^{n-1}k^{2}\\\sum _{k=0}^{n-1}k^{3}\\\sum _{k=0}^{n-1}k^{4}\\\sum _{k=0}^{n-1}k^{5}\\\sum _{k=0}^{n-1}k^{6}\\\end{pmatrix}}={\overline {G}}_{7}{\begin{pmatrix}n\\n^{2}\\n^{3}\\n^{4}\\n^{5}\\n^{6}\\n^{7}\\\end{pmatrix}}} This is because it is evident that ∑ k = 1 n k m − ∑ k = 0 n − 1 k m = n m {\textstyle \sum _{k=1}^{n}k^{m}-\sum _{k=0}^{n-1}k^{m}=n^{m}} , and that therefore, when the monomial n m {\displaystyle n^{m}} is subtracted from a polynomial of degree m + 1 {\displaystyle m+1} of the form 1 m + 1 n m + 1 + 1 2 n m + ⋯ {\textstyle {\frac {1}{m+1}}n^{m+1}+{\frac {1}{2}}n^{m}+\cdots } , it becomes 1 m + 1 n m + 1 − 1 2 n m + ⋯ {\textstyle {\frac {1}{m+1}}n^{m+1}-{\frac {1}{2}}n^{m}+\cdots } . This is true for every order, that is, for each positive integer m, one has G m − 1 = A ¯ m {\displaystyle G_{m}^{-1}={\overline {A}}_{m}} and G ¯ m − 1 = A m . {\displaystyle {\overline {G}}_{m}^{-1}=A_{m}.} Thus, it is possible to obtain the coefficients of the polynomials of the sums of powers of successive integers without resorting to Bernoulli numbers, by inverting a matrix easily obtained from Pascal's triangle. == Variations == Replacing k {\displaystyle k} with p − k {\displaystyle p-k} , we find the alternative expression: ∑ k = 1 n k p = ∑ k = 0 p 1 k + 1 ( p k ) B p − k n k + 1 . 
{\displaystyle \sum _{k=1}^{n}k^{p}=\sum _{k=0}^{p}{\frac {1}{k+1}}{p \choose k}B_{p-k}n^{k+1}.} Subtracting n p {\displaystyle n^{p}} from both sides of the original formula and incrementing n {\displaystyle n} by 1 {\displaystyle 1} , we get ∑ k = 1 n k p = 1 p + 1 ∑ k = 0 p ( p + 1 k ) ( − 1 ) k B k ( n + 1 ) p − k + 1 = ∑ k = 0 p 1 k + 1 ( p k ) ( − 1 ) p − k B p − k ( n + 1 ) k + 1 , {\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{p}&={\frac {1}{p+1}}\sum _{k=0}^{p}{\binom {p+1}{k}}(-1)^{k}B_{k}(n+1)^{p-k+1}\\&=\sum _{k=0}^{p}{\frac {1}{k+1}}{\binom {p}{k}}(-1)^{p-k}B_{p-k}(n+1)^{k+1},\end{aligned}}} where ( − 1 ) k B k = B k − {\displaystyle (-1)^{k}B_{k}=B_{k}^{-}} can be interpreted as "negative" Bernoulli numbers with B 1 − = − 1 2 {\displaystyle B_{1}^{-}=-{\tfrac {1}{2}}} . We may also expand G ( z , n ) {\displaystyle G(z,n)} in terms of the Bernoulli polynomials to find G ( z , n ) = e ( n + 1 ) z e z − 1 − e z e z − 1 = ∑ j = 0 ∞ ( B j ( n + 1 ) − ( − 1 ) j B j ) z j − 1 j ! , {\displaystyle {\begin{aligned}G(z,n)&={\frac {e^{(n+1)z}}{e^{z}-1}}-{\frac {e^{z}}{e^{z}-1}}\\&=\sum _{j=0}^{\infty }\left(B_{j}(n+1)-(-1)^{j}B_{j}\right){\frac {z^{j-1}}{j!}},\end{aligned}}} which implies ∑ k = 1 n k p = 1 p + 1 ( B p + 1 ( n + 1 ) − ( − 1 ) p + 1 B p + 1 ) = 1 p + 1 ( B p + 1 ( n + 1 ) − B p + 1 ( 1 ) ) . {\displaystyle \sum _{k=1}^{n}k^{p}={\frac {1}{p+1}}\left(B_{p+1}(n+1)-(-1)^{p+1}B_{p+1}\right)={\frac {1}{p+1}}\left(B_{p+1}(n+1)-B_{p+1}(1)\right).} Since B n = 0 {\displaystyle B_{n}=0} whenever n > 1 {\displaystyle n>1} is odd, the factor ( − 1 ) p + 1 {\displaystyle (-1)^{p+1}} may be removed when p > 0 {\displaystyle p>0} . It can also be expressed in terms of Stirling numbers of the second kind and falling factorials as ∑ k = 0 n k p = ∑ k = 0 p { p k } ( n + 1 ) k + 1 k + 1 , {\displaystyle \sum _{k=0}^{n}k^{p}=\sum _{k=0}^{p}\left\{{p \atop k}\right\}{\frac {(n+1)_{k+1}}{k+1}},} ∑ k = 1 n k p = ∑ k = 1 p + 1 { p + 1 k } ( n ) k k . 
{\displaystyle \sum _{k=1}^{n}k^{p}=\sum _{k=1}^{p+1}\left\{{p+1 \atop k}\right\}{\frac {(n)_{k}}{k}}.} This is due to the definition of the Stirling numbers of the second kind as monomials in terms of falling factorials, and the behaviour of falling factorials under the indefinite sum. Interpreting the Stirling numbers of the second kind, { p + 1 k } {\displaystyle \left\{{p+1 \atop k}\right\}} , as the number of set partitions of [ p + 1 ] {\displaystyle \lbrack p+1\rbrack } into k {\displaystyle k} parts, the identity has a direct combinatorial proof since both sides count the number of functions f : [ p + 1 ] → [ n ] {\displaystyle f:\lbrack p+1\rbrack \to \lbrack n\rbrack } with f ( 1 ) {\displaystyle f(1)} maximal. The index of summation on the left-hand side represents k = f ( 1 ) {\displaystyle k=f(1)} , while the index on the right-hand side represents the number of elements in the image of f. There is also a similar (but somewhat simpler) expression: using the idea of telescoping and the binomial theorem, one gets Pascal's identity: ( n + 1 ) k + 1 − 1 = ∑ m = 1 n ( ( m + 1 ) k + 1 − m k + 1 ) = ∑ p = 0 k ( k + 1 p ) ( 1 p + 2 p + ⋯ + n p ) . {\displaystyle {\begin{aligned}(n+1)^{k+1}-1&=\sum _{m=1}^{n}\left((m+1)^{k+1}-m^{k+1}\right)\\&=\sum _{p=0}^{k}{\binom {k+1}{p}}(1^{p}+2^{p}+\dots +n^{p}).\end{aligned}}} This in particular yields the examples below – e.g., take k = 1 to get the first example. In a similar fashion we also find n k + 1 = ∑ m = 1 n ( m k + 1 − ( m − 1 ) k + 1 ) = ∑ p = 0 k ( − 1 ) k + p ( k + 1 p ) ( 1 p + 2 p + ⋯ + n p ) . {\displaystyle {\begin{aligned}n^{k+1}=\sum _{m=1}^{n}\left(m^{k+1}-(m-1)^{k+1}\right)=\sum _{p=0}^{k}(-1)^{k+p}{\binom {k+1}{p}}(1^{p}+2^{p}+\dots +n^{p}).\end{aligned}}} A generalized expression involving the Eulerian numbers A n ( x ) {\displaystyle A_{n}(x)} is ∑ n = 1 ∞ n k x n = x ( 1 − x ) k + 1 A k ( x ) {\displaystyle \sum _{n=1}^{\infty }n^{k}x^{n}={\frac {x}{(1-x)^{k+1}}}A_{k}(x)} . 
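The Stirling-number form of the power sum given above is easy to verify numerically. The following is a minimal Python sketch (the helper names are ours, not standard) that compares it with the brute-force sum for small p and n:

```python
from fractions import Fraction

def stirling2(n, k):
    # Stirling number of the second kind via the standard recurrence
    # S(n, k) = k * S(n-1, k) + S(n-1, k-1)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling_factorial(n, k):
    # (n)_k = n (n - 1) ... (n - k + 1)
    result = 1
    for i in range(k):
        result *= n - i
    return result

def power_sum_stirling(p, n):
    # sum_{k=1}^{p+1} { p+1 atop k } (n)_k / k
    return sum(Fraction(stirling2(p + 1, k) * falling_factorial(n, k), k)
               for k in range(1, p + 2))

# compare with the brute-force sum 1^p + 2^p + ... + n^p
for p in range(7):
    for n in range(1, 10):
        assert power_sum_stirling(p, n) == sum(k ** p for k in range(1, n + 1))
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point error in the division by k.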
Faulhaber's formula was generalized by Guo and Zeng to a q-analog. == Relationship to Riemann zeta function == Using B k = − k ζ ( 1 − k ) {\displaystyle B_{k}=-k\zeta (1-k)} , one can write ∑ k = 1 n k p = n p + 1 p + 1 − ∑ j = 0 p − 1 ( p j ) ζ ( − j ) n p − j . {\displaystyle \sum \limits _{k=1}^{n}k^{p}={\frac {n^{p+1}}{p+1}}-\sum \limits _{j=0}^{p-1}{p \choose j}\zeta (-j)n^{p-j}.} If we consider the generating function G ( z , n ) {\displaystyle G(z,n)} in the large n {\displaystyle n} limit for ℜ ( z ) < 0 {\displaystyle \Re (z)<0} , then we find lim n → ∞ G ( z , n ) = 1 e − z − 1 = ∑ j = 0 ∞ ( − 1 ) j − 1 B j z j − 1 j ! {\displaystyle \lim _{n\rightarrow \infty }G(z,n)={\frac {1}{e^{-z}-1}}=\sum _{j=0}^{\infty }(-1)^{j-1}B_{j}{\frac {z^{j-1}}{j!}}} Heuristically, this suggests that ∑ k = 1 ∞ k p = ( − 1 ) p B p + 1 p + 1 . {\displaystyle \sum _{k=1}^{\infty }k^{p}={\frac {(-1)^{p}B_{p+1}}{p+1}}.} This result agrees with the value of the Riemann zeta function ζ ( s ) = ∑ n = 1 ∞ 1 n s {\textstyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}} for negative integers s = − p < 0 {\displaystyle s=-p<0} on appropriately analytically continuing ζ ( s ) {\displaystyle \zeta (s)} . Faulhaber's formula can be written in terms of the Hurwitz zeta function: ∑ k = 1 n k p = ζ ( − p ) − ζ ( − p , n + 1 ) {\displaystyle \sum \limits _{k=1}^{n}k^{p}=\zeta (-p)-\zeta (-p,n+1)} == Umbral form == In the umbral calculus, one treats the Bernoulli numbers B 0 = 1 {\textstyle B^{0}=1} , B 1 = 1 2 {\textstyle B^{1}={\frac {1}{2}}} , B 2 = 1 6 {\textstyle B^{2}={\frac {1}{6}}} , ... as if the index j in B j {\textstyle B^{j}} were actually an exponent, and so as if the Bernoulli numbers were powers of some object B. Using this notation, Faulhaber's formula can be written as ∑ k = 1 n k p = 1 p + 1 ( ( B + n ) p + 1 − B p + 1 ) . 
{\displaystyle \sum _{k=1}^{n}k^{p}={\frac {1}{p+1}}{\big (}(B+n)^{p+1}-B^{p+1}{\big )}.} Here, the expression on the right must be understood by expanding out to get terms B j {\textstyle B^{j}} that can then be interpreted as the Bernoulli numbers. Specifically, using the binomial theorem, we get 1 p + 1 ( ( B + n ) p + 1 − B p + 1 ) = 1 p + 1 ( ∑ k = 0 p + 1 ( p + 1 k ) B k n p + 1 − k − B p + 1 ) = 1 p + 1 ∑ k = 0 p ( p + 1 k ) B k n p + 1 − k . {\displaystyle {\begin{aligned}{\frac {1}{p+1}}{\big (}(B+n)^{p+1}-B^{p+1}{\big )}&={1 \over p+1}\left(\sum _{k=0}^{p+1}{\binom {p+1}{k}}B^{k}n^{p+1-k}-B^{p+1}\right)\\&={1 \over p+1}\sum _{k=0}^{p}{\binom {p+1}{k}}B^{k}n^{p+1-k}.\end{aligned}}} A derivation of Faulhaber's formula using the umbral form is available in The Book of Numbers by John Horton Conway and Richard K. Guy. Classically, this umbral form was considered a notational convenience. In the modern umbral calculus, on the other hand, this is given a formal mathematical underpinning. One considers the linear functional T on the vector space of polynomials in a variable b given by T ( b j ) = B j . {\textstyle T(b^{j})=B_{j}.} Then one can say ∑ k = 1 n k p = 1 p + 1 ∑ j = 0 p ( p + 1 j ) B j n p + 1 − j = 1 p + 1 ∑ j = 0 p ( p + 1 j ) T ( b j ) n p + 1 − j = 1 p + 1 T ( ∑ j = 0 p ( p + 1 j ) b j n p + 1 − j ) = T ( ( b + n ) p + 1 − b p + 1 p + 1 ) . {\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{p}&={1 \over p+1}\sum _{j=0}^{p}{p+1 \choose j}B_{j}n^{p+1-j}\\&={1 \over p+1}\sum _{j=0}^{p}{p+1 \choose j}T(b^{j})n^{p+1-j}\\&={1 \over p+1}T\left(\sum _{j=0}^{p}{p+1 \choose j}b^{j}n^{p+1-j}\right)\\&=T\left({(b+n)^{p+1}-b^{p+1} \over p+1}\right).\end{aligned}}} == A general formula == The series 1 m + 2 m + 3 m + . . . + n m {\displaystyle 1^{m}+2^{m}+3^{m}+...+n^{m}} as a function of m {\displaystyle m} is often abbreviated as S m {\displaystyle S_{m}} . 
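These sums are cheap to tabulate directly, which also gives a numerical check of the Bernoulli-number expansion used throughout the preceding sections. A brief Python sketch (the function names are ours), using the B1 = +1/2 convention:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    # B_0 .. B_m with the B_1 = +1/2 sign convention: first build the
    # B_1 = -1/2 sequence from B_n = -(1/(n+1)) * sum_{j<n} C(n+1, j) B_j,
    # then flip the sign of B_1 (all other entries coincide).
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(Fraction(-sum(comb(n + 1, j) * B[j] for j in range(n)), n + 1))
    if m >= 1:
        B[1] = Fraction(1, 2)
    return B

def power_sum(p, n):
    # Faulhaber: sum_{k=1}^n k^p = (1/(p+1)) sum_{j=0}^p C(p+1, j) B_j n^{p+1-j}
    B = bernoulli_plus(p)
    return sum(comb(p + 1, j) * B[j] * n ** (p + 1 - j)
               for j in range(p + 1)) / (p + 1)

# compare with the brute-force sum for small p and n
for p in range(9):
    for n in range(1, 9):
        assert power_sum(p, n) == sum(k ** p for k in range(1, n + 1))
```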
Beardon has published formulas for powers of S m {\displaystyle S_{m}} , including a 1996 paper which demonstrated that integer powers of S 1 {\displaystyle S_{1}} can be written as a linear sum of terms in the sequence S 3 , S 5 , S 7 , . . . {\displaystyle S_{3},\;S_{5},\;S_{7},\;...} : S 1 N = 1 2 N ∑ r = 0 N ( N r ) S N + r ( 1 − ( − 1 ) N − r ) {\displaystyle S_{1}^{\;N}={\frac {1}{2^{N}}}\sum _{r=0}^{N}{N \choose r}S_{N+r}\left(1-(-1)^{N-r}\right)} The first few resulting identities are then S 1 2 = S 3 {\displaystyle S_{1}^{\;2}=S_{3}} S 1 3 = 1 4 S 3 + 3 4 S 5 {\displaystyle S_{1}^{\;3}={\frac {1}{4}}S_{3}+{\frac {3}{4}}S_{5}} S 1 4 = 1 2 S 5 + 1 2 S 7 {\displaystyle S_{1}^{\;4}={\frac {1}{2}}S_{5}+{\frac {1}{2}}S_{7}} . Although other specific cases of S m N {\displaystyle S_{m}^{\;N}} – including S 2 2 = 1 3 S 3 + 2 3 S 5 {\displaystyle S_{2}^{\;2}={\frac {1}{3}}S_{3}+{\frac {2}{3}}S_{5}} and S 2 3 = 1 12 S 4 + 7 12 S 6 + 1 3 S 8 {\displaystyle S_{2}^{\;3}={\frac {1}{12}}S_{4}+{\frac {7}{12}}S_{6}+{\frac {1}{3}}S_{8}} – are known, no general formula for S m N {\displaystyle S_{m}^{\;N}} for positive integers m {\displaystyle m} and N {\displaystyle N} has yet been reported. A 2019 paper by Derby proved that: S m N = ∑ k = 1 N ( − 1 ) k − 1 ( N k ) ∑ r = 1 n r m k S m N − k ( r ) {\displaystyle S_{m}^{\;N}=\sum _{k=1}^{N}(-1)^{k-1}{N \choose k}\sum _{r=1}^{n}r^{mk}S_{m}^{\;\;N-k}(r)} . This can be calculated in matrix form, as described above. The m = 1 {\displaystyle m=1} case replicates Beardon's formula for S 1 N {\displaystyle S_{1}^{\;N}} and confirms the above-stated results for m = 2 {\displaystyle m=2} and N = 2 {\displaystyle N=2} or 3 {\displaystyle 3} . 
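Beardon's expansion for integer powers of S1 can be spot-checked numerically. A short Python sketch (the function names are ours); the identity holds exactly, so exact rational arithmetic is used for the division by 2^N:

```python
from fractions import Fraction
from math import comb

def S(m, n):
    # S_m(n) = 1^m + 2^m + ... + n^m
    return sum(k ** m for k in range(1, n + 1))

def beardon(N, n):
    # S_1^N = 2^{-N} * sum_{r=0}^{N} C(N, r) * S_{N+r} * (1 - (-1)^{N-r})
    total = sum(comb(N, r) * (1 - (-1) ** (N - r)) * S(N + r, n)
                for r in range(N + 1))
    return Fraction(total, 2 ** N)

# the factor (1 - (-1)^{N-r}) kills the even-index terms, leaving only
# the odd power sums S_3, S_5, S_7, ...
for N in range(1, 6):
    for n in range(1, 8):
        assert beardon(N, n) == S(1, n) ** N
```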
Results for higher powers include: S 2 4 = 1 54 S 5 + 5 18 S 7 + 5 9 S 9 + 4 27 S 11 {\displaystyle S_{2}^{\;4}={\frac {1}{54}}S_{5}+{\frac {5}{18}}S_{7}+{\frac {5}{9}}S_{9}+{\frac {4}{27}}S_{11}} S 6 3 = 1 588 S 8 − 1 42 S 10 + 13 84 S 12 − 47 98 S 14 + 17 28 S 16 + 19 28 S 18 + 3 49 S 20 {\displaystyle S_{6}^{\;3}={\frac {1}{588}}S_{8}-{\frac {1}{42}}S_{10}+{\frac {13}{84}}S_{12}-{\frac {47}{98}}S_{14}+{\frac {17}{28}}S_{16}+{\frac {19}{28}}S_{18}+{\frac {3}{49}}S_{20}} S 7 3 = 1 48 S 11 − 7 48 S 13 + 35 64 S 15 − 23 24 S 17 + 77 96 S 19 + 11 16 S 21 + 3 64 S 23 {\displaystyle S_{7}^{\;3}={\frac {1}{48}}S_{11}-{\frac {7}{48}}S_{13}+{\frac {35}{64}}S_{15}-{\frac {23}{24}}S_{17}+{\frac {77}{96}}S_{19}+{\frac {11}{16}}S_{21}+{\frac {3}{64}}S_{23}} . == Notes == == External links == Jacobi, Carl (1834). "De usu legitimo formulae summatoriae Maclaurinianae". Journal für die reine und angewandte Mathematik. Vol. 12. pp. 263–72. doi:10.1515/crll.1834.12.263. Weisstein, Eric W. "Faulhaber's formula". MathWorld. Johann Faulhaber (1631). Academia Algebrae - Darinnen die miraculosische Inventiones zu den höchsten Cossen weiters continuirt und profitiert werden. A very rare book, but Knuth has placed a photocopy in the Stanford library, call number QA154.8 F3 1631a f MATH. (online copy at Google Books) Beardon, A. F. (1996). "Sums of Powers of Integers" (PDF). American Mathematical Monthly. 103 (3): 201–213. doi:10.1080/00029890.1996.12004725. Retrieved 2011-10-23. (Winner of a Lester R. Ford Award) Schumacher, Raphael (2016). "An Extended Version of Faulhaber's Formula". Journal of Integer Sequences. Vol. 19, no. 16.4.2. Orosi, Greg (2018). "A Simple Derivation Of Faulhaber's Formula" (PDF). Applied Mathematics E-Notes. Vol. 18. pp. 124–126. A visual proof for the sum of squares and cubes.
Wikipedia:Faustina Pignatelli#0
Faustina Pignatelli Carafa, princess of Colubrano (9 December 1705 – 30 December 1769), was an Italian mathematician and scientist from Naples. She became the second woman (after the Bolognese physicist Laura Bassi) to be elected to the Academy of Sciences of Bologna on 20 November 1732. In 1734, Faustina published a paper titled Problemata Mathematica under the name "anonima napolitana" (meaning "anonymous woman from Naples") in the German scientific journal Nova Acta Eruditorum, which was published entirely in Latin. Alongside her brother Peter, she was educated by Nicola De Martino and was instrumental in introducing the theories of Isaac Newton to Naples. She was an important participant in the scientific debate in Italy and corresponded with the French Academy of Sciences. Upon her marriage to the poet Francesco Domenico Carafa in 1724, she was given the principality of Colubrano in southern Italy as a dowry by her father. Francesco Maria Zanotti, secretary of the Academy of Sciences of Bologna from 1723 to 1766, mentioned her as a gifted mathematician in 1745. She was a Dame of the Order of the Starry Cross from 3 May 1732. == References == A. Brigaglia, P. Nastasi, Bologna e il Regno delle due Sicilie: aspetti di un dialogo scientifico (1730-1760), «Giornale critico della filosofia italiana», LXIII, 2, 1984, pp. 145–178.
Wikipedia:Faxén integral#0
In mathematics, the Faxén integral (also named Faxén function) is the following integral Fi ⁡ ( α , β ; x ) = ∫ 0 ∞ exp ⁡ ( − t + x t α ) t β − 1 d t , ( 0 ≤ Re ⁡ ( α ) < 1 , Re ⁡ ( β ) > 0 ) . {\displaystyle \operatorname {Fi} (\alpha ,\beta ;x)=\int _{0}^{\infty }\exp(-t+xt^{\alpha })t^{\beta -1}\mathrm {d} t,\qquad (0\leq \operatorname {Re} (\alpha )<1,\;\operatorname {Re} (\beta )>0).} The integral is named after the Swedish physicist Olov Hilding Faxén, who published it in 1921 in his PhD thesis. == n-dimensional Faxén integral == More generally one defines the n {\displaystyle n} -dimensional Faxén integral as I n ( x ) = λ n ∫ 0 ∞ ⋯ ∫ 0 ∞ t 1 β 1 − 1 ⋯ t n β n − 1 e − f ( t 1 , … , t n ; x ) d t 1 ⋯ d t n , {\displaystyle I_{n}(x)=\lambda _{n}\int _{0}^{\infty }\cdots \int _{0}^{\infty }t_{1}^{\beta _{1}-1}\cdots t_{n}^{\beta _{n}-1}e^{-f(t_{1},\dots ,t_{n};x)}\mathrm {d} t_{1}\cdots \mathrm {d} t_{n},} with f ( t 1 , … , t n ; x ) := ∑ j = 1 n t j μ j − x t 1 α 1 ⋯ t n α n {\displaystyle f(t_{1},\dots ,t_{n};x):=\sum \limits _{j=1}^{n}t_{j}^{\mu _{j}}-xt_{1}^{\alpha _{1}}\cdots t_{n}^{\alpha _{n}}\quad } and λ n := ∏ j = 1 n μ j {\displaystyle \quad \lambda _{n}:=\prod \limits _{j=1}^{n}\mu _{j}} for x ∈ C {\displaystyle x\in \mathbb {C} } and ( 0 < α i < μ i , Re ⁡ ( β i ) > 0 , i = 1 , … , n ) . {\displaystyle (0<\alpha _{i}<\mu _{i},\;\operatorname {Re} (\beta _{i})>0,\;i=1,\dots ,n).} The parameter λ n {\displaystyle \lambda _{n}} is only for convenience in calculations. == Properties == Let Γ {\displaystyle \Gamma } denote the Gamma function, then Fi ⁡ ( α , β ; 0 ) = Γ ( β ) , {\displaystyle \operatorname {Fi} (\alpha ,\beta ;0)=\Gamma (\beta ),} Fi ⁡ ( 0 , β ; x ) = e x Γ ( β ) . {\displaystyle \operatorname {Fi} (0,\beta ;x)=e^{x}\Gamma (\beta ).} For α = β = 1 3 {\displaystyle \alpha =\beta ={\tfrac {1}{3}}} one has the following relationship to the Scorer function Fi ⁡ ( 1 3 , 1 3 ; x ) = 3 2 / 3 π Hi ⁡ ( 3 − 1 / 3 x ) . 
{\displaystyle \operatorname {Fi} ({\tfrac {1}{3}},{\tfrac {1}{3}};x)=3^{2/3}\pi \operatorname {Hi} (3^{-1/3}x).} === Asymptotics === For x → ∞ {\displaystyle x\to \infty } we have the following asymptotics Fi ⁡ ( α , β ; − x ) ∼ Γ ( β / α ) α y β / α , {\displaystyle \operatorname {Fi} (\alpha ,\beta ;-x)\sim {\frac {\Gamma (\beta /\alpha )}{\alpha y^{\beta /\alpha }}},} Fi ⁡ ( α , β ; x ) ∼ ( 2 π 1 − α ) 1 / 2 ( α x ) ( 2 β − 1 ) / ( 2 − 2 α ) exp ⁡ ( ( 1 − α ) ( α α y ) 1 / ( 1 − α ) ) . {\displaystyle \operatorname {Fi} (\alpha ,\beta ;x)\sim \left({\frac {2\pi }{1-\alpha }}\right)^{1/2}(\alpha x)^{(2\beta -1)/(2-2\alpha )}\exp \left((1-\alpha )(\alpha ^{\alpha }y)^{1/(1-\alpha )}\right).} == References ==
Wikipedia:Fay Farnum#0
Fay Farnum (August 24, 1888, in Spencer, Iowa – March 11, 1977, in Tucson, Arizona) was an American mathematician and university professor and one of the few women to earn a PhD in math before World War II. She was a founding member of the Mathematical Association of America. == Life and work == Born Eugenia Fae Farnum, the second of four children to Josephine and farmer George Edwin Farnum, Fay Farnum received her bachelor's degree in general science from the Iowa State College of Agriculture and Mechanic Arts (now Iowa State University) in 1909. After teaching at various schools in Lyons, Iowa, Le Mars, Iowa, and Ames, Iowa, she moved to Ithaca, New York, and received a master's degree from Cornell University in 1915 using the name "Fae Farnum" (but sometime before 1920, she began calling herself Fay Farnum exclusively). Farnum returned to Iowa State College as an instructor between 1915 and 1924. She spent two summer semesters in Chicago before enrolling as a graduate student at Cornell University from 1924 to 1926. Between 1925 and 1926, she taught two classes each semester, including solid geometry, advanced algebra, and calculus. She received her Ph.D. in 1926 under the supervision of mathematician Virgil Snyder with her dissertation: On Triadic Cremona Nets of Plane Curves. Her major subject was geometry with her first minor in mathematical analysis and her second minor in physics. She started teaching at Washington Square College (now New York University) in 1926 and stayed there for several years, first as an instructor and later as an assistant professor. During the 1939–1940 school year, Farnum took a leave of absence from NYU to attend the Physics and Mathematics Institute in Copenhagen, Denmark, but in April 1940, troops from Nazi Germany invaded Denmark during World War II, requiring her to cut short her studies and return to her position at NYU. 
In 1943, she returned to Iowa State, hired as an assistant professor because the math department needed help meeting the demands of the new Army and Navy students, and taught there until 1949. She retired in 1949, but in 1955 began teaching at the University of Arizona, where she remained until 1957. Farnum was a founding member of the Mathematical Association of America and also a member of the American Mathematical Society. She died in Tucson in 1977 at the age of 88 and was buried at Tucson Memorial Park South Lawn. == References ==
Wikipedia:Fay's trisecant identity#0
In algebraic geometry, Fay's trisecant identity is an identity between theta functions of Riemann surfaces introduced by Fay (1973, chapter 3, page 34, formula 45). Fay's identity holds for theta functions of Jacobians of curves, but not for theta functions of general abelian varieties. The name "trisecant identity" refers to the geometric interpretation given by Mumford (1984, p.3.219), who used it to show that the Kummer variety of a genus g Riemann surface, given by the image of the map from the Jacobian to projective space of dimension 2 g − 1 {\displaystyle 2^{g}-1} induced by theta functions of order 2, has a 4-dimensional space of trisecants. == Statement == Suppose that C {\displaystyle C} is a compact Riemann surface, g {\displaystyle g} is the genus of C {\displaystyle C} , θ {\displaystyle \theta } is the Riemann theta function of C {\displaystyle C} (a function from C g {\displaystyle \mathbb {C} ^{g}} to C {\displaystyle \mathbb {C} } ), E {\displaystyle E} is a prime form on C × C {\displaystyle C\times C} , u {\displaystyle u} , v {\displaystyle v} , x {\displaystyle x} , y {\displaystyle y} are points of C {\displaystyle C} , z {\displaystyle z} is an element of C g {\displaystyle \mathbb {C} ^{g}} , and ω {\displaystyle \omega } is a 1-form on C {\displaystyle C} with values in C g {\displaystyle \mathbb {C} ^{g}} . Fay's identity states that E ( x , v ) E ( u , y ) θ ( z + ∫ u x ω ) θ ( z + ∫ v y ω ) − E ( x , u ) E ( v , y ) θ ( z + ∫ v x ω ) θ ( z + ∫ u y ω ) = E ( x , y ) E ( u , v ) θ ( z ) θ ( z + ∫ u + v x + y ω ) {\displaystyle {\begin{aligned}&E(x,v)E(u,y)\theta \left(z+\int _{u}^{x}\omega \right)\theta \left(z+\int _{v}^{y}\omega \right)\\-&E(x,u)E(v,y)\theta \left(z+\int _{v}^{x}\omega \right)\theta \left(z+\int _{u}^{y}\omega \right)\\=&E(x,y)E(u,v)\theta (z)\theta \left(z+\int _{u+v}^{x+y}\omega \right)\end{aligned}}} with ∫ u + v x + y ω = ∫ u x ω + ∫ v y ω = ∫ u y ω + ∫ v x ω {\displaystyle {\begin{aligned}&\int _{u+v}^{x+y}\omega =\int _{u}^{x}\omega +\int _{v}^{y}\omega =\int _{u}^{y}\omega +\int _{v}^{x}\omega \end{aligned}}} == References == Fay, John D. (1973), Theta functions on Riemann surfaces, Lecture Notes in Mathematics, vol. 352, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0060090, ISBN 978-3-540-06517-3, MR 0335789 Mumford, David (1974), "Prym varieties. I", in Ahlfors, Lars V.; Kra, Irwin; Nirenberg, Louis; et al. (eds.), Contributions to analysis (a collection of papers dedicated to Lipman Bers), Boston, MA: Academic Press, pp. 325–350, ISBN 978-0-12-044850-0, MR 0379510 Mumford, David (1984), Tata lectures on theta. II, Progress in Mathematics, vol. 43, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-3110-9, MR 0742776
Wikipedia:Faà di Bruno's formula#0
Faà di Bruno's formula is an identity in mathematics generalizing the chain rule to higher derivatives. It is named after Francesco Faà di Bruno (1855, 1857), although he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast had stated the formula in a calculus textbook, which is considered to be the first published reference on the subject. Perhaps the most well-known form of Faà di Bruno's formula says that d n d x n f ( g ( x ) ) = ∑ n ! m 1 ! 1 ! m 1 m 2 ! 2 ! m 2 ⋯ m n ! n ! m n ⋅ f ( m 1 + ⋯ + m n ) ( g ( x ) ) ⋅ ∏ j = 1 n ( g ( j ) ( x ) ) m j , {\displaystyle {d^{n} \over dx^{n}}f(g(x))=\sum {\frac {n!}{m_{1}!\,1!^{m_{1}}\,m_{2}!\,2!^{m_{2}}\,\cdots \,m_{n}!\,n!^{m_{n}}}}\cdot f^{(m_{1}+\cdots +m_{n})}(g(x))\cdot \prod _{j=1}^{n}\left(g^{(j)}(x)\right)^{m_{j}},} where the sum is over all n {\displaystyle n} -tuples of nonnegative integers ( m 1 , … , m n ) {\displaystyle (m_{1},\ldots ,m_{n})} satisfying the constraint 1 ⋅ m 1 + 2 ⋅ m 2 + 3 ⋅ m 3 + ⋯ + n ⋅ m n = n . {\displaystyle 1\cdot m_{1}+2\cdot m_{2}+3\cdot m_{3}+\cdots +n\cdot m_{n}=n.} Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have the combinatorial interpretation discussed below are less explicit: d n d x n f ( g ( x ) ) = ∑ n ! m 1 ! m 2 ! ⋯ m n ! ⋅ f ( m 1 + ⋯ + m n ) ( g ( x ) ) ⋅ ∏ j = 1 n ( g ( j ) ( x ) j ! ) m j . 
{\displaystyle {d^{n} \over dx^{n}}f(g(x))=\sum {\frac {n!}{m_{1}!\,m_{2}!\,\cdots \,m_{n}!}}\cdot f^{(m_{1}+\cdots +m_{n})}(g(x))\cdot \prod _{j=1}^{n}\left({\frac {g^{(j)}(x)}{j!}}\right)^{m_{j}}.} Combining the terms with the same value of m 1 + m 2 + ⋯ + m n = k {\displaystyle m_{1}+m_{2}+\cdots +m_{n}=k} and noticing that m j {\displaystyle m_{j}} has to be zero for j > n − k + 1 {\displaystyle j>n-k+1} leads to a somewhat simpler formula expressed in terms of partial (or incomplete) exponential Bell polynomials B n , k ( x 1 , … , x n − k + 1 ) {\displaystyle B_{n,k}(x_{1},\ldots ,x_{n-k+1})} : d n d x n f ( g ( x ) ) = ∑ k = 0 n f ( k ) ( g ( x ) ) ⋅ B n , k ( g ′ ( x ) , g ″ ( x ) , … , g ( n − k + 1 ) ( x ) ) . {\displaystyle {d^{n} \over dx^{n}}f(g(x))=\sum _{k=0}^{n}f^{(k)}(g(x))\cdot B_{n,k}\left(g'(x),g''(x),\dots ,g^{(n-k+1)}(x)\right).} This formula works for all n ≥ 0 {\displaystyle n\geq 0} , however for n > 0 {\displaystyle n>0} the polynomials B n , 0 {\displaystyle B_{n,0}} are zero and thus summation in the formula can start with k = 1 {\displaystyle k=1} . 
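The Bell-polynomial form can be verified for concrete functions whose derivatives are known in closed form. Below is a small recursive Python sketch; the test functions f(u) = u^3 and g(x) = x^2 are our own choice, picked because f(g(x)) = x^6 has derivatives that are trivial to compute directly:

```python
from math import comb, factorial

def bell_partial(n, k, x):
    # partial exponential Bell polynomial B_{n,k}(x[1], ..., x[n-k+1]),
    # via the recurrence B_{n,k} = sum_i C(n-1, i-1) * x[i] * B_{n-i,k-1}
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return sum(comb(n - 1, i - 1) * x[i] * bell_partial(n - i, k - 1, x)
               for i in range(1, n - k + 2))

def nth_derivative_of_composition(n, f_derivs, g_derivs):
    # d^n/dx^n f(g(x)) = sum_{k=0}^{n} f^{(k)}(g(x)) * B_{n,k}(g', g'', ...)
    # f_derivs[k] = f^{(k)} evaluated at g(x0); g_derivs[j] = g^{(j)}(x0)
    return sum(f_derivs[k] * bell_partial(n, k, g_derivs)
               for k in range(n + 1))

# With f(u) = u^3 and g(x) = x^2, f(g(x)) = x^6, whose n-th derivative
# at x0 = 1 is 6!/(6-n)!.  At x0 = 1: g = 1, g' = 2, g'' = 2, rest 0,
# and f = 1, f' = 3, f'' = 6, f''' = 6 at u = g(1) = 1.
g_d = [1, 2, 2, 0, 0, 0]
f_d = [1, 3, 6, 6, 0, 0]
for n in range(1, 5):
    assert nth_derivative_of_composition(n, f_d, g_d) == factorial(6) // factorial(6 - n)
```

The recursion for the partial Bell polynomials restates their defining recurrence; for larger n one would memoize it, but for a spot check this direct form suffices.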
== Combinatorial form == The formula has a "combinatorial" form: d n d x n f ( g ( x ) ) = ( f ∘ g ) ( n ) ( x ) = ∑ π ∈ Π f ( | π | ) ( g ( x ) ) ⋅ ∏ B ∈ π g ( | B | ) ( x ) {\displaystyle {d^{n} \over dx^{n}}f(g(x))=(f\circ g)^{(n)}(x)=\sum _{\pi \in \Pi }f^{(\left|\pi \right|)}(g(x))\cdot \prod _{B\in \pi }g^{(\left|B\right|)}(x)} where π {\displaystyle \pi } runs through the set Π {\displaystyle \Pi } of all partitions of the set { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} , " B ∈ π {\displaystyle B\in \pi } " means the variable B {\displaystyle B} runs through the list of all of the "blocks" of the partition π {\displaystyle \pi } , and | A | {\displaystyle |A|} denotes the cardinality of the set A {\displaystyle A} (so that | π | {\displaystyle |\pi |} is the number of blocks in the partition π {\displaystyle \pi } and | B | {\displaystyle |B|} is the size of the block B {\displaystyle B} ). == Example == The following is a concrete explanation of the combinatorial form for the n = 4 {\displaystyle n=4} case. ( f ∘ g ) ⁗ ( x ) = f ⁗ ( g ( x ) ) g ′ ( x ) 4 + 6 f ‴ ( g ( x ) ) g ″ ( x ) g ′ ( x ) 2 + 3 f ″ ( g ( x ) ) g ″ ( x ) 2 + 4 f ″ ( g ( x ) ) g ‴ ( x ) g ′ ( x ) + f ′ ( g ( x ) ) g ⁗ ( x ) . 
{\displaystyle {\begin{aligned}(f\circ g)''''(x)={}&f''''(g(x))g'(x)^{4}+6f'''(g(x))g''(x)g'(x)^{2}\\[8pt]&{}+\;3f''(g(x))g''(x)^{2}+4f''(g(x))g'''(x)g'(x)\\[8pt]&{}+\;f'(g(x))g''''(x).\end{aligned}}} The pattern is: g ′ ( x ) 4 ↔ 1 + 1 + 1 + 1 ↔ f ⁗ ( g ( x ) ) ↔ 1 g ″ ( x ) g ′ ( x ) 2 ↔ 2 + 1 + 1 ↔ f ‴ ( g ( x ) ) ↔ 6 g ″ ( x ) 2 ↔ 2 + 2 ↔ f ″ ( g ( x ) ) ↔ 3 g ‴ ( x ) g ′ ( x ) ↔ 3 + 1 ↔ f ″ ( g ( x ) ) ↔ 4 g ⁗ ( x ) ↔ 4 ↔ f ′ ( g ( x ) ) ↔ 1 {\displaystyle {\begin{array}{cccccc}g'(x)^{4}&&\leftrightarrow &&1+1+1+1&&\leftrightarrow &&f''''(g(x))&&\leftrightarrow &&1\\[12pt]g''(x)g'(x)^{2}&&\leftrightarrow &&2+1+1&&\leftrightarrow &&f'''(g(x))&&\leftrightarrow &&6\\[12pt]g''(x)^{2}&&\leftrightarrow &&2+2&&\leftrightarrow &&f''(g(x))&&\leftrightarrow &&3\\[12pt]g'''(x)g'(x)&&\leftrightarrow &&3+1&&\leftrightarrow &&f''(g(x))&&\leftrightarrow &&4\\[12pt]g''''(x)&&\leftrightarrow &&4&&\leftrightarrow &&f'(g(x))&&\leftrightarrow &&1\end{array}}} The factor g ″ ( x ) g ′ ( x ) 2 {\displaystyle g''(x)g'(x)^{2}} corresponds to the partition 2 + 1 + 1 of the integer 4, in the obvious way. The factor f ‴ ( g ( x ) ) {\displaystyle f'''(g(x))} that goes with it corresponds to the fact that there are three summands in that partition. The coefficient 6 that goes with those factors corresponds to the fact that there are exactly six partitions of a set of four members that break it into one part of size 2 and two parts of size 1. Similarly, the factor g ″ ( x ) 2 {\displaystyle g''(x)^{2}} in the third line corresponds to the partition 2 + 2 of the integer 4, (4, because we are finding the fourth derivative), while f ″ ( g ( x ) ) {\displaystyle f''(g(x))} corresponds to the fact that there are two summands (2 + 2) in that partition. The coefficient 3 corresponds to the fact that there are 1 2 ( 4 2 ) = 3 {\displaystyle {\tfrac {1}{2}}{\tbinom {4}{2}}=3} ways of partitioning 4 objects into groups of 2. The same concept applies to the others. 
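The coefficients 1, 6, 3, 4, 1 above can be recovered mechanically by enumerating all 15 set partitions of {1, 2, 3, 4} and grouping them by their multiset of block sizes. A short Python check (the helper name is ours):

```python
from collections import Counter

def set_partitions(elements):
    # yield every partition of `elements` (a list) into blocks
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # insert `first` into each existing block in turn ...
        for i in range(len(partition)):
            yield partition[:i] + [partition[i] + [first]] + partition[i + 1:]
        # ... or give it a block of its own
        yield partition + [[first]]

shape_counts = Counter()
for partition in set_partitions([1, 2, 3, 4]):
    shape = tuple(sorted(len(block) for block in partition))
    shape_counts[shape] += 1

# the five coefficients of the fourth derivative
assert shape_counts[(1, 1, 1, 1)] == 1   # g'(x)^4 term
assert shape_counts[(1, 1, 2)] == 6      # g''(x) g'(x)^2 term
assert shape_counts[(2, 2)] == 3         # g''(x)^2 term
assert shape_counts[(1, 3)] == 4         # g'''(x) g'(x) term
assert shape_counts[(4,)] == 1           # g''''(x) term
```

The total count, 15, is the Bell number B4, matching the number of terms before like factors are collected.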
A memorizable scheme is as follows: D 1 ( f ∘ g ) 1 ! = ( f ( 1 ) ∘ g ) g ( 1 ) 1 ! 1 ! D 2 ( f ∘ g ) 2 ! = ( f ( 1 ) ∘ g ) g ( 2 ) 2 ! 1 ! + ( f ( 2 ) ∘ g ) g ( 1 ) 1 ! g ( 1 ) 1 ! 2 ! D 3 ( f ∘ g ) 3 ! = ( f ( 1 ) ∘ g ) g ( 3 ) 3 ! 1 ! + ( f ( 2 ) ∘ g ) g ( 1 ) 1 ! 1 ! g ( 2 ) 2 ! 1 ! + ( f ( 3 ) ∘ g ) g ( 1 ) 1 ! g ( 1 ) 1 ! g ( 1 ) 1 ! 3 ! D 4 ( f ∘ g ) 4 ! = ( f ( 1 ) ∘ g ) g ( 4 ) 4 ! 1 ! + ( f ( 2 ) ∘ g ) ( g ( 1 ) 1 ! 1 ! g ( 3 ) 3 ! 1 ! + g ( 2 ) 2 ! g ( 2 ) 2 ! 2 ! ) + ( f ( 3 ) ∘ g ) g ( 1 ) 1 ! g ( 1 ) 1 ! 2 ! g ( 2 ) 2 ! 1 ! + ( f ( 4 ) ∘ g ) g ( 1 ) 1 ! g ( 1 ) 1 ! g ( 1 ) 1 ! g ( 1 ) 1 ! 4 ! {\displaystyle {\begin{aligned}&{\frac {D^{1}(f\circ {}g)}{1!}}&=\left(f^{(1)}\circ {}g\right){\frac {\frac {g^{(1)}}{1!}}{1!}}\\[8pt]&{\frac {D^{2}(f\circ g)}{2!}}&=\left(f^{(1)}\circ {}g\right){\frac {\frac {g^{(2)}}{2!}}{1!}}&{}+\left(f^{(2)}\circ {}g\right){\frac {{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}}{2!}}\\[8pt]&{\frac {D^{3}(f\circ g)}{3!}}&=\left(f^{(1)}\circ {}g\right){\frac {\frac {g^{(3)}}{3!}}{1!}}&{}+\left(f^{(2)}\circ {}g\right){\frac {\frac {g^{(1)}}{1!}}{1!}}{\frac {\frac {g^{(2)}}{2!}}{1!}}&{}+\left(f^{(3)}\circ {}g\right){\frac {{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}}{3!}}\\[8pt]&{\frac {D^{4}(f\circ g)}{4!}}&=\left(f^{(1)}\circ {}g\right){\frac {\frac {g^{(4)}}{4!}}{1!}}&{}+\left(f^{(2)}\circ {}g\right)\left({\frac {\frac {g^{(1)}}{1!}}{1!}}{\frac {\frac {g^{(3)}}{3!}}{1!}}+{\frac {{\frac {g^{(2)}}{2!}}{\frac {g^{(2)}}{2!}}}{2!}}\right)&{}+\left(f^{(3)}\circ {}g\right){\frac {{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}}{2!}}{\frac {\frac {g^{(2)}}{2!}}{1!}}&{}+\left(f^{(4)}\circ {}g\right){\frac {{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}{\frac {g^{(1)}}{1!}}}{4!}}\end{aligned}}} == Variations == === Multivariate version === Let y = g ( x 1 , … , x n ) {\displaystyle y=g(x_{1},\dots ,x_{n})} . 
Then the following identity holds regardless of whether the n {\displaystyle n} variables are all distinct, or all identical, or partitioned into several distinguishable classes of indistinguishable variables (if it seems opaque, see the very concrete example below): ∂ n ∂ x 1 ⋯ ∂ x n f ( y ) = ∑ π ∈ Π f ( | π | ) ( y ) ⋅ ∏ B ∈ π ∂ | B | y ∏ j ∈ B ∂ x j {\displaystyle {\partial ^{n} \over \partial x_{1}\cdots \partial x_{n}}f(y)=\sum _{\pi \in \Pi }f^{(\left|\pi \right|)}(y)\cdot \prod _{B\in \pi }{\partial ^{\left|B\right|}y \over \prod _{j\in B}\partial x_{j}}} where (as above) π {\displaystyle \pi } runs through the set Π {\displaystyle \Pi } of all partitions of the set { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} , " B ∈ π {\displaystyle B\in \pi } " means the variable B {\displaystyle B} runs through the list of all of the "blocks" of the partition π {\displaystyle \pi } , and | A | {\displaystyle |A|} denotes the cardinality of the set A {\displaystyle A} (so that | π | {\displaystyle |\pi |} is the number of blocks in the partition π {\displaystyle \pi } and | B | {\displaystyle |B|} is the size of the block B {\displaystyle B} ). More general versions hold for cases where all the functions are vector- and even Banach-space-valued. In this case one needs to consider the Fréchet derivative or Gateaux derivative. Example The five terms in the following expression correspond in the obvious way to the five partitions of the set { 1 , 2 , 3 } {\displaystyle \{1,2,3\}} , and in each case the order of the derivative of f {\displaystyle f} is the number of parts in the partition: ∂ 3 ∂ x 1 ∂ x 2 ∂ x 3 f ( y ) = f ′ ( y ) ∂ 3 y ∂ x 1 ∂ x 2 ∂ x 3 + f ″ ( y ) ( ∂ y ∂ x 1 ⋅ ∂ 2 y ∂ x 2 ∂ x 3 + ∂ y ∂ x 2 ⋅ ∂ 2 y ∂ x 1 ∂ x 3 + ∂ y ∂ x 3 ⋅ ∂ 2 y ∂ x 1 ∂ x 2 ) + f ‴ ( y ) ∂ y ∂ x 1 ⋅ ∂ y ∂ x 2 ⋅ ∂ y ∂ x 3 . 
{\displaystyle {\begin{aligned}{\partial ^{3} \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}f(y)={}&f'(y){\partial ^{3}y \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}\\[10pt]&{}+f''(y)\left({\partial y \over \partial x_{1}}\cdot {\partial ^{2}y \over \partial x_{2}\,\partial x_{3}}+{\partial y \over \partial x_{2}}\cdot {\partial ^{2}y \over \partial x_{1}\,\partial x_{3}}+{\partial y \over \partial x_{3}}\cdot {\partial ^{2}y \over \partial x_{1}\,\partial x_{2}}\right)\\[10pt]&{}+f'''(y){\partial y \over \partial x_{1}}\cdot {\partial y \over \partial x_{2}}\cdot {\partial y \over \partial x_{3}}.\end{aligned}}} If the three variables are indistinguishable from each other, then three of the five terms above are also indistinguishable from each other, and then we have the classic one-variable formula. === Formal power series version === Suppose f ( x ) = ∑ n = 0 ∞ a n x n {\displaystyle f(x)=\sum _{n=0}^{\infty }{a_{n}}x^{n}} and g ( x ) = ∑ n = 0 ∞ b n x n {\displaystyle g(x)=\sum _{n=0}^{\infty }{b_{n}}x^{n}} are formal power series and b 0 = 0 {\displaystyle b_{0}=0} . 
Then the composition f ∘ g {\displaystyle f\circ g} is again a formal power series, f ( g ( x ) ) = ∑ n = 0 ∞ c n x n , {\displaystyle f(g(x))=\sum _{n=0}^{\infty }{c_{n}}x^{n},} where c 0 = a 0 {\displaystyle c_{0}=a_{0}} and the other coefficient c n {\displaystyle c_{n}} for n ≥ 1 {\displaystyle n\geq 1} can be expressed as a sum over compositions of n {\displaystyle n} or as an equivalent sum over integer partitions of n {\displaystyle n} : c n = ∑ i ∈ C n a k b i 1 b i 2 ⋯ b i k , {\displaystyle c_{n}=\sum _{\mathbf {i} \in {\mathcal {C}}_{n}}a_{k}b_{i_{1}}b_{i_{2}}\cdots b_{i_{k}},} where C n = { ( i 1 , i 2 , … , i k ) : 1 ≤ k ≤ n , i 1 + i 2 + ⋯ + i k = n } {\displaystyle {\mathcal {C}}_{n}=\{(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq n,\ i_{1}+i_{2}+\cdots +i_{k}=n\}} is the set of compositions of n {\displaystyle n} with k {\displaystyle k} denoting the number of parts, or c n = ∑ k = 1 n a k ∑ π ∈ P n , k ( k π 1 , π 2 , … , π n ) b 1 π 1 b 2 π 2 ⋯ b n π n , {\displaystyle c_{n}=\sum _{k=1}^{n}a_{k}\sum _{\mathbf {\pi } \in {\mathcal {P}}_{n,k}}{\binom {k}{\pi _{1},\pi _{2},\ldots ,\pi _{n}}}b_{1}^{\pi _{1}}b_{2}^{\pi _{2}}\cdots b_{n}^{\pi _{n}},} where P n , k = { ( π 1 , π 2 , … , π n ) : π 1 + π 2 + ⋯ + π n = k , π 1 ⋅ 1 + π 2 ⋅ 2 + ⋯ + π n ⋅ n = n } {\displaystyle {\mathcal {P}}_{n,k}=\{(\pi _{1},\pi _{2},\dots ,\pi _{n})\,:\ \pi _{1}+\pi _{2}+\cdots +\pi _{n}=k,\ \pi _{1}\cdot 1+\pi _{2}\cdot 2+\cdots +\pi _{n}\cdot n=n\}} is the set of partitions of n {\displaystyle n} into k {\displaystyle k} parts, in frequency-of-parts form. The first form is obtained by picking out the coefficient of x n {\displaystyle x^{n}} in ( b 1 x + b 2 x 2 + ⋯ ) k {\displaystyle (b_{1}x+b_{2}x^{2}+\cdots )^{k}} "by inspection", and the second form is then obtained by collecting like terms, or alternatively, by applying the multinomial theorem. The special case f ( x ) = e x {\displaystyle f(x)=e^{x}} , g ( x ) = ∑ n ≥ 1 1 n ! 
a n x n {\displaystyle g(x)=\sum _{n\geq 1}{\frac {1}{n!}}a_{n}x^{n}} gives the exponential formula. The special case f ( x ) = 1 / ( 1 − x ) {\displaystyle f(x)=1/(1-x)} , g ( x ) = ∑ n ≥ 1 ( − a n ) x n {\displaystyle g(x)=\sum _{n\geq 1}(-a_{n})x^{n}} gives an expression for the reciprocal of the formal power series ∑ n ≥ 0 a n x n {\displaystyle \sum _{n\geq 0}a_{n}x^{n}} in the case a 0 = 1 {\displaystyle a_{0}=1} . Stanley gives a version for exponential power series. In the formal power series f ( x ) = ∑ n a n n ! x n , {\displaystyle f(x)=\sum _{n}{\frac {a_{n}}{n!}}x^{n},} we have the n {\displaystyle n} th derivative at 0: f ( n ) ( 0 ) = a n . {\displaystyle f^{(n)}(0)=a_{n}.} This should not be construed as the value of a function, since these series are purely formal; there is no such thing as convergence or divergence in this context. If g ( x ) = ∑ n = 0 ∞ b n n ! x n {\displaystyle g(x)=\sum _{n=0}^{\infty }{\frac {b_{n}}{n!}}x^{n}} and f ( x ) = ∑ n = 1 ∞ a n n ! x n {\displaystyle f(x)=\sum _{n=1}^{\infty }{\frac {a_{n}}{n!}}x^{n}} and g ( f ( x ) ) = h ( x ) = ∑ n = 0 ∞ c n n ! x n , {\displaystyle g(f(x))=h(x)=\sum _{n=0}^{\infty }{\frac {c_{n}}{n!}}x^{n},} then the coefficient c n {\displaystyle c_{n}} (which would be the n {\displaystyle n} th derivative of h {\displaystyle h} evaluated at 0 if we were dealing with convergent series rather than formal power series) is given by c n = ∑ π = { B 1 , … , B k } a | B 1 | ⋯ a | B k | b k {\displaystyle c_{n}=\sum _{\pi =\left\{B_{1},\ldots ,B_{k}\right\}}a_{\left|B_{1}\right|}\cdots a_{\left|B_{k}\right|}b_{k}} where π {\displaystyle \pi } runs through the set of all partitions of the set { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} and B 1 , … , B k {\displaystyle B_{1},\ldots ,B_{k}} are the blocks of the partition π {\displaystyle \pi } , and | B j | {\displaystyle |B_{j}|} is the number of members of the j {\displaystyle j} th block, for j = 1 , … , k {\displaystyle j=1,\ldots ,k} . 
This version of the formula is particularly well suited to the purposes of combinatorics. We can also write with respect to the notation above g ( f ( x ) ) = b 0 + ∑ n = 1 ∞ ∑ k = 1 n b k B n , k ( a 1 , … , a n − k + 1 ) n ! x n , {\displaystyle g(f(x))=b_{0}+\sum _{n=1}^{\infty }{\frac {\sum _{k=1}^{n}b_{k}B_{n,k}(a_{1},\ldots ,a_{n-k+1})}{n!}}x^{n},} where B n , k ( a 1 , … , a n − k + 1 ) {\displaystyle B_{n,k}(a_{1},\ldots ,a_{n-k+1})} are Bell polynomials. === A special case === If f ( x ) = e x {\displaystyle f(x)=e^{x}} , then all of the derivatives of f {\displaystyle f} are the same and are a factor common to every term: d n d x n e g ( x ) = e g ( x ) B n ( g ′ ( x ) , g ″ ( x ) , … , g ( n ) ( x ) ) , {\displaystyle {d^{n} \over dx^{n}}e^{g(x)}=e^{g(x)}B_{n}\left(g'(x),g''(x),\dots ,g^{(n)}(x)\right),} where B n ( x ) {\displaystyle B_{n}(x)} is the nth complete exponential Bell polynomial. In case g ( x ) {\displaystyle g(x)} is a cumulant-generating function, then f ( g ( x ) ) {\displaystyle f(g(x))} is a moment-generating function, and the polynomial in various derivatives of g {\displaystyle g} is the polynomial that expresses the moments as functions of the cumulants. 
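Assuming SymPy is available, this special case can be spot-checked for small n by assembling the complete exponential Bell polynomial from SymPy's incomplete Bell polynomials B_{n,k} (the helper `complete_bell` is our own naming, not a SymPy API) and comparing against direct differentiation:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.sin(x) + x**2   # any smooth test function will do here

def complete_bell(n, vals):
    """Complete exponential Bell polynomial B_n evaluated at vals = [g', ..., g^(n)],
    built as the sum over k of SymPy's incomplete Bell polynomials B_{n,k}."""
    d = sp.symbols(f'd1:{n + 1}')  # placeholder symbols d1, ..., dn
    B = sum(sp.bell(n, k, d[:n - k + 1]) for k in range(1, n + 1))
    return B.subs(dict(zip(d, vals)))

# Check d^n/dx^n e^{g(x)} = e^{g(x)} * B_n(g'(x), ..., g^(n)(x)) for n = 1..5.
for n in range(1, 6):
    derivs = [sp.diff(g, x, k) for k in range(1, n + 1)]
    lhs = sp.diff(sp.exp(g), x, n)
    rhs = sp.exp(g) * complete_bell(n, derivs)
    assert sp.simplify(lhs - rhs) == 0
```

For n = 2, for instance, the helper returns B_2(g', g'') = g'' + (g')^2, recovering the familiar second derivative of e^{g(x)}.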
== See also == Chain rule – For derivatives of composed functions Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function Differentiation rules – Rules for computing derivatives of functions General Leibniz rule – Generalization of the product rule in calculus Inverse functions and differentiation – Formula for the derivative of an inverse function Linearity of differentiation – Calculus property Product rule – Formula for the derivative of a product Table of derivatives – Rules for computing derivatives of functions Vector calculus identities – Mathematical identities == Notes == == References == === Historical surveys and essays === Brigaglia, Aldo (2004), "L'Opera Matematica", in Giacardi, Livia (ed.), Francesco Faà di Bruno. Ricerca scientifica insegnamento e divulgazione, Studi e fonti per la storia dell'Università di Torino (in Italian), vol. XII, Torino: Deputazione Subalpina di Storia Patria, pp. 111–172. "The mathematical work" is an essay on the mathematical activity, describing both the research and teaching activity of Francesco Faà di Bruno. Craik, Alex D. D. (February 2005), "Prehistory of Faà di Bruno's Formula", American Mathematical Monthly, 112 (2): 217–234, doi:10.2307/30037410, JSTOR 30037410, MR 2121322, Zbl 1088.01008. Johnson, Warren P. (March 2002), "The Curious History of Faà di Bruno's Formula" (PDF), American Mathematical Monthly, 109 (3): 217–234, CiteSeerX 10.1.1.109.4135, doi:10.2307/2695352, JSTOR 2695352, MR 1903577, Zbl 1024.01010. === Research works === Arbogast, L. F. A. (1800), Du calcul des derivations [On the calculus of derivatives] (in French), Strasbourg: Levrault, pp. xxiii+404, Entirely freely available from Google books. Faà di Bruno, F. 
(1855), "Sullo sviluppo delle funzioni" [On the development of the functions], Annali di Scienze Matematiche e Fisiche (in Italian), 6: 479–480, LCCN 06036680. Entirely freely available from Google books. A well-known paper where Francesco Faà di Bruno presents the two versions of the formula that now bears his name, published in the journal founded by Barnaba Tortolini. Faà di Bruno, F. (1857), "Note sur une nouvelle formule de calcul differentiel" [On a new formula of differential calculus], The Quarterly Journal of Pure and Applied Mathematics (in French), 1: 359–360. Entirely freely available from Google books. Faà di Bruno, Francesco (1859), Théorie générale de l'élimination [General elimination theory] (in French), Paris: Leiber et Faraguet, pp. x+224. Entirely freely available from Google books. Flanders, Harley (2001) "From Ford to Faa", American Mathematical Monthly 108(6): 558–61 doi:10.2307/2695713 Fraenkel, L. E. (1978), "Formulae for high derivatives of composite functions", Mathematical Proceedings of the Cambridge Philosophical Society, 83 (2): 159–165, Bibcode:1978MPCPS..83..159F, doi:10.1017/S0305004100054402, MR 0486377, S2CID 121007038, Zbl 0388.46032. Krantz, Steven G.; Parks, Harold R. (2002), A Primer of Real Analytic Functions, Birkhäuser Advanced Texts - Basler Lehrbücher (Second ed.), Boston: Birkhäuser Verlag, pp. xiv+205, ISBN 978-0-8176-4264-8, MR 1916029, Zbl 1015.26030 Porteous, Ian R. (2001), "Paragraph 4.3: Faà di Bruno's formula", Geometric Differentiation (Second ed.), Cambridge: Cambridge University Press, pp. 83–85, ISBN 978-0-521-00264-6, MR 1871900, Zbl 1013.53001. T. A., (Tiburce Abadie, J. F. C.) (1850), "Sur la différentiation des fonctions de fonctions" [On the derivation of functions], Nouvelles annales de mathématiques, journal des candidats aux écoles polytechnique et normale, Série 1 (in French), 9: 119–125, available at NUMDAM. 
This paper, according to Johnson (2002, p. 228) is one of the precursors of Faà di Bruno 1855: note that the author signs only as "T.A.", and the attribution to J. F. C. Tiburce Abadie is due again to Johnson. A., (Tiburce Abadie, J. F. C.) (1852), "Sur la différentiation des fonctions de fonctions. Séries de Burmann, de Lagrange, de Wronski" [On the derivation of functions. Burmann, Lagrange and Wronski series.], Nouvelles annales de mathématiques, journal des candidats aux écoles polytechnique et normale, Série 1 (in French), 11: 376–383, available at NUMDAM. This paper, according to Johnson (2002, p. 228) is one of the precursors of Faà di Bruno 1855: note that the author signs only as "A.", and the attribution to J. F. C. Tiburce Abadie is due again to Johnson. == External links == Weisstein, Eric W. "Faa di Bruno's Formula". MathWorld.
Wikipedia:Federico Rodriguez Hertz#0
Federico Rodríguez Hertz (born December 14, 1973) is an Argentine mathematician working in the United States. He is the Anatole Katok Chair professor of mathematics at Penn State University. Rodriguez Hertz studies dynamical systems and ergodic theory, which describe chaotic behavior over long time scales and have many applications in statistical mechanics, number theory, and geometry. == Early life and education == He is the son of Mariana Frugoni and Adolfo Rodriguez Hertz. He has four siblings, including Jana, a mathematician, teacher and researcher. Rodriguez Hertz studied at the Universidad Nacional de Rosario in Argentina as an undergraduate student. He moved to Montevideo, Uruguay in 1995, and, unable to continue studies there, he moved in 1996 to Rio de Janeiro, where he studied at the graduate school of IMPA in Brazil and earned his doctorate there in 2001 (with a thesis on "Stable Ergodicity of Toral Automorphisms" under Jacob Palis). His doctoral thesis was published in the Annals of Mathematics and made a breakthrough in this field. After he received his Ph.D., he moved back to Uruguay, where he worked at the National University until moving to Penn State. == Work and research == Rodriguez Hertz has published research papers in journals including Annals of Mathematics, Acta Mathematica, Journal of the American Mathematical Society, Inventiones Mathematicae, Contemporary Mathematics, and Journal of Modern Dynamics. The first important contribution of Federico Rodriguez Hertz is his thesis dealing with stable ergodicity, which established the tools for proving stable ergodicity of non-accessible systems. Then, jointly with Jana Rodriguez Hertz, Ali Tahzibi and Raul Ures, Federico Rodriguez Hertz proved a series of deep results about the geometry of Hopf brushes. It has been commented that his work "allows one to bring to the spotlight new powerful tools of rigidity theory, in particular topological and geometric methods". 
Later, Rodriguez Hertz researched rigidity theory, which describes the flexibility and motion of sets of rigid bodies. His work in nonuniform-measure rigidity has advanced ergodic theory. A recent work of Aaron Brown and Federico Rodriguez Hertz provides a significant generalization of nonuniform-measure rigidity theory. Another very important work of Rodriguez Hertz is on global rigidity of Anosov actions, in joint work with Zhiren Wang and with Aaron Brown and Zhiren Wang, which has been seen as "the crowning achievements in the work on global rigidity of Anosov actions on tori and nilmanifolds". Federico Rodriguez Hertz is an editor of the Journal of Modern Dynamics. He has served as a referee for many distinguished peer-reviewed journals, including Annals of Mathematics and Inventiones Mathematicae, and as an evaluator for the Fondo Nacional de Desarrollo Cientifico y Tecnologico, a research foundation promoting science and technology in Chile. He has given invited talks at conferences, workshops and academic institutions in the United States, Canada, Argentina, Brazil, Uruguay, China, Chile, Poland, Mexico, Germany, Italy, France, Portugal and India. == Teaching == Rodriguez Hertz was a professor of mathematics in the engineering school at the Universidad de la República in Uruguay from 2002. He joined Penn State's Eberly College of Science Department of Mathematics in 2011 as a professor, and has held the Anatole Katok professorship since 2019. == Honors and recognition == In 2005, Rodriguez Hertz received the Premio Roberto Caldeyro Barcia Award from Uruguay's Basic Science Development Program. In 2009, he received an award from the Mathematical Union for Latin America and the Caribbean. In 2010 he was an invited speaker at the International Congress of Mathematicians in Hyderabad, India. In 2015 he received the Brin Prize in Dynamical Systems. 
In 2017, Rodriguez Hertz was selected to receive the Penn State Faculty Scholar Medal for Outstanding Achievement in the Physical Sciences. This award was established in 1980 to "recognize scholarly or creative excellence represented by a single contribution or a series of contributions around a coherent theme". == References ==
Wikipedia:Federico Villarreal#0
Federico Villarreal National University (Spanish: Universidad Nacional Federico Villarreal, UNFV) is a public university located in Lima, Peru. It was named in honor of the Peruvian mathematician Federico Villarreal. == History == It first functioned as a branch of the Community University of the Center - Universidad Comunal del Centro (UCC) based in Huancayo. At its founding, the Peruvian geographer, philosopher, historian and politician Javier Pulgar Vidal was commissioned to manage the university. The Lima branch of the UCC began its activities in a rented house, located at 262 Moquegua Street. The entrance exams were set for the month of August 1960 and classes began on 16 September of the same year. In 1961, the Community University of the Center was nationalized as the National University of the Center of Peru. Due to the emergence of disagreements with the central headquarters in Junin, Víctor Raúl Haya de la Torre promoted the independence of the Lima branch, which declared its autonomy in January 1963. The Federico Villarreal National University was created by Order Nº 14692 on 30 October 1963. The law to create the university was presented by the APRA parliamentary bench, expounded and defended by Luis Alberto Sánchez, and promulgated by Fernando Belaunde Terry. == Organization == The UNFV is organized into 18 faculties: Administration Economics Health sciences - Hipolito Unanue (located near the Hipolito Unanue National Hospital in El Agustino) Laws and political sciences Education Humanities Civil engineering Industrial and systems engineering Geographical, Environmental and Ecotourism Engineering Oceanography, Fisheries, Food Sciences and Aquaculture Electronic and Computer Engineering Natural Sciences and Mathematics Odontology Medical technology Psychology Agricultural sciences Engineering sciences Architecture Accounting Social sciences Together they offer 60 bachelor's programs, 52 master's programs, and 13 doctorates. 
== Rankings == Federico Villarreal National University is one of the best public universities in Peru. In 2021, the Webometrics Ranking of World Universities of the Spanish National Research Council (CSIC) ranked Federico Villarreal National University 27th in the country. == Notable alumni == Laura Bozzo (TV talk show presenter and lawyer) Mercedes Cabanillas (educator and politician) César Hildebrandt (journalist) José Luis Pérez-Albela (doctor-writer, former athlete and lecturer) Alejandro Aguinaga (administrator, surgeon and politician) Arturo Cavero Velásquez (singer of Creole music) José Antonio Chang (industrial engineer, rector and politician) Teófilo Cubillas (soccer player, accountant) Liliana La Rosa (nurse, university professor and former minister) Luis Nava Guibert (lawyer and politician) Julián Pérez Huarancca (novelist and short story writer) Nidia Vílchez (public accountant and politician) César Villanueva (administrator and politician) José Watanabe (poet) Juan Sheput (industrial engineer, politician and university professor) Zulema Tomás (doctor, politician and former health minister) == Cooperations == University of Salamanca Complutense University of Madrid Harvard University - Laspau Virginia International University National University of Colombia Autonomous University of Asunción Technical University of Machala Municipal University of Sao Caetano do Sul University of Buenos Aires University of Seville University of La Laguna University of Atlántico As part of its internationalization efforts, UNFV joined the Compostela Group of Universities in 2017. == References == == External links == Official site Comunidad UNFV
Wikipedia:Fedor Zaytsev#0
Fedor Zaytsev (Russian: Фёдор Серге́евич За́йцев) (born 1963) is a Russian mathematician, Dr.Sc., and a professor at the Faculty of Computer Science at Moscow State University. He defended the thesis «Mathematical modeling of kinetic processes with Coulomb interaction in toroidal plasma» for the degree of Doctor of Physical and Mathematical Sciences (1997). He is the author of 7 books and more than 150 scientific articles. == References == == Bibliography == Grigoriev, Evgeny (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory. Moscow: Publishing house of Moscow University. pp. 197–198. ISBN 978-5-211-05838-5. == External links == Annals of the Moscow University (in Russian) MSU CMC Archived 2018-05-22 at the Wayback Machine (in Russian) Scientific works of Fedor Zaytsev Scientific works of Fedor Zaytsev (in English)
Wikipedia:Fekete problem#0
In mathematics, the Fekete problem is, given a natural number N and a real s ≥ 0, to find the points x1,...,xN on the 2-sphere for which the s-energy, defined by ∑ 1 ≤ i < j ≤ N ‖ x i − x j ‖ − s {\displaystyle \sum _{1\leq i<j\leq N}\|x_{i}-x_{j}\|^{-s}} for s > 0 and by ∑ 1 ≤ i < j ≤ N log ⁡ ‖ x i − x j ‖ − 1 {\displaystyle \sum _{1\leq i<j\leq N}\log \|x_{i}-x_{j}\|^{-1}} for s = 0, is minimal. For s > 0, such points are called s-Fekete points, and for s = 0, logarithmic Fekete points (see Saff & Kuijlaars (1997)). More generally, one can consider the same problem on the d-dimensional sphere, or on a Riemannian manifold (in which case ||xi −xj|| is replaced with the Riemannian distance between xi and xj). The problem originated in the paper by Michael Fekete (1923) who considered the one-dimensional, s = 0 case, answering a question of Issai Schur. An algorithmic version of the Fekete problem is number 7 on the list of problems discussed by Smale (1998). == References == Bendito, E.; Carmona, A.; Encinas, A. M.; Gesto, J. M.; Gómez, A.; Mouriño, C.; Sánchez, M. T. (2009), "Computational cost of the Fekete problem. I. The forces method on the 2-sphere", Journal of Computational Physics, 228 (9): 3288–3306, Bibcode:2009JCoPh.228.3288B, doi:10.1016/j.jcp.2009.01.021, ISSN 0021-9991, MR 2513833 Fekete, M. (1923), "Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten", Mathematische Zeitschrift, 17 (1): 228–249, doi:10.1007/BF01504345, ISSN 0025-5874, MR 1544613, S2CID 186223729 Saff, E. B.; Kuijlaars, A. B. J. (1997). "Distributing many points on a sphere". Math. Intelligencer. 19 (1): 5–11. doi:10.1007/BF03024331. MR 1439152. S2CID 122562170. Smale, Stephen (1998), "Mathematical problems for the next century", The Mathematical Intelligencer, 20 (2): 7–15, doi:10.1007/BF03025291, ISSN 0343-6993, MR 1631413, S2CID 1331144
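As a concrete illustration of the energy being minimized (a standalone sketch; the helper name `s_energy` is our own), the s-energy is easy to evaluate for small configurations such as a pair of antipodal points, or the vertices of a regular tetrahedron, the known optimal configuration for N = 4 on the 2-sphere:

```python
import math

def s_energy(points, s):
    """Pairwise s-energy of a point configuration in R^3.
    s > 0 uses ||xi - xj||^(-s); s = 0 uses the logarithmic energy log(1/||xi - xj||)."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            total += math.log(1.0 / d) if s == 0 else d ** (-s)
    return total

# Two points on the unit sphere: the energy decreases as they move apart,
# so the minimum is attained at antipodal points (distance 2), giving 2^(-s).
north, south = (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)
assert abs(s_energy([north, south], s=1) - 0.5) < 1e-12

# Regular tetrahedron inscribed in the unit sphere: 6 pairs, each at
# distance sqrt(8/3), so the s = 1 energy is 6 * sqrt(3/8).
a = 1.0 / math.sqrt(3.0)
tetra = [(a, a, a), (a, -a, -a), (-a, a, -a), (-a, -a, a)]
assert abs(s_energy(tetra, s=1) - 6.0 * math.sqrt(3.0 / 8.0)) < 1e-12
```

The algorithmic difficulty in Smale's problem is not evaluating this energy but locating (certified approximations of) its global minimizers as N grows.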
Wikipedia:Felix Frankl#0
Felix Issidorowitsch Frankl (12 March 1905, Vienna – 7 April 1961, Nalchik; Russian: Феликс Исидорович Франкль) was an Austrian mathematician who settled in the Soviet Union, where he had an academic career as a university professor. He studied topology at the Faculty of Mathematics of the University of Vienna under Hans Hahn, gaining his doctorate in 1927. Frankl joined the Austrian Communist Party in 1928 and (with the assistance of Pavel Aleksandrov) emigrated to the Soviet Union in 1929. There he initially collaborated with Lev Pontryagin in topology (they co-authored a paper published in 1930 in the Mathematische Annalen). His interests then shifted to certain partial differential equations which are important for high-speed aerodynamics. These equations, of mixed elliptic-hyperbolic type, describe the transonic regime in which a flow passes between subsonic and supersonic speeds. He attended the First International Topological Conference held in Moscow in 1935. In 1950 he was expelled from the communist party and exiled to Bishkek. In 1957 he was awarded the Leonhard Euler Gold Medal of the Russian Academy of Sciences. He died in 1961 in Nalchik. == References ==