Wikipedia:Sunčica Čanić#0
|
Sunčica Čanić is a Croatian-American mathematician, the Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Mathematics and Director of the Center for Mathematical Biosciences at the University of Houston, and Professor of Mathematics at the University of California, Berkeley. She is known for her work in mathematically modeling the human cardiovascular system and medical devices for it. == Education and career == Čanić earned bachelor's and master's degrees in mathematics in 1984 and 1986 from the University of Zagreb. She completed her Ph.D. in 1992 in applied mathematics from Stony Brook University, under the joint supervision of Bradley J. Plohr and James Glimm. She became an assistant professor at Iowa State University in 1992, and moved to the University of Houston in 1998. She became the Cullen Distinguished Professor in 2008, and Professor of Mathematics at U.C. Berkeley in 2018. She has most recently taught the undergraduate multivariable and vector calculus course at UC Berkeley. She is also a member of the board of governors of the Institute for Mathematics and its Applications. == Contributions == Čanić's research has involved the computational simulation of the stents used to treat arterial clogging. By finding ways of simplifying computer models of stents from hundreds of thousands of nodes to only 400 nodes, she was able to make these simulations much more efficient, and used them to design improved stents that reduce clotting and scar formation. She has also led the development of a procedure for heart valve replacement surgery that is less traumatic than open-heart surgery. == Recognition == In 2014 she was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to the modeling and analysis of partial differential equations motivated by applications in the life sciences." 
She was elected as a Fellow of the American Mathematical Society in the 2020 Class, for "contributions to partial differential equations, and for mathematical modeling of fluid-structure interactions that has influenced the design of medical devices". She was honored as the 2024 AWM-SIAM Sonia Kovalevsky Lecturer. == References == == External links == Official website
|
Wikipedia:Superslow process#0
|
Superslow processes are processes in which values change so little that detecting the change is very difficult, because it is small in comparison with the measurement error. == Applications == Superslow processes usually lie beyond the scope of investigation precisely because of their slowness; examples can be found in biology, astronomy, physics, mechanics, economics, linguistics, ecology, gerontology, and other fields. Biology: Traditional scientific research in this area has focused on describing certain brain reactions. Mathematics: When a fluid flows through long, thin tubes, it forms stagnation zones where the flow becomes almost immobile. If the ratio of the tube's length to its diameter is large, then the potential function and the stream function are almost constant over very extended regions. The situation may seem uninteresting, but once we recall that these minor changes accumulate over extremely long intervals, a series of first-class problems emerges that requires the development of special mathematical methods. Mathematics: A priori information about stagnation zones helps optimize computation by replacing the unknown functions with the corresponding constants in such zones. Sometimes this makes it possible to reduce the amount of computation significantly, for example in the approximate calculation of conformal mappings of strongly elongated rectangles. Economic geography: The results are particularly useful for applications in economic geography. When the function describes the intensity of commodity trade, a theorem about its stagnation zones gives (under appropriate restrictions on the chosen model) estimates of the geometric dimensions of the stagnation zone of the world-economy (for more information about a stagnation zone of the world-economy, see Fernand Braudel, Les Jeux de L'echange). 
For example, if a subarc of the domain boundary has zero transparency, and the flux of the gradient vector field of the function through the rest of the boundary is small enough, then the domain is a stagnation zone for such a function. Stagnation-zone theorems are closely related to pre-Liouville theorems on estimating the oscillation of solutions, whose direct consequences are various versions of the classical Liouville theorem stating that an entire doubly periodic function must be identically constant. Identifying which parameters affect the sizes of stagnation zones opens up opportunities for practical recommendations on targeted changes to the configuration (reduction or enlargement) of such zones. == References ==
|
Wikipedia:Suren Arakelov#0
|
Suren Yurievich Arakelov (Russian: Суре́н Ю́рьевич Араке́лов, Armenian: Սուրեն Յուրիի Առաքելով) (born October 16, 1947 in Kharkiv) is a Soviet mathematician of Armenian descent known for developing Arakelov theory. == Biography == From 1965 onwards Arakelov attended the Mathematics department of Moscow State University, where he graduated in 1971. In 1974, Arakelov received his candidate of sciences degree from the Steklov Institute in Moscow, under the supervision of Igor Shafarevich. He then worked as a junior researcher at the Gubkin Russian State University of Oil and Gas in Moscow until 1979. He protested against the arrest of Alexander Solzhenitsyn, and was himself arrested and committed to a mental hospital. He subsequently stopped his research activity to pursue other life goals. As of 2014 he lives in Moscow with his wife and children. == Arakelov theory == Arakelov theory was exploited by Paul Vojta to give a new proof of the Mordell conjecture and by Gerd Faltings in his proof of Lang's generalization of the Mordell conjecture. == Publications == S. J. Arakelov (1971). "Families of algebraic curves with fixed degeneracies". Mathematics of the USSR-Izvestiya. 5 (6): 1277–1302. doi:10.1070/IM1971v005n06ABEH001235. S. J. Arakelov (1974). "Intersection theory of divisors on an arithmetic surface". Mathematics of the USSR-Izvestiya. 8 (6): 1167–1180. doi:10.1070/IM1974v008n06ABEH002141. Arakelov, S. J. (1975). "Theory of intersections on an arithmetic surface". Proc. Internat. Congr. Mathematicians. 1. Vancouver: Amer. Math. Soc.: 405–408. == References == == External links == Serge Lang (1988). Introduction to Arakelov Theory. Springer. ISBN 0387967931.
|
Wikipedia:Suresh Venapally#0
|
Suresh Venepally (Telugu: సురేశ్ వేనెపల్లి; born 1966) is an Indian mathematician known for his research work in algebra. He is a professor at Emory University. == Background == Suresh was born in Vangoor, Telangana, India and studied in ZPHS at Vangoor up to 9th standard. He did his M.Sc. at the University of Hyderabad. He joined the Tata Institute of Fundamental Research (TIFR) in 1989 and received his PhD in 1994 under the guidance of Raman Parimala. He later joined the faculty at the University of Hyderabad. == Honors == Shanti Swarup Bhatnagar Award for Mathematical Sciences in 2009 Invited speaker at the International Congress of Mathematicians held at Hyderabad, India in 2010 Fellow of the Indian Academy of Sciences Andhra Pradesh Scientist Award, 2008 B. M. Birla Science prize, 2004 INSA Medal for Young Scientists, 1997 == Selected publications == 1995: "Zero-cycles on quadric fibrations: finiteness theorems and the cycle map", Invent. Math. 122, 83–117 (with Raman Parimala) doi:10.1007/BF01231440 1998: "Isotropy of quadratic forms over function fields in one variable over p-adic fields", Publ. de I.H.E.S. 88, 129–150 (with Raman Parimala) doi:10.1007/BF02701768 MR1733328 2001: "Hermitian analogue of a theorem of Springer", J. Alg. 243(2), 780–789 (with Raman Parimala and Ramaiyengar Sridharan) doi:10.1006/JABR.2001.8830 2010: "Bounding the symbol length in the Galois cohomology of function field of p-adic curves", Comment. Math. Helv. 85(2), 337–346 doi:10.4171/CMH/198, "The u-invariant of the function fields of p-adic curves", Ann. Math. 172(2), 1391–1405 (with Raman Parimala) doi:10.4007/ANNALS.2010.172.1391 MR2680494 arXiv:0708.3128 == References == == External links == Emory University faculty web page Archived 15 September 2014 at the Wayback Machine University of Hyderabad faculty web page Suresh Venapally at the Mathematics Genealogy Project
|
Wikipedia:Susan Assmann#0
|
Susan Fera Assmann (June 26, 1956 – May 30, 2020) was an American mathematician and statistician who published highly cited research on subgroup analysis and on the use of spironolactone for treating heart failure. == Early life, education, and career == Assmann was originally from Princeton, New Jersey, where she was born on June 26, 1956. Her father, Frederick Fera Assmann (1915–2004), was a chemical engineer for the US Army and Thiokol Chemical Corporation; her mother, Mary Assmann (died 2010), was a science teacher at The Pennington School. In her doctoral dissertation, Assmann wrote that her interest in mathematics "was sparked by the 'interesting test' which constituted part of the application for entrance to the Hampshire College Summer Studies in Mathematics program", a summer program for high school mathematics students. She was the 1974 valedictorian at Hopewell Valley Central High School in Pennington, New Jersey, and a 1978 summa cum laude graduate of Dartmouth College. She completed a Ph.D. in mathematics in 1983 at the Massachusetts Institute of Technology, with the dissertation Problems in Discrete and Applied Mathematics supervised by Daniel Kleitman. Through her joint publications with Kleitman on problems including the bin covering problem, she has Erdős number 2. After continuing in academia as a mathematics professor at the University of Massachusetts Lowell and Regis College (Massachusetts), Assmann came to work for the New England Research Institute (later known as HealthCore), where she continued as a principal statistician for nearly 26 years. Supporting the corresponding shift in her research interests, she received a master's degree in biostatistics from the School of Public Health & Health Sciences at the University of Massachusetts Amherst in 1994. == Personal life == Assmann married Jeffrey Del Papa, a private equity manager. Her interests included change ringing and early harpsichord music. Assmann died of cancer on May 30, 2020. 
== Selected publications == Assmann, S. F.; Peck, G. W.; Sysło, M. M.; Zak, J. (1981), "The bandwidth of caterpillars with hairs of length 1 and 2", SIAM Journal on Algebraic and Discrete Methods, 2 (4): 387–393, doi:10.1137/0602041, MR 0634362 Assmann, S. F.; Johnson, D. S.; Kleitman, D. J.; Leung, J.Y.-T. (December 1984), "On a dual version of the one-dimensional bin packing problem", Journal of Algorithms, 5 (4): 502–525, doi:10.1016/0196-6774(84)90004-x Assmann, Susan F.; Hosmer, David W.; Lemeshow, Stanley; Mundt, Kenneth A. (May 1996), "Confidence intervals for measures of interaction", Epidemiology, 7 (3): 286–290, doi:10.1097/00001648-199605000-00012, JSTOR 3702864, PMID 8728443 Assmann, Susan F.; Pocock, Stuart J.; Enos, Laura E.; Kasten, Linda E. (March 2000), "Subgroup analysis and other (mis)uses of baseline data in clinical trials", The Lancet, 355 (9209): 1064–1069, doi:10.1016/s0140-6736(00)02039-0, PMID 10744093 Pitt, Bertram; Pfeffer, Marc A.; Assmann, Susan F.; Boineau, Robin; Anand, Inder S.; Claggett, Brian; Clausell, Nadine; Desai, Akshay S.; Diaz, Rafael; Fleg, Jerome L.; Gordeev, Ivan; Harty, Brian; Heitner, John F.; Kenwood, Christopher T.; Lewis, Eldrin F.; O'Meara, Eileen; Probstfield, Jeffrey L.; Shaburishvili, Tamaz; Shah, Sanjiv J.; Solomon, Scott D.; Sweitzer, Nancy K.; Yang, Song; McKinlay, Sonja M. (April 2014), "Spironolactone for heart failure with preserved ejection fraction", New England Journal of Medicine, 370 (15), Massachusetts Medical Society: 1383–1392, doi:10.1056/nejmoa1313731, PMID 24716680 == References ==
|
Wikipedia:Susanne Ditlevsen#0
|
Susanne Ditlevsen is a Danish mathematician and statistician, interested in mathematical biology, perception, dynamical systems, and statistical modeling of biological systems. She is a professor in the Department of Mathematical Sciences of the University of Copenhagen, where she heads the section of statistics and probability theory. Ditlevsen was an actor before she became a researcher. She completed her Ph.D. in 2004 at the University of Copenhagen. Her dissertation, Modeling of physiological processes by stochastic differential equations, was supervised by Michael Sørensen. In 2012, Ditlevsen became an elected member of the International Statistical Institute. In 2016, Ditlevsen was elected to the Royal Danish Academy of Sciences and Letters. In 2023, she and her brother Peter, a climate scientist, published an article predicting that the Atlantic Meridional Overturning Circulation has a 95% chance of collapsing between 2025 and 2095, with the statistical average of the predictions being 2057. If this tipping point is reached, it would have severe consequences for the world's climate, especially in northern Europe (see Effects of AMOC slowdown). == References == == External links == Home page Susanne Ditlevsen publications indexed by Google Scholar
|
Wikipedia:Suslin algebra#0
|
In mathematics, a Suslin algebra is a Boolean algebra that is complete, atomless, countably distributive, and satisfies the countable chain condition. Suslin algebras are named after Mikhail Yakovlevich Suslin. The existence of Suslin algebras is independent of the axioms of ZFC, and is equivalent to the existence of Suslin trees or Suslin lines. == See also == Andrei Suslin == References ==
|
Wikipedia:Suspension of a ring#0
|
In algebra, more specifically in algebraic K-theory, the suspension ΣR of a ring R is given by Σ(R) = C(R)/M(R), where C(R) is the ring of all infinite matrices with entries in R having only finitely many nonzero elements in each row and each column, and M(R) is its ideal of matrices having only finitely many nonzero elements. It is an analog of suspension in topology. One then has K_i(R) ≃ K_{i+1}(ΣR). == Notes == == References == C. Weibel, "The K-book: An introduction to algebraic K-theory"
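The shift isomorphism above can be iterated. A standard use of this construction (following the convention discussed in Weibel's K-book) is to define the negative K-groups by repeated suspension; a brief sketch:

```latex
% Suspension shifts algebraic K-theory by one degree:
\[
  K_i(R) \;\cong\; K_{i+1}(\Sigma R), \qquad i \ge 0.
\]
% Iterating the construction gives a definition of the negative K-groups:
\[
  K_{-n}(R) \;:=\; K_0(\Sigma^{n} R), \qquad n \ge 1,
\]
% so that, for example, K_{-1}(R) = K_0(\Sigma R) and K_{-2}(R) = K_0(\Sigma^2 R).
```

This makes the suspension the algebraic counterpart of delooping in topology: each application of Σ lowers the K-theoretic degree by one.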
|
Wikipedia:Suzan Kahramaner#0
|
Suzan Kahramaner (May 21, 1913 – February 22, 2006) was one of the first female mathematicians in Turkish academia. == Education == Kahramaner was born in Üsküdar, in Istanbul. Her mother was Müzeyyen Hanım, the daughter of Halep's district treasurer, and her father was surgeon Dr. Rifki Osman Bey. She studied at the Moda Nümune Inas primary school between 1919 and 1924. After enrolling in Notre Dame De Sion in 1924, she completed her secondary education and obtained her French bachelor's degree in 1934. In the higher education reforms of 1933, Istanbul Darülfunun, which was the only institution of higher education in the country, was modernized and renamed Istanbul University. Kahramaner began her studies in 1934 in the Mathematics-Astronomy Department of Istanbul University. In addition to its renewed curricula and evolving faculty, Istanbul University housed the scientific research of many famous German academics who had fled pre-World War II Germany. During her undergraduate studies, she took classes taught by many famous mathematicians, including Ali Yar, Kerim Erim, Richard von Mises, Hilda Geiringer and William Prager. In 1939, Kahramaner graduated from the Department of Mathematics and Astronomy at Istanbul University. She undertook research projects in the field of physics between 1939 and 1940. In 1943, she started her doctoral studies on Coefficient Problems in the Theory of Complex Functions with the advisor Kerim Erim, the first mathematician in Turkey with a doctoral degree, who had completed his doctorate at Friedrich-Alexander University in Germany under his advisor Adolf Hurwitz. Kerim Erim was also the first scientist to direct a doctoral study in mathematics in Turkey. 
Kahramaner's doctoral thesis was entitled Sur les fonctions analytiques qui prénnent la même valeur ou des valeurs donnés (ou en m points donnés). == Career == At the beginning of the 1940–1941 academic year, since teachers at the time were not appointed to Istanbul but to other cities in Turkey, she started working as an assistant teacher in Çamlıca High School for Girls and worked as a mathematics teacher there until 1943. In 1943, she became a teaching assistant for the Analysis I and Analysis II courses in the Mathematics Department of the Faculty of Science in Istanbul University. After her doctoral thesis was approved, Kahramaner continued her scientific and academic work at Istanbul University as one of the first women mathematicians in Turkey with a PhD in mathematics. She wrote the thesis Sur l'argument des fonctions univalentes for her assistant professorship and was given the title of assistant professor the same year after passing the necessary exams. In January 1957 she was sent to work with Rolf Nevanlinna at Helsinki University for a year, in order to do research on the theory of complex functions. In August of the same year she participated in the Scandinavian Congress of Mathematicians, International Colloquium on the Theory of Functions, in Helsinki, where she had the opportunity to meet famous mathematicians such as Ernst Hölder, Wilhelm Blaschke, Lars Valerian Ahlfors, Paul Montel, Olli Lehto, Mieczysław Biernacki, Alexander Gelfond, Albert Pfluger, Wilfred Kaplan, Walter Hayman and Paul Erdős. In November 1957, she went to Zurich to continue her scientific research for approximately a year at Zurich University, where Rolf Nevanlinna was lecturing. In August 1958, she attended the International Congress of Mathematicians in Edinburgh, held by the International Mathematical Union, where the Fields Medals were awarded to their recipients. She returned to Istanbul University at the end of 1958. 
In the autumn of 1959, she won a NATO scholarship, to which she had applied with a reference from Rolf Nevanlinna; with this scholarship she worked at Zurich University during 1959–1960. Afterwards, she conducted scientific research at Stanford University for a month, and continued her research the same year in September at Helsinki University. She returned to her duties at Istanbul University at the end of October 1960. She participated in the International Congress of Mathematicians held in Stockholm in August 1962, and did research at Helsinki University and Zurich University in September and October of the same year. In August 1966, she was invited to the second Rolf Nevanlinna Colloquium, and the same month she joined the International Congress of Mathematicians (ICM) in Moscow. After the congress, she carried out her studies at Helsinki University in September and October in order to complete her professorship thesis. Her professorship thesis, entitled Sur les singularites d'une application différentiable, was accepted in 1968 and she received the title of professor the same year. She conducted scientific studies at various universities in London, Paris, Zurich and Nice in 1970, and attended the International Congress of Mathematicians in Nice that year. She also contributed to the founding of the Balkan Union of Mathematicians, realized with the participation of Romania, Yugoslavia, Greece, Bulgaria and Turkey in the same year. She participated in the Balkan Union of Mathematicians meeting in Athens in 1971, and in the congress organized by the Balkan Union of Mathematicians in September 1971. In May and July 1973, O. Lehto, Menahem Max Schiffer, O. Tammi, Cevdet Kocak and H. Minc visited her and conducted scientific research with her. She joined the Seminar and International Symposium on Function Theory of the Silivri Institute of Research on Mathematics in 1976. In this symposium, Rolf Nevanlinna was awarded an honorary degree (honoris causa). 
In the same year, she was awarded the medal of the University of Jyväskylä (Finland). She joined the conference organized in Varna in 1977 by the Balkan Union of Mathematicians. In 1978, she also participated in the International Congress of Mathematicians in Helsinki and the Rolf Nevanlinna Colloquium in Joensuu. She was the head of the Department of Mathematics at Istanbul University between 1978 and 1979. Kahramaner was the PhD supervisor of Ahmet Dernek, Rıfkı Kahramaner and Yasar Polatoglu, and co-supervisor of Semin Akdogan. At the beginning of 1983, Kahramaner retired from Istanbul University after forty years in academia, having reached retirement age. During her retirement, she continued her scientific research. In August 1987, she attended the Rolf Nevanlinna Colloquium in Leningrad. == Selected publications == Kahramaner, who was proficient in English, French, German, Turkish and Arabic, wrote numerous scientific studies, some of which are: Sur les fonctions analytiques qui prénnent la meme valeur ou des valeurs données en deux points donnés (ou en m points donnés), Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 20, 1955. Ein verzerrungssatz des argumentes der schlichten funktionen, (with Nazim Terzioglu) Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 20, 1955. Über das argument der anlytischen funktionen, (with Nazim Terzioglu) Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 21, 1956. Sur le comportement d'une représentation presque-conforme dans le voisinage d'un point singulier, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 22, 1957. Sur les applications différentiables du plan complexe, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 26, 1961. Sur les coefficients des fonctions univalents, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 28, 1962. 
Modern Mathematical Methods and Models Volume I: Multicomponent Methods (A Book of Experimental Text Materials), (Translation from The Dartmouth College Writing Group; E.J. Cogan, R.L. Davis, J. G. Kemeny, R.Z. Norman, J.L. Snell and G.L. Thompson) Malloy Inc., Ann Arbor, Michigan, USA, 1958. Sur l'argument des fonctions univalentes, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 32, 1967. == Awards and honors == Kahramaner was awarded the War of Independence Sword by the Halic Rotary Club at the 75th anniversary celebration of the Turkish Republic. == Personal == Kahramaner died in Istanbul on Wednesday, February 22, 2006. Kahramaner's son H. Rifki Kahramaner and her daughter-in-law Yasemin Kahramaner are both mathematics professors. == References == Suzan Kahramaner - The Mathematics Genealogy Project 9th International Symposium on Geometric Function Theory and Applications Symposium, GFTA2013 (Dedicated to Suzan Kahramaner) Abstract Book. == External links == Photo Movie of Suzan Kahramaner on YouTube
|
Wikipedia:Suzanne Weekes#0
|
Suzanne Louise Weekes is the Chief Executive Officer of the Society for Industrial and Applied Mathematics. She is also Professor of Mathematical Sciences at Worcester Polytechnic Institute (WPI). She is a co-founder of the Mathematical Sciences Research Institute Undergraduate Program. == Education == Weekes is Caribbean-American, and was born and raised in Trinidad and Tobago. She graduated in 1989 from Indiana University Bloomington with a major in mathematics and a minor in computer science. She went on to earn an MS in applied mathematics in 1990 and a PhD in Mathematics and Scientific Computing in 1995 at the University of Michigan. == Career == Weekes is the co-director of Preparation for Industrial Careers in Mathematical Sciences, which helps faculty in the U.S. engage their students with industrial mathematics research. She is a professor of mathematical sciences at Worcester Polytechnic Institute as well as a co-founder of MSRI-UP, a research experience for undergraduates that aims to increase the participation of underrepresented groups in mathematics by providing them with research opportunities. In July 2019, she became Interim Associate Dean of Undergraduate Studies at WPI. In December 2019, she was elected to the executive committee of the Association for Women in Mathematics as an at-large member. == Awards and recognition == In 2015, Weekes received the Denise Nicoletti Trustees' Award for Service to Community. Weekes was recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree. She received the 2019 M. Gweneth Humphreys Award for mentorship from the Association for Women in Mathematics. She won one of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics from the Mathematical Association of America in 2020. She was honored as the 2022 AWM-MAA Etta Zuber Falconer Lecturer. 
Weekes was selected a Fellow of the Association for Women in Mathematics in the class of 2024 "for her consistent and outstanding support for broadening the participation of women and girls as well as others that are underrepresented in mathematics; for her award-winning teaching and mentoring; and for her vision and success in co-creating and co-directing innovative programs that have improved and diversified the mathematics community." == References ==
|
Wikipedia:Suzhou numerals#0
|
The Suzhou numerals, also known as Sūzhōu mǎzi (蘇州碼子), are a numeral system used in China before the introduction of Hindu numerals. The Suzhou numerals are also known as Soochow numerals, ma‑tzu, huāmǎ (花碼), cǎomǎ (草碼), jīngzǐmǎ (菁仔碼), fānzǐmǎ (番仔碼) and shāngmǎ (商碼). == History == The Suzhou numeral system is the only surviving variation of the rod numeral system, a positional numeral system used by the Chinese in mathematics. Suzhou numerals are a variation of the Southern Song rod numerals. Suzhou numerals were used as shorthand in number-intensive areas of commerce such as accounting and bookkeeping, while standard Chinese numerals were used in formal writing, akin to spelling out the numbers in English. Suzhou numerals were once popular in Chinese marketplaces, such as those in Hong Kong, and in Chinese restaurants in Malaysia before the 1990s, but they have gradually been supplanted by Hindu numerals. This is similar to what happened in Europe, where Roman numerals were used in ancient and medieval times for mathematics and commerce. Nowadays, the Suzhou numeral system is only used for displaying prices in Chinese markets or on traditional handwritten invoices. == Symbols == In the Suzhou numeral system, special symbols are used for digits instead of the Chinese characters. The digits of the Suzhou numerals are defined between U+3021 and U+3029 in Unicode; an additional three code points starting from U+3038 were added later. The symbols for 5 to 9 are derived from those for 0 to 4 by adding a vertical bar on top, similar to adding an upper bead, which represents a value of 5, on an abacus. The resemblance makes the Suzhou numerals intuitive to use together with the abacus as the traditional calculation tool. The numbers one, two, and three are all represented by vertical bars, which can cause confusion when they appear next to each other. Standard Chinese ideographs are often used in this situation to avoid ambiguity. 
For example, "21" is written as "〢一" instead of "〢〡", which could be confused with "3" (〣). The first character of such sequences is usually represented by the Suzhou numeral, while the second character is represented by the Chinese ideograph. == Notations == The digits are positional. The full numerical notations are written in two lines to indicate numerical value, order of magnitude, and unit of measurement. Following the rod numeral system, the digits of the Suzhou numerals are always written horizontally from left to right, just as numbers are represented on an abacus, even when used within vertically written documents. For example: The first line contains the numerical values; in this example, "〤〇〢二" stands for "4022". The second line consists of Chinese characters that represent the order of magnitude and unit of measurement of the first digit in the numerical representation. In this case "十元", which stands for "ten yuan". When put together, it is then read as "40.22 yuan". Possible characters denoting order of magnitude include: wàn (万) for myriads (As a variant of the traditional character 萬, it was used for speed of writing in Suzhou numerals even before the simplification of Chinese characters.) qiān (千) for thousands bǎi (百) for hundreds shí (十) for tens blank for ones Other possible characters denoting unit of measurement include: yuán (元) for dollar máo (毫 or 毛) for 10 cents lǐ (里) for the Chinese mile any other Chinese measurement unit Notice that the decimal point is implicit when the first digit is set at the tens position. Zero is represented by the character for zero (〇). Leading and trailing zeros are unnecessary in this system. This is very similar to modern scientific notation for floating-point numbers, where the significant digits are represented in the mantissa and the order of magnitude is specified in the exponent. Also, the unit of measurement, with the first digit indicator, is usually aligned to the middle of the "numbers" row. 
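The digit rules above (the Unicode code points, plus the convention of switching to Chinese ideographs for consecutive 1/2/3) can be sketched in Python. The function name and the simple alternation heuristic are my own illustration, not from the article's references:

```python
# Render a digit string in Suzhou numerals.
# Digits 1-9 occupy U+3021..U+3029; zero is the character 〇 (U+3007).
SUZHOU = "〇" + "".join(chr(0x3020 + d) for d in range(1, 10))  # "〇〡〢〣〤〥〦〧〨〩"
IDEOGRAPH = {1: "一", 2: "二", 3: "三"}  # horizontal (ideograph) forms for 1-3

def to_suzhou(digits: str) -> str:
    out = []
    prev_vertical = False  # was the previous digit written as a vertical-bar form?
    for ch in digits:
        d = int(ch)
        if d in IDEOGRAPH and prev_vertical:
            # Consecutive 1/2/3 would blur together as vertical bars,
            # so this one is written with the standard Chinese ideograph.
            out.append(IDEOGRAPH[d])
            prev_vertical = False
        else:
            out.append(SUZHOU[d])
            prev_vertical = d in IDEOGRAPH
    return "".join(out)

print(to_suzhou("4022"))  # 〤〇〢二, the first line of the example above
print(to_suzhou("21"))    # 〢一
```

The order of magnitude and unit ("十元" in the example) would still be written on a second line by hand; this sketch covers only the digit row.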
== Hangzhou misnomer == In the Unicode standard version 3.0, these characters are incorrectly named Hangzhou style numerals. In the Unicode standard 4.0, an erratum was added which stated: The Suzhou numerals (Chinese su1zhou1ma3zi) are special numeric forms used by traders to display the prices of goods. The use of "HANGZHOU" in the names is a misnomer. All references to "Hangzhou" in the Unicode standard have been corrected to "Suzhou" except for the character names themselves, which cannot be changed once assigned, in accordance with the Unicode Stability Policy. (This policy allows software to use the names as unique identifiers.) == See also == Unicode numerals == References ==
|
Wikipedia:Svante Janson#0
|
Carl Svante Janson (born 21 May 1955) is a Swedish mathematician. A member of the Royal Swedish Academy of Sciences since 1994, Janson has been the chaired professor of mathematics at Uppsala University since 1987. In mathematical analysis, Janson has publications in functional analysis (especially harmonic analysis) and probability theory. In mathematical statistics, Janson has made contributions to the theory of U-statistics. In combinatorics, Janson has publications in probabilistic combinatorics, particularly random graphs, and in the analysis of algorithms; in the study of random graphs, Janson introduced U-statistics and the Hoeffding decomposition. == Biography == Janson has had a long career in mathematics, having started research at a very young age. === From prodigy to docent === A child prodigy in mathematics, Janson took high-school and even university classes while in primary school. He was admitted in 1968 to the University of Gothenburg at age 12. After his 1968 matriculation at Uppsala University at age 13, Janson obtained the following degrees in mathematics: a "candidate of philosophy" (roughly an "honours" B.S. with a thesis) at age 14 (in 1970) and a doctor of philosophy at age 21–22 (in 1977); his Ph.D. was awarded on his 22nd birthday. Janson's doctoral dissertation was supervised by Lennart Carleson, who had himself received his doctoral degree when he was 22 years old. After earning his doctorate, Janson was a postdoc at the Mittag-Leffler Institute from 1978 to 1980, and thereafter worked at Uppsala University. Janson's ongoing research earned him another PhD from Uppsala University in 1984, this second doctoral degree being in mathematical statistics; the supervisor was Carl-Gustav Esseen. In 1984, Janson was hired by Stockholm University as docent (roughly associate professor in the USA). 
=== Professorships === In 1985 Janson returned to Uppsala University, where he was named the chaired professor in mathematical statistics. In 1987 Janson became the chaired professor of mathematics at Uppsala University. Traditionally in Sweden, the chaired professor has had the role of a "professor ordinarius" in a German university (roughly combining the roles of research professor and director of graduate studies at a research university in the USA). == Awards == Besides being a member of the Royal Swedish Academy of Sciences (KVA), Svante Janson is a member of the Royal Society of Sciences in Uppsala. His thesis received the 1978 Sparre Award from the KVA. He received the 1994 Swedish medal for the best young mathematical scientist, the Göran Gustafsson Prize. Janson's former doctoral student, Ola Hössjer, received the Göran Gustafsson Prize in 2009, becoming the first statistician so honored. In December 2009, Janson received the Eva & Lars Gårding prize from the Royal Physiographic Society in Lund. In 2021, Janson received the Flajolet Lecture Prize. He will deliver the Flajolet Lecture at the 2022 AofA conference. == Works by Janson == === Books === Barbour, A. D.; Holst, Lars; Janson, Svante (1992). Poisson Approximation. Oxford, UK: Oxford University Press. ISBN 0-19-852235-5. MR 1163825. Janson, Svante (1994). "Orthogonal decompositions and functional limit theorems for random graph statistics". Memoirs of the American Mathematical Society. 111 (534). Providence, Rhode Island: American Mathematical Society: vi+78. doi:10.1090/memo/0534. ISBN 0-8218-2595-X. MR 1219708. Janson, Svante (1997). Gaussian Hilbert spaces. Cambridge Tracts in Mathematics. Vol. 129. Cambridge: Cambridge University Press. pp. x, 340. ISBN 0-521-56128-0. MR 1474726. Janson, Svante; Łuczak, Tomasz; Rucinski, Andrzej (2000). Random graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: Wiley-Interscience. pp. xii+333. ISBN 0-471-17541-2. MR 1782847. 
=== Selected articles === Janson, Svante (1990). "Poisson approximation for large deviations". Random Structures and Algorithms. 1 (2): 221–229. doi:10.1002/rsa.3240010209. MR 1138428. (Janson's inequality) Janson, Svante; Knuth, Donald E.; Luczak, Tomasz; Pittel, Boris (1993). "The birth of the giant component". Random Structures and Algorithms. 4 (3): 231–358. arXiv:math/9310236. doi:10.1002/rsa.3240040303. MR 1220220. S2CID 206454812. Janson, Svante; Nowicki, Krzysztof (1991). "The asymptotic distributions of generalized U-statistics with applications to random graphs". Probability Theory and Related Fields. 90 (3): 341–375. doi:10.1007/BF01193750. MR 1133371. S2CID 120249197. == References == Svante Janson's homepage at Uppsala University. Accessed 2010-06-27. Curriculum Vitæ for Svante Janson. Accessed 2010-06-27. Mathematical works by Svante Janson, Department of Mathematics, Uppsala University. Accessed 2010-06-27. Details of seminar given by Janson on May 7th 2010 to the Microsoft Research Theory Group. Accessed 2010-06-27. Member record for Svante Janson. Swedish Academy of Sciences. Accessed 2010-06-27. == External links == Svante Janson at the Mathematics Genealogy Project "Svante Janson". Google Scholar. Retrieved 25 July 2022. Mathematical Reviews. "Svante Janson". Retrieved 2010-06-29.
Wikipedia:Sven Erlander#0
Tage Fritjof Erlander (Swedish: [ˈtɑ̂ːgɛ ɛˈɭǎnːdɛr] ; 13 June 1901 – 21 June 1985) was a Swedish politician and statesman who served as the Prime Minister of Sweden and leader of the Social Democratic Party from 1946 to 1969. Previously, he served as minister of education from 1945 to 1946, and was a member of the Riksdag from 1932 to 1973. During his premiership, Sweden developed into one of the world's most advanced welfare states, with the "Swedish Model" at the peak of its acclaim and notoriety. His uninterrupted tenure of 23 years as head of the government is the longest ever in Sweden and in any modern Western democracy. Born to a poor family in Ransäter, Erlander later studied at Lund University. He was elected to Lund's municipal council in 1930, and in 1932 he was elected as a member of the Riksdag. Becoming a member of the World War II coalition government in 1944, Erlander rose unexpectedly to the leadership upon the death of Prime Minister Per Albin Hansson in October 1946, maintaining the position of the Social Democrats as the dominant party in the country. Known for his moderation, pragmatism and self-irony, Erlander often sought approval from the liberal-conservative opposition for his policies, de facto dropping all pretences of wide-scale nationalizations whilst introducing reforms such as universal health insurance, pension additions and a growing public sector, although stopping short of raising tax levels above the average OECD levels at the time. Until the 1960s, income taxes were lower in Sweden than in the United States. For most of his time in power, Erlander ran a minority government of the Social Democrats. From 1951 to 1957, he instead ran a coalition with the Farmers' League. The Social Democrats held a majority of seats in the upper house for most of this time and this allowed Erlander to remain in power after the 1956 general election, when the right-wing parties won a majority. A snap election in 1958 then reversed this result. 
In foreign policy, he initially sought an alliance of Nordic countries, but without success, instead maintaining strict neutrality while spending heavily on the military (but ultimately rejecting nuclear capability and signing the nuclear non-proliferation treaty in 1968). Erlander's mandate coincided with the post–World War II economic expansion, in Sweden known as the record years, in which Sweden saw its economy grow to one of the ten strongest in the world, and subsequently joined the G10. In the 1968 general election, he won his seventh and most successful victory, with the Social Democrats winning an absolute majority of the popular vote and seats in the lower chamber. Erlander resigned the following year during a process of major constitutional reform, and was succeeded by his long-time protégé and friend Olof Palme. He continued to serve as a member of the Riksdag until he resigned in 1973. Afterwards, Erlander continued to speak on political matters and published his memoirs. He died in 1985. He was considered one of the most popular leaders in the world by the end of the 1960s, and one of the most popular prime ministers in the history of Sweden. == Early life and education == Tage Fritjof Erlander was born in Ransäter, Värmland County on June 13, 1901, on the top floor of the house today known as Erlandergården. His parents were Alma Erlander (née Nilsson) and Erik Gustaf Erlander. Erik Gustaf was a teacher and cantor who married Alma Nilsson in 1893. Erlander had an older brother, Janne Gustaf Erlander (born 1893), an older sister, Anna Erlander (born 1894), and a younger sister, Dagmar Erlander (born 1904). Erlander's paternal grandfather, Anders Erlandsson, worked as a smith at an ironworks, and his maternal grandfather was a farmer who held a public office in his home municipality. On his maternal grandmother's side, Erlander descended from Forest Finns, who migrated to Värmland from the Finnish province of Savonia in the 17th century. 
According to Erlander, his father was very religious, supportive of universal suffrage, pro-free market, anti-trade union, and liberal. Erlander also said that his father became increasingly anti-socialist as he aged, speculating that his father was unhappy with his son's eventual election to parliament as a member of a socialist party. The Erlander family was initially poor, although Erik Gustaf was able to make money through selling homemade furniture and exporting lingonberries to Germany. As a child, Erlander lived on the second floor of Erlandergården, and attended school on the first floor. He later attended schools in Karlstad, living in a boarding house for children of clergymen. He was reportedly a good student in high school. From 1921 to 1922, Erlander carried out his mandatory military service at a machine gun factory in Malmslätt. In September 1920, his father enrolled him at Lund University rather than Uppsala University, as he felt Lund was more affordable. As a student at Lund, Erlander was heavily involved in student politics and met many politically radical students. He was exposed to societal and economic injustices, and began to identify with socialism. Beginning in autumn of 1923, Erlander read the writings of Karl Marx. He met his future wife, fellow student Aina Andersson. They began working together in the chemistry department in 1923. He also met and studied natural sciences with fellow student and future physicist Torsten Gustafson, who would later serve as an advisor on nuclear affairs to Erlander during his premiership. In addition to his scientific studies, Erlander also read some economics, and was an active member of Wermlands Nation, where he was elected Kurator (head executive) in 1922. In 1926, he led student opposition to celebrations of the 250th anniversary of the Battle of Lund. He graduated with a degree in political science and economics in 1928. 
From 1928 to 1929 he completed his compulsory military service in the Signal Corps and eventually went on to become a reserve lieutenant. Erlander's first major job was as a member of the editorial staff of the encyclopedia Svensk uppslagsbok from 1928 to 1938. In 1930 Tage and Aina married, although in his memoirs he stated that they both opposed the institution. They spent their first few years of marriage apart, as Erlander was working in Lund while Aina was working in Karlshamn, and they only saw each other on holidays. Their first son, Sven Bertil Erlander, was born on May 25, 1934, in Halmstad, and their second son, Bo Gunnar Erlander, was born in Lund on May 16, 1937. == Early political career == === MP and state secretary === Erlander joined the Social Democratic Party in 1928, and was elected to the Lund municipal council in 1930. He was involved in improving poor city housing, lowering unemployment, and installing a new bathhouse. He served on the council until 1938. He was elected as a member of the Riksdag (parliament) in 1932, representing Fyrstadskretsen, which he would represent until 1944. He began making political connections, and attracted the attention of prominent Social Democratic politician and Minister for Social Affairs Gustav Möller. In 1938, Möller appointed Erlander as a state secretary at the Ministry of Social Affairs. After Erlander became a state secretary, he and Aina, with their children, moved to Stockholm. In 1941, Sweden's Population Commission was created under Erlander's leadership. He served as its chairman, and it put forward proposals on grants and regulations of daycare centers and play schools. As a state secretary, Erlander was one of the most senior officials responsible for the establishment of internment camps in Sweden during World War II. Various types of camps were set up, primarily to house and detain refugees and foreigners arriving in Sweden, to house interned German and Allied military personnel (e.g. 
pilots who had crashed in Sweden), and to replace the military draft for pro-Soviet Communists and others who were viewed as unreliable and hostile to Sweden's political system; instead of being stationed in the armed forces, they were conscripted to work camps organized to build infrastructure. Writing in his memoirs in the 1970s, Erlander downplayed his knowledge of the camps; according to journalist Niclas Sennerteg, Erlander knew of their existence far earlier than he claimed and was integral to their design and function. In 1942, Erlander and Möller initiated a nationwide census of the Swedish Travelers, a branch of the Romani people. === In Hansson's government === Erlander ascended to Prime Minister Per Albin Hansson's World War II coalition cabinet in 1944 as a minister without portfolio, a post he held until the next year. Following the 1944 general election, he began representing Malmöhus County. In the summer of 1945, as part of Hansson's post-war cabinet, he became minister of education and ecclesiastical affairs. It has been suggested that Erlander was chosen for the position due to his lack of experience with educational policies, as he was not associated with factional divides regarding debates over Sweden's educational system. Erlander was initially skeptical about accepting the role, but he eventually grew accustomed to it, despite not holding the office very long. Erlander largely left ecclesiastical matters to other politicians, instead focusing on tangible educational reforms. Influenced by his experiences at Lund University, he proposed larger investments in research and higher education. He was a major driving force behind successful laws providing free school lunches and textbooks. On October 29, 1945, Erlander was visited by Austro-Swedish nuclear physicist Lise Meitner to discuss Sweden investing in nuclear physics and technology following the atomic bombings of Hiroshima and Nagasaki. 
In 1946, Möller introduced a new pension proposal, a uniform one which would lift all pensioners above the poverty line; Erlander and Minister for Finance Ernst Wigforss opposed it, but it passed in the Riksdag. At the 1945 Social Democratic party conference, Per Nyström presented a motion to update Swedish schooling. The conference was split on how much schooling should be mandatory, with some arguing it should only extend to elementary school. Despite the disagreements, the conference requested the party executive create a special committee to develop school programs. The committee was divided on whether students should be separated by abilities, a practice known as streaming. It never reached a consensus, but finished a draft for a new school program requiring nine years of universal mandatory education, although it was never submitted to the party. In 1946, Erlander, as minister of education, created a second committee, the Schools Commission, despite the first one still being active. This new committee, chaired by Erlander, was composed mainly of appointed party members. By 1948, after Erlander had become prime minister, the second committee also proposed nine years of mandatory schooling, but the question of when to begin streaming was still debated. === Succeeding Hansson === Prime Minister Hansson suddenly died on October 6, 1946. Foreign Minister Östen Undén was chosen to serve as interim prime minister until a successor could be chosen. Erlander and his wife were in Lund when Hansson died, and when they returned to the Grand Hôtel, they were informed of his death by Minister of Defense Allan Vougt. On October 6, Hansson's cabinet and the Social Democrat executive committee met, and the executive committee scheduled a full board party meeting for October 9, as did the Social Democratic party caucus. Erlander first learned of his possible selection as prime minister and party leader on October 7. 
Erlander himself was reluctant and had little interest in becoming prime minister, saying he would only do so if the desire from the party was strong enough. At the October 9 meeting, the board voted 15 to 11 in favor of Erlander becoming prime minister, and the caucus voted 94 to 72 to make Erlander party leader. The choice was considered surprising and controversial, and some believed Gustav Möller, who received the 72 remaining votes, was Hansson's obvious successor, including Möller himself. The choice of Erlander has been attributed to younger party members wanting a younger generation to lead and to Erlander being viewed as a greater figure of change, as he was experienced in areas seen as important to Social Democrats, such as social and educational policies, and was able to foster cooperation between people with differing views. == Premiership and party leadership == === First government: 1946–1951 === ==== Ascension and first actions ==== After Erlander was chosen as prime minister, Hansson's cabinet all submitted their resignations, as was routine. King Gustaf V met Erlander at Drottningholm on October 13 and asked him to form a new government; Erlander asked all cabinet members to withdraw their resignations. At their meeting, Gustaf encouragingly told Erlander that times were difficult, and that a younger man serving as prime minister was best for Sweden. He also assured Erlander that "things would work out well", and that the two of them would get along, as he had initially had some disagreements with Per Albin Hansson, who was ideologically a republican. In the two years leading up to the 1948 election, Erlander visited numerous Social Democratic organizations across the country to solidify his support and explain party policies. Within his first 365 days in office, he made between seventy and eighty public appearances outside of Stockholm. 
Social Democratic newspapers began writing positively about Erlander's speaking events. Nonsocialist newspapers became more critical of Erlander, first casting him as an irrelevant figure, then as an unreliable and uninspiring tactician. These perceived attacks made Erlander more popular within the party. ==== First cabinet ==== Erlander inherited 14 ministers from Hansson. Overall, Erlander allowed his cabinet ministers a great deal of freedom, as he did not want to become overly involved in coordinating them daily, but he did monitor them. Over his premiership, Hansson's ministers slowly left the government. Minister of Commerce Gunnar Myrdal implemented policies such as selling the Soviet Union machinery on a fifteen-year credit and a 17% appreciation of the Swedish krona. The former, conceived by his predecessor, was viewed as less economically attractive due to stronger trading partners existing post-war, and the latter worsened Sweden's trade deficit. Due to the backlash, he resigned in 1947, becoming the first minister to leave Erlander's government. Erlander appointed Josef Weijne to replace him as minister of education and ecclesiastical affairs. In 1947, Karin Kock became the first woman in Sweden to hold a cabinet position when she became a minister without portfolio, and in 1948 she became minister of supply. Kock was suggested by Riksdag member Ulla Wohlin, who would serve in Erlander's third cabinet as Sweden's third female cabinet minister. Kock left the post in 1949, and the office was abolished the following year. Weijne died in office in 1951, and Erlander appointed Hildur Nygren to succeed him, making her the second woman in Sweden to become a cabinet minister. ==== Election of 1948 ==== Going into the 1948 election, Erlander's first as party leader and prime minister, many Social Democrats expected their party to lose, including Erlander's future protégé and later prime minister Ingvar Carlsson. 
The Liberal People's Party was becoming a major opposition party with their new leader, Bertil Ohlin. During the World War II coalition government, Ohlin had served as Hansson's minister of commerce, but he opposed Hansson's various social policy proposals. During Erlander's government, he generally came to support many of the Social Democrats' policies. Despite this, Erlander, still partially influenced by Ohlin's opposition to the Hansson government, harbored a strong dislike of the Liberals and their leader. In speeches and during Riksdag debates, Erlander frequently attacked the Liberals with accusations including irresponsibility, opportunism, and irreconcilability. Erlander viewed Ohlin as "stiff, self-righteous, arrogant, bossy, and lacking in principles", while Ohlin wrote in his memoirs that Erlander was "evasive, ungenerous, uncertain, quick to take offense, and a somewhat unfair debater." Their political rivalry is considered one of the most notable in modern Swedish history. Despite fears, the Social Democrats won 46.13% of the vote. In the Andra kammaren ("Second chamber" or lower house) of the Riksdag, the Social Democrats secured 112 out of 230 total seats. The Liberals came in second with 22.8% of the vote, one of their largest victories. Erlander himself had now been elected as a representative of Stockholm County, following his four years as a Malmöhus representative. Following this election, the Social Democrats remained in power, but desired to maintain a long-term majority, so they offered to form a coalition government with the Centre Party. The Centrists declined, but this had no impact on Erlander's ability to form a government on time, as the talks were public but informal. === Coalition government: 1951–1957 === ==== Socialist–Centrist cabinet ==== In 1951, Erlander formed a coalition with the Centre Party. He added four Centrists to his cabinet that year. His working relationship with the party's leader, Gunnar Hedlund, is known to have been good. 
Erlander and Hedlund, while disagreeing on some issues, shared a common desire to outmaneuver the Liberals and the Moderate Party. The voter bases of both parties are also considered to have been similar. Under the coalition, Hedlund became minister for home affairs. One of the positions that the Centrists demanded be given to one of their own was that of minister of education, which had been held by Hildur Nygren since earlier that year. Erlander did not get along with Nygren, and used the negotiations as an excuse to remove her. The coalition government was formed on October 1, 1951. ==== Election of 1952 ==== In the 1952 general election, the Social Democrats won 46%, a slight decrease from the previous election. The Centrists obtained 10.7%, which was also a decrease for them. The Liberals gained 24.4%, an increase from their previous percentage. ==== Gustaf VI Adolf and Haijby scandal ==== Erlander served under King Gustaf V for four years, and the two had a mutual respect. Gustaf, aged 88 upon Erlander's ascension to the premiership, was not very politically active. In 1950, upon the death of his father, Crown Prince Gustaf VI Adolf became king. Erlander was also on good terms with Gustaf VI, but at times disapproved of the new king's more hands-on involvement in political matters than his father's, and during Gustaf VI's time as Crown Prince, Erlander saw him as a "rather stiff individual who lacked perspective". In 1947, Kurt Haijby, who had previously been arrested multiple times on suspicion of homosexual acts, wrote a memoir about his experiences, which included his earlier claims that he had a sexual relationship with Gustaf V. The Stockholm police bought most of the stock to prevent distribution, and the government took charge of the affair. According to Erlander, Minister of the Interior Eije Mossberg opened a cabinet meeting by stating, "The King is homosexual!" to which Wigforss replied, "At his age? How vigorous!" 
One of the only copies that got out was read by Erlander. He reportedly believed the allegations. According to journalist Maria Schottenius, Erlander later told her of how he was tormented for decades by the "Haijbyskiten" (Swedish: "Haijby shit"). ==== Election of 1956 ==== In the 1956 general election, the party won 44.58%, a larger decrease than in the previous election. Erlander at one point stated that the setback was due to, among other things, "Christian anti-socialist agitation." Their coalition partners, the Centrists, garnered 9.45%. ==== Pension referendum and coalition collapse ==== Despite the ideological similarities between the Social Democrats and the Centrists, a major issue was Sweden's proposed pension system. Erlander desired a system that was mandatory for all citizens, while Hedlund wanted the pensions to be voluntary. A referendum on the issue in 1957 included three proposals for pension systems, one by the Social Democrats, another by the Centrists, and the third by the right. The Social Democrats' proposal won with 45.8% of the vote, while the right's garnered 35.3% and the Centrists' 15%. As a result of the pension referendum, the coalition dissolved that year, with the Centre Party withdrawing on October 24. Following this, the king facilitated inter-party dialogue, specifically asking about the possibility of the Social Democrats forming a coalition with the three non-socialist parties. Erlander was appointed formateur/informateur, but was very reluctant to create a four-party government. The king then designated the Liberals and Moderates as formateurs, and asked them to explore creating a non-socialist government. The Centrists stated that they were unwilling to join the other two parties in a coalition, and the plans failed. On October 29, Erlander was asked to form a minority government, to which he agreed. Erlander was thus allowed to remain prime minister and formateur, leading a minority government into the next election. 
=== Final government: 1957–1969 === ==== Third cabinet and "the boys" ==== On October 31, 1957, Erlander's all-Social Democratic government was sworn in. Nine of the ten cabinet ministries Erlander had inherited from Hansson's cabinet still existed at the end of his premiership. Three additional ministries were created, with Erlander's final cabinet having twelve ministries by 1968. Altogether, 57 people served in Erlander's cabinets. In August 1953, Erlander hired Olof Palme to serve as his personal secretary. In 1963, Palme ascended to the cabinet as a minister without portfolio. Palme became Minister of Communications in 1965, and in 1967 became Minister of Education. Beginning with Palme, Erlander began to hire a larger group of personal staff, including typists and stenographers, consisting of young Social Democrats such as Palme, Ingvar Carlsson, and Bengt K. Å. Johansson. In the 1960s, Erlander began to call his group of young aides "the boys". Erlander frequently consulted the boys on speeches he planned to make, although according to Olle Svenning, he was rarely satisfied with the speeches they wrote. ==== Election of 1958 and ATP ==== Social Democratic efforts for a universal pension system continued. In 1958, a bill was proposed that would provide uniform, state-administered pensions to all Swedes over the age of 67. Left-wing parties supported the bill, while right-wing parties opposed it. It was defeated in a vote of 117 against to 111 for. Following this loss, Erlander asked the king to temporarily dissolve the Riksdag and called for a snap election. In the ensuing election, the party won 46.2%, an increase from the 1956 election. This was the third, and, as of 2024, the last snap election in Swedish history. In the spring of 1959, the Social Democratic pension system was again being voted on in the Riksdag. In the second chamber, the vote was evenly split, 115 for and 115 against. Ture Königson, a Liberal, chose to vote in support of the Socialists' proposal. 
Königson preferred his party's pension plan, but desired a secure future for Sweden's older workers, and reasoned that the Socialists' plan was better than a permanent political stalemate. Through his vote, the smallest possible margin, the pension plan passed. The system, called Allmän tilläggspension (Swedish: "General supplementary pension") or ATP for short, was successfully implemented in 1960. ==== Election of 1960 ==== In the 1960 general election, the Social Democrats' percentage of the vote was up to 47.79%, another increase from the previous election. Erlander described the election as an "ideological breakthrough", which allowed the Social Democrats to pursue further reform. ==== Wennerström scandal ==== On June 20, 1963, Col. Stig Wennerström was arrested on his way to work, and charged with espionage. He soon admitted to spying for the Soviet Union for 15 years, and it was later estimated that he had sold around 160 Swedish defense secrets to the Soviet government. Minister of Defence Sven Andersson had been informed of suspicions against Wennerström four years earlier and had become personally suspicious of him two years earlier, as had Foreign Minister Undén. Erlander, however, had not known about the suspicions until the day Wennerström was arrested. Undén's successor, Foreign Minister Torsten Nilsson, informed him via telephone the day of the arrest while Erlander was in a restaurant in Italy on vacation with his wife, and asked him to return to Sweden immediately. Upon returning to Sweden, in response to criticism over the lack of government coordination, Erlander stated on television that, "It is impossible for the government to be informed of every person who is under suspicion. We need more proof in a democratic society before we can take action." 
It later surfaced that meetings had twice been scheduled with Erlander in 1962 to discuss Wennerström, but the first was canceled because the minister of justice was ill, and the second because Erlander's schedule was full. Opposition parties demanded a parliamentary investigation, and Bertil Ohlin led the opposition's push for the censure of Sven Andersson and Östen Undén for negligence. In 1964, after two days of debate in the Riksdag, Andersson was not found guilty of gross negligence, and Ohlin dropped plans for a vote of censure. Simultaneously, the lower chamber voted 116 to 105 to clear Undén of negligence charges. Erlander stated that he would regard votes of censure as a question of confidence in his entire cabinet, and that it was "a tragedy" that Wennerström's arrest and trial became a political issue. Also in 1964, Wennerström was found guilty on three counts of gross espionage, was stripped of his rank, and was ordered to pay the government $98,000 of the $200,000 he had been paid by the Soviet government. He was sentenced to life imprisonment. The entire arrest, trial, investigation, and scandal took up much of Erlander's energy for almost a year. ==== Election of 1964 ==== In the 1964 general election, the Social Democrats won 47.27% of the vote, a slight decrease overall from 1960, but the party now obtained a majority in the second chamber. The Social Democratic campaign slogan was "You never had it so good". The Left Party made larger gains that year, as they won 3 new seats in the second chamber (in addition to the 5 they previously won) and were the only party to increase their percentage from the previous election. ==== Traffic change ==== In a 1955 referendum, a proposal was put forward to switch Sweden from left-hand driving to right-hand driving. 
The referendum results overwhelmingly rejected the proposal, with 82.9% of voters voting no to the switch and only 15.5% in support, although the voter turnout was considered low. Despite the general lack of support, efforts continued well into the next decade. In 1963, the Riksdag voted by a majority to switch traffic to the right side, despite the public's rejection of the idea in the 1955 referendum. This sparked backlash, and in response, Erlander stated, "The referendum was only advisory, after all." Following the 1963 Riksdag vote, the project got underway. Olof Palme, now Minister for Communications (Transport), oversaw the project, which was often seen as a way to bring Sweden in line with the driving standards of most of Europe. Debates were held over the proposed change, with pro-switch politicians arguing the change would reduce the number of traffic accidents. A massive advertising campaign was carried out to shift public opinion. On September 3, 1967, an event known as Dagen H, Sweden began the drastic change, with an estimated 360,000 street signs needing to be changed overnight. The final cost was expected to exceed 800 million Swedish kronor. Initially the number of accidents went down, but the number reached pre-1967 levels by 1969. ==== Unicameral Riksdag ==== In 1954, Erlander met Prime Minister of the United Kingdom Winston Churchill, and the two discussed different electoral systems. Churchill was surprised to learn that Sweden did not have a system of majority voting in single-member constituencies. Erlander explained that the reason was that such a system would benefit the Social Democrats. Churchill replied, "A statesman must not hesitate to do the right thing, even if it benefits his own party." In March 1967, Sweden's political parties finally agreed to replace the bicameral Riksdag with a unicameral chamber that would be directly elected. 
The Första kammaren ("First chamber" or upper house) voted to abolish itself on May 17, 1968, with 117 for and 13 against. The Riksdag would fully become unicameral in 1971, after Erlander had retired from the premiership. ==== Republic of Jamtland ==== In 1963, actor Yngve Gamlin humorously declared himself president of the Republic of Jamtland, a breakaway state in Swedish territory. In 1967, Erlander invited Gamlin to Harpsund. However, when discussions did not go the way he hoped they would, Gamlin stole the plug from Erlander's boat, Harpsundsekan. ==== Election of 1968 ==== In the 1968 general election, Erlander's final election as prime minister, his party won 50.1% of the vote, the Social Democrats' largest victory under his leadership. They had also obtained a proper majority. This would be the last bicameral election in Sweden. === Popularity and public image === Erlander was initially somewhat controversial, partially because he was not considered an obvious successor to Per Albin Hansson. When he became prime minister, many Swedes did not know who he was, and he was often seen as being in Hansson's shadow politically during the early years of his premiership. He was initially both praised and criticized for having been a university graduate. Critics believed he had not risen as far as Hansson, and he had not been a traditional worker. Liberal newspapers were optimistic, as Erlander had more education and administrative experience than Hansson, which was seen as beneficial to the party. His youth also won him both praise and concern. He was seen as a figure whose youth and stronger left-leaning ideals could bring new energy to the party. However, as he was younger than several members of his cabinet, it was feared that he would be unable to maintain party unity.
Despite initial fears about party instability, throughout his premiership, Erlander became increasingly known as a unifying figure within his party, as he came to be viewed as a centrist who would sometimes utilize both left-leaning and right-leaning policies, although overall the party moved more towards the left. Erlander's nationwide support during his premiership was at its strongest in the 1960s. While making radio broadcasts, he was criticized for his "unpleasant" voice. His popularity increased as television began to play an important role in Swedish politics, as Erlander's amiable and humorous personality became more apparent. Historian Dick Harrison cites a 1962 appearance on Lennart Hyland's popular talk show Hylands hörna, where Erlander told a humorous story about a priest, as the beginning of his growing popularity among the Swedish public. His rise in popularity was also attributed to an increased emphasis on his poverty-ridden childhood and a decreased emphasis on his time at university, improving his image as a "man of the people". Erlander's debating style was controversial, and was criticized by many, including writer Stig Ahlgren. During debates, Erlander was often known to shift between serious and comical tones, and those he debated were often frustrated as they could not keep pace with him. In 1967, standard public opinion polls began in Sweden. In February, 65 percent of Social Democrats approved of his party leadership, 25 percent were unsure, and 10 percent thought his leadership was poor. In November of that same year, his approval ratings had reached 77 percent, and reached 84 percent in May 1968. After the 1968 general election, his approval within the party was 95 percent. In 1969, 54 percent of the general population polled approved of him as prime minister, while 80 percent approved of his leadership of the Social Democratic Party. Erlander garnered a number of nicknames during his tenure as prime minister.
He became known as "Sweden's longest Prime Minister", referring to both his physical stature – 192 cm (6 ft 4 in) – and his record tenure of 23 years (the Swedish word lång meaning both 'long' and 'tall'). Political cartoons often mocked Erlander by exaggerating his height. By the 1960s, he had become affectionately referred to simply as "Tage" (as opposed to Erlander, Mr. Erlander, Prime Minister Erlander, etc.) within the Social Democratic Party, similar to how Per Albin Hansson had become known more as "Per Albin". === Resignation and succession === On 1 October 1969, Erlander resigned as prime minister at the age of 68, with the Social Democrats having held an absolute majority in the second chamber since 1968. Erlander was succeeded by 42-year-old Olof Palme, who, although more radical and more controversial, had in many ways been Erlander's student and protégé, and was endorsed by Erlander. Palme was later asked when Erlander first hinted to him that he wanted him as his successor. Palme stated, "It never happened." Prior to the announcement of Palme, President of Finland Urho Kekkonen asked Erlander who his successor would be, and Erlander did not give a concrete answer. Kekkonen then asked if it would be Palme, to which Erlander responded, "Never, he is far too intelligent for a Prime Minister". == Domestic policy == === Million programme === Following World War II, Sweden increasingly developed a housing shortage in larger cities. In response, at the 1964 Social Democratic Party conference, the party adopted the Million Programme, a plan to build one million homes in the span of ten years. The proposal successfully passed through the Riksdag in 1965. The motto for the program was, "A good home for everyone." In 1966, during the early period of the project, Erlander was asked in a debate what a young couple should do if they wanted to buy an apartment and start a family in Stockholm. He answered, "stand in the housing queue."
It was intended as an honest answer, but it was unpopular, as the wait for an apartment in Stockholm was found to be ten years long, and it is said to have been the cause of Social Democratic losses in the municipal elections that year. Additionally, critics argued that the Million Programme created a form of segregation, with more recent evidence indicating that creating uniformity and separating this housing from more high-quality housing was part of the plan. In 1965, in response to this criticism, Erlander defended the program by arguing that American racial tensions and segregation did not exist in Sweden and could not be reproduced there. Erlander stated, "We Swedes live in an infinitely more fortunate situation. The population of our country is homogeneous, not only in regard to race, but also in many other aspects." Critics also argued that the new housing was somewhat ugly and visually monotonous. Despite this, the goal of 1,000,000 homes was successfully reached by 1974, with 1,006,000 homes built, which, at the time, solved most of the problem, though not all of it. The Social Democrats were eventually able to recover from the municipal losses. === Economic policy === In 1947, a special law was passed "setting up principles governing the construction and operation of homes for the aged." In 1959, Erlander's government proposed raising the previously lowered income taxes, partially to provide funding for recent welfare programs. Conservative parties opposed the proposal, and the Left Party abstained from voting in the Second Chamber, allowing the proposal to go into law. In 1962, Sweden joined the G10, being one of ten countries that agreed to provide an additional $6 billion each in funding to the International Monetary Fund. In 1964, Erlander's government proposed a new budget that would begin on July 1 of that year. The total budget would be $4.858 billion (in 1964), an increase from the previous budget by $475 million.
The expected deficit was $180 million, and to prevent it from increasing, Erlander's government proposed ending deductions of old-age pension fees from taxable income. About half the budget was expected to be spent on welfare-related benefits and programs. On average, during Erlander's premiership, Sweden's GNP increased roughly 2.5% a year. It rose 5% in 1963 and 6% in 1964. === Social policy === From 1946 onwards, an extensive system of scholarships and fellowships was provided for higher education, along with free lunches, school books and writing materials for all primary and elementary school children. In 1956, the Social Democrats sponsored a law on "social help" which further extended social services. In addition, a number of laws concerning vacations, workers' safety and working hours were introduced. Erlander coined the phrase "the strong society", describing a society with a growing public sector taking care of the growing demand on many services that an affluent society creates. The public sector, particularly its welfare state institutions, grew considerably during his tenure as Prime Minister, while nationalizations were rare. In order to maintain employment for his vast electorate and Swedish sovereignty as a non-NATO member, the armed forces were greatly expanded, reaching an impressive level by the 1960s, while nuclear capability was ultimately dropped after outcries, not least from the Social Democratic Women's League. === Nuclear weapons === The question of nuclear weapons as a means to deter a possible attack remained a divisive factor in Swedish society and among Social Democrats and prompted diplomatic agreements with the United States, guaranteeing intervention in the case of an invasion. Erlander was initially in favor of acquiring nuclear weapons as a means of defense, but received criticism for this position.
Following a 1954 report by Supreme Commander Nils Swedlund, who advocated for Sweden acquiring nuclear weapons to maintain neutrality, Erlander sought to avoid public debate on the issue so his party could develop a unified position and then collaborate with the opposition. However, the Social Democrats became split on the issue while the Moderates publicly pushed for nuclear weapons. The largest opposition within Erlander's party came from the Social Democrat Women's Organization (SSKF). The first government meeting on the issue occurred in November 1955, and the Social Democratic Party held a discussion in February 1956. Erlander had his anti-nuclear foreign minister Östen Undén discuss ongoing UN nuclear disarmament talks. Erlander also proposed delaying the decision until 1958, because, according to him, the government lacked sufficient knowledge about the technical prerequisites for nuclear weapons, and he did not want to complicate the disarmament talks by producing nuclear weapons at that time. After Undén's presentation, SSKF chair Inga Thorsson declared that her organization publicly opposed nuclear weapons, but the board ultimately followed Erlander's proposed postponement. During a March 1959 debate in the Riksdag, Erlander implied that he did not want to add to the "limited number of countries" with atomic weapons, pending the results of a nuclear summit. Sweden signed the nuclear non-proliferation treaty in 1968, dropping all pretenses of developing a nuclear weapon. However, some nuclear reactors were kept secret from the IAEA until 1994, and small teams of theoretical physicists continued researching nuclear weapons after Erlander's premiership. Some international observers speculated that Erlander and future Swedish leaders maintained interest in a hypothetical nuclear system for defense, but did not take action to develop one.
According to Erlander's memoirs, Swedish military chiefs believed in limited nuclear war, inspired by Henry Kissinger's advocacy of the policy, as it was a "defense strategy that appeared to be made for a small state's defense". == Foreign policy == === Cold War neutrality and international alliances === Under Erlander, Sweden had to navigate the challenges of the Cold War. Sweden did not officially side with either the United States or the Soviet Union, although Sweden's official position has been described as a "non-alliance", rather than "neutral", and Erlander once stated that Sweden shared an "ideological affinity with the Western democracies." Sweden's firm stance on neutrality was supported by Erlander and his foreign minister Undén, who were seen as the two leading figures of the Social Democratic Party. Erlander represented Sweden at the funerals of several foreign heads of state, such as those of United States President John F. Kennedy in 1963 and West German Chancellor Konrad Adenauer in 1967. Negotiations for a Scandinavian defence union began in 1948, with Erlander and Danish Prime Minister Hans Hedtoft being its strongest proponents. The proposal fell apart and was shelved in January 1949 due to Norwegian resistance and the country's acceptance of membership in NATO, with Denmark and Iceland following suit. In Erlander's 1952 United States tour, he stated that Sweden would not join NATO. Erlander was generally considered a pro-Western leader despite this, and wrote that America was doing Europe a great service by increasing its arms for defense against the Soviet Union. In 1961, Erlander and President John F. Kennedy advocated for the West to strengthen the United Nations and its Secretary General, Swedish politician Dag Hammarskjöld. Erlander was a strong supporter of the proposed Nordic economic community Nordek, and held meetings on the subject with Finnish President Urho Kekkonen and Prime Minister Mauno Koivisto in 1969.
=== United States and Vietnam War === In 1952, as part of his U.S. tour, Erlander visited United States President Harry S. Truman, which was the first time a Swedish Prime Minister and a U.S. president met. Erlander would later meet Dwight D. Eisenhower, John F. Kennedy, and Lyndon B. Johnson. In 1958, Sweden recognized South Vietnam. The two countries established diplomatic relations in Saigon in 1960, but Sweden did not appoint an official ambassador there. In the 1960s, Erlander and the Swedish government became critical of the Vietnam War. Despite Erlander's personal opposition to the war and the uneasy nature of U.S.-Sweden relations at that point, William Womack Heath, the U.S. ambassador to Sweden during the Lyndon B. Johnson administration, found Erlander to be "completely pro-American" from 1967 until early 1968. On February 21, 1968, Olof Palme participated in a torchlight parade through Stockholm with North Vietnam's ambassador to Moscow, Nguyễn Thọ Chân, to protest the Vietnam War, an event which soured Swedish relations with the United States and stirred controversy worldwide, and led to Heath being recalled for "consultations", with no immediate successor appointed. Moderate leader Yngve Holmberg called for Palme's resignation from the cabinet, but the demand was not met. By March 1968, Sweden had accepted 79 draft-dodgers from the United States, and Erlander, soon followed by opposition party leaders, publicly stated his opposition to the Vietnam War. === Soviet Union and Warsaw Pact === In 1950, Erlander condemned the aggression of North Korea that began the Korean War, deeming it, "a deed of violence calculated to imperil world peace". Sweden then dispatched a field hospital to South Korea. In June 1952, during the war, the Soviet Union shot down two Swedish military aircraft, an event known as the Catalina affair. Erlander and Hedlund planned a visit to the Soviet Union in 1956 to ease tensions, the first visit to the country by a Swedish prime minister.
However, Erlander was willing to cancel the trip if the Soviet government refused to accept the information the Swedish government had collected on Raoul Wallenberg, a businessman and humanitarian who had served as Sweden's special envoy in Budapest. Wallenberg disappeared during the Siege of Budapest after his arrest by Soviet forces in 1945. Since 1952, the Swedish government had demanded Wallenberg's return, but the Soviet Union insisted it was unfamiliar with him. During the visit, which occurred as expected, Erlander questioned Soviet Premier Nikita Khrushchev about Wallenberg's status, and presented Khrushchev with a large file of evidence that showed the Soviet Union's connection to Wallenberg's disappearance. Khrushchev examined it and stated that Swedish-Soviet relations would be positive if the Wallenberg affair was dropped. Soviet documents stated that Wallenberg died in a cell in 1947 of a heart attack, but Erlander, the Swedish government, and international observers were skeptical. Wallenberg biographer Ingrid Carlberg noted that Soviet documents about Wallenberg, whose existence Khrushchev had denied, were declassified after the fall of the Soviet Union, and that Wallenberg's official Soviet prison card did not specify the crime for which he was arrested. In 1959, Khrushchev planned to visit Scandinavia and Finland, but the Swedish press and opposition reacted negatively to the idea, causing Khrushchev to "postpone" it. Erlander and Undén expressed disappointment in Khrushchev's decision, to which he responded during a speech in Moscow that the decision was because the Swedish government had taken no steps to counter the negative press. Erlander stated that the government could not polemicize against these opinions, as he felt that it would give them undue importance. The government then avoided appointing the anti-Khrushchev leader of the Conservatives, Jarl Hjalmarson, to Sweden's UN delegation.
While travelling for his United States visit, Khrushchev sent Erlander a message of "friendship" to ensure the postponed visit was still possible. In 1963, after the arrest of Stig Wennerström, Erlander stated that the case had seriously disturbed relations between Sweden and the Soviet Union. Khrushchev had planned a goodwill tour of Scandinavia in 1964, which was to begin 10 days after Wennerström had been given a life sentence. Erlander declined to state how the sentencing might affect Khrushchev's visit. During that 1964 visit, while receiving Khrushchev at Harpsund, Erlander took Khrushchev and his interpreter in an eka rowing boat called the Harpsundseka across the 300-yard lake nearby. It has since become tradition for Swedish prime ministers and foreign heads of state to row across the lake in the Harpsundseka when they visit Harpsund. In that same visit, Erlander was once again unable to get information out of Khrushchev relating to Raoul Wallenberg. Khrushchev continued denying that Wallenberg was in the Soviet Union, and Erlander and the government expressed "deep disappointment" over the lack of development in the case. There were anti-Khrushchev protests in Sweden from Soviet exiles upon his visit, and the Swedish press criticized him as a liar over his statements about Wallenberg, and criticized the stringent security (3,000 police officers upon his arrival) around him. Both Khrushchev and Erlander ultimately stated they were pleased with the visit, and Khrushchev left for Norway on June 27 as part of his Scandinavian goodwill tour. Khrushchev did not mention the Wallenberg controversy or the negative press he received in his farewell address. After visiting the Soviet Union in 1965, Erlander stated that the case had to be closed. In 1968, tensions rose between Czechoslovakia and the Soviet Union due to the former's implementation of political reforms.
The Swedish public expected their government to support Czechoslovakia given its opposition to the Vietnam War, but the government wished to maintain neutrality. In July, Soviet politician Alexei Kosygin visited Stockholm, which caused the Liberal leader Sven Wedén to give a speech rebuking Erlander's perceived neglect of Czechoslovak self-determination. In response, Erlander and Foreign Minister Torsten Nilsson cited as a reason for their caution a secret report by Agda Rössel, the ambassador in Belgrade, who stated that Czechoslovak leaders desired Western silence. Although the government's response was not as strong as it had been to the Vietnam War, when the Warsaw Pact invasion of Czechoslovakia began, Erlander, the Social Democrats, and all opposition parties condemned it. The Social Democrats' opposition to the invasion likely helped them electorally in 1968. === South Africa and Apartheid === In the 1960s, after Erlander finished giving a speech to students at Lund University, South African Lund student and anti-apartheid activist Billy Modise personally asked Erlander to impose sanctions on South Africa in response to apartheid. Erlander stated that he did not have the power to do so, but advised Modise to publicly lobby for the policy. Olof Palme was also an advocate for sanctions against South Africa, and became more outspoken in his opposition to apartheid after he joined Erlander's cabinet in 1963. The Swedish South Africa Committee was created in 1961, with Erlander and Palme among its sponsors. In 1963, the National Council of Swedish Youth launched a boycott of South African goods. Swedish donations to the International Defence and Aid Fund for Southern Africa (IDAF) increased by around 140,000 SEK. Donations continued to rise when, in 1964, Sweden became the first industrialized Western country to donate public funds to the IDAF, the equivalent of $100,000. In the end, Sweden was by far the largest donor.
=== Israel === In 1947, Sweden voted in favor of the United Nations Partition Plan for Palestine. In 1948, Sweden recognized Israel. Sweden established an embassy in Israel in 1951. In 1962, Erlander became the first Swedish prime minister to visit Israel. During his visit, Erlander was famously photographed swimming in the Dead Sea. He spoke to Prime Minister David Ben-Gurion. According to Erlander, no specific policies were discussed, although he stated he hoped the visit would strengthen Israeli-Swedish relations. Erlander stated that he was "fascinated" by the country, and he invited Premier Ben-Gurion to visit Sweden. Ben-Gurion visited Sweden later that year. == Later life and death == After his resignation, Erlander and his wife lived in a house at Bommersvik constructed by the Social Democrats to honor him, which was owned by the Swedish Social Democratic Youth League. Erlander remained in the Riksdag for several years after it became unicameral. Following the 1970 general election, he once again changed constituencies, now representing Gothenburg after 22 years as a Stockholm representative. He resigned from the Riksdag in 1973, after holding seats there for over forty years. After leaving leadership roles, Erlander began sorting through his personal papers, and chose to use them to help write his political memoirs. He wrote an article for Svenska Dagbladet in 1972 explaining his motives for doing so. The memoirs were published in six volumes from 1972 to 1982. In the 1980s, Erlander allowed writer Olof Ruin unlimited access to his diaries, which would serve as a source for Ruin's biography of Erlander. Erlander died on 21 June 1985 in Stockholm at the age of 84 from pneumonia and heart failure. Erlander's coffin was covered with a socialist flag and blue and yellow flowers (the colors of the Swedish flag), and was carried through Stockholm. An estimated 45,000 Swedes lined the streets to pay respects to him.
A large, secular ceremony was held in Stockholm, wherein Olof Palme delivered Erlander's eulogy. At the end of the service, the audience sang the socialist hymn "The Internationale". After the Stockholm ceremony, the funeral crossed the country in a triumphant procession to his home town of Ransäter, Värmland, for his final rest. His wife, who died in 1990, is buried beside him. == Ideology and political positions == Although Erlander was familiar with the writings of Karl Marx and identified as a socialist, he did not subscribe to full Marxism and did not support nationalization, instead believing in a strong public sector under well-regulated capitalism with social welfare programs. Based on his university studies, Erlander believed that Keynesian economics and Stockholm School economics were compatible with social democracy, and could be useful in ending economic slumps. Unlike many other left-leaning intellectuals, Erlander did not sympathize with the Soviet Union, although he did attempt to maintain positive Swedish-Soviet relations. On the role of politicians, Erlander reportedly stated that, "A politician's job is to build the dance floor, so that everyone can dance as they please." Erlander acknowledged the need for women to play a larger role in politics and hold cabinet positions. However, he had disputes or grievances with all the women who actually served in his cabinet. Erlander had a good relationship with Moderate Party leader Jarl Hjalmarson, although he viewed Hjalmarson as a "political lightweight." Erlander hoped in 1968 that later Moderate leader Yngve Holmberg would remain in office due to the disorganization of the opposition parties and Holmberg's perceived "clunkiness". Erlander admired the writings of Adlai Stevenson II, because Stevenson "expressed his views more deftly than he could himself".
== Personal life == === Family & living situation === He met his future wife Aina Andersson while they were both students at Lund University. They married in 1930. Their marriage has been described as "deeply harmonious" and "full of mutual trust", and Erlander's family life as "remarkably happy". Their son Sven was a mathematician who published much of the content of his father's diaries from 2001 onward. Erlander's mother, Alma, died in 1961, at age 92, during her son's premiership. Through one of Erlander's Finnish ancestors, Simon Larsson (né Kauttoinen) (c. 1605–1696), he is a distant relative of Stefan Löfven, the Social Democratic Prime Minister of Sweden from 2014 to 2021. Carl August Wicander gave Harpsund to the Swedish government as a country retreat for prime ministers in 1953. Erlander started using it as a vacation home that year, and all prime ministers since have continued this practice. Erlander and his wife often spent Christmases, Easters, weekends, and summers vacationing at Harpsund. For most of his career, the Erlander family lived in an apartment in Bromma, Stockholm, until the summer of 1964, when they moved to an apartment in a high-rise complex in Stockholm's Gamla stan (Old Town) district. Earlier in his career, Erlander traveled via subway to and from work rather than use a car, although eventually he and Aina bought one. After getting the car, Aina would usually drive him to work, as he did not have a driver's license, dropping him off and then driving to the school where she worked. When Aina was unable to take him, neighbors in Bromma usually offered him rides. Erlander did not have an official car to travel in, and visiting foreign heads of state were often surprised to see that he usually arrived at events alone. === Personality, interests, & habits === Erlander was known to be a dedicated diarist, often writing daily entries, with his diaries serving as key sources for his memoirs.
Erlander wrote on a variety of subjects, and initially wrote to help him remember things related to his work, such as occurrences, arguments, and decisions, going into greater detail on matters he thought were controversial. He also wrote about matters including his family, his health at the time, plays he saw and books he read, and his impressions of other people. Erlander would later frequently note that his diaries contained many exaggerations. Erlander was often described as "fatherly" or "avuncular". Ingvar Carlsson stated that to him, Erlander became like a second father or a guide. Biographers Harrison and Ruin note that although Erlander was in power longer than any other Swedish leader, he did not seek power for himself, which Carlsson affirmed. Erlander was an avid lover of literature and theatre, which often served as a source of recreation. Erlander's favorite novel was John Steinbeck's Cannery Row. Many contemporary Swedish writers were often surprised to learn that their prime minister had read their work. During his premiership, Erlander often visited his former university, Lund University, meeting with the Värmland Student Association. At one of these meetings, Student Association members Olof Ruin and Lars Bergquist proposed that Erlander should give annual speeches to Lund students, to which Erlander agreed. In total, he gave fourteen of these student addresses. == Legacy == Erlander served as prime minister for 23 years, making him the longest-serving prime minister in Swedish history. His uninterrupted tenure as head of the government is also the longest ever in any modern Western democracy. Two of Erlander's closest advisors, Olof Palme and Ingvar Carlsson, also became prime ministers of Sweden, and together their tenures total more than 40 years. Upon his death, The Washington Post described Erlander as "one of the most popular political leaders".
Erlander has been dubbed a "political giant" who transformed Sweden's political climate and brought the nation together. He has been compared to other notable Swedish "political giants" such as Palme and Dag Hammarskjöld. Biographer Dick Harrison and journalist Per Olov Enquist have described Erlander as a "father of the country" (Swedish: landsfader). Ruin notes that as Sweden encountered difficulties in the 1970s, nostalgia sometimes influenced positive views of Erlander, and that his time as leader was looked upon by some as a "golden age" of Swedish history. During his premiership, despite disagreements between parties, particularly the Liberals and Moderates supporting lower taxes, Sweden's major political parties began to increasingly agree on the goal of developing Sweden as a welfare state. Some conservative and liberal analysts have argued that during Erlander's premiership an air of Sweden becoming a de facto one-party state developed. Critics of Olof Palme have also criticized Erlander for his role in Palme's ascension to the premiership. In general, following Sweden's economic crises in the 1970s, the Swedish Model, and to some extent Erlander's premiership, came under greater scrutiny. Left Party leader Nooshi Dadgostar praised Erlander in 2022, citing him as an inspiration who passed reforms laying the foundation of Sweden's welfare state. The building that served as Erlander's childhood home and schoolhouse in Ransäter is now a museum named Erlandergården centered around him and his life. The Tage Erlander Prize, given by the Royal Swedish Academy of Sciences and named after Erlander, is awarded for research in the natural sciences, technology, and mathematics. == Awards == Erlander was a nominee for the 1971 Nobel Peace Prize, although he did not win. Erlander was awarded the Illis quorum in 1984.
== In popular culture == In the 2013 comedy film The Hundred-Year Old Man Who Climbed Out of the Window and Disappeared, Erlander was portrayed by Swedish actor Johan Rheborg. In the 2021 series En Kunglig Affär, which depicted the Haijby scandal, Erlander was portrayed by Swedish actor Emil Almén. In the 2022 Netflix series Clark, which depicted the life of Swedish criminal Clark Olofsson, Erlander was portrayed by Swedish actor Claes Malmberg. == Works == Erlander, Tage (1959). Levande stad (in Swedish). Stockholm: Raben & Sjögren. Erlander, Tage (1961). Arvet från Hammarskjöld (in Swedish). Stockholm: Gummessons Bokförlag. Erlander, Tage (1972). Tage Erlander 1901–1939 (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-1543-3. Erlander, Tage (1973). Tage Erlander 1940–1949 (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-1640-5. Erlander, Tage (1974). Tage Erlander 1949–1954 (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-1702-9. Erlander, Tage (1976). Tage Erlander 1955–1960 (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-2043-7. Erlander, Tage (1982). Tage Erlander 1960-talet (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-2647-8. Erlander, Tage (1979). Tage Erlander Sjuttiotal (in Swedish). Stockholm: Tidens förlag. ISBN 91-550-2375-4. Erlander, Tage; Erlander, Sven (2001). Dagböcker 1945–1949 (in Swedish). Gidlunds förlag. ISBN 91-7844-335-0. Erlander, Tage; Erlander, Sven (2001). Dagböcker 1950–1951 (in Swedish). Gidlunds förlag. ISBN 91-7844-336-9. Erlander, Tage; Erlander, Sven (2002). Dagböcker 1952 (in Swedish). Gidlunds förlag. ISBN 91-7844-357-1. Erlander, Tage; Erlander, Sven (2003). Dagböcker 1953 (in Swedish). Gidlunds förlag. ISBN 91-7844-362-8. Erlander, Tage; Erlander, Sven (2004). Dagböcker 1954 (in Swedish). Gidlunds förlag. ISBN 91-7844-368-7. Erlander, Tage; Erlander, Sven (2005). Dagböcker 1955 (in Swedish). Gidlunds förlag. ISBN 91-7844-372-5. Erlander, Tage; Erlander, Sven (2006). Dagböcker 1956 (in Swedish). 
Gidlunds förlag. ISBN 91-7844-375-X. Erlander, Tage; Erlander, Sven (2007). Dagböcker 1957 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-384-0. Erlander, Tage; Erlander, Sven (2008). Dagböcker 1958 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-390-1. Erlander, Tage; Erlander, Sven (2009). Dagböcker 1959 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-775-6. Erlander, Tage; Erlander, Sven (2010). Dagböcker 1960 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-804-3. Erlander, Tage; Erlander, Sven (2011). Dagböcker 1961–1962 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-825-8. Erlander, Tage; Erlander, Sven (2012). Dagböcker 1963–1964 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-851-7. Erlander, Tage; Erlander, Sven (2013). Dagböcker 1965 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-880-7. Erlander, Tage; Erlander, Sven (2014). Dagböcker 1966–1967 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-907-1. Erlander, Tage; Erlander, Sven (2015). Dagböcker 1968 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-934-7. Erlander, Tage; Erlander, Sven (2016). Dagböcker 1969 (in Swedish). Gidlunds förlag. ISBN 978-91-7844-957-6. == Gallery == == Notes == == References == === Citations === === Bibliography === == Further reading == === In English === Andersson, Jenny (2006). Between Growth And Security: Swedish Social Democracy from a Strong Society to a Third Way. Manchester University Press. ISBN 9781847796660. Lane, Jan-Erik (1991). Understanding The Swedish Model. F. Cass. ISBN 9780714634456. Sörlin, Sverker (2016). Science, Geopolitics, and Culture In The Polar Region: Norden Beyond Borders. Taylor & Francis. ISBN 9781317058922. Ruin, Olof. "Three Swedish Prime Ministers: Tage Erlander, Olof Palme and Ingvar Carlsson." West European Politics 14.3 (1991): 58–82. === In Swedish === Alsing, Rolf (2012). Sveriges statsministrar under 100 år. Tage Erlander (in Swedish). Albert Bonniers Förlag. ISBN 9789100131999. Eklund, Klas (2012). Sveriges statsministrar under 100 år. 
Olof Palme (in Swedish). Albert Bonniers Förlag. ISBN 9789100132026. Johansson, Bengt K.Å. (2020). Dagar med Tage, stunder med Sträng (in Swedish). Hjalmarson & Högberg Bokförlag. ISBN 9789198534634. Lagercrantz, Arvid (1975). Tage, statsministern och privatpersonen: en bok (in Swedish). Tiden. ISBN 9789155019532. Ruin, Olof (2007). Statsministern: från Tage Erlander till Göran Persson (in Swedish). Gidlunds Förlag. ISBN 9789178443826. Svenning, Olle (2018). År med Erlander (in Swedish). Albert Bonniers Förlag. ISBN 9789100169411. Thorsell, Staffan (2004). Sverige i Vita huset (in Swedish). Albert Bonniers Förlag. ISBN 9789100149123. == External links == Media related to Tage Erlander at Wikimedia Commons
|
Wikipedia:Svetlana Katok#0
|
Svetlana Katok (born May 1, 1947) is a Russian-American mathematician and a professor of mathematics at Pennsylvania State University. == Education and career == Katok grew up in Moscow, and earned a master's degree from Moscow State University in 1969; however, due to the anti-Semitic and anti-intelligentsia policies of the time, she was denied admission to the doctoral program there and instead worked for several years in the area of early and secondary mathematical education. She immigrated to the US in 1978, and earned her doctorate from the University of Maryland, College Park in 1983 under the supervision of Don Zagier. She joined the Pennsylvania State University faculty in 1990. Katok founded the Electronic Research Announcements of the American Mathematical Society in 1995; it was renamed in 2007 to the Electronic Research Announcements in Mathematical Sciences, and she remains its managing editor. Katok was an American Mathematical Society (AMS) Council member at large. == Books == Katok is the author of: Fuchsian Groups, Chicago Lectures in Mathematics, University of Chicago Press, 1992. Russian edition, Faktorial Press, Moscow, 2002. p-adic Analysis Compared with Real, Student Mathematical Library, vol. 37, American Math. Soc., 2007. Russian edition, MCCME Press, Moscow, 2004. Additionally, she coedited the book MASS Selecta: Teaching and learning advanced undergraduate mathematics (American Math. Soc., 2003). == Awards and honors == Katok was the 2004 Emmy Noether Lecturer of the Association for Women in Mathematics. In 2012 she and her husband, mathematician Anatole Katok, both became fellows of the American Mathematical Society. == References ==
|
Wikipedia:Svetlana Selezneva#0
|
Svetlana Selezneva (Russian: Светла́на Никола́евна Селезнёва) (born 1969) is a Russian mathematician, Dr.Sc., Associate Professor, and a professor at the Faculty of Computational Mathematics and Cybernetics of Moscow State University. She defended the thesis "Polynomial representations of discrete functions" for the degree of Doctor of Physical and Mathematical Sciences (2016). She is the author of three books and more than 70 scientific articles. == References == == Bibliography == Evgeny Grigoriev (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (print run of 1,500 copies). Moscow: Publishing house of Moscow University. pp. 386–387. ISBN 978-5-211-05838-5. == External links == MSU CMC (in Russian) Scientific works of Svetlana Selezneva Scientific works of Svetlana Selezneva (in English)
|
Wikipedia:Swish function#0
|
The swish function is a family of mathematical functions defined as follows: swish β ( x ) = x sigmoid ( β x ) = x 1 + e − β x . {\displaystyle \operatorname {swish} _{\beta }(x)=x\operatorname {sigmoid} (\beta x)={\frac {x}{1+e^{-\beta x}}}.} where β {\displaystyle \beta } can be constant (usually set to 1) or trainable. The swish family was designed to smoothly interpolate between a linear function and the ReLU function. For positive values, swish is a particular case of a doubly parameterized sigmoid shrinkage function. Variants of the swish function include Mish. == Special values == For β = 0, the function is linear: f(x) = x/2. For β = 1, the function is the Sigmoid Linear Unit (SiLU). With β → ∞, the function converges to ReLU. Thus, the swish family smoothly interpolates between a linear function and the ReLU function. Since swish β ( x ) = swish 1 ( β x ) / β {\displaystyle \operatorname {swish} _{\beta }(x)=\operatorname {swish} _{1}(\beta x)/\beta } , all instances of swish have the same shape as the default swish 1 {\displaystyle \operatorname {swish} _{1}} , zoomed by β {\displaystyle \beta } . One usually sets β > 0 {\displaystyle \beta >0} . When β {\displaystyle \beta } is trainable, this constraint can be enforced by β = e b {\displaystyle \beta =e^{b}} , where b {\displaystyle b} is trainable. 
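The definition, the special values, and the β-scaling identity above can be checked numerically. A minimal sketch in plain Python (the function name `swish` and its defaults are illustrative, not taken from any particular library):

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x) = x / (1 + exp(-beta * x))."""
    return x / (1.0 + math.exp(-beta * x))

# beta = 1 is the SiLU; large beta approaches ReLU; beta near 0 approaches x/2.
print(swish(2.0))              # SiLU(2) ~ 1.76159
print(swish(2.0, beta=50.0))   # ~ 2.0, close to ReLU(2) = 2
print(swish(2.0, beta=1e-9))   # ~ 1.0, close to the linear case x/2

# Scaling identity: swish_beta(x) == swish_1(beta * x) / beta
x, beta = 1.3, 2.7
assert abs(swish(x, beta) - swish(beta * x) / beta) < 1e-12
```

The final assertion checks the shape-invariance property stated above: every member of the family is a rescaled copy of swish₁.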
swish 1 ( x ) = x 2 + x 2 4 − x 4 48 + x 6 480 + O ( x 8 ) {\displaystyle \operatorname {swish} _{1}(x)={\frac {x}{2}}+{\frac {x^{2}}{4}}-{\frac {x^{4}}{48}}+{\frac {x^{6}}{480}}+O\left(x^{8}\right)} swish 1 ( x ) = x 2 tanh ( x 2 ) + x 2 swish 1 ( x ) + swish − 1 ( x ) = x swish 1 ( x ) − swish − 1 ( x ) = x tanh ( x 2 ) {\displaystyle {\begin{aligned}\operatorname {swish} _{1}(x)&={\frac {x}{2}}\tanh \left({\frac {x}{2}}\right)+{\frac {x}{2}}\\\operatorname {swish} _{1}(x)+\operatorname {swish} _{-1}(x)&=x\\\operatorname {swish} _{1}(x)-\operatorname {swish} _{-1}(x)&=x\tanh \left({\frac {x}{2}}\right)\end{aligned}}} == Derivatives == Because swish β ( x ) = swish 1 ( β x ) / β {\displaystyle \operatorname {swish} _{\beta }(x)=\operatorname {swish} _{1}(\beta x)/\beta } , it suffices to calculate its derivatives for the default case. swish 1 ′ ( x ) = x + sinh ( x ) 4 cosh 2 ( x 2 ) + 1 2 {\displaystyle \operatorname {swish} _{1}'(x)={\frac {x+\sinh(x)}{4\cosh ^{2}\left({\frac {x}{2}}\right)}}+{\frac {1}{2}}} so swish 1 ′ ( x ) − 1 2 {\displaystyle \operatorname {swish} _{1}'(x)-{\frac {1}{2}}} is odd. swish 1 ″ ( x ) = 1 − x 2 tanh ( x 2 ) 2 cosh 2 ( x 2 ) {\displaystyle \operatorname {swish} _{1}''(x)={\frac {1-{\frac {x}{2}}\tanh \left({\frac {x}{2}}\right)}{2\cosh ^{2}\left({\frac {x}{2}}\right)}}} so swish 1 ″ ( x ) {\displaystyle \operatorname {swish} _{1}''(x)} is even. == History == SiLU was first proposed alongside the GELU in 2016, then proposed again in 2017 as the Sigmoid-weighted Linear Unit (SiL) in reinforcement learning. The SiLU/SiL was proposed once more as SWISH over a year after its initial discovery, originally without the learnable parameter β, so that β implicitly equaled 1. The swish paper was then updated to propose the activation with the learnable parameter β. 
In 2017, after performing analysis on ImageNet data, researchers from Google indicated that using this function as an activation function in artificial neural networks improves the performance, compared to ReLU and sigmoid functions. It is believed that one reason for the improvement is that the swish function helps alleviate the vanishing gradient problem during backpropagation. == See also == Activation function Gating mechanism == References ==
|
Wikipedia:Sy Friedman#0
|
Sy-David Friedman (born May 23, 1953, in Chicago) is an American and Austrian mathematician and a (retired) professor of mathematics at the University of Vienna and the former director of the Kurt Gödel Research Center for Mathematical Logic. His main research interest lies in mathematical logic, in particular in set theory and recursion theory. Friedman is the brother of Ilene Friedman and of the mathematician Harvey Friedman. == Biography == He studied at Northwestern University and, from 1970, at the Massachusetts Institute of Technology. He received his Ph.D. in 1976 from MIT (his thesis Recursion on Inadmissible Ordinals was written under the supervision of Gerald E. Sacks). In 1979, Sy Friedman accepted a position at MIT, and in 1990 he became a full professor there. From 1999 until his retirement in 2018, he was a professor of mathematical logic at the University of Vienna. He is a Fellow of Collegium Invisibile. == Selected publications and results == He has authored about 70 research articles, including: Friedman, Sy D. (1981). "Negative solutions to Post's problem. II". Annals of Mathematics. Second Series. 113 (1): 25–43. doi:10.2307/1971132. JSTOR 1971132. Friedman, Sy (1985). "A guide to "Coding the Universe" by Beller, Jensen, Welch". Journal of Symbolic Logic. 50 (4): 1002–1019. doi:10.2307/2273986. JSTOR 2273986. Friedman, Sy D. (1990). "The Π 2 1 {\displaystyle \Pi _{2}^{1}} -singleton conjecture". J. Amer. Math. Soc. 3 (4): 771–791. doi:10.1090/S0894-0347-1990-1071116-6. Friedman, Sy D. (2005). "Genericity and large cardinals". J. Math. Log. 5 (2): 149–166. CiteSeerX 10.1.1.23.9437. doi:10.1142/S0219061305000420. He also published a research monograph: Friedman, Sy D. (2000). Fine structure and class forcing. de Gruyter Series in Logic and its Applications. Vol. 3. Berlin: Walter de Gruyter & Co. ISBN 978-3-11-016777-1. == References == == External links == Sy Friedman at the Mathematics Genealogy Project
|
Wikipedia:Sylvester's sequence#0
|
In number theory, Sylvester's sequence is an integer sequence in which each term is the product of the previous terms, plus one. Its first few terms are 2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443 (sequence A000058 in the OEIS). Sylvester's sequence is named after James Joseph Sylvester, who first investigated it in 1880. Its values grow doubly exponentially, and the sum of its reciprocals forms a series of unit fractions that converges to 1 more rapidly than any other series of unit fractions. The recurrence by which it is defined allows the numbers in the sequence to be factored more easily than other numbers of the same magnitude, but, due to the rapid growth of the sequence, complete prime factorizations are known only for a few of its terms. Values derived from this sequence have also been used to construct finite Egyptian fraction representations of 1, Sasakian Einstein manifolds, and hard instances for online algorithms. == Formal definitions == Formally, Sylvester's sequence can be defined by the formula s n = 1 + ∏ i = 0 n − 1 s i . {\displaystyle s_{n}=1+\prod _{i=0}^{n-1}s_{i}.} The product of the empty set is 1, so this formula gives s0 = 2, without need of a separate base case. Alternatively, one may define the sequence by the recurrence s i = s i − 1 ( s i − 1 − 1 ) + 1 , {\displaystyle \displaystyle s_{i}=s_{i-1}(s_{i-1}-1)+1,} with the base case s0 = 2. It is straightforward to show by induction that this is equivalent to the other definition. == Closed form formula and asymptotics == The Sylvester numbers grow doubly exponentially as a function of n. Specifically, it can be shown that s n = ⌊ E 2 n + 1 + 1 2 ⌋ , {\displaystyle s_{n}=\left\lfloor E^{2^{n+1}}+{\frac {1}{2}}\right\rfloor ,\!} for a number E that is approximately 1.26408473530530... (sequence A076393 in the OEIS). 
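Both definitions are easy to check with exact arbitrary-precision integers; a short illustrative Python sketch (not part of the article):

```python
from math import prod

def sylvester(n):
    """First n terms of Sylvester's sequence, starting from s_0 = 2."""
    terms = [2]
    for _ in range(n - 1):
        s = terms[-1]
        terms.append(s * (s - 1) + 1)  # quadratic recurrence s_i = s_{i-1}(s_{i-1} - 1) + 1
    return terms

seq = sylvester(8)
print(seq)  # [2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443]

# The product definition agrees: s_n = 1 + s_0 * s_1 * ... * s_{n-1}
# (for n = 0 the empty product is 1, giving s_0 = 2 as stated above).
assert all(seq[i] == 1 + prod(seq[:i]) for i in range(len(seq)))
```

The assertion verifies the equivalence of the two definitions for the first eight terms; the induction argument above shows it holds in general.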
This formula has the effect of the following algorithm: s0 is the nearest integer to E 2; s1 is the nearest integer to E 4; s2 is the nearest integer to E 8; for sn, take E 2, square it n more times, and take the nearest integer. This would only be a practical algorithm if we had a better way of calculating E to the requisite number of places than calculating sn and taking its repeated square root. The double-exponential growth of the Sylvester sequence is unsurprising if one compares it to the sequence of Fermat numbers Fn ; the Fermat numbers are usually defined by a doubly exponential formula, 2 2 n + 1 {\displaystyle 2^{2^{n}}\!+1} , but they can also be defined by a product formula very similar to that defining Sylvester's sequence: F n = 2 + ∏ i = 0 n − 1 F i . {\displaystyle F_{n}=2+\prod _{i=0}^{n-1}F_{i}.} == Connection with Egyptian fractions == The unit fractions formed by the reciprocals of the values in Sylvester's sequence generate an infinite series: ∑ i = 0 ∞ 1 s i = 1 2 + 1 3 + 1 7 + 1 43 + 1 1807 + ⋯ . {\displaystyle \sum _{i=0}^{\infty }{\frac {1}{s_{i}}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{7}}+{\frac {1}{43}}+{\frac {1}{1807}}+\cdots .} The partial sums of this series have a simple form, ∑ i = 0 j − 1 1 s i = 1 − 1 s j − 1 = s j − 2 s j − 1 , {\displaystyle \sum _{i=0}^{j-1}{\frac {1}{s_{i}}}=1-{\frac {1}{s_{j}-1}}={\frac {s_{j}-2}{s_{j}-1}},} which is already in lowest terms. This may be proved by induction, or more directly by noting that the recursion implies that 1 s i − 1 − 1 s i + 1 − 1 = 1 s i , {\displaystyle {\frac {1}{s_{i}-1}}-{\frac {1}{s_{i+1}-1}}={\frac {1}{s_{i}}},} so the sum telescopes ∑ i = 0 j − 1 1 s i = ∑ i = 0 j − 1 ( 1 s i − 1 − 1 s i + 1 − 1 ) = 1 s 0 − 1 − 1 s j − 1 = 1 − 1 s j − 1 . 
{\displaystyle \sum _{i=0}^{j-1}{\frac {1}{s_{i}}}=\sum _{i=0}^{j-1}\left({\frac {1}{s_{i}-1}}-{\frac {1}{s_{i+1}-1}}\right)={\frac {1}{s_{0}-1}}-{\frac {1}{s_{j}-1}}=1-{\frac {1}{s_{j}-1}}.} Since this sequence of partial sums (sj − 2)/(sj − 1) converges to one, the overall series forms an infinite Egyptian fraction representation of the number one: 1 = 1 2 + 1 3 + 1 7 + 1 43 + 1 1807 + ⋯ . {\displaystyle 1={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{7}}+{\frac {1}{43}}+{\frac {1}{1807}}+\cdots .} One can find finite Egyptian fraction representations of one, of any length, by truncating this series and subtracting one from the last denominator: 1 = 1 2 + 1 3 + 1 6 , 1 = 1 2 + 1 3 + 1 7 + 1 42 , 1 = 1 2 + 1 3 + 1 7 + 1 43 + 1 1806 , … . {\displaystyle 1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{6}},\quad 1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{7}}+{\tfrac {1}{42}},\quad 1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{7}}+{\tfrac {1}{43}}+{\tfrac {1}{1806}},\quad \dots .} The sum of the first k terms of the infinite series provides the closest possible underestimate of 1 by any k-term Egyptian fraction. For example, the first four terms add to 1805/1806, and therefore any Egyptian fraction for a number in the open interval (1805/1806, 1) requires at least five terms. It is possible to interpret the Sylvester sequence as the result of a greedy algorithm for Egyptian fractions, that at each step chooses the smallest possible denominator that makes the partial sum of the series be less than one. == Uniqueness of quickly growing series with rational sums == As Sylvester himself observed, Sylvester's sequence seems to be unique in having such quickly growing values, while simultaneously having a series of reciprocals that converges to a rational number. This sequence provides an example showing that double-exponential growth is not enough to cause an integer sequence to be an irrationality sequence. 
To make this more precise, it follows from results of Badea (1993) that, if a sequence of integers a n {\displaystyle a_{n}} grows quickly enough that a n ≥ a n − 1 2 − a n − 1 + 1 , {\displaystyle a_{n}\geq a_{n-1}^{2}-a_{n-1}+1,} and if the series A = ∑ 1 a i {\displaystyle A=\sum {\frac {1}{a_{i}}}} converges to a rational number A, then, for all n after some point, this sequence must be defined by the same recurrence a n = a n − 1 2 − a n − 1 + 1 {\displaystyle a_{n}=a_{n-1}^{2}-a_{n-1}+1} that can be used to define Sylvester's sequence. Erdős & Graham (1980) conjectured that, in results of this type, the inequality bounding the growth of the sequence could be replaced by a weaker condition, lim n → ∞ a n a n − 1 2 = 1. {\displaystyle \lim _{n\rightarrow \infty }{\frac {a_{n}}{a_{n-1}^{2}}}=1.} Badea (1995) surveys progress related to this conjecture; see also Brown (1979). == Divisibility and factorizations == If i < j, it follows from the definition that sj ≡ 1 (mod si ). Therefore, every two numbers in Sylvester's sequence are relatively prime. The sequence can be used to prove that there are infinitely many prime numbers, as any prime can divide at most one number in the sequence. More strongly, no prime factor of a number in the sequence can be congruent to 5 modulo 6, and the sequence can be used to prove that there are infinitely many primes congruent to 7 modulo 12. No term can be a perfect power. Much remains unknown about the factorization of the numbers in Sylvester's sequence. For instance, it is not known if all numbers in the sequence are squarefree, although all the known terms are. As Vardi (1991) describes, it is easy to determine which Sylvester number (if any) a given prime p divides: simply compute the recurrence defining the numbers modulo p until finding either a number that is congruent to zero (mod p) or finding a repeated modulus. 
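The divisibility test just described can be sketched in a few lines of Python (an illustrative implementation, not Vardi's own code): iterate the recurrence modulo p, stopping when a zero residue appears (p divides that term) or when a residue repeats (p divides no term).

```python
def sylvester_index_divisible_by(p):
    """Return the index i with p dividing s_i, or None if p divides no term.
    Residues modulo p must eventually repeat, so the loop terminates."""
    seen = set()
    s, i = 2 % p, 0
    while s not in seen:
        if s == 0:
            return i
        seen.add(s)
        s = (s * (s - 1) + 1) % p  # the defining recurrence, reduced mod p
        i += 1
    return None

print(sylvester_index_divisible_by(13))  # 4, since s_4 = 1807 = 13 * 139
print(sylvester_index_divisible_by(5))   # None: 5 is congruent to 5 mod 6
```

Because the terms of the sequence are pairwise relatively prime, at most one index can be returned for any prime, which is why a single zero residue settles the question.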
Using this technique he found that 1166 out of the first three million primes are divisors of Sylvester numbers, and that none of these primes has a square that divides a Sylvester number. The set of primes that can occur as factors of Sylvester numbers is of density zero in the set of all primes: indeed, the number of such primes less than x is O ( π ( x ) / log log log x ) {\displaystyle O(\pi (x)/\log \log \log x)} . The following table shows known factorizations of these numbers (except s0 ... s3, which are all prime): As is customary, Pn and Cn denote prime numbers and unfactored composite numbers n digits long. == Applications == Boyer, Galicki & Kollár (2005) use the properties of Sylvester's sequence to define large numbers of Sasakian Einstein manifolds having the differential topology of odd-dimensional spheres or exotic spheres. They show that the number of distinct Sasakian Einstein metrics on a topological sphere of dimension 2n − 1 is at least proportional to sn and hence has double exponential growth with n. As Galambos & Woeginger (1995) describe, Brown (1979) and Liang (1980) used values derived from Sylvester's sequence to construct lower bound examples for online bin packing algorithms. Seiden & Woeginger (2005) similarly use the sequence to lower bound the performance of a two-dimensional cutting stock algorithm. Znám's problem concerns sets of numbers such that each number in the set divides but is not equal to the product of all the other numbers, plus one. Without the inequality requirement, the values in Sylvester's sequence would solve the problem; with that requirement, it has other solutions derived from recurrences similar to the one defining Sylvester's sequence. Solutions to Znám's problem have applications to the classification of surface singularities (Brenton and Hill 1988) and to the theory of nondeterministic finite automata. 
Curtiss (1922) describes an application of the closest approximations to one by k-term sums of unit fractions, in lower-bounding the number of divisors of any perfect number, and Miller (1919) uses the same property to upper bound the size of certain groups. == See also == Cahen's constant Primary pseudoperfect number Leonardo number == Notes == == References == == External links == Irrationality of Quadratic Sums, from K. S. Brown's MathPages. Weisstein, Eric W. "Sylvester's Sequence". MathWorld.
|
Wikipedia:Sylvia de Neymet#0
|
Sylvia de Neymet Urbina (aka Silvia de Neymet de Christ, 1939 – 13 January 2003) was a Mexican mathematician, the first woman to earn a doctorate in mathematics in Mexico, and the first female professor in the faculty of sciences of the National Autonomous University of Mexico (UNAM). == Early life and education == De Neymet was born in Mexico City in 1939. Her mother had been orphaned in the Mexican Revolution of 1910, studied art at La Esmeralda, and became a teacher; she encouraged De Neymet in her studies. Her father's mother was also a teacher, and her father was a civil engineer. In 1955 she began studying at the Universidad Femenina de México, a women's school founded by Adela Formoso de Obregón Santacilia, and in her fourth year there she was hired as a mathematics teacher herself, despite the fact that many of her students would be older than her. After two years of mathematical study in Paris, at the Institut Henri Poincaré, from 1959 to 1961, de Neymet returned to Mexico and was given a degree in mathematics in 1961, with a thesis on differential equations supervised by Solomon Lefschetz, who by this time was regularly wintering at UNAM. At around the same time, CINVESTAV (the Center for Research and Advanced Studies of the National Polytechnic Institute) was founded; de Neymet became one of the first students there, and the first doctoral student of Samuel Gitler Hammer, one of the founders of CINVESTAV. She married Michael Christ, a French physician, in 1962, and while still finishing her doctorate became a teacher at the Escuela Superior de Física y Matemáticas of the Instituto Politécnico Nacional, founded four years earlier. She completed her doctorate under Gitler's supervision in 1966, becoming one of the first seven people to earn a mathematics doctorate in Mexico, and the first Mexican woman to do so. 
== Career and later life == After completing her doctorate, she joined the faculty of sciences of UNAM, one of only three full-time mathematicians there (with Víctor Neumann-Lara and Arturo Fregoso Urbina). After continuing her career at UNAM for many years, she died on 13 January 2003. Her book Introducción a los grupos topológicos de transformaciones [Introduction to topological transformation groups] was published posthumously in 2005. == References ==
|
Wikipedia:Sylvie Corteel#0
|
Sylvie Corteel is a French mathematician at the Centre national de la recherche scientifique and Paris Diderot University and the University of California, Berkeley, who was an editor-in-chief of the Journal of Combinatorial Theory, Series A. Her research concerns the enumerative combinatorics and algebraic combinatorics of permutations, Young tableaux, and integer partitions. == Education and career == After earning an engineering degree in 1996 from the University of Technology of Compiègne, Corteel worked with Carla Savage at North Carolina State University, where she earned a master's degree in 1997. She completed her Ph.D. in 2000 at the University of Paris-Sud under the supervision of Dominique Gouyou-Beauchamps, and earned a habilitation in 2010 at Paris Diderot University. She worked as a maîtresse de conférences and then as a CNRS chargée de recherche at the Versailles Saint-Quentin-en-Yvelines University from 2000 to 2005, also doing postdoctoral studies at the Université du Québec à Montréal in 2001. From 2005 to 2009 she was associated with the University of Paris-Sud, and in 2009 she moved to Paris Diderot, where in 2010 she became a director of research. Since 2017 she has been a Visiting Miller Professor at the University of California, Berkeley. She was named MSRI Simons Professor for 2017–2018. Along with colleagues O. Mandelshtam and L. Williams, in 2018 Corteel developed a new characterization of both symmetric and nonsymmetric Macdonald polynomials using the combinatorics of the exclusion process. == Selected publications == Bergeron, Anne; Corteel, Sylvie; Raffinot, Mathieu (2002), "The algorithmic of gene teams", in Guigó, Roderic; Gusfield, Dan (eds.), Algorithms in Bioinformatics: Second International Workshop, WABI 2002, Rome, Italy, September 17–21, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2452, Berlin: Springer, pp. 
464–476, doi:10.1007/3-540-45784-4_36, ISBN 978-3-540-44211-0 Corteel, Sylvie; Lovejoy, Jeremy (2004), "Overpartitions", Transactions of the American Mathematical Society, 356 (4): 1623–1635, doi:10.1090/S0002-9947-03-03328-2, MR 2034322 Corteel, Sylvie (2007), "Crossings and alignments of permutations", Advances in Applied Mathematics, 38 (2): 149–163, arXiv:math/0601469, doi:10.1016/j.aam.2006.01.006, MR 2290808, S2CID 10061267 Corteel, Sylvie; Mandelshtam, Olya; Williams, Lauren (2022), "From multiline queues to Macdonald polynomials via the exclusion process", American Journal of Mathematics, 144 (2): 395–436, arXiv:1811.01024, doi:10.1353/ajm.2022.0007, MR 4401508 Corteel, Sylvie; Williams, Lauren K. (2007), "Tableaux combinatorics for the asymmetric exclusion process", Advances in Applied Mathematics, 39 (3): 293–310, arXiv:math/0602109, doi:10.1016/j.aam.2006.08.002, MR 2352041, S2CID 18546635 == References == == External links == Home page
|
Wikipedia:Sylvie Méléard#0
|
Sylvie Méléard is a French mathematician specializing in probability theory, stochastic processes, particle systems, and stochastic differential equations. She is editor-in-chief of Stochastic Processes and Their Applications. == Education and career == Méléard grew up in Picardy as the daughter of two high school biology teachers, and was already aiming for a career as a mathematician by the time she was ten years old. She studied at the Lycée Janson-de-Sailly and, from 1977 to 1981, at the École normale supérieure de Fontenay-aux-Roses, where she became known for knitting during lectures. After earning her agrégation in 1981, she completed her doctorate in 1984 at Pierre and Marie Curie University under the supervision of Nicole El Karoui. She took a position at the University of Le Mans, and then moved to Pierre and Marie Curie University as maître de conférences in 1989. At Pierre and Marie Curie, she earned her habilitation in 1991. She became a professor at Paris Nanterre University in 1992, and moved again to the École Polytechnique in 2006. == Contributions == With Vincent Bansaye, Méléard is the author of Stochastic Models for Structured Populations: Scaling Limits and Long Time Behavior (Springer, 2015). She is also the author of Modèles aléatoires en ecologie et evolution [Random models in ecology and evolution] (Springer, 2016). As of 2018, she is the editor-in-chief of the journal Stochastic Processes and Their Applications and, as editor, an executive member in the Bernoulli Society. == Recognition == In September 2018, a conference on population dynamics was held at the Institut Henri Poincaré in honor of Méléard's 60th birthday. She was elected to the Academia Europaea in 2021. == References == == External links == Home page
|
Wikipedia:Sylvie Paycha#0
|
Sylvie Paycha (born 27 March 1960 in Neuilly-sur-Seine) is a French mathematician and mathematical physicist working in operator theory as a professor at the University of Potsdam. She has chaired both European Women in Mathematics and L'association femmes et mathématiques. == Education == She completed her PhD thesis at the University of Bochum, Germany in 1988. Her doctoral advisor was Sergio Albeverio. == Selected publications == Albeverio, Sergio; Jost, Jürgen; Paycha, Sylvie; Scarlatti, Sergio (1997), A mathematical introduction to string theory, London Mathematical Society Lecture Note Series, vol. 225, Cambridge: Cambridge University Press, doi:10.1017/CBO9780511600791, ISBN 0-521-55610-4 Paycha, Sylvie (2012), Regularised integrals, sums and traces, University Lecture Series, vol. 59, Providence, RI: American Mathematical Society, doi:10.1090/ulect/059, ISBN 978-0-8218-5367-2 == References == == External links == Official website
|
Wikipedia:Sylvie Roelly#0
|
Sylvie Roelly (born 1960) is a French mathematician specializing in probability theory, including the study of particle systems, Gibbs measure, diffusion, and branching processes. She is a professor of mathematics in the Institute of Mathematics at the University of Potsdam in Germany. == Education and career == Roelly was born in 1960 in Paris, and studied mathematics from 1979 to 1984 at the École normale supérieure de jeunes filles in Paris. She earned a diploma in mathematics in 1980 through the Paris Diderot University, and an agrégation in 1982. She completed her Ph.D. in 1984 through Pierre and Marie Curie University, with the dissertation Processus de diffusion à valeurs mesures multiplicatifs supervised by Nicole El Karoui. She also earned her habilitation in 1991 through Pierre and Marie Curie University. After a year of lecturing at the École normale supérieure, she became a researcher for the French National Centre for Scientific Research (CNRS) in 1985. She came to Germany as a Humboldt Fellow at Bielefeld University from 1990 to 1994, and was a researcher at the Weierstrass Institute in Berlin from 2001 to 2003, before taking her professorship at Potsdam in 2003. At Potsdam, she was head of the Institute of Mathematics from 2011 to 2015, and vice-dean of the Faculty of Science from 2016 to 2019. Along with her research interest in probability, she has organized in Potsdam several events concerning the history of Jewish mathematicians. == Recognition == In 2007, Roelly and Michèle Thieullen won the Itô Prize of the Bernoulli Society for their work on Brownian diffusion. She was named mathematician of the month for April 2015 by the German Mathematical Society. == References == == External links == Home page Sylvie Roelly publications indexed by Google Scholar
|
Wikipedia:Symbolic integration#0
|
In calculus, symbolic integration is the problem of finding a formula for the antiderivative, or indefinite integral, of a given function f(x), i.e. to find a formula for a differentiable function F(x) such that d F d x = f ( x ) . {\displaystyle {\frac {dF}{dx}}=f(x).} The family of all functions that satisfy this property can be denoted ∫ f ( x ) d x . {\displaystyle \int f(x)\,dx.} == Discussion == The term symbolic is used to distinguish this problem from that of numerical integration, where the value of F is sought at a particular input or set of inputs, rather than a general formula for F. Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances. Finding the derivative of an expression is a straightforward process for which it is easy to construct an algorithm. The reverse question of finding the integral is much more difficult. Many expressions that are relatively simple do not have integrals that can be expressed in closed form. See antiderivative and nonelementary integral for more details. A procedure called the Risch algorithm exists that is capable of determining whether the integral of an elementary function (function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combinations using the four elementary operations) is elementary and returning it if it is. In its original form, the Risch algorithm was not suitable for a direct implementation, and its complete implementation took a long time. It was first implemented in Reduce in the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. 
Davenport; the general case was solved by Manuel Bronstein, who implemented almost all of it in Axiom, though to date there is no implementation of the Risch algorithm that can deal with all of the special cases and branches in it. However, the Risch algorithm applies only to indefinite integrals, while most of the integrals of interest to physicists, theoretical chemists, and engineers are definite integrals often related to Laplace transforms, Fourier transforms, and Mellin transforms. Lacking a general algorithm, the developers of computer algebra systems have implemented heuristics based on pattern-matching and the exploitation of special functions, in particular the incomplete gamma function. Although this approach is heuristic rather than algorithmic, it is nonetheless an effective method for solving many definite integrals encountered in practical engineering applications. Earlier systems such as Macsyma had a few definite integrals related to special functions within a look-up table. However, this particular method, involving differentiation of special functions with respect to their parameters, variable transformation, pattern matching and other manipulations, was pioneered by the developers of the Maple system and then later emulated by Mathematica, Axiom, MuPAD and other systems. == Recent advances == The main problem in the classical approach of symbolic integration is that, if a function is represented in closed form, then, in general, its antiderivative has no similar representation. In other words, the class of functions that can be represented in closed form is not closed under antiderivation. Holonomic functions form a large class of functions that is closed under antiderivation and allows algorithmic implementation of integration and many other operations of calculus. More precisely, a holonomic function is a solution of a homogeneous linear differential equation with polynomial coefficients.
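Both points can be illustrated with SymPy (an independent sketch, not tied to any system named above): its integrator expresses the integral of exp(−x²) through the special function erf, and erf is itself holonomic, satisfying the homogeneous linear equation y″ + 2xy′ = 0 with polynomial coefficients.

```python
import sympy as sp

x = sp.symbols('x')

# A heuristic integrator returns a special function where no elementary
# antiderivative exists: here the error function erf.
antiderivative = sp.integrate(sp.exp(-x**2), x)
print(antiderivative)   # sqrt(pi)*erf(x)/2

# erf is holonomic: it satisfies y'' + 2*x*y' = 0, a homogeneous linear
# ODE with polynomial coefficients.
y = sp.erf(x)
residual = sp.simplify(sp.diff(y, x, 2) + 2*x*sp.diff(y, x))
print(residual)         # 0
```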
Holonomic functions are closed under addition, multiplication, derivation, and antiderivation. They include algebraic functions, the exponential function, the logarithm, sine, cosine, the inverse trigonometric functions, and the inverse hyperbolic functions. They also include most common special functions, such as the Airy function, the error function, the Bessel functions, and all hypergeometric functions. A fundamental property of holonomic functions is that the coefficients of their Taylor series at any point satisfy a linear recurrence relation with polynomial coefficients, and that this recurrence relation may be computed from the differential equation defining the function. Conversely, given such a recurrence relation between the coefficients of a power series, this power series defines a holonomic function whose differential equation may be computed algorithmically. This recurrence relation allows a fast computation of the Taylor series, and thus of the value of the function at any point, with an arbitrarily small certified error. This makes most operations of calculus algorithmic when restricted to holonomic functions represented by their differential equation and initial conditions. This includes the computation of antiderivatives and definite integrals (which amounts to evaluating the antiderivative at the endpoints of the interval of integration). It also includes the computation of the asymptotic behavior of the function at infinity, and thus of definite integrals on unbounded intervals. All these operations are implemented in the algolib library for Maple. See also the Dynamic Dictionary of Mathematical Functions.
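A minimal illustration of such recurrence-based evaluation uses the simplest holonomic function, the exponential: the equation y′ = y with y(0) = 1 translates into the coefficient recurrence (n + 1)c_{n+1} = c_n, and summing the truncated series gives the value with a bounded truncation error (a sketch, not the algolib implementation).

```python
import math

# For y' = y, y(0) = 1, writing y = sum(c_n * x^n) and comparing
# coefficients gives the recurrence (n+1) * c_{n+1} = c_n.
def exp_coefficients(n_terms):
    c = [1.0]                       # c_0 = y(0) = 1
    for n in range(n_terms - 1):
        c.append(c[n] / (n + 1))    # (n+1) * c_{n+1} = c_n
    return c

def evaluate(coeffs, t):
    # Horner evaluation of the truncated Taylor series at t
    acc = 0.0
    for ck in reversed(coeffs):
        acc = acc * t + ck
    return acc

# 20 terms at x = 1: the truncation error is below 1/20!, far under
# double-precision round-off.
approx = evaluate(exp_coefficients(20), 1.0)
print(approx, abs(approx - math.e))
```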
== Example == For example: ∫ x 2 d x = x 3 3 + C {\displaystyle \int x^{2}\,dx={\frac {x^{3}}{3}}+C} is a symbolic result for an indefinite integral (here C is a constant of integration), ∫ − 1 1 x 2 d x = [ x 3 3 ] − 1 1 = 1 3 3 − ( − 1 ) 3 3 = 2 3 {\displaystyle \int _{-1}^{1}x^{2}\,dx=\left[{\frac {x^{3}}{3}}\right]_{-1}^{1}={\frac {1^{3}}{3}}-{\frac {(-1)^{3}}{3}}={\frac {2}{3}}} is a symbolic result for a definite integral, and ∫ − 1 1 x 2 d x ≈ 0.6667 {\displaystyle \int _{-1}^{1}x^{2}\,dx\approx 0.6667} is a numerical result for the same definite integral. == See also == Computer algebra – Scientific area at the interface between computer science and mathematics Elementary function – A kind of mathematical function Fox H-function – Generalization of the Meijer G-function and the Fox–Wright function Definite integral – Operation in mathematical calculus Lists of integrals Meijer G-function – Generalization of the hypergeometric function Operational calculus – Technique to solve differential equations Risch algorithm – Method for evaluating indefinite integrals == References ==
|
Wikipedia:Symbolic method#0
|
In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it. == Symbolic notation == The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties. === Example: the discriminant of a binary quadratic form === These symbols can be explained by the following example from Gordan. Suppose that f ( x ) = A 0 x 1 2 + 2 A 1 x 1 x 2 + A 2 x 2 2 {\displaystyle \displaystyle f(x)=A_{0}x_{1}^{2}+2A_{1}x_{1}x_{2}+A_{2}x_{2}^{2}} is a binary quadratic form with an invariant given by the discriminant Δ = A 0 A 2 − A 1 2 . {\displaystyle \displaystyle \Delta =A_{0}A_{2}-A_{1}^{2}.} The symbolic representation of the discriminant is 2 Δ = ( a b ) 2 {\displaystyle \displaystyle 2\Delta =(ab)^{2}} where a and b are the symbols. The meaning of the expression (ab)2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so ( a b ) = a 1 b 2 − a 2 b 1 . {\displaystyle \displaystyle (ab)=a_{1}b_{2}-a_{2}b_{1}.} Squaring this we get ( a b ) 2 = a 1 2 b 2 2 − 2 a 1 a 2 b 1 b 2 + a 2 2 b 1 2 . 
{\displaystyle \displaystyle (ab)^{2}=a_{1}^{2}b_{2}^{2}-2a_{1}a_{2}b_{1}b_{2}+a_{2}^{2}b_{1}^{2}.} Next we pretend that f ( x ) = ( a 1 x 1 + a 2 x 2 ) 2 = ( b 1 x 1 + b 2 x 2 ) 2 {\displaystyle \displaystyle f(x)=(a_{1}x_{1}+a_{2}x_{2})^{2}=(b_{1}x_{1}+b_{2}x_{2})^{2}} so that A i = a 1 2 − i a 2 i = b 1 2 − i b 2 i {\displaystyle \displaystyle A_{i}=a_{1}^{2-i}a_{2}^{i}=b_{1}^{2-i}b_{2}^{i}} and we ignore the fact that this does not seem to make sense if f is not a power of a linear form. Substituting these values gives ( a b ) 2 = A 2 A 0 − 2 A 1 A 1 + A 0 A 2 = 2 Δ . {\displaystyle \displaystyle (ab)^{2}=A_{2}A_{0}-2A_{1}A_{1}+A_{0}A_{2}=2\Delta .} === Higher degrees === More generally if f ( x ) = A 0 x 1 n + ( n 1 ) A 1 x 1 n − 1 x 2 + ⋯ + A n x 2 n {\displaystyle \displaystyle f(x)=A_{0}x_{1}^{n}+{\binom {n}{1}}A_{1}x_{1}^{n-1}x_{2}+\cdots +A_{n}x_{2}^{n}} is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties f ( x ) = ( a 1 x 1 + a 2 x 2 ) n = ( b 1 x 1 + b 2 x 2 ) n = ( c 1 x 1 + c 2 x 2 ) n = ⋯ . {\displaystyle f(x)=(a_{1}x_{1}+a_{2}x_{2})^{n}=(b_{1}x_{1}+b_{2}x_{2})^{n}=(c_{1}x_{1}+c_{2}x_{2})^{n}=\cdots .} What this means is that the following two vector spaces are naturally isomorphic: The vector space of homogeneous polynomials in A0,...An of degree m The vector space of polynomials in 2m variables a1, a2, b1, b2, c1, c2, ... that have degree n in each of the m pairs of variables (a1, a2), (b1, b2), (c1, c2), ... and are symmetric under permutations of the m symbols a, b, ...., The isomorphism is given by mapping an−j1aj2, bn−j1bj2, .... to Aj. This mapping does not preserve products of polynomials. === More variables === The extension to a form f in more than two variables x1, x2, x3,... 
is similar: one introduces symbols a1, a2, a3 and so on with the properties f ( x ) = ( a 1 x 1 + a 2 x 2 + a 3 x 3 + ⋯ ) n = ( b 1 x 1 + b 2 x 2 + b 3 x 3 + ⋯ ) n = ( c 1 x 1 + c 2 x 2 + c 3 x 3 + ⋯ ) n = ⋯ . {\displaystyle f(x)=(a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+\cdots )^{n}=(b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+\cdots )^{n}=(c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}+\cdots )^{n}=\cdots .} == Symmetric products == The rather mysterious formalism of the symbolic method corresponds to embedding a symmetric product Sn(V) of a vector space V into a tensor product of n copies of V, as the elements preserved by the action of the symmetric group. In fact this is done twice, because the invariants of degree n of a quantic of degree m are the invariant elements of SnSm(V), which gets embedded into a tensor product of mn copies of V, as the elements invariant under a wreath product of the two symmetric groups. The brackets of the symbolic method are really invariant linear forms on this tensor product, which give invariants of SnSm(V) by restriction. == See also == Umbral calculus == References == Gordan, Paul (1987) [1887]. Kerschensteiner, Georg (ed.). Vorlesungen über Invariantentheorie (2nd ed.). New York: AMS Chelsea Publishing. ISBN 9780828403283. MR 0917266. Footnotes == Further reading == Dieudonné, Jean; Carrell, James B. (1970). "Invariant theory, old and new". Advances in Mathematics. 4: 1–80. doi:10.1016/0001-8708(70)90015-0. pp. 32–7, "Invariants of n-ary forms: the symbolic method". Reprinted as Dieudonné, Jean; Carrell, James B. (1971). Invariant theory, old and new. Academic Press. ISBN 0-12-215540-8. Dolgachev, Igor (2003). Lectures on invariant theory. London Mathematical Society Lecture Note Series. Vol. 296. Cambridge University Press. doi:10.1017/CBO9780511615436. ISBN 978-0-521-52548-0. MR 2004511. S2CID 118144995. Grace, John Hilton; Young, Alfred (1903), The Algebra of invariants, Cambridge University Press Hilbert, David (1993) [1897].
Theory of algebraic invariants. Cambridge University Press. ISBN 9780521444576. MR 1266168. Koh, Sebastian S., ed. (2009) [1987]. Invariant Theory. Lecture Notes in Mathematics. Vol. 1278. Springer. ISBN 9783540183600. Kung, Joseph P. S.; Rota, Gian-Carlo (1984). "The invariant theory of binary forms". Bulletin of the American Mathematical Society. New Series. 10 (1): 27–85. doi:10.1090/S0273-0979-1984-15188-7. ISSN 0002-9904. MR 0722856.
|
Wikipedia:Symbolic power of an ideal#0
|
In algebra and algebraic geometry, given a commutative Noetherian ring R {\displaystyle R} and an ideal I {\displaystyle I} in it, the n-th symbolic power of I {\displaystyle I} is the ideal I ( n ) = ⋂ P ∈ Ass ( R / I ) φ P − 1 ( I n R P ) {\displaystyle I^{(n)}=\bigcap _{P\in \operatorname {Ass} (R/I)}\varphi _{P}^{-1}(I^{n}R_{P})} where R P {\displaystyle R_{P}} is the localization of R {\displaystyle R} at P {\displaystyle P} , φ P : R → R P {\displaystyle \varphi _{P}:R\to R_{P}} is the canonical map from a ring to its localization, and the intersection runs through all of the associated primes of R / I {\displaystyle R/I} . Though this definition does not require I {\displaystyle I} to be prime, this assumption is often made, because in the case of a prime ideal, the symbolic power can be equivalently defined as the I {\displaystyle I} -primary component of I n {\displaystyle I^{n}} . Very roughly, it consists of functions with zeros of order n along the variety defined by I {\displaystyle I} . We have: I ( 1 ) = I {\displaystyle I^{(1)}=I} and if I {\displaystyle I} is a maximal ideal, then I ( n ) = I n {\displaystyle I^{(n)}=I^{n}} . Symbolic powers induce the following chain of ideals: I ( 0 ) = R ⊃ I = I ( 1 ) ⊃ I ( 2 ) ⊃ I ( 3 ) ⊃ I ( 4 ) ⊃ ⋯ {\displaystyle I^{(0)}=R\supset I=I^{(1)}\supset I^{(2)}\supset I^{(3)}\supset I^{(4)}\supset \cdots } == Uses == The study and use of symbolic powers has a long history in commutative algebra. Krull's famous proof of his principal ideal theorem uses them in an essential way. They first arose after primary decompositions were proved for Noetherian rings. Zariski used symbolic powers in his study of the analytic normality of algebraic varieties. Chevalley's famous lemma comparing topologies states that in a complete local domain the symbolic powers topology of any prime is finer than the m-adic topology.
A crucial step in the vanishing theorem on local cohomology of Hartshorne and Lichtenbaum uses that for a prime I {\displaystyle I} defining a curve in a complete local domain, the powers of I {\displaystyle I} are cofinal with the symbolic powers of I {\displaystyle I} . This important property of being cofinal was further developed by Schenzel in the 1970s. == In algebraic geometry == Though generators for ordinary powers of I {\displaystyle I} are well understood when I {\displaystyle I} is given in terms of its generators as I = ( f 1 , … , f k ) {\displaystyle I=(f_{1},\ldots ,f_{k})} , it is still very difficult in many cases to determine the generators of symbolic powers of I {\displaystyle I} . But in the geometric setting, there is a clear geometric interpretation in the case when I {\displaystyle I} is a radical ideal over an algebraically closed field of characteristic zero. If X {\displaystyle X} is an irreducible variety whose ideal of vanishing is I {\displaystyle I} , then the differential power of I {\displaystyle I} consists of all the functions in R {\displaystyle R} that vanish to order ≥ n on X {\displaystyle X} , i.e. I ⟨ n ⟩ := { f ∈ R ∣ f vanishes to order ≥ n on all of X } . {\displaystyle I^{\langle n\rangle }:=\{f\in R\mid f{\text{ vanishes to order}}\geq n{\text{ on all of }}X\}.} Or equivalently, if m p {\displaystyle \mathbf {m} _{p}} is the maximal ideal for a point p ∈ X {\displaystyle p\in X} , I ⟨ n ⟩ = ⋂ p ∈ X m p n {\displaystyle I^{\langle n\rangle }=\bigcap _{p\in X}\mathbf {m} _{p}^{n}} . Theorem (Nagata, Zariski) Let I {\displaystyle I} be a prime ideal in a polynomial ring K [ x 1 , … , x N ] {\displaystyle K[x_{1},\ldots ,x_{N}]} over an algebraically closed field. Then I ( m ) = I ⟨ m ⟩ {\displaystyle I^{(m)}=I^{\langle m\rangle }} This result can be extended to any radical ideal. 
This formulation is very useful because, in characteristic zero, we can compute the differential powers in terms of generators as: I ⟨ m ⟩ = ⟨ f ∣ ∂ a f ∂ x a ∈ I for all a ∈ N N where | a | = ∑ i = 1 N a i ≤ m − 1 ⟩ {\displaystyle I^{\langle m\rangle }=\left\langle f\mid {\frac {\partial ^{\mathbf {a} }f}{\partial x^{\mathbf {a} }}}\in I{\text{ for all }}\mathbf {a} \in \mathbb {N} ^{N}{\text{ where }}|\mathbf {a} |=\sum _{i=1}^{N}a_{i}\leq m-1\right\rangle } For another formulation, we can consider the case when the base ring is a polynomial ring over a field. In this case, we can interpret the n-th symbolic power as the sheaf of all function germs over X = Spec ( R ) vanishing to order ≥ n at Z = V ( I ) {\displaystyle X=\operatorname {Spec} (R){\text{ vanishing to order}}\geq n{\text{ at }}Z=V(I)} In fact, if X {\displaystyle X} is a smooth variety over a perfect field, then I ( n ) = { f ∈ R ∣ f ∈ m n for every closed point m ∈ Z } {\displaystyle I^{(n)}=\{f\in R\mid f\in \mathbf {m} ^{n}{\text{ for every closed point }}\mathbf {m} \in Z\}} == Containments == It is natural to consider whether or not symbolic powers agree with ordinary powers, i.e. does I n = I ( n ) {\displaystyle I^{n}=I^{(n)}} hold? In general this is not the case. One example of this is the prime ideal P = ( x 4 − y z , y 2 − x z , x 3 y − z 2 ) ⊆ K [ x , y , z ] {\displaystyle P=(x^{4}-yz,\,y^{2}-xz,\,x^{3}y-z^{2})\subseteq K[x,y,z]} . Here we have that P 2 ≠ P ( 2 ) {\displaystyle P^{2}\neq P^{(2)}} . However, P 2 ⊂ P ( 2 ) {\displaystyle P^{2}\subset P^{(2)}} does hold and the generalization of this inclusion is well understood. Indeed, the containment I n ⊆ I ( n ) {\displaystyle I^{n}\subseteq I^{(n)}} follows from the definition. Further, it is known that I r ⊆ I ( m ) {\displaystyle I^{r}\subseteq I^{(m)}} if and only if m ≤ r {\displaystyle m\leq r} . The proof follows from Nakayama's lemma. 
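In characteristic zero, the derivative description of differential powers can be checked mechanically. The following SymPy sketch is a hypothetical helper written for the simplest case, the ideal I = (x, y) of the origin in K[x, y]; since this is a maximal ideal, its symbolic, ordinary, and differential powers all coincide, and membership in I^⟨m⟩ amounts to all partial derivatives of order ≤ m − 1 vanishing at the point.

```python
import sympy as sp

x, y = sp.symbols('x y')

def in_differential_power(f, m):
    """True iff every partial derivative of f of order <= m - 1 vanishes
    at the origin, i.e. f vanishes there to order >= m."""
    for total in range(m):
        for i in range(total + 1):
            # take i derivatives in x and (total - i) derivatives in y
            d = f
            for _ in range(i):
                d = sp.diff(d, x)
            for _ in range(total - i):
                d = sp.diff(d, y)
            if d.subs({x: 0, y: 0}) != 0:
                return False
    return True

print(in_differential_power(x**2 + x*y**3, 2))   # True: every term has degree >= 2
print(in_differential_power(x + y**2, 2))        # False: the linear term x survives
```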
There has been extensive study into the other containment, when symbolic powers are contained in ordinary powers of ideals, referred to as the Containment Problem. Once again this has an easily stated answer, summarized in the following theorem. It was developed by Ein, Lazarsfeld, and Smith in characteristic zero and was expanded to positive characteristic by Hochster and Huneke. Their papers both build upon the results of Irena Swanson in Linear Equivalence of Ideal Topologies (2000). Theorem (Ein, Lazarsfeld, Smith; Hochster, Huneke) Let I ⊂ K [ x 1 , x 2 , … , x N ] {\displaystyle I\subset K[x_{1},x_{2},\ldots ,x_{N}]} be a homogeneous ideal. Then the inclusion I ( m ) ⊂ I r {\displaystyle I^{(m)}\subset I^{r}} holds for all m ≥ N r . {\displaystyle m\geq Nr.} It was later verified that the bound of N {\displaystyle N} in the theorem cannot be tightened for general ideals. However, following a question posed by Bocci, Harbourne, and Huneke, it was discovered that a better bound exists in some cases. Theorem The inclusion I ( m ) ⊆ I r {\displaystyle I^{(m)}\subseteq I^{r}} for all m ≥ N r − N + 1 {\displaystyle m\geq Nr-N+1} holds for arbitrary ideals in characteristic 2; for monomial ideals in arbitrary characteristic; for ideals of d-stars; and for ideals of general points in P 2 and P 3 {\displaystyle \mathbb {P} ^{2}{\text{ and }}\mathbb {P} ^{3}} == References == == External links == Melvin Hochster, Math 711: Lecture of September 7, 2007
|
Wikipedia:Symbolic regression#0
|
Symbolic regression (SR) is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity. No particular model is provided as a starting point for symbolic regression. Instead, initial expressions are formed by randomly combining mathematical building blocks such as mathematical operators, analytic functions, constants, and state variables. Usually, a subset of these primitives will be specified by the person operating it, but that's not a requirement of the technique. The symbolic regression problem for mathematical functions has been tackled with a variety of methods, including recombining equations most commonly using genetic programming, as well as more recent methods utilizing Bayesian methods and neural networks. Another non-classical alternative method to SR is called Universal Functions Originator (UFO), which has a different mechanism, search-space, and building strategy. Further methods such as Exact Learning attempt to transform the fitting problem into a moments problem in a natural function space, usually built around generalizations of the Meijer-G function. By not requiring a priori specification of a model, symbolic regression isn't affected by human bias, or unknown gaps in domain knowledge. It attempts to uncover the intrinsic relationships of the dataset, by letting the patterns in the data itself reveal the appropriate models, rather than imposing a model structure that is deemed mathematically tractable from a human perspective. The fitness function that drives the evolution of the models takes into account not only error metrics (to ensure the models accurately predict the data), but also special complexity measures, thus ensuring that the resulting models reveal the data's underlying structure in a way that's understandable from a human perspective. 
This facilitates reasoning and improves the odds of gaining insight about the data-generating system, as well as improving generalisability and extrapolation behaviour by preventing overfitting. Accuracy and simplicity may be left as two separate objectives of the regression, in which case the optimum solutions form a Pareto front, or they may be combined into a single objective by means of a model selection principle such as minimum description length. It has been proven that symbolic regression is an NP-hard problem, in the sense that one cannot always find the best possible mathematical expression to fit a given dataset in polynomial time. Nevertheless, if the sought-for equation is not too complex it is possible to solve the symbolic regression problem exactly by generating every possible function (built from some predefined set of operators) and evaluating them on the dataset in question. == Difference from classical regression == While conventional regression techniques seek to optimize the parameters for a pre-specified model structure, symbolic regression avoids imposing prior assumptions, and instead infers the model from the data. In other words, it attempts to discover both model structures and model parameters. This approach has the disadvantage of having a much larger space to search, because not only is the search space in symbolic regression infinite, but there are an infinite number of models which will perfectly fit a finite data set (provided that the model complexity isn't artificially limited). This means that it will possibly take a symbolic regression algorithm longer to find an appropriate model and parametrization than traditional regression techniques do. This can be mitigated by limiting the set of building blocks provided to the algorithm, based on existing knowledge of the system that produced the data; but in the end, using symbolic regression is a decision that has to be balanced with how much is known about the underlying system.
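The exhaustive strategy mentioned earlier, generating every possible function from a predefined set of operators, can be sketched in a few lines. The primitive set, depth bound, and toy dataset below are illustrative choices, not those of any particular system.

```python
import itertools

# Tiny exhaustive symbolic regression: enumerate every expression tree up
# to a fixed depth over a small primitive set and keep the best fit.
BINARY = {'+': lambda a, b: a + b,
          '-': lambda a, b: a - b,
          '*': lambda a, b: a * b}
TERMINALS = {'x': lambda x: x, '1': lambda x: 1.0}

def expressions(depth):
    """Yield (name, callable) for every tree of at most the given depth."""
    if depth == 0:
        yield from TERMINALS.items()
        return
    yield from expressions(depth - 1)
    subtrees = list(expressions(depth - 1))
    for (ln, lf), (rn, rf) in itertools.product(subtrees, repeat=2):
        for op, fn in BINARY.items():
            yield (f'({ln}{op}{rn})',
                   lambda x, lf=lf, rf=rf, fn=fn: fn(lf(x), rf(x)))

# Toy dataset sampled from the target x**2 + 1
data = [(x, x * x + 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

def mse(f):
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

best = min(expressions(2), key=lambda e: mse(e[1]))
print(best[0], mse(best[1]))   # finds an expression equivalent to x*x + 1
```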
Nevertheless, this characteristic of symbolic regression also has advantages: because the evolutionary algorithm requires diversity in order to effectively explore the search space, the result is likely to be a selection of high-scoring models (and their corresponding set of parameters). Examining this collection could provide better insight into the underlying process, and allows the user to identify an approximation that better fits their needs in terms of accuracy and simplicity. == Benchmarking == === SRBench === In 2021, SRBench was proposed as a large benchmark for symbolic regression. In its inception, SRBench featured 14 symbolic regression methods, 7 other ML methods, and 252 datasets from PMLB. The benchmark intends to be a living project: it encourages the submission of improvements, new datasets, and new methods, to keep track of the state of the art in SR. === SRBench Competition 2022 === In 2022, SRBench announced the competition Interpretable Symbolic Regression for Data Science, which was held at the GECCO conference in Boston, MA. The competition pitted nine leading symbolic regression algorithms against each other on a novel set of data problems and considered different evaluation criteria. The competition was organized in two tracks, a synthetic track and a real-world data track. ==== Synthetic Track ==== In the synthetic track, methods were compared according to five properties: re-discovery of exact expressions; feature selection; resistance to local optima; extrapolation; and sensitivity to noise. Rankings of the methods were: QLattice PySR (Python Symbolic Regression) uDSR (Deep Symbolic Optimization) ==== Real-world Track ==== In the real-world track, methods were trained to build interpretable predictive models for 14-day forecast counts of COVID-19 cases, hospitalizations, and deaths in New York State. These models were reviewed by a subject expert and assigned trust ratings and evaluated for accuracy and simplicity. 
The ranking of the methods was: uDSR (Deep Symbolic Optimization) QLattice geneticengine (Genetic Engine) == Non-standard methods == Most symbolic regression algorithms prevent combinatorial explosion by implementing evolutionary algorithms that iteratively improve the best-fit expression over many generations. Recently, researchers have proposed algorithms utilizing other tactics in AI. Silviu-Marian Udrescu and Max Tegmark developed the "AI Feynman" algorithm, which attempts symbolic regression by training a neural network to represent the mystery function, then runs tests against the neural network to attempt to break up the problem into smaller parts. For example, if f ( x 1 , . . . , x i , x i + 1 , . . . , x n ) = g ( x 1 , . . . , x i ) + h ( x i + 1 , . . . , x n ) {\displaystyle f(x_{1},...,x_{i},x_{i+1},...,x_{n})=g(x_{1},...,x_{i})+h(x_{i+1},...,x_{n})} , tests against the neural network can recognize the separation and proceed to solve for g {\displaystyle g} and h {\displaystyle h} separately and with different variables as inputs. This is an example of divide and conquer, which reduces the size of the problem to be more manageable. AI Feynman also transforms the inputs and outputs of the mystery function in order to produce a new function which can be solved with other techniques, and performs dimensional analysis to reduce the number of independent variables involved. The algorithm was able to "discover" 100 equations from The Feynman Lectures on Physics, while a leading software using evolutionary algorithms, Eureqa, solved only 71. AI Feynman, in contrast to classic symbolic regression methods, requires a very large dataset in order to first train the neural network and is naturally biased towards equations that are common in elementary physics. 
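The additive-separability test underlying this divide-and-conquer step can be sketched without a neural network by evaluating the target function directly (the function choices and sample grid below are illustrative): f(x, y) = g(x) + h(y) holds exactly when f(x, y) − f(x, y0) − f(x0, y) + f(x0, y0) = 0 for all sample points.

```python
import itertools

def additively_separable(f, xs, ys, tol=1e-9):
    """Check f(x, y) == g(x) + h(y) on a sample grid via the mixed
    second difference, which vanishes iff f is additively separable."""
    x0, y0 = xs[0], ys[0]
    return all(abs(f(x, y) - f(x, y0) - f(x0, y) + f(x0, y0)) <= tol
               for x, y in itertools.product(xs, ys))

grid = [0.5, 1.0, 1.7, 2.3]
print(additively_separable(lambda x, y: x**2 + 3*y, grid, grid))   # True
print(additively_separable(lambda x, y: x * y, grid, grid))        # False
```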
== Software == === End-user software === QLattice is a quantum-inspired simulation and machine learning technology that helps search through an infinite list of potential mathematical models to solve a problem. Evolutionary Forest is a Genetic Programming-based automated feature construction algorithm for symbolic regression. uDSR is a deep learning framework for symbolic optimization tasks dCGP, differentiable Cartesian Genetic Programming in python (free, open source) HeuristicLab, a software environment for heuristic and evolutionary algorithms, including symbolic regression (free, open source) GeneXProTools, - an implementation of Gene expression programming technique for various problems including symbolic regression (commercial) Multi Expression Programming X, an implementation of Multi expression programming for symbolic regression and classification (free, open source) Eureqa, evolutionary symbolic regression software (commercial), and software library TuringBot, symbolic regression software based on simulated annealing (commercial) PySR, symbolic regression environment written in Python and Julia, using regularized evolution, simulated annealing, and gradient-free optimization (free, open source) GP-GOMEA, fast (C++ back-end) evolutionary symbolic regression with Python scikit-learn-compatible interface, achieved one of the best trade-offs between accuracy and simplicity of discovered models on SRBench in 2021 (free, open source) == See also == Closed-form expression § Conversion from numerical forms Genetic programming Gene expression programming Kolmogorov complexity Linear genetic programming Mathematical optimization Multi expression programming Regression analysis Reverse mathematics Discovery system (AI research) == References == == Further reading == Mark J. Willis; Hugo G. Hiden; Ben McKay; Gary A. Montague; Peter Marenbach (1997). "Genetic programming: An introduction and survey of applications" (PDF). IEE Conference Publications. IEE. pp. 
314–319. Wouter Minnebo; Sean Stijven (2011). "Chapter 4: Symbolic Regression" (PDF). Empowering Knowledge Computing with Variable Selection (M.Sc. thesis). University of Antwerp. John R. Koza; Martin A. Keane; James P. Rice (1993). "Performance improvement of machine learning via automatic discovery of facilitating functions as applied to a problem of symbolic system identification" (PDF). IEEE International Conference on Neural Networks. San Francisco: IEEE. pp. 191–198. == External links == Ivan Zelinka (2004). "Symbolic regression — an overview". Hansueli Gerber (1998). "Simple Symbolic Regression Using Genetic Programming". (Java applet) — approximates a function by evolving combinations of simple arithmetic operators, using algorithms developed by John Koza. Katya Vladislavleva. "Symbolic Regression: Function Discovery & More". Archived from the original on 2014-12-18.
|
Wikipedia:Symbolic-numeric computation#0
|
In mathematics and computer science, symbolic-numeric computation is the use of software that combines symbolic and numeric methods to solve problems. == References == Wang, Dongming; Zhi, Lihong (2007). Symbolic-numeric Computation. Springer. ISBN 978-3-7643-7983-4. Mourrain, Bernard; Pavone, Jean-Pascal; Trebuchet, Philippe; Tsigaridas, Elias P.; Wintz, Julien (2008). "SYNAPS: A Library for Dedicated Applications in Symbolic Numeric Computing". Software for Algebraic Geometry. The IMA Volumes in Mathematics and its Applications. Vol. 148. pp. 81–109. CiteSeerX 10.1.1.135.1680. doi:10.1007/978-0-387-78133-4_6. ISBN 978-0-387-78132-7. Grabmeier, Johannes; Kaltofen, Erich; Weispfenning, Volker, eds. (2003). "Hybrid methods" (PDF). Computer algebra handbook: foundations, applications, systems, Volume 1. Springer. ISBN 978-3-540-65466-7. Robbiano, Lorenzo; Abbott, John (2009). Approximate Commutative Algebra. Springer. ISBN 978-3-211-99313-2. Langer, Ulrich; Paule, Peter, eds. (2011). Numerical and Symbolic Scientific Computing. Springer. ISBN 978-3-7091-0793-5. == External links == "The Fourth International Workshop on Symbolic-Numeric Computation (SNC2011)". San Jose, California. June 7–9, 2011. Professional organizations ACM SIGSAM: Special Interest Group in Symbolic and Algebraic Manipulation
|
Wikipedia:Symmetric function#0
|
In mathematics, a function of n {\displaystyle n} variables is symmetric if its value is the same no matter the order of its arguments. For example, a function f ( x 1 , x 2 ) {\displaystyle f\left(x_{1},x_{2}\right)} of two arguments is a symmetric function if and only if f ( x 1 , x 2 ) = f ( x 2 , x 1 ) {\displaystyle f\left(x_{1},x_{2}\right)=f\left(x_{2},x_{1}\right)} for all x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} such that ( x 1 , x 2 ) {\displaystyle \left(x_{1},x_{2}\right)} and ( x 2 , x 1 ) {\displaystyle \left(x_{2},x_{1}\right)} are in the domain of f . {\displaystyle f.} The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials. A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric k {\displaystyle k} -tensors on a vector space V {\displaystyle V} is isomorphic to the space of homogeneous polynomials of degree k {\displaystyle k} on V . {\displaystyle V.} Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry. == Symmetrization == Given any function f {\displaystyle f} in n {\displaystyle n} variables with values in an abelian group, a symmetric function can be constructed by summing values of f {\displaystyle f} over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions f . 
{\displaystyle f.} The only general case where f {\displaystyle f} can be recovered if both its symmetrization and antisymmetrization are known is when n = 2 {\displaystyle n=2} and the abelian group admits a division by 2 (inverse of doubling); then f {\displaystyle f} is equal to half the sum of its symmetrization and its antisymmetrization. == Examples == Consider the real function f ( x 1 , x 2 , x 3 ) = ( x − x 1 ) ( x − x 2 ) ( x − x 3 ) . {\displaystyle f(x_{1},x_{2},x_{3})=(x-x_{1})(x-x_{2})(x-x_{3}).} By definition, a symmetric function with n {\displaystyle n} variables has the property that f ( x 1 , x 2 , … , x n ) = f ( x 2 , x 1 , … , x n ) = f ( x 3 , x 1 , … , x n , x n − 1 ) , etc. {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=f(x_{2},x_{1},\ldots ,x_{n})=f(x_{3},x_{1},\ldots ,x_{n},x_{n-1}),\quad {\text{ etc.}}} In general, the function remains the same for every permutation of its variables. This means that, in this case, ( x − x 1 ) ( x − x 2 ) ( x − x 3 ) = ( x − x 2 ) ( x − x 1 ) ( x − x 3 ) = ( x − x 3 ) ( x − x 1 ) ( x − x 2 ) {\displaystyle (x-x_{1})(x-x_{2})(x-x_{3})=(x-x_{2})(x-x_{1})(x-x_{3})=(x-x_{3})(x-x_{1})(x-x_{2})} and so on, for all permutations of x 1 , x 2 , x 3 . {\displaystyle x_{1},x_{2},x_{3}.} Consider the function f ( x , y ) = x 2 + y 2 − r 2 . {\displaystyle f(x,y)=x^{2}+y^{2}-r^{2}.} If x {\displaystyle x} and y {\displaystyle y} are interchanged the function becomes f ( y , x ) = y 2 + x 2 − r 2 , {\displaystyle f(y,x)=y^{2}+x^{2}-r^{2},} which yields exactly the same results as the original f ( x , y ) . {\displaystyle f(x,y).} Consider now the function f ( x , y ) = a x 2 + b y 2 − r 2 . {\displaystyle f(x,y)=ax^{2}+by^{2}-r^{2}.} If x {\displaystyle x} and y {\displaystyle y} are interchanged, the function becomes f ( y , x ) = a y 2 + b x 2 − r 2 . {\displaystyle f(y,x)=ay^{2}+bx^{2}-r^{2}.} This function is not the same as the original if a ≠ b , {\displaystyle a\neq b,} which makes it non-symmetric. 
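The symmetrization and antisymmetrization described above, and the recovery of f from its two parts when division by 2 is available, can be checked for two variables with SymPy (an illustrative sketch):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 * x2                                  # not symmetric in (x1, x2)
swap = {x1: x2, x2: x1}

f_sym = f + f.subs(swap, simultaneous=True)     # symmetrization: x1**2*x2 + x1*x2**2
f_anti = f - f.subs(swap, simultaneous=True)    # antisymmetrization: x1**2*x2 - x1*x2**2

print(sp.expand(f_sym), '|', sp.expand(f_anti))
print(sp.expand((f_sym + f_anti) / 2))          # recovers f = x1**2*x2
```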
== Applications == === U-statistics === In statistics, an n {\displaystyle n} -sample statistic (a function in n {\displaystyle n} variables) that is obtained by bootstrapping symmetrization of a k {\displaystyle k} -sample statistic, yielding a symmetric function in n {\displaystyle n} variables, is called a U-statistic. Examples include the sample mean and sample variance. == See also == Alternating polynomial Elementary symmetric polynomial – Mathematical function Even and odd functions – Functions such that f(–x) equals f(x) or –f(x) Exchangeable random variables – Concept in statistics Quasisymmetric function Ring of symmetric functions Symmetrization – process that converts any function in n variables to a symmetric function in n variables Vandermonde polynomial – determinant of Vandermonde matrix == References == F. N. David, M. G. Kendall & D. E. Barton (1966) Symmetric Function and Allied Tables, Cambridge University Press. Joseph P. S. Kung, Gian-Carlo Rota, & Catherine H. Yan (2009) Combinatorics: The Rota Way, §5.1 Symmetric functions, pp 222–5, Cambridge University Press, ISBN 978-0-521-73794-4.
Wikipedia:Symmetric graph#0
In the mathematical field of graph theory, a graph G is symmetric or arc-transitive if, given any two ordered pairs of adjacent vertices ( u 1 , v 1 ) {\displaystyle (u_{1},v_{1})} and ( u 2 , v 2 ) {\displaystyle (u_{2},v_{2})} of G, there is an automorphism f : V ( G ) → V ( G ) {\displaystyle f:V(G)\rightarrow V(G)} such that f ( u 1 ) = u 2 {\displaystyle f(u_{1})=u_{2}} and f ( v 1 ) = v 2 . {\displaystyle f(v_{1})=v_{2}.} In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive. By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a—b might map to c—d, but not to d—c. Star graphs are a simple example of being edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive. Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above. A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. 
vertices a distance of 1 apart), the definition covers two pairs of vertices, each the same distance apart. Such graphs are automatically symmetric, by definition. A t-arc is defined to be a sequence of t + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated vertices being more than 2 steps apart. A t-transitive graph is a graph such that the automorphism group acts transitively on t-arcs, but not on (t + 1)-arcs. Since 1-arcs are simply edges, every symmetric graph of degree 3 or more must be t-transitive for some t, and the value of t can be used to further classify symmetric graphs. The cube is 2-transitive, for example. Note that conventionally the term "symmetric graph" is not complementary to the term "asymmetric graph," as the latter refers to a graph that has no nontrivial symmetries at all. == Examples == Two basic families of symmetric graphs for any number of vertices are the cycle graphs (of degree 2) and the complete graphs. Further symmetric graphs are formed by the vertices and edges of the regular and quasiregular polyhedra: the cube, octahedron, icosahedron, dodecahedron, cuboctahedron, and icosidodecahedron. Extension of the cube to n dimensions gives the hypercube graphs (with 2^n vertices and degree n). Similarly, extension of the octahedron to n dimensions gives the graphs of the cross-polytopes; this family of graphs (with 2n vertices and degree 2n − 2) is sometimes referred to as the cocktail party graphs: they are complete graphs with a set of edges making a perfect matching removed. Additional families of symmetric graphs with an even number of vertices 2n are the evenly split complete bipartite graphs Kn,n and the crown graphs on 2n vertices. Many other symmetric graphs can be classified as circulant graphs (but not all). The Rado graph forms an example of a symmetric graph with infinitely many vertices and infinite degree.
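For small graphs, arc-transitivity can be checked by brute force over all vertex permutations. The sketch below (plain Python, function names hypothetical, exponential in the number of vertices, so only for tiny examples) confirms that the 4-cycle is symmetric, while the star K1,3 is edge-transitive but not arc-transitive, as discussed above.

```python
from itertools import permutations

def automorphisms(n, edges):
    """Yield all permutations of range(n) that preserve the edge set."""
    edge_set = {frozenset(e) for e in edges}
    for p in permutations(range(n)):
        if all(frozenset((p[u], p[v])) in edge_set for u, v in edges):
            yield p

def is_arc_transitive(n, edges):
    """Does the automorphism group act transitively on ordered adjacent pairs?"""
    arcs = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    u0, v0 = next(iter(arcs))
    # orbit of one arc under Aut(G); transitivity means it covers every arc
    orbit = {(p[u0], p[v0]) for p in automorphisms(n, edges)}
    return orbit == arcs

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the cycle graph C4: symmetric
star3 = [(0, 1), (0, 2), (0, 3)]            # K_{1,3}: edge-transitive, not symmetric
print(is_arc_transitive(4, cycle4))  # True
print(is_arc_transitive(4, star3))   # False
```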
=== Cubic symmetric graphs === Combining the symmetry condition with the restriction that graphs be cubic (i.e. all vertices have degree 3) yields quite a strong condition, and such graphs are rare enough to be listed. They all have an even number of vertices. The Foster census and its extensions provide such lists. The Foster census was begun in the 1930s by Ronald M. Foster while he was employed by Bell Labs, and in 1988 (when Foster was 92) the then current Foster census (listing all cubic symmetric graphs up to 512 vertices) was published in book form. The first thirteen items in the list are cubic symmetric graphs with up to 30 vertices (ten of these are also distance-transitive; the exceptions are as indicated): Other well known cubic symmetric graphs are the Dyck graph, the Foster graph and the Biggs–Smith graph. The ten distance-transitive graphs listed above, together with the Foster graph and the Biggs–Smith graph, are the only cubic distance-transitive graphs. == Properties == The vertex-connectivity of a symmetric graph is always equal to the degree d. In contrast, for vertex-transitive graphs in general, the vertex-connectivity is bounded below by 2(d + 1)/3. A t-transitive graph of degree 3 or more has girth at least 2(t − 1). However, there are no finite t-transitive graphs of degree 3 or more for t ≥ 8. In the case of the degree being exactly 3 (cubic symmetric graphs), there are none for t ≥ 6. == See also == Algebraic graph theory Gallery of named graphs Regular map == References == == External links == Cubic symmetric graphs (The Foster Census). Data files for all cubic symmetric graphs up to 768 vertices, and some cubic graphs with up to 1000 vertices. Gordon Royle, updated February 2001, retrieved 2009-04-18. Trivalent (cubic) symmetric graphs on up to 10000 vertices. Marston Conder, 2011.
Wikipedia:Symmetric polynomial#0
In mathematics, a symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn). Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial. Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory. 
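The defining condition can be tested mechanically with a computer algebra system. A sketch using sympy (the helper name is ad hoc, not a sympy API): it substitutes every permutation of the variables and compares the result with the original polynomial.

```python
from itertools import permutations
import sympy as sp

def is_symmetric_poly(P, variables):
    """True iff P(X_sigma(1), ..., X_sigma(n)) == P for every permutation sigma."""
    return all(
        sp.expand(P.subs(list(zip(variables, perm)), simultaneous=True) - P) == 0
        for perm in permutations(variables)
    )

X1, X2, X3 = sp.symbols('X1 X2 X3')
print(is_symmetric_poly(X1**3 + X2**3 - 7, (X1, X2)))   # True
# invariant only under cyclic permutations, hence not symmetric:
print(is_symmetric_poly(X1**4*X2**2*X3 + X1*X2**4*X3**2 + X1**2*X2*X3**4,
                        (X1, X2, X3)))                  # False
```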
== Examples == The following polynomials in two variables X1 and X2 are symmetric: X 1 3 + X 2 3 − 7 {\displaystyle X_{1}^{3}+X_{2}^{3}-7} 4 X 1 2 X 2 2 + X 1 3 X 2 + X 1 X 2 3 + ( X 1 + X 2 ) 4 {\displaystyle 4X_{1}^{2}X_{2}^{2}+X_{1}^{3}X_{2}+X_{1}X_{2}^{3}+(X_{1}+X_{2})^{4}} as is the following polynomial in three variables X1, X2, X3: X 1 X 2 X 3 − 2 X 1 X 2 − 2 X 1 X 3 − 2 X 2 X 3 {\displaystyle X_{1}X_{2}X_{3}-2X_{1}X_{2}-2X_{1}X_{3}-2X_{2}X_{3}} There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is ∏ 1 ≤ i < j ≤ n ( X i − X j ) 2 {\displaystyle \prod _{1\leq i<j\leq n}(X_{i}-X_{j})^{2}} where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant). On the other hand, the polynomial in two variables X 1 − X 2 {\displaystyle X_{1}-X_{2}} is not symmetric, since if one exchanges X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} one gets a different polynomial, X 2 − X 1 {\displaystyle X_{2}-X_{1}} . Similarly in three variables X 1 4 X 2 2 X 3 + X 1 X 2 4 X 3 2 + X 1 2 X 2 X 3 4 {\displaystyle X_{1}^{4}X_{2}^{2}X_{3}+X_{1}X_{2}^{4}X_{3}^{2}+X_{1}^{2}X_{2}X_{3}^{4}} has only symmetry under cyclic permutations of the three variables, which is not sufficient to be a symmetric polynomial. 
However, the following is symmetric: X 1 4 X 2 2 X 3 + X 1 X 2 4 X 3 2 + X 1 2 X 2 X 3 4 + X 1 4 X 2 X 3 2 + X 1 X 2 2 X 3 4 + X 1 2 X 2 4 X 3 {\displaystyle X_{1}^{4}X_{2}^{2}X_{3}+X_{1}X_{2}^{4}X_{3}^{2}+X_{1}^{2}X_{2}X_{3}^{4}+X_{1}^{4}X_{2}X_{3}^{2}+X_{1}X_{2}^{2}X_{3}^{4}+X_{1}^{2}X_{2}^{4}X_{3}} == Applications == === Galois theory === One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree n having n roots in a given field. These n roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function f of the n roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if f is given by a symmetric polynomial. This yields the approach to solving polynomial equations by inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots? This leads to studying solutions of polynomials using the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory. == Relation with the roots of a monic univariate polynomial == Consider a monic polynomial in t of degree n P = t n + a n − 1 t n − 1 + ⋯ + a 2 t 2 + a 1 t + a 0 {\displaystyle P=t^{n}+a_{n-1}t^{n-1}+\cdots +a_{2}t^{2}+a_{1}t+a_{0}} with coefficients ai in some field K. There exist n roots x1,...,xn of P in some possibly larger field (for instance if K is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has all roots is expressed by the relation P = t n + a n − 1 t n − 1 + ⋯ + a 2 t 2 + a 1 t + a 0 = ( t − x 1 ) ( t − x 2 ) ⋯ ( t − x n ) . 
{\displaystyle P=t^{n}+a_{n-1}t^{n-1}+\cdots +a_{2}t^{2}+a_{1}t+a_{0}=(t-x_{1})(t-x_{2})\cdots (t-x_{n}).} By comparing coefficients one finds that a n − 1 = − x 1 − x 2 − ⋯ − x n a n − 2 = x 1 x 2 + x 1 x 3 + ⋯ + x 2 x 3 + ⋯ + x n − 1 x n = ∑ 1 ≤ i < j ≤ n x i x j ⋮ a 1 = ( − 1 ) n − 1 ( x 2 x 3 ⋯ x n + x 1 x 3 x 4 ⋯ x n + ⋯ + x 1 x 2 ⋯ x n − 2 x n + x 1 x 2 ⋯ x n − 1 ) = ( − 1 ) n − 1 ∑ i = 1 n ∏ j ≠ i x j a 0 = ( − 1 ) n x 1 x 2 ⋯ x n . {\displaystyle {\begin{aligned}a_{n-1}&=-x_{1}-x_{2}-\cdots -x_{n}\\a_{n-2}&=x_{1}x_{2}+x_{1}x_{3}+\cdots +x_{2}x_{3}+\cdots +x_{n-1}x_{n}=\textstyle \sum _{1\leq i<j\leq n}x_{i}x_{j}\\&{}\ \,\vdots \\a_{1}&=(-1)^{n-1}(x_{2}x_{3}\cdots x_{n}+x_{1}x_{3}x_{4}\cdots x_{n}+\cdots +x_{1}x_{2}\cdots x_{n-2}x_{n}+x_{1}x_{2}\cdots x_{n-1})=\textstyle (-1)^{n-1}\sum _{i=1}^{n}\prod _{j\neq i}x_{j}\\a_{0}&=(-1)^{n}x_{1}x_{2}\cdots x_{n}.\end{aligned}}} These are in fact just instances of Vieta's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial P there may be qualitative differences between the roots (like lying in the base field K or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions. Now one may change the point of view, by taking the roots rather than the coefficients as basic parameters for describing P, and considering them as indeterminates rather than as constants in an appropriate field; the coefficients ai then become just the particular symmetric polynomials given by the above equations. Those polynomials, without the sign ( − 1 ) n − i {\displaystyle (-1)^{n-i}} , are known as the elementary symmetric polynomials in x1, ..., xn. A basic fact, known as the fundamental theorem of symmetric polynomials, states that any symmetric polynomial in n variables can be given by a polynomial expression in terms of these elementary symmetric polynomials. 
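The coefficient-root relations above can be spot-checked symbolically; a brief sympy sketch for n = 3, expanding the product of linear factors and comparing coefficients:

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
P = sp.expand((t - x1)*(t - x2)*(t - x3))
a = sp.Poly(P, t).all_coeffs()      # [1, a_2, a_1, a_0] in decreasing powers of t
assert sp.expand(a[1] + (x1 + x2 + x3)) == 0           # a_2 = -e_1
assert sp.expand(a[2] - (x1*x2 + x1*x3 + x2*x3)) == 0  # a_1 = e_2
assert sp.expand(a[3] + x1*x2*x3) == 0                 # a_0 = -e_3
```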
It follows that any symmetric polynomial expression in the roots of a monic polynomial can be expressed as a polynomial in the coefficients of the polynomial, and in particular that its value lies in the base field K that contains those coefficients. Thus, when working only with such symmetric polynomial expressions in the roots, it is unnecessary to know anything particular about those roots, or to compute in any larger field than K in which those roots may lie. In fact the values of the roots themselves become rather irrelevant, and the necessary relations between coefficients and symmetric polynomial expressions can be found by computations in terms of symmetric polynomials only. An example of such relations are Newton's identities, which express the sum of any fixed power of the roots in terms of the elementary symmetric polynomials. == Special kinds of symmetric polynomials == There are a few types of symmetric polynomials in the variables X1, X2, ..., Xn that are fundamental. === Elementary symmetric polynomials === For each nonnegative integer k, the elementary symmetric polynomial ek(X1, ..., Xn) is the sum of all distinct products of k distinct variables. (Some authors denote it by σk instead.) For k = 0 there is only the empty product so e0(X1, ..., Xn) = 1, while for k > n, no products at all can be formed, so ek(X1, X2, ..., Xn) = 0 in these cases. The remaining n elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. 
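As an illustration, the second polynomial from the examples section can be rewritten in e1 = X1 + X2 and e2 = X1X2; the particular representation below was worked out by hand for this sketch, and sympy confirms it:

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
e1, e2 = X1 + X2, X1*X2            # the elementary symmetric polynomials for n = 2
P = 4*X1**2*X2**2 + X1**3*X2 + X1*X2**3 + (X1 + X2)**4
Q = e1**4 + e2*e1**2 + 2*e2**2     # the same polynomial, expressed in e1 and e2
assert sp.expand(P - Q) == 0       # the two expansions agree term by term
```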
In fact one has the following more detailed facts: any symmetric polynomial P in X1, ..., Xn can be written as a polynomial expression in the polynomials ek(X1, ..., Xn) with 1 ≤ k ≤ n; this expression is unique up to equivalence of polynomial expressions; if P has integral coefficients, then the polynomial expression also has integral coefficients. For example, for n = 2, the relevant elementary symmetric polynomials are e1(X1, X2) = X1 + X2, and e2(X1, X2) = X1X2. The first polynomial in the list of examples above can then be written as X 1 3 + X 2 3 − 7 = e 1 ( X 1 , X 2 ) 3 − 3 e 2 ( X 1 , X 2 ) e 1 ( X 1 , X 2 ) − 7 {\displaystyle X_{1}^{3}+X_{2}^{3}-7=e_{1}(X_{1},X_{2})^{3}-3e_{2}(X_{1},X_{2})e_{1}(X_{1},X_{2})-7} (for a proof that this is always possible see the fundamental theorem of symmetric polynomials). === Monomial symmetric polynomials === Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic additive building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in X1, ..., Xn can be written as X1^α1 ⋯ Xn^αn where the exponents αi are natural numbers (possibly zero); writing α = (α1,...,αn) this can be abbreviated to X^α. The monomial symmetric polynomial mα(X1, ..., Xn) is defined as the sum of all monomials X^β where β ranges over all distinct permutations of (α1,...,αn). For instance one has m ( 3 , 1 , 1 ) ( X 1 , X 2 , X 3 ) = X 1 3 X 2 X 3 + X 1 X 2 3 X 3 + X 1 X 2 X 3 3 {\displaystyle m_{(3,1,1)}(X_{1},X_{2},X_{3})=X_{1}^{3}X_{2}X_{3}+X_{1}X_{2}^{3}X_{3}+X_{1}X_{2}X_{3}^{3}} , m ( 3 , 2 , 1 ) ( X 1 , X 2 , X 3 ) = X 1 3 X 2 2 X 3 + X 1 3 X 2 X 3 2 + X 1 2 X 2 3 X 3 + X 1 2 X 2 X 3 3 + X 1 X 2 3 X 3 2 + X 1 X 2 2 X 3 3 . 
{\displaystyle m_{(3,2,1)}(X_{1},X_{2},X_{3})=X_{1}^{3}X_{2}^{2}X_{3}+X_{1}^{3}X_{2}X_{3}^{2}+X_{1}^{2}X_{2}^{3}X_{3}+X_{1}^{2}X_{2}X_{3}^{3}+X_{1}X_{2}^{3}X_{3}^{2}+X_{1}X_{2}^{2}X_{3}^{3}.} Clearly mα = mβ when β is a permutation of α, so one usually considers only those mα for which α1 ≥ α2 ≥ ... ≥ αn, in other words for which α is a partition of an integer. These monomial symmetric polynomials form a vector space basis: every symmetric polynomial P can be written as a linear combination of the monomial symmetric polynomials. To do this it suffices to separate the different types of monomial occurring in P. In particular if P has integer coefficients, then so will the linear combination. The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ k ≤ n one has e k ( X 1 , … , X n ) = m α ( X 1 , … , X n ) {\displaystyle e_{k}(X_{1},\ldots ,X_{n})=m_{\alpha }(X_{1},\ldots ,X_{n})} where α is the partition of k into k parts 1 (followed by n − k zeros). === Power-sum symmetric polynomials === For each integer k ≥ 1, the monomial symmetric polynomial m(k,0,...,0)(X1, ..., Xn) is of special interest. It is the power sum symmetric polynomial, defined as p k ( X 1 , … , X n ) = X 1 k + X 2 k + ⋯ + X n k . {\displaystyle p_{k}(X_{1},\ldots ,X_{n})=X_{1}^{k}+X_{2}^{k}+\cdots +X_{n}^{k}.} All symmetric polynomials can be obtained from the first n power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely, Any symmetric polynomial in X1, ..., Xn can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials p1(X1, ..., Xn), ..., pn(X1, ..., Xn). In particular, the remaining power sum polynomials pk(X1, ..., Xn) for k > n can be so expressed in the first n power sum polynomials; for example p 3 ( X 1 , X 2 ) = 3 2 p 2 ( X 1 , X 2 ) p 1 ( X 1 , X 2 ) − 1 2 p 1 ( X 1 , X 2 ) 3 . 
{\displaystyle p_{3}(X_{1},X_{2})=\textstyle {\frac {3}{2}}p_{2}(X_{1},X_{2})p_{1}(X_{1},X_{2})-{\frac {1}{2}}p_{1}(X_{1},X_{2})^{3}.} In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in n variables with integral coefficients need not be a polynomial function with integral coefficients of the power sum symmetric polynomials. For an example, for n = 2, the symmetric polynomial m ( 2 , 1 ) ( X 1 , X 2 ) = X 1 2 X 2 + X 1 X 2 2 {\displaystyle m_{(2,1)}(X_{1},X_{2})=X_{1}^{2}X_{2}+X_{1}X_{2}^{2}} has the expression m ( 2 , 1 ) ( X 1 , X 2 ) = 1 2 p 1 ( X 1 , X 2 ) 3 − 1 2 p 2 ( X 1 , X 2 ) p 1 ( X 1 , X 2 ) . {\displaystyle m_{(2,1)}(X_{1},X_{2})=\textstyle {\frac {1}{2}}p_{1}(X_{1},X_{2})^{3}-{\frac {1}{2}}p_{2}(X_{1},X_{2})p_{1}(X_{1},X_{2}).} Using three variables one gets a different expression m ( 2 , 1 ) ( X 1 , X 2 , X 3 ) = X 1 2 X 2 + X 1 X 2 2 + X 1 2 X 3 + X 1 X 3 2 + X 2 2 X 3 + X 2 X 3 2 = p 1 ( X 1 , X 2 , X 3 ) p 2 ( X 1 , X 2 , X 3 ) − p 3 ( X 1 , X 2 , X 3 ) . {\displaystyle {\begin{aligned}m_{(2,1)}(X_{1},X_{2},X_{3})&=X_{1}^{2}X_{2}+X_{1}X_{2}^{2}+X_{1}^{2}X_{3}+X_{1}X_{3}^{2}+X_{2}^{2}X_{3}+X_{2}X_{3}^{2}\\&=p_{1}(X_{1},X_{2},X_{3})p_{2}(X_{1},X_{2},X_{3})-p_{3}(X_{1},X_{2},X_{3}).\end{aligned}}} The corresponding expression was valid for two variables as well (it suffices to set X3 to zero), but since it involves p3, it could not be used to illustrate the statement for n = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first n power sum polynomials involves rational coefficients may depend on n. But rational coefficients are always needed to express elementary symmetric polynomials (except the constant ones, and e1 which coincides with the first power sum) in terms of power sum polynomials. 
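Both identities displayed above can be confirmed with sympy for two variables; the Rational coefficients make the half-integer arithmetic exact:

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
p = lambda k: X1**k + X2**k                  # power sum symmetric polynomials, n = 2
half, threehalves = sp.Rational(1, 2), sp.Rational(3, 2)

# p3 expressed in the first two power sums
assert sp.expand(p(3) - (threehalves*p(2)*p(1) - half*p(1)**3)) == 0

# m_(2,1) requires rational coefficients in the power sums
m21 = X1**2*X2 + X1*X2**2
assert sp.expand(m21 - (half*p(1)**3 - half*p(2)*p(1))) == 0
```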
The Newton identities provide an explicit method to do this; it involves division by integers up to n, which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however, it is valid with coefficients in any ring containing the rational numbers. === Complete homogeneous symmetric polynomials === For each nonnegative integer k, the complete homogeneous symmetric polynomial hk(X1, ..., Xn) is the sum of all distinct monomials of degree k in the variables X1, ..., Xn. For instance h 3 ( X 1 , X 2 , X 3 ) = X 1 3 + X 1 2 X 2 + X 1 2 X 3 + X 1 X 2 2 + X 1 X 2 X 3 + X 1 X 3 2 + X 2 3 + X 2 2 X 3 + X 2 X 3 2 + X 3 3 . {\displaystyle h_{3}(X_{1},X_{2},X_{3})=X_{1}^{3}+X_{1}^{2}X_{2}+X_{1}^{2}X_{3}+X_{1}X_{2}^{2}+X_{1}X_{2}X_{3}+X_{1}X_{3}^{2}+X_{2}^{3}+X_{2}^{2}X_{3}+X_{2}X_{3}^{2}+X_{3}^{3}.} The polynomial hk(X1, ..., Xn) is also the sum of all distinct monomial symmetric polynomials of degree k in X1, ..., Xn, for instance for the given example h 3 ( X 1 , X 2 , X 3 ) = m ( 3 ) ( X 1 , X 2 , X 3 ) + m ( 2 , 1 ) ( X 1 , X 2 , X 3 ) + m ( 1 , 1 , 1 ) ( X 1 , X 2 , X 3 ) = ( X 1 3 + X 2 3 + X 3 3 ) + ( X 1 2 X 2 + X 1 2 X 3 + X 1 X 2 2 + X 1 X 3 2 + X 2 2 X 3 + X 2 X 3 2 ) + ( X 1 X 2 X 3 ) . {\displaystyle {\begin{aligned}h_{3}(X_{1},X_{2},X_{3})&=m_{(3)}(X_{1},X_{2},X_{3})+m_{(2,1)}(X_{1},X_{2},X_{3})+m_{(1,1,1)}(X_{1},X_{2},X_{3})\\&=(X_{1}^{3}+X_{2}^{3}+X_{3}^{3})+(X_{1}^{2}X_{2}+X_{1}^{2}X_{3}+X_{1}X_{2}^{2}+X_{1}X_{3}^{2}+X_{2}^{2}X_{3}+X_{2}X_{3}^{2})+(X_{1}X_{2}X_{3}).\\\end{aligned}}} All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in X1, ..., Xn can be obtained from the complete homogeneous symmetric polynomials h1(X1, ..., Xn), ..., hn(X1, ..., Xn) via multiplications and additions. 
More precisely: Any symmetric polynomial P in X1, ..., Xn can be written as a polynomial expression in the polynomials hk(X1, ..., Xn) with 1 ≤ k ≤ n. If P has integral coefficients, then the polynomial expression also has integral coefficients. For example, for n = 2, the relevant complete homogeneous symmetric polynomials are h1(X1, X2) = X1 + X2 and h2(X1, X2) = X1^2 + X1X2 + X2^2. The first polynomial in the list of examples above can then be written as X 1 3 + X 2 3 − 7 = − 2 h 1 ( X 1 , X 2 ) 3 + 3 h 1 ( X 1 , X 2 ) h 2 ( X 1 , X 2 ) − 7. {\displaystyle X_{1}^{3}+X_{2}^{3}-7=-2h_{1}(X_{1},X_{2})^{3}+3h_{1}(X_{1},X_{2})h_{2}(X_{1},X_{2})-7.} As in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond hn(X1, ..., Xn), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased. An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be expressed as the identities ∑ i = 0 k ( − 1 ) i e i ( X 1 , … , X n ) h k − i ( X 1 , … , X n ) = 0 {\displaystyle \sum _{i=0}^{k}(-1)^{i}e_{i}(X_{1},\ldots ,X_{n})h_{k-i}(X_{1},\ldots ,X_{n})=0} , for all k > 0, and any number of variables n. Since e0(X1, ..., Xn) and h0(X1, ..., Xn) are both equal to 1, one can isolate either the first or the last term of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows doing the inverse.
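The displayed identity relating the e's and h's can be machine-checked for small cases. A sympy sketch for n = 3 (the helpers e and h are ad hoc, not a sympy API): e(k) sums over k-element subsets, h(k) over k-element multisets.

```python
import sympy as sp
from itertools import combinations, combinations_with_replacement

X = sp.symbols('x1 x2 x3')

def e(k):
    """Elementary symmetric polynomial: one term per k-subset of distinct variables."""
    return sum(sp.Mul(*c) for c in combinations(X, k))

def h(k):
    """Complete homogeneous: one term per degree-k monomial (multiset of variables)."""
    return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))

# sum_{i=0}^{k} (-1)^i e_i h_{k-i} == 0 for all k > 0 (e_k = 0 for k > 3 here)
for k in (1, 2, 3, 4):
    assert sp.expand(sum((-1)**i * e(i) * h(k - i) for i in range(k + 1))) == 0
```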
This implicitly shows that any symmetric polynomial can be expressed in terms of the hk(X1, ..., Xn) with 1 ≤ k ≤ n: one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones. === Schur polynomials === Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details. == Symmetric polynomials in algebra == Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time. == Alternating polynomials == Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being invariant under permutation of the entries, change according to the sign of the permutation. These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant. == See also == Symmetric function Newton's identities Stanley symmetric function Muirhead's inequality == References == Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001 Macdonald, I.G. (1979), Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs. Oxford: Clarendon Press. I.G. Macdonald (1995), Symmetric Functions and Hall Polynomials, second ed. Oxford: Clarendon Press. ISBN 0-19-850450-0 (paperback, 1998). Richard P. Stanley (1999), Enumerative Combinatorics, Vol. 
2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1
Wikipedia:Symmetric product of an algebraic curve#0
In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product C × C × ... × C or Cn by the group action of the symmetric group Sn on n letters permuting the factors. It exists as a smooth algebraic variety denoted by ΣnC. If C is a compact Riemann surface, ΣnC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients. For C the projective line (say the Riemann sphere C {\displaystyle \mathbb {C} } ∪ {∞} ≈ S2), its nth symmetric product ΣnC can be identified with complex projective space C P n {\displaystyle \mathbb {CP} ^{n}} of dimension n. If C has genus g ≥ 1 then the ΣnC are closely related to the Jacobian variety J of C. More accurately, for n taking values up to g they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors. For g = n we have ΣgC actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now, but this does mean that for any rational function F on C, the sum F(x1) + ... + F(xg) makes sense as a rational function on J, for the xi staying away from the poles of F. For n > g the mapping from ΣnC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). 
It has been studied in detail, for example by Kempf and Mukai. == Betti numbers and the Euler characteristic of the symmetric product == Let C be a smooth projective curve of genus g over the complex numbers C. The Betti numbers bi(ΣnC) of the symmetric products ΣnC for all n = 0, 1, 2, ... are given by the generating function ∑ n = 0 ∞ ∑ i = 0 2 n b i ( Σ n C ) y n u i − n = ( 1 + y ) 2 g ( 1 − u y ) ( 1 − u − 1 y ) {\displaystyle \sum _{n=0}^{\infty }\sum _{i=0}^{2n}b_{i}(\Sigma ^{n}C)y^{n}u^{i-n}={\frac {(1+y)^{2g}}{(1-uy)(1-u^{-1}y)}}} and their Euler characteristics e(ΣnC) are given by the generating function ∑ n = 0 ∞ e ( Σ n C ) p n = ( 1 − p ) 2 g − 2 . {\displaystyle \sum _{n=0}^{\infty }e(\Sigma ^{n}C)p^{n}=(1-p)^{2g-2}.} Here we have set u = -1 and y = -p in the previous formula. == Notes == == References == Macdonald, I. G. (1962), "Symmetric products of an algebraic curve", Topology, 1 (4): 319–343, doi:10.1016/0040-9383(62)90019-8, MR 0151460 Anderson, Greg W. (2002), "Abeliants and their application to an elementary construction of Jacobians", Advances in Mathematics, 172 (2): 169–205, arXiv:math/0112321, doi:10.1016/S0001-8708(02)00024-5, MR 1942403
Wikipedia:Symmetrization#0
In mathematics, symmetrization is a process that converts any function in n {\displaystyle n} variables to a symmetric function in n {\displaystyle n} variables. Similarly, antisymmetrization converts any function in n {\displaystyle n} variables into an antisymmetric function. == Two variables == Let S {\displaystyle S} be a set and A {\displaystyle A} be an additive abelian group. A map α : S × S → A {\displaystyle \alpha :S\times S\to A} is called a symmetric map if α ( s , t ) = α ( t , s ) for all s , t ∈ S . {\displaystyle \alpha (s,t)=\alpha (t,s)\quad {\text{ for all }}s,t\in S.} It is called an antisymmetric map if instead α ( s , t ) = − α ( t , s ) for all s , t ∈ S . {\displaystyle \alpha (s,t)=-\alpha (t,s)\quad {\text{ for all }}s,t\in S.} The symmetrization of a map α : S × S → A {\displaystyle \alpha :S\times S\to A} is the map ( x , y ) ↦ α ( x , y ) + α ( y , x ) . {\displaystyle (x,y)\mapsto \alpha (x,y)+\alpha (y,x).} Similarly, the antisymmetrization or skew-symmetrization of a map α : S × S → A {\displaystyle \alpha :S\times S\to A} is the map ( x , y ) ↦ α ( x , y ) − α ( y , x ) . {\displaystyle (x,y)\mapsto \alpha (x,y)-\alpha (y,x).} The sum of the symmetrization and the antisymmetrization of a map α {\displaystyle \alpha } is 2 α . {\displaystyle 2\alpha .} Thus, away from 2, meaning if 2 is invertible, such as for the real numbers, one can divide by 2 and express every function as a sum of a symmetric function and an anti-symmetric function. The symmetrization of a symmetric map is its double, while the symmetrization of an alternating map is zero; similarly, the antisymmetrization of a symmetric map is zero, while the antisymmetrization of an anti-symmetric map is its double. === Bilinear forms === The symmetrization and antisymmetrization of a bilinear map are bilinear; thus away from 2, every bilinear form is a sum of a symmetric form and a skew-symmetric form, and there is no difference between a symmetric form and a quadratic form. 
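A concrete sketch of the two-variable case in Python, over the real numbers (where 2 is invertible), illustrating the definitions and the recovery of a map from its symmetrization and antisymmetrization; the function names are hypothetical:

```python
def symmetrize(a):
    return lambda x, y: a(x, y) + a(y, x)

def antisymmetrize(a):
    return lambda x, y: a(x, y) - a(y, x)

a = lambda x, y: x**2 * y                  # an arbitrary map R x R -> R
s, k = symmetrize(a), antisymmetrize(a)
x, y = 2.0, 3.0
assert s(x, y) == s(y, x)                  # s is symmetric
assert k(x, y) == -k(y, x)                 # k is antisymmetric
assert s(x, y) + k(x, y) == 2 * a(x, y)    # their sum is 2a
assert (s(x, y) + k(x, y)) / 2 == a(x, y)  # dividing by 2 recovers a
```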
At 2, not every form can be decomposed into a symmetric form and a skew-symmetric form. For instance, over the integers, the associated symmetric form (over the rationals) may take half-integer values, while over Z / 2 Z , {\displaystyle \mathbb {Z} /2\mathbb {Z} ,} a function is skew-symmetric if and only if it is symmetric (as 1 = − 1 {\displaystyle 1=-1} ). This leads to the notion of ε-quadratic forms and ε-symmetric forms. === Representation theory === In terms of representation theory: exchanging variables gives a representation of the symmetric group on the space of functions in two variables, the symmetric and antisymmetric functions are the subrepresentations corresponding to the trivial representation and the sign representation, and symmetrization and antisymmetrization map a function into these subrepresentations – if one divides by 2, these yield projection maps. As the symmetric group of order two equals the cyclic group of order two ( S 2 = C 2 {\displaystyle \mathrm {S} _{2}=\mathrm {C} _{2}} ), this corresponds to the discrete Fourier transform of order two. == n variables == More generally, given a function in n {\displaystyle n} variables, one can symmetrize by taking the sum over all n ! {\displaystyle n!} permutations of the variables, or antisymmetrize by taking the sum over all n ! / 2 {\displaystyle n!/2} even permutations and subtracting the sum over all n ! / 2 {\displaystyle n!/2} odd permutations (except that when n ≤ 1 , {\displaystyle n\leq 1,} the only permutation is even). Here symmetrizing a symmetric function multiplies by n ! {\displaystyle n!} – thus if n ! {\displaystyle n!} is invertible, such as when working over a field of characteristic 0 {\displaystyle 0} or p > n , {\displaystyle p>n,} then these yield projections when divided by n ! . 
{\displaystyle n!.} In terms of representation theory, these only yield the subrepresentations corresponding to the trivial and sign representation, but for n > 2 {\displaystyle n>2} there are others – see representation theory of the symmetric group and symmetric polynomials. == Bootstrapping == Given a function in k {\displaystyle k} variables, one can obtain a symmetric function in n {\displaystyle n} variables by taking the sum over k {\displaystyle k} -element subsets of the variables. In statistics, this is referred to as bootstrapping, and the associated statistics are called U-statistics. == See also == Alternating multilinear map – Multilinear map that is 0 whenever arguments are linearly dependent Antisymmetric tensor – Tensor equal to the negative of any of its transpositions == Notes == == References == Hazewinkel, Michiel (1990). Encyclopaedia of mathematics: an updated and annotated translation of the Soviet "Mathematical encyclopaedia". Vol. 6. Springer. ISBN 978-1-55608-005-0.
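A sketch of the n-variable construction described above, summing over all permutations (with sign for the antisymmetrization); the parity helper and the sample functions are illustrative choices, not from the source:

```python
from itertools import permutations

def parity(perm):
    """Sign of a permutation given as a tuple of indices 0..n-1."""
    sign, seen = 1, set()
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:          # walk one cycle of the permutation
            seen.add(j)
            j, length = perm[j], length + 1
        if length % 2 == 0:           # an even-length cycle is an odd permutation
            sign = -sign
    return sign

def symmetrize(f, n):
    return lambda *xs: sum(f(*(xs[i] for i in p))
                           for p in permutations(range(n)))

def antisymmetrize(f, n):
    return lambda *xs: sum(parity(p) * f(*(xs[i] for i in p))
                           for p in permutations(range(n)))

g = lambda x, y, z: x + y + z                        # already symmetric
assert symmetrize(g, 3)(1, 2, 3) == 6 * g(1, 2, 3)   # multiplied by 3! = 6
assert antisymmetrize(g, 3)(1, 2, 3) == 0            # symmetric part is killed

A = antisymmetrize(lambda x, y, z: x * y**2 * z**3, 3)
assert A(1, 2, 3) == -A(2, 1, 3)                     # alternating in its arguments
```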
Wikipedia:Symmetry of second derivatives
In mathematics, the symmetry of second derivatives (also called the equality of mixed partials) is the fact that exchanging the order of partial derivatives of a multivariate function f ( x 1 , x 2 , … , x n ) {\displaystyle f\left(x_{1},\,x_{2},\,\ldots ,\,x_{n}\right)} does not change the result if some continuity conditions are satisfied (see below); that is, the second-order partial derivatives satisfy the identities ∂ ∂ x i ( ∂ f ∂ x j ) = ∂ ∂ x j ( ∂ f ∂ x i ) . {\displaystyle {\frac {\partial }{\partial x_{i}}}\left({\frac {\partial f}{\partial x_{j}}}\right)\ =\ {\frac {\partial }{\partial x_{j}}}\left({\frac {\partial f}{\partial x_{i}}}\right).} In other words, the matrix of the second-order partial derivatives, known as the Hessian matrix, is a symmetric matrix. Sufficient conditions for the symmetry to hold are given by Schwarz's theorem, also called Clairaut's theorem or Young's theorem. In the context of partial differential equations, it is called the Schwarz integrability condition. == Formal expressions of symmetry == In symbols, the symmetry may be expressed as: ∂ ∂ x ( ∂ f ∂ y ) = ∂ ∂ y ( ∂ f ∂ x ) or ∂ 2 f ∂ x ∂ y = ∂ 2 f ∂ y ∂ x . {\displaystyle {\frac {\partial }{\partial x}}\left({\frac {\partial f}{\partial y}}\right)\ =\ {\frac {\partial }{\partial y}}\left({\frac {\partial f}{\partial x}}\right)\qquad {\text{or}}\qquad {\frac {\partial ^{2}\!f}{\partial x\,\partial y}}\ =\ {\frac {\partial ^{2}\!f}{\partial y\,\partial x}}.} Another notation is: ∂ x ∂ y f = ∂ y ∂ x f or f y x = f x y . {\displaystyle \partial _{x}\partial _{y}f=\partial _{y}\partial _{x}f\qquad {\text{or}}\qquad f_{yx}=f_{xy}.} In terms of composition of the differential operator Di which takes the partial derivative with respect to xi: D i ∘ D j = D j ∘ D i {\displaystyle D_{i}\circ D_{j}=D_{j}\circ D_{i}} . 
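The commutation D_i ∘ D_j = D_j ∘ D_i can be illustrated numerically with nested central differences; the function f, the evaluation point, and the step size below are arbitrary choices for the sketch:

```python
def f(x, y):
    return x**3 * y**2 + 2.0 * x * y   # a smooth test function

def mixed(f, x, y, h=1e-4, x_first=True):
    """Approximate a mixed second partial by central-differencing twice."""
    if x_first:                        # approximate d/dy (d/dx f)
        dfx = lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)
        return (dfx(x, y + h) - dfx(x, y - h)) / (2 * h)
    dfy = lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfy(x + h, y) - dfy(x - h, y)) / (2 * h)   # d/dx (d/dy f)

# Both orders agree, and both match the exact value 6*x**2*y + 2.
fxy = mixed(f, 1.5, 0.7, x_first=True)
fyx = mixed(f, 1.5, 0.7, x_first=False)
assert abs(fxy - fyx) < 1e-6
assert abs(fxy - (6 * 1.5**2 * 0.7 + 2)) < 1e-3
```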
From this relation it follows that the ring of differential operators with constant coefficients, generated by the Di, is commutative; but this is only true as operators over a domain of sufficiently differentiable functions. It is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact smooth functions are another valid domain. == History == The result on the equality of mixed partial derivatives under certain conditions has a long history. The list of unsuccessful proposed proofs started with Euler's, published in 1740, although already in 1721 Bernoulli had implicitly assumed the result with no formal justification. Clairaut also published a proposed proof in 1740, with no other attempts until the end of the 18th century. Starting then, for a period of 70 years, a number of incomplete proofs were proposed. The proof of Lagrange (1797) was improved by Cauchy (1823), but assumed the existence and continuity of the partial derivatives ∂ 2 f ∂ x 2 {\displaystyle {\tfrac {\partial ^{2}f}{\partial x^{2}}}} and ∂ 2 f ∂ y 2 {\displaystyle {\tfrac {\partial ^{2}f}{\partial y^{2}}}} . Other attempts were made by P. Blanchet (1841), Duhamel (1856), Sturm (1857), Schlömilch (1862), and Bertrand (1864). Finally in 1867 Lindelöf systematically analyzed all the earlier flawed proofs and was able to exhibit a specific counterexample where mixed derivatives failed to be equal. Six years after that, Schwarz succeeded in giving the first rigorous proof. Dini later contributed by finding more general conditions than those of Schwarz. Eventually a clean and more general version was found by Jordan in 1883 that is still the proof found in most textbooks. Minor variants of earlier proofs were published by Laurent (1885), Peano (1889 and 1893), J. Edwards (1892), P. Haag (1893), J. K. Whittemore (1898), Vivanti (1899) and Pierpont (1905). Further progress was made in 1907-1909 when E. W. Hobson and W. H. 
Young found proofs with weaker conditions than those of Schwarz and Dini. In 1918, Carathéodory gave a different proof based on the Lebesgue integral. == Schwarz's theorem == In mathematical analysis, Schwarz's theorem (or Clairaut's theorem on equality of mixed partials) named after Alexis Clairaut and Hermann Schwarz, states that for a function f : Ω → R {\displaystyle f\colon \Omega \to \mathbb {R} } defined on a set Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} , if p ∈ R n {\displaystyle \mathbf {p} \in \mathbb {R} ^{n}} is a point such that some neighborhood of p {\displaystyle \mathbf {p} } is contained in Ω {\displaystyle \Omega } and f {\displaystyle f} has continuous second partial derivatives on that neighborhood of p {\displaystyle \mathbf {p} } , then for all i and j in { 1 , 2 … , n } , {\displaystyle \{1,2\ldots ,\,n\},} ∂ 2 ∂ x i ∂ x j f ( p ) = ∂ 2 ∂ x j ∂ x i f ( p ) . {\displaystyle {\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}f(\mathbf {p} )={\frac {\partial ^{2}}{\partial x_{j}\,\partial x_{i}}}f(\mathbf {p} ).} The partial derivatives of this function commute at that point. There exists a version of this theorem where f {\displaystyle f} is only required to be twice differentiable at the point p {\displaystyle \mathbf {p} } . One easy way to establish this theorem (in the case where n = 2 {\displaystyle n=2} , i = 1 {\displaystyle i=1} , and j = 2 {\displaystyle j=2} , which readily entails the result in general) is by applying Green's theorem to the gradient of f . {\displaystyle f.} An elementary proof for functions on open subsets of the plane is as follows (by a simple reduction, the general case for the theorem of Schwarz easily reduces to the planar case). 
Let f ( x , y ) {\displaystyle f(x,y)} be a differentiable function on an open rectangle Ω {\displaystyle \Omega } containing a point ( a , b ) {\displaystyle (a,b)} and suppose that d f {\displaystyle df} is continuous with continuous ∂ x ∂ y f {\displaystyle \partial _{x}\partial _{y}f} and ∂ y ∂ x f {\displaystyle \partial _{y}\partial _{x}f} over Ω . {\displaystyle \Omega .} Define u ( h , k ) = f ( a + h , b + k ) − f ( a + h , b ) , v ( h , k ) = f ( a + h , b + k ) − f ( a , b + k ) , w ( h , k ) = f ( a + h , b + k ) − f ( a + h , b ) − f ( a , b + k ) + f ( a , b ) . {\displaystyle {\begin{aligned}u\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a+h,\,b\right),\\v\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a,\,b+k\right),\\w\left(h,\,k\right)&=f\left(a+h,\,b+k\right)-f\left(a+h,\,b\right)-f\left(a,\,b+k\right)+f\left(a,\,b\right).\end{aligned}}} These functions are defined for | h | , | k | < ε {\displaystyle \left|h\right|,\,\left|k\right|<\varepsilon } , where ε > 0 {\displaystyle \varepsilon >0} and [ a − ε , a + ε ] × [ b − ε , b + ε ] {\displaystyle \left[a-\varepsilon ,\,a+\varepsilon \right]\times \left[b-\varepsilon ,\,b+\varepsilon \right]} is contained in Ω . {\displaystyle \Omega .} By the mean value theorem, for fixed h and k non-zero, θ , θ ′ , ϕ , ϕ ′ {\displaystyle \theta ,\theta ',\phi ,\phi '} can be found in the open interval ( 0 , 1 ) {\displaystyle (0,1)} with w ( h , k ) = u ( h , k ) − u ( 0 , k ) = h ∂ x u ( θ h , k ) = h [ ∂ x f ( a + θ h , b + k ) − ∂ x f ( a + θ h , b ) ] = h k ∂ y ∂ x f ( a + θ h , b + θ ′ k ) w ( h , k ) = v ( h , k ) − v ( h , 0 ) = k ∂ y v ( h , ϕ k ) = k [ ∂ y f ( a + h , b + ϕ k ) − ∂ y f ( a , b + ϕ k ) ] = h k ∂ x ∂ y f ( a + ϕ ′ h , b + ϕ k ) . 
{\displaystyle {\begin{aligned}w\left(h,\,k\right)&=u\left(h,\,k\right)-u\left(0,\,k\right)=h\,\partial _{x}u\left(\theta h,\,k\right)\\&=h\,\left[\partial _{x}f\left(a+\theta h,\,b+k\right)-\partial _{x}f\left(a+\theta h,\,b\right)\right]\\&=hk\,\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)\\w\left(h,\,k\right)&=v\left(h,\,k\right)-v\left(h,\,0\right)=k\,\partial _{y}v\left(h,\,\phi k\right)\\&=k\left[\partial _{y}f\left(a+h,\,b+\phi k\right)-\partial _{y}f\left(a,\,b+\phi k\right)\right]\\&=hk\,\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right).\end{aligned}}} Since h , k ≠ 0 {\displaystyle h,\,k\neq 0} , the first equality below can be divided by h k {\displaystyle hk} : h k ∂ y ∂ x f ( a + θ h , b + θ ′ k ) = h k ∂ x ∂ y f ( a + ϕ ′ h , b + ϕ k ) , ∂ y ∂ x f ( a + θ h , b + θ ′ k ) = ∂ x ∂ y f ( a + ϕ ′ h , b + ϕ k ) . {\displaystyle {\begin{aligned}hk\,\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)&=hk\,\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right),\\\partial _{y}\partial _{x}f\left(a+\theta h,\,b+\theta ^{\prime }k\right)&=\partial _{x}\partial _{y}f\left(a+\phi ^{\prime }h,\,b+\phi k\right).\end{aligned}}} Letting h , k {\displaystyle h,\,k} tend to zero in the last equality, the continuity assumptions on ∂ y ∂ x f {\displaystyle \partial _{y}\partial _{x}f} and ∂ x ∂ y f {\displaystyle \partial _{x}\partial _{y}f} now imply that ∂ 2 ∂ x ∂ y f ( a , b ) = ∂ 2 ∂ y ∂ x f ( a , b ) . {\displaystyle {\frac {\partial ^{2}}{\partial x\partial y}}f\left(a,\,b\right)={\frac {\partial ^{2}}{\partial y\partial x}}f\left(a,\,b\right).} This account is a straightforward classical method found in many text books, for example in Burkill, Apostol and Rudin. Although the derivation above is elementary, the approach can also be viewed from a more conceptual perspective so that the result becomes more apparent. 
Indeed the difference operators Δ x t , Δ y t {\displaystyle \Delta _{x}^{t},\,\,\Delta _{y}^{t}} commute and Δ x t f , Δ y t f {\displaystyle \Delta _{x}^{t}f,\,\,\Delta _{y}^{t}f} tend to ∂ x f , ∂ y f {\displaystyle \partial _{x}f,\,\,\partial _{y}f} as t {\displaystyle t} tends to 0, with a similar statement for second order operators. Here, for z {\displaystyle z} a vector in the plane and u {\displaystyle u} a directional vector ( 1 0 ) {\displaystyle {\tbinom {1}{0}}} or ( 0 1 ) {\displaystyle {\tbinom {0}{1}}} , the difference operator is defined by Δ u t f ( z ) = f ( z + t u ) − f ( z ) t . {\displaystyle \Delta _{u}^{t}f(z)={f(z+tu)-f(z) \over t}.} By the fundamental theorem of calculus for C 1 {\displaystyle C^{1}} functions f {\displaystyle f} on an open interval I {\displaystyle I} with ( a , b ) ⊂ I {\displaystyle (a,b)\subset I} ∫ a b f ′ ( x ) d x = f ( b ) − f ( a ) . {\displaystyle \int _{a}^{b}f^{\prime }(x)\,dx=f(b)-f(a).} Hence | f ( b ) − f ( a ) | ≤ ( b − a ) sup c ∈ ( a , b ) | f ′ ( c ) | {\displaystyle |f(b)-f(a)|\leq (b-a)\,\sup _{c\in (a,b)}|f^{\prime }(c)|} . This is a generalized version of the mean value theorem. Recall that the elementary discussion on maxima or minima for real-valued functions implies that if f {\displaystyle f} is continuous on [ a , b ] {\displaystyle [a,b]} and differentiable on ( a , b ) {\displaystyle (a,b)} , then there is a point c {\displaystyle c} in ( a , b ) {\displaystyle (a,b)} such that f ( b ) − f ( a ) b − a = f ′ ( c ) . {\displaystyle {f(b)-f(a) \over b-a}=f^{\prime }(c).} For vector-valued functions with V {\displaystyle V} a finite-dimensional normed space, there is no analogue of the equality above, indeed it fails. But since inf f ′ ≤ f ′ ( c ) ≤ sup f ′ {\displaystyle \inf f^{\prime }\leq f^{\prime }(c)\leq \sup f^{\prime }} , the inequality above is a useful substitute. 
Moreover, using the pairing of the dual of V {\displaystyle V} with its dual norm yields the following inequality: ‖ f ( b ) − f ( a ) ‖ ≤ ( b − a ) sup c ∈ ( a , b ) ‖ f ′ ( c ) ‖ {\displaystyle \|f(b)-f(a)\|\leq (b-a)\,\sup _{c\in (a,b)}\|f^{\prime }(c)\|} . These versions of the mean value theorem are discussed in Rudin, Hörmander and elsewhere. For f {\displaystyle f} a C 2 {\displaystyle C^{2}} function on an open set in the plane, define D 1 = ∂ x {\displaystyle D_{1}=\partial _{x}} and D 2 = ∂ y {\displaystyle D_{2}=\partial _{y}} . Furthermore for t ≠ 0 {\displaystyle t\neq 0} set Δ 1 t f ( x , y ) = [ f ( x + t , y ) − f ( x , y ) ] / t , Δ 2 t f ( x , y ) = [ f ( x , y + t ) − f ( x , y ) ] / t {\displaystyle \Delta _{1}^{t}f(x,y)=[f(x+t,y)-f(x,y)]/t,\,\,\,\,\,\,\Delta _{2}^{t}f(x,y)=[f(x,y+t)-f(x,y)]/t} . Then for ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} in the open set, the generalized mean value theorem can be applied twice: | Δ 1 t Δ 2 t f ( x 0 , y 0 ) − D 1 D 2 f ( x 0 , y 0 ) | ≤ sup 0 ≤ s ≤ 1 | Δ 1 t D 2 f ( x 0 , y 0 + t s ) − D 1 D 2 f ( x 0 , y 0 ) | ≤ sup 0 ≤ r , s ≤ 1 | D 1 D 2 f ( x 0 + t r , y 0 + t s ) − D 1 D 2 f ( x 0 , y 0 ) | . {\displaystyle \left|\Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})-D_{1}D_{2}f(x_{0},y_{0})\right|\leq \sup _{0\leq s\leq 1}\left|\Delta _{1}^{t}D_{2}f(x_{0},y_{0}+ts)-D_{1}D_{2}f(x_{0},y_{0})\right|\leq \sup _{0\leq r,s\leq 1}\left|D_{1}D_{2}f(x_{0}+tr,y_{0}+ts)-D_{1}D_{2}f(x_{0},y_{0})\right|.} Thus Δ 1 t Δ 2 t f ( x 0 , y 0 ) {\displaystyle \Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})} tends to D 1 D 2 f ( x 0 , y 0 ) {\displaystyle D_{1}D_{2}f(x_{0},y_{0})} as t {\displaystyle t} tends to 0. The same argument shows that Δ 2 t Δ 1 t f ( x 0 , y 0 ) {\displaystyle \Delta _{2}^{t}\Delta _{1}^{t}f(x_{0},y_{0})} tends to D 2 D 1 f ( x 0 , y 0 ) {\displaystyle D_{2}D_{1}f(x_{0},y_{0})} . 
Hence, since the difference operators commute, so do the partial differential operators D 1 {\displaystyle D_{1}} and D 2 {\displaystyle D_{2}} , as claimed. Remark. By two applications of the classical mean value theorem, Δ 1 t Δ 2 t f ( x 0 , y 0 ) = D 1 D 2 f ( x 0 + t θ , y 0 + t θ ′ ) {\displaystyle \Delta _{1}^{t}\Delta _{2}^{t}f(x_{0},y_{0})=D_{1}D_{2}f(x_{0}+t\theta ,y_{0}+t\theta ^{\prime })} for some θ {\displaystyle \theta } and θ ′ {\displaystyle \theta ^{\prime }} in ( 0 , 1 ) {\displaystyle (0,1)} . Thus the first elementary proof can be reinterpreted using difference operators. Conversely, instead of using the generalized mean value theorem in the second proof, the classical mean value theorem could be used. == Proof of Clairaut's theorem using iterated integrals == The properties of repeated Riemann integrals of a continuous function F on a compact rectangle [a,b] × [c,d] are easily established. The uniform continuity of F implies immediately that the functions g ( x ) = ∫ c d F ( x , y ) d y {\displaystyle g(x)=\int _{c}^{d}F(x,y)\,dy} and h ( y ) = ∫ a b F ( x , y ) d x {\displaystyle h(y)=\int _{a}^{b}F(x,y)\,dx} are continuous. It follows that ∫ a b ∫ c d F ( x , y ) d y d x = ∫ c d ∫ a b F ( x , y ) d x d y {\displaystyle \int _{a}^{b}\int _{c}^{d}F(x,y)\,dy\,dx=\int _{c}^{d}\int _{a}^{b}F(x,y)\,dx\,dy} ; moreover it is immediate that the iterated integral is positive if F is positive. The equality above is a simple case of Fubini's theorem, involving no measure theory. Titchmarsh (1939) proves it in a straightforward way using Riemann approximating sums corresponding to subdivisions of a rectangle into smaller rectangles. To prove Clairaut's theorem, assume f is a differentiable function on an open set U, for which the mixed second partial derivatives fyx and fxy exist and are continuous. 
Using the fundamental theorem of calculus twice, ∫ c d ∫ a b f y x ( x , y ) d x d y = ∫ c d f y ( b , y ) − f y ( a , y ) d y = f ( b , d ) − f ( a , d ) − f ( b , c ) + f ( a , c ) . {\displaystyle \int _{c}^{d}\int _{a}^{b}f_{yx}(x,y)\,dx\,dy=\int _{c}^{d}f_{y}(b,y)-f_{y}(a,y)\,dy=f(b,d)-f(a,d)-f(b,c)+f(a,c).} Similarly ∫ a b ∫ c d f x y ( x , y ) d y d x = ∫ a b f x ( x , d ) − f x ( x , c ) d x = f ( b , d ) − f ( a , d ) − f ( b , c ) + f ( a , c ) . {\displaystyle \int _{a}^{b}\int _{c}^{d}f_{xy}(x,y)\,dy\,dx=\int _{a}^{b}f_{x}(x,d)-f_{x}(x,c)\,dx=f(b,d)-f(a,d)-f(b,c)+f(a,c).} The two iterated integrals are therefore equal. On the other hand, since fxy(x,y) is continuous, the second iterated integral can be performed by first integrating over x and then over y. But then the iterated integral of fyx − fxy on [a,b] × [c,d] must vanish. However, if the iterated integral of a continuous function F vanishes for all rectangles, then F must be identically zero; for otherwise F or −F would be strictly positive at some point and therefore, by continuity, on a whole rectangle, which is not possible. Hence fyx − fxy must vanish identically, so that fyx = fxy everywhere. == Sufficiency of twice-differentiability == A condition weaker than the continuity of the second partial derivatives (and implied by it), which suffices to ensure symmetry, is that all partial derivatives are themselves differentiable. 
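The key identity of the iterated-integral argument above, that the iterated integral of the mixed partial over [a,b] × [c,d] equals f(b,d) − f(a,d) − f(b,c) + f(a,c), can be checked numerically; the sample function and grid resolution are illustrative assumptions:

```python
import math

f   = lambda x, y: math.sin(x) * math.exp(y)   # arbitrary smooth example
fxy = lambda x, y: math.cos(x) * math.exp(y)   # its mixed partial (= fyx)

a, b, c, d, n = 0.0, 1.0, 0.0, 2.0, 400
hx, hy = (b - a) / n, (d - c) / n

# Midpoint-rule double Riemann sum of the mixed partial over the rectangle.
total = sum(fxy(a + (i + 0.5) * hx, c + (j + 0.5) * hy)
            for i in range(n) for j in range(n)) * hx * hy

# The fundamental theorem of calculus applied twice gives the double increment.
target = f(b, d) - f(a, d) - f(b, c) + f(a, c)
assert abs(total - target) < 1e-4
```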
Another strengthening of the theorem, in which existence of the permuted mixed partial is asserted, was provided by Peano in a short 1890 note on Mathesis: If f : E → R {\displaystyle f:E\to \mathbb {R} } is defined on an open set E ⊂ R 2 {\displaystyle E\subset \mathbb {R} ^{2}} ; ∂ 1 f ( x , y ) {\displaystyle \partial _{1}f(x,\,y)} and ∂ 2 , 1 f ( x , y ) {\displaystyle \partial _{2,1}f(x,\,y)} exist everywhere on E {\displaystyle E} ; ∂ 2 , 1 f {\displaystyle \partial _{2,1}f} is continuous at ( x 0 , y 0 ) ∈ E {\displaystyle \left(x_{0},\,y_{0}\right)\in E} , and if ∂ 2 f ( x , y 0 ) {\displaystyle \partial _{2}f(x,\,y_{0})} exists in a neighborhood of x = x 0 {\displaystyle x=x_{0}} , then ∂ 1 , 2 f {\displaystyle \partial _{1,2}f} exists at ( x 0 , y 0 ) {\displaystyle \left(x_{0},\,y_{0}\right)} and ∂ 1 , 2 f ( x 0 , y 0 ) = ∂ 2 , 1 f ( x 0 , y 0 ) {\displaystyle \partial _{1,2}f\left(x_{0},\,y_{0}\right)=\partial _{2,1}f\left(x_{0},\,y_{0}\right)} . == Distribution theory formulation == The theory of distributions (generalized functions) eliminates analytic problems with the symmetry. The derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of formal integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth and certainly satisfy this symmetry. In more detail (where f is a distribution, written as an operator on test functions, and φ is a test function), ( D 1 D 2 f ) [ ϕ ] = − ( D 2 f ) [ D 1 ϕ ] = f [ D 2 D 1 ϕ ] = f [ D 1 D 2 ϕ ] = − ( D 1 f ) [ D 2 ϕ ] = ( D 2 D 1 f ) [ ϕ ] . 
{\displaystyle \left(D_{1}D_{2}f\right)[\phi ]=-\left(D_{2}f\right)\left[D_{1}\phi \right]=f\left[D_{2}D_{1}\phi \right]=f\left[D_{1}D_{2}\phi \right]=-\left(D_{1}f\right)\left[D_{2}\phi \right]=\left(D_{2}D_{1}f\right)[\phi ].} Another approach, using the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators that commute much more obviously. == Requirement of continuity == The symmetry may be broken if the function fails to have differentiable partial derivatives, which is possible if Clairaut's theorem is not satisfied (the second partial derivatives are not continuous). An example of non-symmetry is the function (due to Peano) f ( x , y ) = x y ( x 2 − y 2 ) x 2 + y 2 {\displaystyle f(x,y)={\frac {xy\left(x^{2}-y^{2}\right)}{x^{2}+y^{2}}}} for ( x , y ) ≠ ( 0 , 0 ) {\displaystyle (x,y)\neq (0,0)} , with f ( 0 , 0 ) = 0 {\displaystyle f(0,0)=0} . This can be visualized by the polar form f ( r cos ( θ ) , r sin ( θ ) ) = r 2 sin ( 4 θ ) 4 {\displaystyle f(r\cos(\theta ),r\sin(\theta ))={\frac {r^{2}\sin(4\theta )}{4}}} ; it is everywhere continuous, but its derivatives at (0, 0) cannot be computed algebraically. Rather, the limit of difference quotients shows that f x ( 0 , 0 ) = f y ( 0 , 0 ) = 0 {\displaystyle f_{x}(0,0)=f_{y}(0,0)=0} , so the graph z = f ( x , y ) {\displaystyle z=f(x,y)} has a horizontal tangent plane at (0, 0), and the partial derivatives f x , f y {\displaystyle f_{x},f_{y}} exist and are everywhere continuous. However, the second partial derivatives are not continuous at (0, 0), and the symmetry fails. In fact, along the x-axis the y-derivative is f y ( x , 0 ) = x {\displaystyle f_{y}(x,0)=x} , and so: f y x ( 0 , 0 ) = lim ε → 0 f y ( ε , 0 ) − f y ( 0 , 0 ) ε = 1. {\displaystyle f_{yx}(0,0)=\lim _{\varepsilon \to 0}{\frac {f_{y}(\varepsilon ,0)-f_{y}(0,0)}{\varepsilon }}=1.} In contrast, along the y-axis the x-derivative f x ( 0 , y ) = − y {\displaystyle f_{x}(0,y)=-y} , and so f x y ( 0 , 0 ) = − 1 {\displaystyle f_{xy}(0,0)=-1} . 
That is, f y x ≠ f x y {\displaystyle f_{yx}\neq f_{xy}} at (0, 0), although the mixed partial derivatives do exist, and at every other point the symmetry does hold. The above function, written in polar coordinates, can be expressed as f ( r , θ ) = r 2 sin 4 θ 4 , {\displaystyle f(r,\,\theta )={\frac {r^{2}\sin {4\theta }}{4}},} showing that the function oscillates four times when traveling once around an arbitrarily small loop containing the origin. Intuitively, therefore, the local behavior of the function at (0, 0) cannot be described as a quadratic form, and the Hessian matrix thus fails to be symmetric. In general, limiting operations need not commute. Given two variables near (0, 0) and two limiting processes on f ( h , k ) − f ( h , 0 ) − f ( 0 , k ) + f ( 0 , 0 ) {\displaystyle f(h,\,k)-f(h,\,0)-f(0,\,k)+f(0,\,0)} corresponding to making h → 0 first and to making k → 0 first, it can matter, looking at the first-order terms, which is applied first. This leads to the construction of pathological examples in which second derivatives are non-symmetric. This kind of example belongs to the theory of real analysis where the pointwise value of functions matters. When viewed as a distribution the second partial derivative's values can be changed at an arbitrary set of points as long as this has Lebesgue measure 0. Since in the example the Hessian is symmetric everywhere except (0, 0), there is no contradiction with the fact that the Hessian, viewed as a Schwartz distribution, is symmetric. == In Lie theory == Consider the first-order differential operators Di to be infinitesimal operators on Euclidean space. That is, Di in a sense generates the one-parameter group of translations parallel to the xi-axis. These groups commute with each other, and therefore the infinitesimal generators do also; the Lie bracket [Di, Dj] = 0 is this property's reflection. In other words, the Lie derivative of one coordinate with respect to another is zero. 
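The failure of symmetry at the origin can be reproduced numerically for Peano's example. The explicit formula used below, f(x, y) = xy(x² − y²)/(x² + y²) with f(0, 0) = 0, is the standard Cartesian form consistent with the polar expression r² sin(4θ)/4 quoted above; the step sizes are illustrative:

```python
def f(x, y):
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

def fx(x, y, h=1e-9):                 # central difference in x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-9):                 # central difference in y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

eps = 1e-4
fyx_origin = (fy(eps, 0.0) - fy(0.0, 0.0)) / eps   # limit is  1
fxy_origin = (fx(0.0, eps) - fx(0.0, 0.0)) / eps   # limit is -1

assert abs(fyx_origin - 1.0) < 1e-3
assert abs(fxy_origin + 1.0) < 1e-3
```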
== Application to differential forms == The Clairaut-Schwarz theorem is the key fact needed to prove that for every C ∞ {\displaystyle C^{\infty }} (or at least twice differentiable) differential form ω ∈ Ω k ( M ) {\displaystyle \omega \in \Omega ^{k}(M)} , the second exterior derivative vanishes: d 2 ω := d ( d ω ) = 0 {\displaystyle d^{2}\omega :=d(d\omega )=0} . This implies that every differentiable exact form (i.e., a form α {\displaystyle \alpha } such that α = d ω {\displaystyle \alpha =d\omega } for some form ω {\displaystyle \omega } ) is closed (i.e., d α = 0 {\displaystyle d\alpha =0} ), since d α = d ( d ω ) = 0 {\displaystyle d\alpha =d(d\omega )=0} . In the middle of the 18th century, the theory of differential forms was first studied in the simplest case of 1-forms in the plane, i.e. A d x + B d y {\displaystyle A\,dx+B\,dy} , where A {\displaystyle A} and B {\displaystyle B} are functions in the plane. The study of 1-forms and the differentials of functions began with Clairaut's papers in 1739 and 1740. At that stage his investigations were interpreted as ways of solving ordinary differential equations. Formally Clairaut showed that a 1-form ω = A d x + B d y {\displaystyle \omega =A\,dx+B\,dy} on an open rectangle is closed, i.e. d ω = 0 {\displaystyle d\omega =0} , if and only if ω {\displaystyle \omega } has the form d f {\displaystyle df} for some function f {\displaystyle f} on the rectangle. The solution for f {\displaystyle f} can be written by Cauchy's integral formula f ( x , y ) = ∫ x 0 x A ( x , y ) d x + ∫ y 0 y B ( x , y ) d y ; {\displaystyle f(x,y)=\int _{x_{0}}^{x}A(x,y)\,dx+\int _{y_{0}}^{y}B(x,y)\,dy;} while if ω = d f {\displaystyle \omega =df} , the closed property d ω = 0 {\displaystyle d\omega =0} is the identity ∂ x ∂ y f = ∂ y ∂ x f {\displaystyle \partial _{x}\partial _{y}f=\partial _{y}\partial _{x}f} . (In modern language this is one version of the Poincaré lemma.) == Notes == == References == Aksoy, A.; Martelli, M. 
(2002), "Mixed Partial Derivatives and Fubini's Theorem", College Mathematics Journal of MAA, 33 (2): 126–130, doi:10.1080/07468342.2002.11921930, S2CID 124561972 Allen, R. G. D. (1964). Mathematical Analysis for Economists. New York: St. Martin's Press. ISBN 9781443725224. Apostol, Tom M. (1965), Mathematical analysis: a modern approach to advanced calculus, London: Addison-Wesley, OCLC 901554874 Apostol, Tom M. (1974), Mathematical Analysis, Addison-Wesley, ISBN 9780201002881 Axler, Sheldon (2020), Measure, integration & real analysis, Graduate Texts in Mathematics, vol. 282, Springer, ISBN 9783030331436 Bourbaki, Nicolas (1952), "Chapitre III: Mesures sur les espaces localement compacts", Eléments de mathématique, Livre VI: Intégration (in French), Hermann et Cie Burkill, J. C. (1962), A First Course in Mathematical Analysis, Cambridge University Press, ISBN 9780521294683 (reprinted 1978) Cartan, Henri (1971), Calcul Differentiel (in French), Hermann, ISBN 9780395120330 Clairaut, A. C. (1739), "Recherches générales sur le calcul intégral", Mémoires de l'Académie Royale des Sciences: 425–436 Clairaut, A. C. (1740), "Sur l'integration ou la construction des equations différentielles du premier ordre", Mémoires de l'Académie Royale des Sciences, 2: 293–323 Dieudonné, J. (1937), "Sur les fonctions continues numérique définies dans une produit de deux espaces compacts", Comptes Rendus de l'Académie des Sciences de Paris, 205: 593–595 Dieudonné, J. (1960), Foundations of Modern Analysis, Pure and Applied Mathematics, vol. 10, Academic Press, ISBN 9780122155505 Dieudonné, J. (1976), Treatise on analysis. Vol. II., Pure and Applied Mathematics, vol. 10-II, translated by I. G. Macdonald, Academic Press, ISBN 9780122155024 Euler, Leonhard (1740). 
"De infinitis curvis eiusdem generis seu methodus inveniendi aequationes pro infinitis curvis eiusdem generis" [On infinite(ly many) curves of the same type, that is, a method of finding equations for infinite(ly many) curves of the same type]. Commentarii Academiae Scientiarum Petropolitanae (in Latin). 7: 174–189, 180–183 – via The Euler Archive, maintained by the University of the Pacific. Gilkey, Peter; Park, JeongHyeong; Vázquez-Lorenzo, Ramón (2015), Aspects of differential geometry I, Synthesis Lectures on Mathematics and Statistics, vol. 15, Morgan & Claypool, ISBN 9781627056632 Godement, Roger (1998a), Analyse mathématique I, Springer Godement, Roger (1998b), Analyse mathématique II, Springer Higgins, Thomas James (1940). "A note on the history of mixed partial derivatives". Scripta Mathematica. 7: 59–62. Archived from the original on 2017-04-19. Retrieved 2017-04-19. Hobson, E. W. (1921), The theory of functions of a real variable and the theory of Fourier's series. Vol. I., Cambridge University Press Hörmander, Lars (2015), The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis, Classics in Mathematics (2nd ed.), Springer, ISBN 9783642614972 Hubbard, John; Hubbard, Barbara (2015). Vector Calculus, Linear Algebra and Differential Forms (5th ed.). Matrix Editions. ISBN 9780971576681. James, R. C. (1966). Advanced Calculus. Belmont, CA: Wadsworth. Jordan, Camille (1893), Cours d'analyse de l'École polytechnique. Tome I. Calcul différentiel (Les Grands Classiques Gauthier-Villars), Éditions Jacques Gabay Katz, Victor J. (1981), "The history of differential forms from Clairaut to Poincaré", Historia Mathematica, 8 (2): 161–188, doi:10.1016/0315-0860(81)90027-6 Lang, Serge (1969), Real Analysis, Addison-Wesley, ISBN 0201041790 Lindelöf, L. L. (1867), "Remarques sur les différentes manières d'établir la formule d2 z/dx dy = d2 z/dy dx", Acta Societatis Scientiarum Fennicae, 8: 205–213 Loomis, Lynn H. 
(1953), An introduction to abstract harmonic analysis, D. Van Nostrand, hdl:2027/uc1.b4250788 McGrath, Peter J. (2014), "Another proof of Clairaut's theorem", Amer. Math. Monthly, 121 (2): 165–166, doi:10.4169/amer.math.monthly.121.02.165, S2CID 12698408 Minguzzi, E. (2015). "The equality of mixed partial derivatives under weak differentiability conditions". Real Analysis Exchange. 40: 81–98. arXiv:1309.5841. doi:10.14321/realanalexch.40.1.0081. S2CID 119315951. Nachbin, Leopoldo (1965), Elements of approximation theory, Notas de Matemática, vol. 33, Rio de Janeiro: Fascículo publicado pelo Instituto de Matemática Pura e Aplicada do Conselho Nacional de Pesquisas Rudin, Walter (1976), Principles of Mathematical Analysis, International Series in Pure & Applied Mathematics, McGraw-Hill, ISBN 0-07-054235-X Sandifer, C. Edward (2007), "Mixed partial derivatives are equal", The Early Mathematics of Leonhard Euler, Vol. 1, Mathematics Association of America, ISBN 9780883855591 Schwarz, H. A. (1873), "Communication", Archives des Sciences Physiques et Naturelles, 48: 38–44 Spivak, Michael (1965), Calculus on manifolds. A modern approach to classical theorems of advanced calculus, W. A. Benjamin Tao, Terence (2006), Analysis II (PDF), Texts and Readings in Mathematics, vol. 38, Hindustan Book Agency, doi:10.1007/978-981-10-1804-6, ISBN 8185931631 Titchmarsh, E. C. (1939), The Theory of Functions (2nd ed.), Oxford University Press Tu, Loring W. (2010), An Introduction to Manifolds (2nd ed.), New York: Springer, ISBN 978-1-4419-7399-3 == Further reading == "Partial derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
|
Wikipedia:Symplectic representation#0
|
In the mathematical field of representation theory, a symplectic representation is a representation of a group or a Lie algebra on a symplectic vector space (V, ω) which preserves the symplectic form ω. Here ω is a nondegenerate skew-symmetric bilinear form ω : V × V → F {\displaystyle \omega \colon V\times V\to \mathbb {F} } where F is the field of scalars. A representation of a group G preserves ω if ω ( g ⋅ v , g ⋅ w ) = ω ( v , w ) {\displaystyle \omega (g\cdot v,g\cdot w)=\omega (v,w)} for all g in G and v, w in V, whereas a representation of a Lie algebra g preserves ω if ω ( ξ ⋅ v , w ) + ω ( v , ξ ⋅ w ) = 0 {\displaystyle \omega (\xi \cdot v,w)+\omega (v,\xi \cdot w)=0} for all ξ in g and v, w in V. Thus a representation of G or g is equivalently a group or Lie algebra homomorphism from G or g to the symplectic group Sp(V,ω) or its Lie algebra sp(V,ω). If G is a compact group (for example, a finite group), and F is the field of complex numbers, then by introducing a compatible unitary structure (which exists by an averaging argument), one can show that any complex symplectic representation is a quaternionic representation. Quaternionic representations of finite or compact groups are often called symplectic representations, and may be identified using the Frobenius–Schur indicator. == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
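As a concrete sanity check (our own sketch, not from the article or its reference), both defining conditions can be verified numerically in the two-dimensional case, where Sp(2, R) = SL(2, R) (the determinant-one matrices) and sp(2, R) consists of the traceless matrices:

```python
import numpy as np

# standard symplectic form on R^2: omega(u, v) = u @ J @ v
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# group condition: g preserves omega  <=>  g.T @ J @ g == J
g = np.array([[2.0, 3.0], [1.0, 2.0]])   # det = 1, so g lies in Sp(2, R)
assert np.allclose(g.T @ J @ g, J)

# Lie algebra condition: omega(xi*v, w) + omega(v, xi*w) = 0  <=>  xi.T @ J + J @ xi == 0
xi = np.array([[1.0, 5.0], [2.0, -1.0]])  # traceless, so xi lies in sp(2, R)
assert np.allclose(xi.T @ J + J @ xi, 0)
```

The matrix identities g^T J g = J and ξ^T J + J ξ = 0 are just the two displayed conditions written with ω(u, v) = u^T J v.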
|
Wikipedia:Symplectic vector space#0
|
In mathematics, a symplectic vector space is a vector space V {\displaystyle V} over a field F {\displaystyle F} (for example the real numbers R {\displaystyle \mathbb {R} } ) equipped with a symplectic bilinear form. A symplectic bilinear form is a mapping ω : V × V → F {\displaystyle \omega :V\times V\to F} that is: Bilinear: linear in each argument separately; Alternating: ω ( v , v ) = 0 {\displaystyle \omega (v,v)=0} holds for all v ∈ V {\displaystyle v\in V} ; and Non-degenerate: ω ( v , u ) = 0 {\displaystyle \omega (v,u)=0} for all v ∈ V {\displaystyle v\in V} implies that u = 0 {\displaystyle u=0} . If the underlying field has characteristic not 2, alternation is equivalent to skew-symmetry. If the characteristic is 2, skew-symmetry is implied by, but does not imply, alternation. In this case every symplectic form is a symmetric form, but not vice versa. Working in a fixed basis, ω {\displaystyle \omega } can be represented by a matrix. The conditions above are equivalent to this matrix being skew-symmetric, nonsingular, and hollow (all diagonal entries are zero). This should not be confused with a symplectic matrix, which represents a symplectic transformation of the space. If V {\displaystyle V} is finite-dimensional, then its dimension must necessarily be even since every skew-symmetric, hollow matrix of odd size has determinant zero. Notice that the condition that the matrix be hollow is not redundant if the characteristic of the field is 2. A symplectic form behaves quite differently from a symmetric form, for example, the scalar product on Euclidean vector spaces. == Standard symplectic space == The standard symplectic space is R 2 n {\displaystyle \mathbb {R} ^{2n}} with the symplectic form given by a nonsingular, skew-symmetric matrix. Typically ω {\displaystyle \omega } is chosen to be the block matrix ω = [ 0 I n − I n 0 ] {\displaystyle \omega ={\begin{bmatrix}0&I_{n}\\-I_{n}&0\end{bmatrix}}} where In is the n × n identity matrix.
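The matrix conditions above (skew-symmetric, hollow, nonsingular) are easy to check numerically for the standard block form; the following sketch (not part of the article) builds ω for n = 3 and verifies each one:

```python
import numpy as np

n = 3
# the standard block matrix [[0, I_n], [-I_n, 0]]
omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

assert np.allclose(omega, -omega.T)             # skew-symmetric
assert np.all(np.diag(omega) == 0)              # hollow (zero diagonal)
assert not np.isclose(np.linalg.det(omega), 0)  # nonsingular
```

Note that the dimension 2n is even, consistent with the determinant argument in the text.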
In terms of basis vectors (x1, ..., xn, y1, ..., yn): ω ( x i , y j ) = − ω ( y j , x i ) = δ i j , ω ( x i , x j ) = ω ( y i , y j ) = 0. {\displaystyle {\begin{aligned}\omega (x_{i},y_{j})=-\omega (y_{j},x_{i})&=\delta _{ij},\\\omega (x_{i},x_{j})=\omega (y_{i},y_{j})&=0.\end{aligned}}} A modified version of the Gram–Schmidt process shows that any finite-dimensional symplectic vector space has a basis such that ω {\displaystyle \omega } takes this form, often called a Darboux basis or symplectic basis. Sketch of process: Start with an arbitrary basis v 1 , . . . , v n {\displaystyle v_{1},...,v_{n}} , and represent the dual of each basis vector by the dual basis: ω ( v i , ⋅ ) = ∑ j ω ( v i , v j ) v j ∗ {\displaystyle \omega (v_{i},\cdot )=\sum _{j}\omega (v_{i},v_{j})v_{j}^{*}} . This gives us an n × n {\displaystyle n\times n} matrix with entries ω ( v i , v j ) {\displaystyle \omega (v_{i},v_{j})} . Solve for its null space. Now for any ( λ 1 , . . . , λ n ) {\displaystyle (\lambda _{1},...,\lambda _{n})} in the null space, we have ∑ i λ i ω ( v i , ⋅ ) = 0 {\displaystyle \sum _{i}\lambda _{i}\omega (v_{i},\cdot )=0} , so the null space gives us the degenerate subspace V 0 {\displaystyle V_{0}} . Now arbitrarily pick a complementary W {\displaystyle W} such that V = V 0 ⊕ W {\displaystyle V=V_{0}\oplus W} , and let w 1 , . . . , w m {\displaystyle w_{1},...,w_{m}} be a basis of W {\displaystyle W} . Since ω ( w 1 , ⋅ ) ≠ 0 {\displaystyle \omega (w_{1},\cdot )\neq 0} , and ω ( w 1 , w 1 ) = 0 {\displaystyle \omega (w_{1},w_{1})=0} , WLOG ω ( w 1 , w 2 ) ≠ 0 {\displaystyle \omega (w_{1},w_{2})\neq 0} . Now scale w 2 {\displaystyle w_{2}} so that ω ( w 1 , w 2 ) = 1 {\displaystyle \omega (w_{1},w_{2})=1} . Then define w ′ = w − ω ( w , w 2 ) w 1 + ω ( w , w 1 ) w 2 {\displaystyle w'=w-\omega (w,w_{2})w_{1}+\omega (w,w_{1})w_{2}} for each of w = w 3 , w 4 , . . . , w m {\displaystyle w=w_{3},w_{4},...,w_{m}} . Iterate.
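The sketch above can be turned into a short routine. The following is a hypothetical implementation (function name and tolerance are our own) for a form that is already nondegenerate on R^{2n}, so the null-space step is skipped; it returns a basis in which ω takes the standard block form:

```python
import numpy as np

def symplectic_basis(Omega, tol=1e-10):
    """Darboux basis for a nondegenerate skew-symmetric form.

    Omega is a 2n x 2n real matrix with omega(u, v) = u @ Omega @ v.
    Returns B whose columns e_1..e_n, f_1..f_n satisfy
    B.T @ Omega @ B == [[0, I_n], [-I_n, 0]].
    """
    omega = lambda u, v: u @ Omega @ v
    rest = [np.eye(len(Omega))[:, i] for i in range(len(Omega))]
    es, fs = [], []
    while rest:
        e = rest.pop(0)
        # nondegeneracy guarantees a partner with omega(e, f) != 0
        j = next(k for k, w in enumerate(rest) if abs(omega(e, w)) > tol)
        f = rest.pop(j)
        f = f / omega(e, f)  # scale so omega(e, f) = 1
        # w' = w - omega(w, f) e + omega(w, e) f, as in the sketch:
        # every remaining vector becomes omega-orthogonal to both e and f
        rest = [w - omega(w, f) * e + omega(w, e) * f for w in rest]
        es.append(e)
        fs.append(f)
    return np.column_stack(es + fs)
```

For a random nondegenerate skew-symmetric Ω, `symplectic_basis(Omega)` produces B with B^T Ω B equal to the standard [[0, I], [-I, 0]].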
Notice that this method applies for symplectic vector space over any field, not just the field of real numbers. Case of real or complex field: When the space is over the field of real numbers, then we can modify the modified Gram-Schmidt process as follows: Start the same way. Let w 1 , . . . , w m {\displaystyle w_{1},...,w_{m}} be an orthonormal basis (with respect to the usual inner product on R n {\displaystyle \mathbb {R} ^{n}} ) of W {\displaystyle W} . Since ω ( w 1 , ⋅ ) ≠ 0 {\displaystyle \omega (w_{1},\cdot )\neq 0} , and ω ( w 1 , w 1 ) = 0 {\displaystyle \omega (w_{1},w_{1})=0} , WLOG ω ( w 1 , w 2 ) ≠ 0 {\displaystyle \omega (w_{1},w_{2})\neq 0} . Now multiply w 2 {\displaystyle w_{2}} by a sign, so that ω ( w 1 , w 2 ) ≥ 0 {\displaystyle \omega (w_{1},w_{2})\geq 0} . Then define w ′ = w − ω ( w , w 2 ) w 1 + ω ( w , w 1 ) w 2 {\displaystyle w'=w-\omega (w,w_{2})w_{1}+\omega (w,w_{1})w_{2}} for each of w = w 3 , w 4 , . . . , w m {\displaystyle w=w_{3},w_{4},...,w_{m}} , then scale each w ′ {\displaystyle w'} so that it has norm one. Iterate. Similarly, for the field of complex numbers, we may choose a unitary basis. This proves the spectral theory of antisymmetric matrices. === Lagrangian form === There is another way to interpret this standard symplectic form. Since the model space R2n used above carries much canonical structure which might easily lead to misinterpretation, we will use "anonymous" vector spaces instead. Let V be a real vector space of dimension n and V∗ its dual space. Now consider the direct sum W = V ⊕ V∗ of these spaces equipped with the following form: ω ( x ⊕ η , y ⊕ ξ ) = ξ ( x ) − η ( y ) . {\displaystyle \omega (x\oplus \eta ,y\oplus \xi )=\xi (x)-\eta (y).} Now choose any basis (v1, ..., vn) of V and consider its dual basis ( v 1 ∗ , … , v n ∗ ) . {\displaystyle \left(v_{1}^{*},\ldots ,v_{n}^{*}\right).} We can interpret the basis vectors as lying in W if we write xi = (vi, 0) and yi = (0, vi∗). 
Taken together, these form a complete basis of W, ( x 1 , … , x n , y 1 , … , y n ) . {\displaystyle (x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n}).} The form ω defined here can be shown to have the same properties as in the beginning of this section. On the other hand, every symplectic structure is isomorphic to one of the form V ⊕ V∗. The subspace V is not unique, and a choice of subspace V is called a polarization. The subspaces that give such an isomorphism are called Lagrangian subspaces or simply Lagrangians. Explicitly, given a Lagrangian subspace as defined below, then a choice of basis (x1, ..., xn) defines a dual basis for a complement, by ω(xi, yj) = δij. === Analogy with complex structures === Just as every symplectic structure is isomorphic to one of the form V ⊕ V∗, every complex structure on a vector space is isomorphic to one of the form V ⊕ V. Using these structures, the tangent bundle of an n-manifold, considered as a 2n-manifold, has an almost complex structure, and the cotangent bundle of an n-manifold, considered as a 2n-manifold, has a symplectic structure: T∗(T∗M)p = Tp(M) ⊕ (Tp(M))∗. The complex analog to a Lagrangian subspace is a real subspace, a subspace whose complexification is the whole space: W = V ⊕ J V. As can be seen from the standard symplectic form above, every symplectic form on R2n is isomorphic to the imaginary part of the standard complex (Hermitian) inner product on Cn (with the convention of the first argument being anti-linear). == Volume form == Let ω be an alternating bilinear form on an n-dimensional real vector space V, ω ∈ Λ2(V). Then ω is non-degenerate if and only if n is even and ωn/2 = ω ∧ ... ∧ ω is a volume form. A volume form on a n-dimensional vector space V is a non-zero multiple of the n-form e1∗ ∧ ... ∧ en∗ where e1, e2, ..., en is a basis of V. For the standard basis defined in the previous section, we have ω n = ( − 1 ) n 2 x 1 ∗ ∧ ⋯ ∧ x n ∗ ∧ y 1 ∗ ∧ ⋯ ∧ y n ∗ . 
{\displaystyle \omega ^{n}=(-1)^{\frac {n}{2}}x_{1}^{*}\wedge \dotsb \wedge x_{n}^{*}\wedge y_{1}^{*}\wedge \dotsb \wedge y_{n}^{*}.} By reordering, one can write ω n = x 1 ∗ ∧ y 1 ∗ ∧ ⋯ ∧ x n ∗ ∧ y n ∗ . {\displaystyle \omega ^{n}=x_{1}^{*}\wedge y_{1}^{*}\wedge \dotsb \wedge x_{n}^{*}\wedge y_{n}^{*}.} Authors variously define ωn or (−1)n/2ωn as the standard volume form. An occasional factor of n! may also appear, depending on whether the definition of the alternating product contains a factor of n! or not. The volume form defines an orientation on the symplectic vector space (V, ω). == Symplectic map == Suppose that (V, ω) and (W, ρ) are symplectic vector spaces. Then a linear map f : V → W is called a symplectic map if the pullback preserves the symplectic form, i.e. f∗ρ = ω, where the pullback form is defined by (f∗ρ)(u, v) = ρ(f(u), f(v)). Symplectic maps are volume- and orientation-preserving. == Symplectic group == If V = W, then a symplectic map is called a linear symplectic transformation of V. In particular, in this case one has that ω(f(u), f(v)) = ω(u, v), and so the linear transformation f preserves the symplectic form. The set of all symplectic transformations forms a group and in particular a Lie group, called the symplectic group and denoted by Sp(V) or sometimes Sp(V, ω). In matrix form symplectic transformations are given by symplectic matrices. == Subspaces == Let W be a linear subspace of V. Define the symplectic complement of W to be the subspace W ⊥ = { v ∈ V ∣ ω ( v , w ) = 0 for all w ∈ W } . {\displaystyle W^{\perp }=\{v\in V\mid \omega (v,w)=0{\mbox{ for all }}w\in W\}.} The symplectic complement satisfies: ( W ⊥ ) ⊥ = W dim W + dim W ⊥ = dim V . {\displaystyle {\begin{aligned}\left(W^{\perp }\right)^{\perp }&=W\\\dim W+\dim W^{\perp }&=\dim V.\end{aligned}}} However, unlike orthogonal complements, W⊥ ∩ W need not be 0. We distinguish four cases: W is symplectic if W⊥ ∩ W = {0}. 
This is true if and only if ω restricts to a nondegenerate form on W. A symplectic subspace with the restricted form is a symplectic vector space in its own right. W is isotropic if W ⊆ W⊥. This is true if and only if ω restricts to 0 on W. Any one-dimensional subspace is isotropic. W is coisotropic if W⊥ ⊆ W. W is coisotropic if and only if ω descends to a nondegenerate form on the quotient space W/W⊥. Equivalently W is coisotropic if and only if W⊥ is isotropic. Any codimension-one subspace is coisotropic. W is Lagrangian if W = W⊥. A subspace is Lagrangian if and only if it is both isotropic and coisotropic. In a finite-dimensional vector space, a Lagrangian subspace is an isotropic one whose dimension is half that of V. Every isotropic subspace can be extended to a Lagrangian one. Referring to the canonical vector space R2n above, the subspace spanned by {x1, y1} is symplectic the subspace spanned by {x1, x2} is isotropic the subspace spanned by {x1, x2, ..., xn, y1} is coisotropic the subspace spanned by {x1, x2, ..., xn} is Lagrangian. == Heisenberg group == A Heisenberg group can be defined for any symplectic vector space, and this is the typical way that Heisenberg groups arise. A vector space can be thought of as a commutative Lie group (under addition), or equivalently as a commutative Lie algebra, meaning with trivial Lie bracket. The Heisenberg group is a central extension of such a commutative Lie group/algebra: the symplectic form defines the commutation, analogously to the canonical commutation relations (CCR), and a Darboux basis corresponds to canonical coordinates – in physics terms, to momentum operators and position operators. Indeed, by the Stone–von Neumann theorem, every representation satisfying the CCR (every representation of the Heisenberg group) is of this form, or more properly unitarily conjugate to the standard one. 
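The commutation just described can be made concrete. In the sketch below (the conventions, including the ω/2 cocycle, are one common choice rather than fixed by the article), the Heisenberg group of (R², ω) is modelled as pairs (v, s) with a central coordinate s, and the group commutator of the two Darboux directions lands in the center with value ω(v, w) = 1:

```python
def omega(u, v):
    """Standard symplectic form on R^2 (the Darboux pairing)."""
    return u[0] * v[1] - u[1] * v[0]

def mul(g, h):
    """Heisenberg product (v, s)*(w, t) = (v + w, s + t + omega(v, w)/2)."""
    (v, s), (w, t) = g, h
    return ((v[0] + w[0], v[1] + w[1]), s + t + omega(v, w) / 2)

def inv(g):
    v, s = g
    return ((-v[0], -v[1]), -s)

# the two directions of a Darboux basis (position and momentum, in physics terms)
x = ((1.0, 0.0), 0.0)
y = ((0.0, 1.0), 0.0)

# the group commutator x y x^-1 y^-1 is central, with value omega((1,0),(0,1)) = 1
c = mul(mul(x, y), mul(inv(x), inv(y)))
assert c == ((0.0, 0.0), 1.0)
```

This is exactly the canonical commutation relation: commuting the two basis directions produces a pure central element determined by the symplectic form.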
Further, the group algebra of (the dual to) a vector space is the symmetric algebra, and the group algebra of the Heisenberg group (of the dual) is the Weyl algebra: one can think of the central extension as corresponding to quantization or deformation. Formally, the symmetric algebra of a vector space V over a field F is the group algebra of the dual, Sym(V) := F[V∗], and the Weyl algebra is the group algebra of the (dual) Heisenberg group W(V) = F[H(V∗)]. Since passing to group algebras is a contravariant functor, the central extension map H(V) → V becomes an inclusion Sym(V) → W(V). == See also == A symplectic manifold is a smooth manifold with a smoothly-varying closed symplectic form on each tangent space. Maslov index A symplectic representation is a group representation where each group element acts as a symplectic transformation. == References == Claude Godbillon (1969) "Géométrie différentielle et mécanique analytique", Hermann Abraham, Ralph; Marsden, Jerrold E. (1978). "Hamiltonian and Lagrangian Systems". Foundations of Mechanics (2nd ed.). London: Benjamin-Cummings. pp. 161–252. ISBN 0-8053-0102-X. PDF Paulette Libermann and Charles-Michel Marle (1987) "Symplectic Geometry and Analytical Mechanics", D. Reidel Jean-Marie Souriau (1997) "Structure of Dynamical Systems, A Symplectic View of Physics", Springer
|
Wikipedia:Synthetic division#0
|
In algebra, synthetic division is a method for manually performing Euclidean division of polynomials, with less writing and fewer calculations than long division. It is mostly taught for division by linear monic polynomials (known as Ruffini's rule), but the method can be generalized to division by any polynomial. The advantages of synthetic division are that it allows one to calculate without writing variables, it uses few calculations, and it takes significantly less space on paper than long division. Also, the subtractions in long division are converted to additions by switching the signs at the very beginning, helping to prevent sign errors. == Regular synthetic division == The first example is synthetic division with only a monic linear denominator x − a {\displaystyle x-a} . x 3 − 12 x 2 − 42 x − 3 {\displaystyle {\frac {x^{3}-12x^{2}-42}{x-3}}} The numerator can be written as p ( x ) = x 3 − 12 x 2 + 0 x − 42 {\displaystyle p(x)=x^{3}-12x^{2}+0x-42} . The zero of the denominator g ( x ) {\displaystyle g(x)} is 3 {\displaystyle 3} . The coefficients of p ( x ) {\displaystyle p(x)} are arranged as follows, with the zero of g ( x ) {\displaystyle g(x)} on the left: 3 1 − 12 0 − 42 {\displaystyle {\begin{array}{cc}{\begin{array}{r}\\3\\\end{array}}&{\begin{array}{|rrrr}\ 1&-12&0&-42\\&&&\\\hline \end{array}}\end{array}}} The first coefficient after the bar is "dropped" to the last row. 3 1 − 12 0 − 42 1 {\displaystyle {\begin{array}{cc}{\begin{array}{r}\\3\\\\\end{array}}&{\begin{array}{|rrrr}\color {blue}1&-12&0&-42\\&&&\\\hline \color {blue}1&&&\\\end{array}}\end{array}}} The dropped number is multiplied by the number before the bar and placed in the next column. 3 1 − 12 0 − 42 3 1 {\displaystyle {\begin{array}{cc}{\begin{array}{r}\\\color {grey}3\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&\color {brown}3&&\\\hline \color {blue}1&&&\\\end{array}}\end{array}}} An addition is performed in the next column. 
3 1 − 12 0 − 42 3 1 − 9 {\displaystyle {\begin{array}{cc}{\begin{array}{c}\\3\\\\\end{array}}&{\begin{array}{|rrrr}1&\color {green}-12&0&-42\\&\color {green}3&&\\\hline 1&\color {green}-9&&\\\end{array}}\end{array}}} The previous two steps are repeated, and the following is obtained: 3 1 − 12 0 − 42 3 − 27 − 81 1 − 9 − 27 − 123 {\displaystyle {\begin{array}{cc}{\begin{array}{c}\\3\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&3&-27&-81\\\hline 1&-9&-27&-123\end{array}}\end{array}}} Here, the last term (-123) is the remainder while the rest correspond to the coefficients of the quotient. The terms are written with increasing degree from right to left beginning with degree zero for the remainder and the result. 1 x 2 − 9 x − 27 − 123 {\displaystyle {\begin{array}{rrr|r}1x^{2}&-9x&-27&-123\end{array}}} Hence the quotient and remainder are: q ( x ) = x 2 − 9 x − 27 {\displaystyle q(x)=x^{2}-9x-27} r ( x ) = − 123 {\displaystyle r(x)=-123} === Evaluating polynomials by the remainder theorem === The above form of synthetic division is useful in the context of the polynomial remainder theorem for evaluating univariate polynomials. To summarize, the value of p ( x ) {\displaystyle p(x)} at a {\displaystyle a} is equal to the remainder of the division of p ( x ) {\displaystyle p(x)} by x − a . {\displaystyle x-a.} The advantage of calculating the value this way is that it requires just over half as many multiplication steps as naive evaluation. An alternative evaluation strategy is Horner's method. == Expanded synthetic division == This method generalizes to division by any monic polynomial with only a slight modification with changes in bold. Note that while it may not be displayed in the following example, the divisor must also be written with verbose coefficients. 
(Such as with 2 x 3 + 0 x 2 − 4 x + 8 {\displaystyle 2x^{3}+0x^{2}-4x+8} ) Using the same steps as before, perform the following division: x 3 − 12 x 2 − 42 x 2 + x − 3 {\displaystyle {\frac {x^{3}-12x^{2}-42}{x^{2}+x-3}}} We concern ourselves only with the coefficients. Write the coefficients of the polynomial to be divided at the top. 1 − 12 0 − 42 {\displaystyle {\begin{array}{|rrrr}\ 1&-12&0&-42\end{array}}} Negate the coefficients of the divisor. − 1 x 2 − 1 x + 3 {\displaystyle {\begin{array}{rrr}-1x^{2}&-1x&+3\end{array}}} Write in every coefficient but the first one on the left in an upward right diagonal (see next diagram). 3 − 1 1 − 12 0 − 42 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\end{array}}&{\begin{array}{|rrrr}\ 1&-12&0&-42\\&&&\\&&&\\\hline \end{array}}\end{array}}} Note the change of sign from 1 to −1 and from −3 to 3. "Drop" the first coefficient after the bar to the last row. 3 − 1 1 − 12 0 − 42 1 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&&\\&&&\\\hline 1&&&\\\end{array}}\end{array}}} Multiply the dropped number by the diagonal before the bar and place the resulting entries diagonally to the right from the dropped entry. 3 − 1 1 − 12 0 − 42 3 − 1 1 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&\\&-1&&\\\hline 1&&&\\\end{array}}\end{array}}} Perform an addition in the next column. 3 − 1 1 − 12 0 − 42 3 − 1 1 − 13 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&\\&-1&&\\\hline 1&-13&&\\\end{array}}\end{array}}} Repeat the previous two steps until you would go past the entries at the top with the next diagonal. 
3 − 1 1 − 12 0 − 42 3 − 39 − 1 13 1 − 13 16 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&-39\\&-1&13&\\\hline 1&-13&16&\\\end{array}}\end{array}}} Then simply add up any remaining columns. 3 − 1 1 − 12 0 − 42 3 − 39 − 1 13 1 − 13 16 − 81 {\displaystyle {\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&-39\\&-1&13&\\\hline 1&-13&16&-81\\\end{array}}\end{array}}} Count the terms to the left of the bar. Since there are two, the remainder has degree one and this is the two right-most terms under the bar. Mark the separation with a vertical bar. 1 − 13 16 − 81 {\displaystyle {\begin{array}{rr|rr}1&-13&16&-81\end{array}}} The terms are written with increasing degree from right to left beginning with degree zero for both the remainder and the result. 1 x − 13 16 x − 81 {\displaystyle {\begin{array}{rr|rr}1x&-13&16x&-81\end{array}}} The result of our division is: x 3 − 12 x 2 − 42 x 2 + x − 3 = x − 13 + 16 x − 81 x 2 + x − 3 {\displaystyle {\frac {x^{3}-12x^{2}-42}{x^{2}+x-3}}=x-13+{\frac {16x-81}{x^{2}+x-3}}} === For non-monic divisors === With a little prodding, the expanded technique may be generalised even further to work for any polynomial, not just monics. The usual way of doing this would be to divide the divisor g ( x ) {\displaystyle g(x)} with its leading coefficient (call it a): h ( x ) = g ( x ) a {\displaystyle h(x)={\frac {g(x)}{a}}} then using synthetic division with h ( x ) {\displaystyle h(x)} as the divisor, and then dividing the quotient by a to get the quotient of the original division (the remainder stays the same). But this often produces unsightly fractions which get removed later and is thus more prone to error. It is possible to do it without first reducing the coefficients of g ( x ) {\displaystyle g(x)} . 
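The divide-through-by-a workaround just described is mechanical enough to automate. A sketch (helper names are our own; coefficient lists are written highest degree first) that uses exact fractions to sidestep the "unsightly fractions" problem:

```python
from fractions import Fraction

def synthetic_div_monic(coeffs, divisor):
    """Expanded synthetic division by a monic divisor.
    Both arguments are coefficient lists, highest degree first.
    Returns (quotient, remainder) in the same convention."""
    out = list(coeffs)
    d = len(divisor) - 1                       # degree of the divisor
    for i in range(len(coeffs) - d):
        for j in range(1, d + 1):
            out[i + j] -= out[i] * divisor[j]  # add multiples of the negated coefficients
    return out[:-d], out[-d:]

def divide(coeffs, divisor):
    """Non-monic division: normalize the divisor by its leading coefficient a,
    divide, then divide the quotient by a (the remainder is unchanged)."""
    a = Fraction(divisor[0])
    monic = [Fraction(c) / a for c in divisor]
    q, r = synthetic_div_monic([Fraction(c) for c in coeffs], monic)
    return [c / a for c in q], r
```

For the worked example that follows, `divide([6, 5, 0, -7], [3, -2, -1])` gives quotient `[2, 3]` and remainder `[8, -4]`, i.e. 2x + 3 with remainder 8x − 4.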
As can be observed by first performing long division with such a non-monic divisor, the coefficients of f ( x ) {\displaystyle f(x)} are divided by the leading coefficient of g ( x ) {\displaystyle g(x)} after "dropping", and before multiplying. Let's illustrate by performing the following division: 6 x 3 + 5 x 2 − 7 3 x 2 − 2 x − 1 {\displaystyle {\frac {6x^{3}+5x^{2}-7}{3x^{2}-2x-1}}} A slightly modified table is used: 1 2 / 3 6 5 0 − 7 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline &&&\\&&&\\\end{array}}\end{array}}} Note the extra row at the bottom. This is used to write values found by dividing the "dropped" values by the leading coefficient of g ( x ) {\displaystyle g(x)} (in this case, indicated by the /3; note that, unlike the rest of the coefficients of g ( x ) {\displaystyle g(x)} , the sign of this number is not changed). Next, the first coefficient of f ( x ) {\displaystyle f(x)} is dropped as usual: 1 2 / 3 6 5 0 − 7 6 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline 6&&&\\&&&\\\end{array}}\end{array}}} and then the dropped value is divided by 3 and placed in the row below: 1 2 / 3 6 5 0 − 7 6 2 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline 6&&&\\2&&&\\\end{array}}\end{array}}} Next, the new (divided) value is used to fill the top rows with multiples of 2 and 1, as in the expanded technique: 1 2 / 3 6 5 0 − 7 2 4 6 2 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&\\&4&&\\\hline 6&&&\\2&&&\\\end{array}}\end{array}}} The 5 is dropped next, with the obligatory adding of the 4 below it, and the answer is divided again: 1 2 / 3 6 5 0 − 7 2 4 6 9 2 3 {\displaystyle 
{\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&\\&4&&\\\hline 6&9&&\\2&3&&\\\end{array}}\end{array}}} Then the 3 is used to fill the top rows: 1 2 / 3 6 5 0 − 7 2 3 4 6 6 9 2 3 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&3\\&4&6&\\\hline 6&9&&\\2&3&&\\\end{array}}\end{array}}} At this point, if, after getting the third sum, we were to try and use it to fill the top rows, we would "fall off" the right side, thus the third sum is the first coefficient of the remainder, as in regular synthetic division. But the values of the remainder are not divided by the leading coefficient of the divisor: 1 2 / 3 6 5 0 − 7 2 3 4 6 6 9 8 − 4 2 3 {\displaystyle {\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&3\\&4&6&\\\hline 6&9&8&-4\\2&3&&\\\end{array}}\end{array}}} Now we can read off the coefficients of the answer. As in expanded synthetic division, the last two values (2 is the degree of the divisor) are the coefficients of the remainder, and the remaining values are the coefficients of the quotient: 2 x + 3 8 x − 4 {\displaystyle {\begin{array}{rr|rr}2x&+3&8x&-4\end{array}}} and the result is 6 x 3 + 5 x 2 − 7 3 x 2 − 2 x − 1 = 2 x + 3 + 8 x − 4 3 x 2 − 2 x − 1 {\displaystyle {\frac {6x^{3}+5x^{2}-7}{3x^{2}-2x-1}}=2x+3+{\frac {8x-4}{3x^{2}-2x-1}}} === Compact Expanded Synthetic Division === However, the diagonal format above becomes less space-efficient when the degree of the divisor exceeds half of the degree of the dividend. 
Consider the following division: a 7 x 7 + a 6 x 6 + a 5 x 5 + a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 b 4 x 4 − b 3 x 3 − b 2 x 2 − b 1 x − b 0 {\displaystyle {\dfrac {a_{7}x^{7}+a_{6}x^{6}+a_{5}x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}}{b_{4}x^{4}-b_{3}x^{3}-b_{2}x^{2}-b_{1}x-b_{0}}}} It is easy to see that we have complete freedom to write each product in any row as long as it is in the correct column, so the algorithm can be compactified by a greedy strategy, as illustrated in the division below: b 3 b 2 b 1 b 0 / b 4 q 0 b 3 q 1 b 3 q 1 b 2 q 0 b 2 q 2 b 3 q 2 b 2 q 2 b 1 q 1 b 1 q 0 b 1 q 3 b 3 q 3 b 2 q 3 b 1 q 3 b 0 q 2 b 0 q 1 b 0 q 0 b 0 a 7 a 6 a 5 a 4 a 3 a 2 a 1 a 0 a 7 q 2 ′ q 1 ′ q 0 ′ r 3 r 2 r 1 r 0 q 3 q 2 q 1 q 0 {\displaystyle {\begin{array}{cc}{\begin{array}{rrrr}\\\\\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&&&q_{0}b_{3}&&&\\&&&q_{1}b_{3}&q_{1}b_{2}&q_{0}b_{2}&&\\&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&q_{1}b_{1}&q_{0}b_{1}&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&q_{1}b_{0}&q_{0}b_{0}\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&q_{0}'&r_{3}&r_{2}&r_{1}&r_{0}\\q_{3}&q_{2}&q_{1}&q_{0}&&&&\\\end{array}}\end{array}}} The following describes how to perform the algorithm; this algorithm includes steps for dividing non-monic divisors: === Python implementation === The following snippet implements Expanded Synthetic Division in Python for arbitrary univariate polynomials: == See also == Euclidean domain Greatest common divisor of two polynomials Gröbner basis Horner scheme Polynomial remainder theorem Ruffini's rule == References == Fan, Lianghuo (June 2003). "A Generalization of Synthetic Division and A General Theorem of Division of Polynomials" (PDF). Mathematical Medley. 30 (1): 30–37. Archived (PDF) from the original on September 7, 2015. Li, Zhou (January 2009). "Short Division of Polynomials" (PDF). College Mathematics Journal. 40 (1): 44–46. 
doi:10.4169/193113409X469721 (inactive 5 April 2025). JSTOR 27646720. Archived (PDF) from the original on July 9, 2020. == External links == Goodman, Len; Stover, Christopher & Weisstein, Eric W. "Synthetic Division". MathWorld. Stover, Christopher. "Ruffini's Rule". MathWorld.
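The "Python implementation" section above refers to a snippet that is not reproduced here. The following is a hypothetical reconstruction (names are our own) of expanded synthetic division for arbitrary divisors, following the divide-the-dropped-value steps of the non-monic example; it assumes the divisor has degree at least 1:

```python
def expanded_synthetic_division(dividend, divisor):
    """Expanded synthetic division supporting non-monic divisors.
    dividend and divisor are coefficient lists, highest degree first.
    Returns (quotient, remainder) in the same convention."""
    out = list(dividend)
    lead = divisor[0]
    sep = len(divisor) - 1      # the divisor's degree sets where the remainder starts
    for i in range(len(dividend) - sep):
        out[i] /= lead          # divide the dropped value by the leading coefficient
        for j in range(1, sep + 1):
            # add multiples of the negated lower divisor coefficients diagonally
            out[i + j] += -divisor[j] * out[i]
    return out[:-sep], out[-sep:]
```

On the non-monic worked example, `expanded_synthetic_division([6, 5, 0, -7], [3, -2, -1])` reproduces the quotient 2x + 3 and remainder 8x − 4 from the table, without normalizing the divisor first.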
|
Wikipedia:System of bilinear equations#0
|
In mathematics, a system of bilinear equations is a special sort of system of polynomial equations, where each equation equates a bilinear form with a constant (possibly zero). More precisely, given two sets of variables represented as coordinate vectors x and y, each equation of the system can be written y T A i x = g i , {\displaystyle y^{T}A_{i}x=g_{i},} where i is an integer whose value ranges from 1 to the number of equations, each A i {\displaystyle A_{i}} is a matrix, and each g i {\displaystyle g_{i}} is a real number. Systems of bilinear equations arise in many subjects including engineering, biology, and statistics. == See also == Systems of linear equations == References == Charles R. Johnson, Joshua A. Link 'Solution theory for complete bilinear systems of equations' - http://onlinelibrary.wiley.com/doi/10.1002/nla.676/abstract Vinh, Le Anh 'On the solvability of systems of bilinear equations in finite fields' - https://arxiv.org/abs/0903.1156 Yang Dian 'Solution theory for system of bilinear equations' - https://digitalarchive.wm.edu/handle/10288/13726 Scott Cohen and Carlo Tomasi. 'Systems of bilinear equations'. Technical report, Stanford, CA, USA, 1997.- ftp://reports.stanford.edu/public_html/cstr/reports/cs/tr/97/1588/CS-TR-97-1588.pdf
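Because each equation y^T A_i x = g_i is linear in x once y is fixed (and vice versa), one common heuristic is to alternate linear least-squares solves. The sketch below is our own illustration of that idea, not the method of any of the cited papers, and it carries no general convergence guarantee:

```python
import numpy as np

def solve_bilinear(As, g, iters=50, seed=0):
    """Alternating least squares for the system y^T A_i x = g_i.
    Fixing y makes the system linear in x, and vice versa."""
    rng = np.random.default_rng(seed)
    m, n = As[0].shape
    x = rng.standard_normal(n)
    y = rng.standard_normal(m)
    for _ in range(iters):
        Mx = np.array([y @ A for A in As])  # row i is y^T A_i: linear system in x
        x = np.linalg.lstsq(Mx, np.asarray(g), rcond=None)[0]
        My = np.array([A @ x for A in As])  # row i is (A_i x)^T: linear system in y
        y = np.linalg.lstsq(My, np.asarray(g), rcond=None)[0]
    return x, y
```

Each half-step is an ordinary linear solve, which is what makes bilinear systems tractable in practice despite being nonlinear as a whole.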
|
Wikipedia:System of linear equations#0
|
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. 
{\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) == General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. 
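The substitution result above can be double-checked with a numerical solver (a quick sketch using NumPy, which the article itself does not mention):

```python
import numpy as np

# the simple nontrivial example: 2x + 3y = 6,  4x + 9y = 15
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])

x, y = np.linalg.solve(A, b)  # recovers x = 3/2, y = 1, as found by substitution
```

The coefficient matrix here is exactly the array of a_ij from the general form, with the constant terms collected into b.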
=== Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors, a solution is guaranteed regardless of the right-hand side (RHS); otherwise it is not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . 
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution. === Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these two points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n. 
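The column-combination reading of the matrix equation can be checked on the introductory example; a small sketch assuming NumPy:

```python
import numpy as np

# The introductory system 3x + 2y - z = 1, 2x - 2y + 4z = -2, -x + y/2 - z = 0,
# whose solution is (x, y, z) = (1, -2, -2).
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
x, y, z = 1.0, -2.0, -2.0

# The product A @ (x, y, z) is exactly the linear combination of the columns
# of A weighted by the unknowns, so b must lie in the span of the columns.
combo = x * A[:, 0] + y * A[:, 1] + z * A[:, 2]
assert np.allclose(combo, A @ np.array([x, y, z]))
assert np.allclose(combo, b)
```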
=== General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns. 
When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent: they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. 
For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. 
It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. A solution set with two or more free variables describes a plane, or a higher-dimensional set. 
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equation yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the LHSs of both of these equations equal y, we can equate their RHSs. We now have: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} . 
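The elimination steps above can be sketched as a short recursive routine; `solve_by_elimination` is an illustrative name, and the sketch assumes each leading coefficient encountered is nonzero (practical solvers reorder equations to guarantee this):

```python
def solve_by_elimination(A, b):
    """Solve a square system A x = b by repeated elimination of variables."""
    n = len(A)
    if n == 1:
        return [b[0] / A[0][0]]
    # Step 1: solve the first equation for the first unknown:
    #   x0 = (b0 - a01*x1 - ... - a0,n-1 * x_{n-1}) / a00
    a00 = A[0][0]
    # Step 2: substitute that expression into the remaining equations,
    # leaving a system with one fewer equation and one fewer unknown.
    A_rest, b_rest = [], []
    for i in range(1, n):
        f = A[i][0] / a00
        A_rest.append([A[i][j] - f * A[0][j] for j in range(1, n)])
        b_rest.append(b[i] - f * b[0])
    # Step 3: repeat on the smaller system; Steps 4-5: back-substitute.
    tail = solve_by_elimination(A_rest, b_rest)
    x0 = (b[0] - sum(A[0][j + 1] * tail[j] for j in range(n - 1))) / a00
    return [x0] + tail

# The worked example above:
print(solve_by_elimination([[1, 3, -2], [3, 5, 6], [2, 4, 3]], [5, 7, 8]))
# → [-15.0, 8.0, 2.0]
```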
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. 
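The three elementary row operations can be sketched as a small reduction routine, assuming NumPy; `rref` is an illustrative name, and real implementations treat near-zero pivots with more care:

```python
import numpy as np

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form using the
    three elementary row operations (a sketch with partial pivoting)."""
    M = np.array(M, dtype=float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):               # last column holds the constants
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if np.isclose(M[pivot, c], 0.0):
            continue                        # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]       # type 1: swap two rows
        M[r] = M[r] / M[r, c]               # type 2: scale the pivot row
        for i in range(rows):
            if i != r:
                M[i] = M[i] - M[i, c] * M[r]   # type 3: add a row multiple
        r += 1
        if r == rows:
            break
    return M

aug = [[1, 3, -2, 5], [3, 5, 6, 7], [2, 4, 3, 8]]
# The last column of the reduced matrix reads off x = -15, y = 8, z = 2.
print(rref(aug))
```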
A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. 
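Cramer's rule for the same 3×3 example can be reproduced with NumPy determinants (an illustrative sketch, not how production solvers work):

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])

det_A = np.linalg.det(A)        # here det(A) = -4, nonzero, so the rule applies
solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                # replace the i-th column of A by b
    solution.append(np.linalg.det(Ai) / det_A)
# solution ≈ [-15.0, 8.0, 2.0], matching the earlier methods
```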
=== Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. 
In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. 
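Returning to the direct methods just described: the benefit of reusing an LU factorization for several right-hand sides can be sketched as follows. This is a no-pivoting illustration (`lu_no_pivot` and `lu_solve` are illustrative names); it assumes all leading principal minors of A are nonzero, whereas production solvers permute rows for stability:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle factorization A = L U, with unit diagonal on L (no pivoting)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

def lu_solve(L, U, b):
    """Solve L U x = b by forward and back substitution, reusing the factors."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):             # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = [[1.0, 3.0, -2.0], [3.0, 5.0, 6.0], [2.0, 4.0, 3.0]]
L, U = lu_no_pivot(A)                        # factor once...
x1 = lu_solve(L, U, [5.0, 7.0, 8.0])         # ...then solve cheaply per b
x2 = lu_solve(L, U, [1.0, 0.0, 0.0])
```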
One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. 
This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . {\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. 
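This translation structure, together with the Moore–Penrose inverse from the matrix-solution section, can be demonstrated on the earlier underdetermined example; a sketch assuming NumPy:

```python
import numpy as np

# The consistent underdetermined system x + 3y - 2z = 5, 3x + 5y + 6z = 7
# from the earlier "Describing the solution" example.
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])
b = np.array([5.0, 7.0])

p = np.linalg.pinv(A) @ b        # a particular solution of A x = b
_, _, vh = np.linalg.svd(A)
v = vh[-1]                       # spans the null space of A (rank 2, n = 3)
assert np.allclose(A @ v, 0)     # v solves the homogeneous system A x = 0

# The solution set of A x = b is the null space translated by p: every p + t v
# is a solution.
for t in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (p + t * v), b)
```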
== See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A. 
(1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006). Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
Wikipedia:System of polynomial equations#0
A system of polynomial equations (sometimes simply a polynomial system) is a set of simultaneous equations f1 = 0, ..., fh = 0 where the fi are polynomials in several variables, say x1, ..., xn, over some field k. A solution of a polynomial system is a set of values for the xi which belong to some algebraically closed field extension K of k, and make all equations true. When k is the field of rational numbers, K is generally assumed to be the field of complex numbers, because each solution belongs to a field extension of k, which is isomorphic to a subfield of the complex numbers. This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed for being implemented in a computer, emphasis is given to fields k in which computation (including equality testing) is easy and efficient, that is the field of rational numbers and finite fields. Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation. == Definition == A simple example of a system of polynomial equations is x 2 + y 2 − 5 = 0 x y − 2 = 0. {\displaystyle {\begin{aligned}x^{2}+y^{2}-5&=0\\xy-2&=0.\end{aligned}}} Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (-1, -2), (-2, -1). These solutions can easily be checked by substitution, but more work is needed for proving that there are no other solutions. The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions. 
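For this particular example, all four solutions can be recovered by eliminating y: substituting y = 2/x into the first equation and clearing denominators leaves a single univariate polynomial. A sketch assuming NumPy:

```python
import numpy as np

# Eliminating y via y = 2/x and multiplying x^2 + 4/x^2 - 5 = 0 by x^2
# turns the system into the single quartic x^4 - 5x^2 + 4 = 0.
xs = np.roots([1, 0, -5, 0, 4])      # x = ±1, ±2 (up to rounding)
for x in xs:
    y = 2 / x
    # each recovered pair (x, y) satisfies both original equations
    assert abs(x**2 + y**2 - 5) < 1e-9
    assert abs(x * y - 2) < 1e-9
```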
A system of polynomial equations, or polynomial system is a collection of equations f 1 ( x 1 , … , x m ) = 0 ⋮ f n ( x 1 , … , x m ) = 0 , {\displaystyle {\begin{aligned}f_{1}\left(x_{1},\ldots ,x_{m}\right)&=0\\&\;\;\vdots \\f_{n}\left(x_{1},\ldots ,x_{m}\right)&=0,\end{aligned}}} where each fh is a polynomial in the indeterminates x1, ..., xm, with integer coefficients, or coefficients in some fixed field, often the field of rational numbers or a finite field. Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers). A solution of a polynomial system is a tuple of values of (x1, ..., xm) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers, or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero, all complex solutions are sought. Searching for the real or rational solutions is a much more difficult problem that is not considered in this article. The set of solutions is not always finite; for example, the solutions of the system x ( x − 1 ) = 0 x ( y − 1 ) = 0 {\displaystyle {\begin{aligned}x(x-1)&=0\\x(y-1)&=0\end{aligned}}} are a point (x,y) = (1,1) and a line x = 0. Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is Abel–Ruffini theorem). The Barth surface, shown in the figure, is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables. Some of its numerous singular points are visible on the image. They are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is if the coefficients are not specific). 
If it has a finite number of solutions, this number is at most 5³ = 125, by Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and is reached by the Barth surface. == Basic properties and definitions == A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz this means that 1 is a linear combination (with polynomials as coefficients) of the first members of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x³ – 1 = 0, x² – 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1. A system is underdetermined if the number of equations is lower than the number of the variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem. A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional. A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved. Bézout's theorem asserts that a well-behaved system whose equations have degrees d1, ..., dn has at most d1⋅⋅⋅dn solutions. This bound is sharp. 
If all the degrees are equal to d, this bound becomes d^n and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1.) This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound). == What is solving? == The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive-dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable, there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is, the one which generally leads to the fastest computation) is usually the graded reverse lexicographic one (grevlex). If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of algebraic geometry. A natural example of such a question concerning positive-dimensional systems is the following: decide whether a polynomial system over the rational numbers has a finite number of real solutions and compute them. A generalization of this question is to find at least one solution in each connected component of the set of real solutions of a polynomial system.
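The two Gröbner-basis tests just described can be sketched with SymPy (an assumed tool; the article names no software at this point). An inconsistent system has reduced Gröbner basis {1}, while a zero-dimensional one has, for each variable, a leading monomial that is a pure power of that variable:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Inconsistent: the two equations contradict each other,
# so the reduced Groebner basis collapses to {1}.
g_bad = sp.groebner([x + y - 1, x + y - 2], x, y, order="grevlex")
assert list(g_bad.exprs) == [1]

# Zero-dimensional: finitely many complex solutions, detected
# from the leading monomials of the basis.
g_zero = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order="grevlex")
assert g_zero.is_zero_dimensional
```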
The classical algorithm for solving these questions is cylindrical algebraic decomposition, which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples. For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric. A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions. The other way of representing the solutions is said to be algebraic. It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem. == Extensions == === Trigonometric equations === A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas), replacing sin(x) and cos(x) by two new variables s and c and adding the new equation s^2 + c^2 – 1 = 0.
For example, because of the identity cos ( 3 x ) = 4 cos 3 ( x ) − 3 cos ( x ) , {\displaystyle \cos(3x)=4\cos ^{3}(x)-3\cos(x),} solving the equation sin 3 ( x ) + cos ( 3 x ) = 0 {\displaystyle \sin ^{3}(x)+\cos(3x)=0} is equivalent to solving the polynomial system { s 3 + 4 c 3 − 3 c = 0 s 2 + c 2 − 1 = 0. {\displaystyle {\begin{cases}s^{3}+4c^{3}-3c&=0\\s^{2}+c^{2}-1&=0.\end{cases}}} For each solution (c0, s0) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2π. In the case of this simple example, it may be unclear whether the system is easier to solve than the equation or not. On more complicated examples, one lacks systematic methods for solving the equation directly, while software is available for automatically solving the corresponding system. === Solutions in a finite field === When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation x^q – x = 0, it suffices, for restricting the solutions to k, to add the equation xi^q – xi = 0 for each variable xi. === Coefficients in a number field or in a finite field with non-prime order === The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers. For example, if a system contains 2 {\displaystyle {\sqrt {2}}} , a system over the rational numbers is obtained by adding the new variable r2, adding the equation r2^2 – 2 = 0, and replacing 2 {\displaystyle {\sqrt {2}}} by r2 in the other equations.
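A minimal sketch of this substitution, with SymPy as an assumed tool and the illustrative variable name r2 standing for the generator √2:

```python
import sympy as sp

x, r2 = sp.symbols("x r2")

# Original equation over Q(sqrt(2)): x**2 - sqrt(2) = 0.
# Rationalized system: sqrt(2) becomes the new variable r2,
# constrained by the generator equation r2**2 - 2 = 0.
rational_system = [x**2 - r2, r2**2 - 2]

solutions = sp.solve(rational_system, [x, r2], dict=True)

# Every solution of the rationalized system satisfies it exactly...
for s in solutions:
    assert all(sp.simplify(f.subs(s)) == 0 for f in rational_system)

# ...and the branch with r2 = +sqrt(2) encodes the original equation.
assert any(s[r2] == sp.sqrt(2) for s in solutions)
```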
In the case of a finite field, the same transformation always allows one to suppose that the field k has prime order. == Algebraic representation of the solutions == === Regular chains === The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f1(x1), f2(x1, x2), ..., fn(x1, ..., xn) such that, for every i with 1 ≤ i ≤ n: fi is a polynomial in x1, ..., xi only, which has degree di > 0 in xi; the coefficient of xi^di in fi is a polynomial in x1, ..., xi−1 which does not have any common zero with f1, ..., fi−1. To such a regular chain is associated a triangular system of equations { f 1 ( x 1 ) = 0 f 2 ( x 1 , x 2 ) = 0 ⋮ f n ( x 1 , x 2 , … , x n ) = 0. {\displaystyle {\begin{cases}f_{1}(x_{1})=0\\f_{2}(x_{1},x_{2})=0\\\quad \vdots \\f_{n}(x_{1},x_{2},\ldots ,x_{n})=0.\end{cases}}} The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation, which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from fi has degree di and thus that the system has d1 ... dn solutions, provided that there is no multiple root in this resolution process (fundamental theorem of algebra). Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as is the case for the following system, which has three solutions. { x 2 − 1 = 0 ( x − 1 ) ( y − 1 ) = 0 y 2 − 1 = 0. {\displaystyle {\begin{cases}x^{2}-1=0\\(x-1)(y-1)=0\\y^{2}-1=0.\end{cases}}} There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) into regular chains (or regular semi-algebraic systems).
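The back-substitution procedure can be sketched on the three-solution example above. The decomposition into the two regular chains below is worked out by hand for this example, and SymPy is an assumed tool:

```python
import sympy as sp

x, y = sp.symbols("x y")
original = [x**2 - 1, (x - 1) * (y - 1), y**2 - 1]

# One possible decomposition into regular chains (triangular systems),
# computed by hand for this small example.
chains = [
    (x - 1, y**2 - 1),  # yields (1, 1) and (1, -1)
    (x + 1, y - 1),     # yields (-1, 1)
]

solutions = []
for f1, f2 in chains:
    for x0 in sp.solve(f1, x):                  # first equation is univariate in x
        for y0 in sp.solve(f2.subs(x, x0), y):  # second becomes univariate in y
            solutions.append((x0, y0))

assert sorted(solutions) == [(-1, 1), (1, -1), (1, 1)]
for x0, y0 in solutions:
    assert all(f.subs({x: x0, y: y0}) == 0 for f in original)
```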
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists in computing first the Gröbner basis for the graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by the FGLM algorithm and finally applying the Lextriangular algorithm. This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of: The output may involve huge integers which may make the computation and the use of the result problematic. To deduce the numeric values of the solutions from the output, one has to solve univariate polynomials with approximate coefficients, which is a highly unstable problem. The first issue has been solved by Dahan and Schost: among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called the equiprojectable decomposition, depends only on the choice of the coordinates. This allows the use of modular methods for efficiently computing the equiprojectable decomposition. The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma, for which all di but the first one are equal to 1. To obtain such regular chains, one may have to add a further variable, called the separating variable, which is given the index 0. The rational univariate representation, described below, allows computing such a special regular chain, satisfying the Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis. === Rational univariate representation === The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which was introduced by F. Rouillier.
A RUR of a zero-dimensional system consists of a linear combination x0 of the variables, called the separating variable, and a system of equations { h ( x 0 ) = 0 x 1 = g 1 ( x 0 ) / g 0 ( x 0 ) ⋮ x n = g n ( x 0 ) / g 0 ( x 0 ) , {\displaystyle {\begin{cases}h(x_{0})=0\\x_{1}=g_{1}(x_{0})/g_{0}(x_{0})\\\quad \vdots \\x_{n}=g_{n}(x_{0})/g_{0}(x_{0}),\end{cases}}} where h is a univariate polynomial in x0 of degree D and g0, ..., gn are univariate polynomials in x0 of degree less than D. Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties. All but a finite number of linear combinations of the variables are separating variables. When the separating variable is chosen, the RUR exists and is unique. In particular h and the gi are defined independently of any algorithm to compute them. The solutions of the system are in one-to-one correspondence with the roots of h and the multiplicity of each root of h equals the multiplicity of the corresponding solution. The solutions of the system are obtained by substituting the roots of h in the other equations. If h does not have any multiple root then g0 is the derivative of h. For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, y and x + y, is a separating variable. If one chooses t = (x − y)/2 as the separating variable, then the RUR is { t 3 − t = 0 x = t 2 + 2 t − 1 3 t 2 − 1 y = t 2 − 2 t − 1 3 t 2 − 1 . {\displaystyle {\begin{cases}t^{3}-t=0\\x={\frac {t^{2}+2t-1}{3t^{2}-1}}\\y={\frac {t^{2}-2t-1}{3t^{2}-1}}.\\\end{cases}}} The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference from triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities.
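This RUR can be checked mechanically: solving the single univariate polynomial h and substituting its roots into the two rational functions recovers all three solutions. A short sketch with SymPy (an assumed tool):

```python
import sympy as sp

t, x, y = sp.symbols("t x y")

h = t**3 - t                              # univariate polynomial of the RUR
gx = (t**2 + 2*t - 1) / (3*t**2 - 1)      # x as a rational function of t
gy = (t**2 - 2*t - 1) / (3*t**2 - 1)      # y as a rational function of t

system = [x**2 - 1, (x - 1) * (y - 1), y**2 - 1]

solutions = []
for t0 in sp.solve(h, t):                 # roots -1, 0, 1
    x0 = sp.simplify(gx.subs(t, t0))
    y0 = sp.simplify(gy.subs(t, t0))
    solutions.append((x0, y0))
    assert t0 == sp.Rational(1, 2) * (x0 - y0)  # t = (x - y)/2 separates

assert sorted(solutions) == [(-1, 1), (1, -1), (1, 1)]
for x0, y0 in solutions:
    assert all(f.subs({x: x0, y: y0}) == 0 for f in system)
```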
The RUR shares with the equiprojectable decomposition the property of producing an output with coefficients of relatively small size. For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting its roots in rational functions. This allows the production of certified approximations of the solutions to any given precision. Moreover, the univariate polynomial h(x0) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is, the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities. Unlike triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension. == Solving numerically == === General solving algorithms === The general numerical algorithms which are designed for any system of nonlinear equations work also for polynomial systems. However, the specific methods will generally be preferred, as the general methods generally do not allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution. Nevertheless, two methods deserve to be mentioned here. Newton's method may be used if the number of equations is equal to the number of variables. It does not allow one to find all the solutions nor to prove that there is no solution. But it is very fast when starting from a point which is close to a solution. Therefore, it is a basic tool for the homotopy continuation method described below. Optimization is rarely used for solving polynomial systems, but it succeeded, circa 1970, in showing that a system of 81 quadratic equations in 56 variables is not inconsistent.
With the other known methods, this remains beyond the possibilities of modern technology, as of 2022. This method consists simply in minimizing the sum of the squares of the equations. If zero is found as a local minimum, then it is attained at a solution. This method works for overdetermined systems, but provides no information if all the local minima that are found are positive. === Homotopy continuation method === This is a semi-numeric method which assumes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in the last decades. This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by, at least, four different methods and the best value, say N {\displaystyle N} , is kept. In the second step, a system g 1 = 0 , … , g n = 0 {\displaystyle g_{1}=0,\,\ldots ,\,g_{n}=0} of polynomial equations is generated which has exactly N {\displaystyle N} solutions that are easy to compute. This new system has the same number n {\displaystyle n} of variables and the same number n {\displaystyle n} of equations and the same general structure as the system to solve, f 1 = 0 , … , f n = 0 {\displaystyle f_{1}=0,\,\ldots ,\,f_{n}=0} . Then a homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, but other paths may be considered, in particular to avoid some singularities, in the system ( 1 − t ) g 1 + t f 1 = 0 , … , ( 1 − t ) g n + t f n = 0 {\displaystyle (1-t)g_{1}+tf_{1}=0,\,\ldots ,\,(1-t)g_{n}+tf_{n}=0} . The homotopy continuation consists in deforming the parameter t {\displaystyle t} from 0 to 1 and following the N {\displaystyle N} solutions during this deformation. This gives the desired solutions for t = 1 {\displaystyle t=1} .
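The three steps can be sketched in a toy one-variable setting (a hedged NumPy illustration, not any of the solvers discussed here; the function names are invented). The start system g(x) = x³ − 8 has three easily computed roots, and each is followed along the straight-line homotopy to a root of the target f(x) = x³ − 1, with Newton's method as the corrector:

```python
import numpy as np

def track_root(x, f, df, g, dg, steps=100, newton_iters=5):
    """Follow one root of H(x, t) = (1 - t) g(x) + t f(x) from t = 0 to t = 1,
    correcting with Newton's method after each small increment of t."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            H = (1 - t) * g(x) + t * f(x)
            dH = (1 - t) * dg(x) + t * df(x)
            x = x - H / dH
    return x

# Target system: f(x) = x**3 - 1.  Start system: g(x) = x**3 - 8,
# whose three roots (twice the cube roots of unity) are known exactly.
f, df = lambda z: z**3 - 1, lambda z: 3 * z**2
g, dg = lambda z: z**3 - 8, lambda z: 3 * z**2

start_roots = [2 * np.exp(2j * np.pi * k / 3) for k in range(3)]
end_roots = [track_root(z, f, df, g, dg) for z in start_roots]

# Each tracked path ends at a distinct cube root of unity.
assert all(abs(r**3 - 1) < 1e-8 for r in end_roots)
assert min(abs(a - b) for a in end_roots for b in end_roots if a is not b) > 0.5
```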
Following means that, if t 1 < t 2 {\displaystyle t_{1}<t_{2}} , the solutions for t = t 2 {\displaystyle t=t_{2}} are deduced from the solutions for t = t 1 {\displaystyle t=t_{1}} by Newton's method. The difficulty here is to choose the value of t 2 − t 1 {\displaystyle t_{2}-t_{1}} well: if it is too large, Newton's method may converge slowly and may even jump from one solution path to another; if it is too small, the number of steps slows down the method. === Numerically solving from the rational univariate representation === To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable. The roots of the univariate polynomial thus have to be computed at a high precision which may not be defined once for all. There are two algorithms which fulfill this requirement. The Aberth method, implemented in MPSolve, computes all the complex roots to any precision. The Uspensky algorithm of Collins and Akritas, improved by Rouillier and Zimmermann and based on Descartes' rule of signs, computes the real roots, isolated in intervals of arbitrarily small width. It is implemented in Maple (functions fsolve and RootFinding[Isolate]). == Software packages == There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that no knowledge of the method by the user is needed). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers.
The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions represented either (optionally) as intervals of rational numbers or as floating point approximations of arbitrary precision. If the system is not zero-dimensional, this is signaled as an error. Internally, this solver, designed by F. Rouillier, first computes a Gröbner basis and then a rational univariate representation, from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions. The rational univariate representation may be computed with the Maple function Groebner[RationalUnivariateRepresentation]. To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until the solutions remain stable, as the substitution of the roots in the equations for the input variables can be highly unstable. The second solver is PHCpack, written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables. The third solver is Bertini, written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision. In addition to computing zero-dimensional solution sets, both PHCpack and Bertini are capable of working with positive-dimensional solution sets. The fourth solver is the Maple library RegularChains, written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains.
== See also == Elimination theory Systems of polynomial inequalities Triangular decomposition Wu's method of characteristic set == References ==
Wikipedia:Szegő limit theorems#0
In mathematical analysis, the Szegő limit theorems describe the asymptotic behaviour of the determinants of large Toeplitz matrices. They were first proved by Gábor Szegő. == Notation == Let w {\displaystyle w} be a function on [0, 2π] with Fourier coefficients c k {\displaystyle c_{k}} , related by w ( θ ) = ∑ k = − ∞ ∞ c k e i k θ , θ ∈ [ 0 , 2 π ] , {\displaystyle w(\theta )=\sum _{k=-\infty }^{\infty }c_{k}e^{ik\theta },\qquad \theta \in [0,2\pi ],} c k = 1 2 π ∫ 0 2 π w ( θ ) e − i k θ d θ , {\displaystyle c_{k}={\frac {1}{2\pi }}\int _{0}^{2\pi }w(\theta )e^{-ik\theta }\,d\theta ,} and such that the n × n {\displaystyle n\times n} Toeplitz matrices T n ( w ) = ( c k − l ) 0 ≤ k , l ≤ n − 1 {\displaystyle T_{n}(w)=\left(c_{k-l}\right)_{0\leq k,l\leq n-1}} are Hermitian, i.e., T n ( w ) = T n ( w ) ∗ {\displaystyle T_{n}(w)=T_{n}(w)^{\ast }} , which holds exactly when c − k = c k ¯ {\displaystyle c_{-k}={\overline {c_{k}}}} . Then both w {\displaystyle w} and the eigenvalues ( λ m ( n ) ) 0 ≤ m ≤ n − 1 {\displaystyle (\lambda _{m}^{(n)})_{0\leq m\leq n-1}} are real-valued, and the determinant of T n ( w ) {\displaystyle T_{n}(w)} is given by det T n ( w ) = ∏ m = 0 n − 1 λ m ( n ) {\displaystyle \det T_{n}(w)=\prod _{m=0}^{n-1}\lambda _{m}^{(n)}} . == Szegő theorem == Under suitable assumptions the Szegő theorem states that lim n → ∞ 1 n ∑ m = 0 n − 1 F ( λ m ( n ) ) = 1 2 π ∫ 0 2 π F ( w ( θ ) ) d θ {\displaystyle \lim _{n\rightarrow \infty }{\frac {1}{n}}\sum _{m=0}^{n-1}F(\lambda _{m}^{(n)})={\frac {1}{2\pi }}\int _{0}^{2\pi }F(w(\theta ))\,d\theta } for any function F {\displaystyle F} that is continuous on the range of w {\displaystyle w} . In particular, taking F ( x ) = x {\displaystyle F(x)=x} , the arithmetic mean of the eigenvalues λ ( n ) {\displaystyle \lambda ^{(n)}} converges to the mean value of w {\displaystyle w} over [0, 2π].
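A numerical illustration of this limit (a sketch assuming NumPy; the symbol w(θ) = 2 + cos θ and the test function F(x) = x² are chosen only for this example): the Toeplitz matrix is tridiagonal with c₀ = 2 and c₁ = c₋₁ = 1/2, and both sides of the limit approach 9/2.

```python
import numpy as np

# Symbol w(theta) = 2 + cos(theta): Fourier coefficients c_0 = 2, c_{+-1} = 1/2,
# so T_n(w) is a real symmetric tridiagonal Toeplitz matrix.
n = 400
T = 2.0 * np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

# Left-hand side: arithmetic mean of F(eigenvalues) with F(x) = x**2.
eigenvalues = np.linalg.eigvalsh(T)
lhs = np.mean(eigenvalues**2)

# Right-hand side: (1/2pi) * integral of F(w(theta)), via a uniform grid.
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
rhs = np.mean((2.0 + np.cos(theta)) ** 2)

assert abs(rhs - 4.5) < 1e-6   # the exact value of the integral is 9/2
assert abs(lhs - rhs) < 0.01   # the two sides agree as n grows
```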
=== First Szegő theorem === The first Szegő theorem states that, if w > 0 {\displaystyle w>0} and w ∈ L 1 {\displaystyle w\in L^{1}} , then lim n → ∞ det T n ( w ) det T n − 1 ( w ) = exp ( 1 2 π ∫ 0 2 π log w ( θ ) d θ ) . {\displaystyle \lim _{n\to \infty }{\frac {\det T_{n}(w)}{\det T_{n-1}(w)}}=\exp \left({\frac {1}{2\pi }}\int _{0}^{2\pi }\log w(\theta )\,d\theta \right).} The right-hand side is the geometric mean of w {\displaystyle w} (well-defined by the arithmetic-geometric mean inequality). === Second Szegő theorem === Let c ^ k {\displaystyle {\widehat {c}}_{k}} be the Fourier coefficients of log w ∈ L 1 {\displaystyle \log w\in L^{1}} , written as c ^ k = 1 2 π ∫ 0 2 π log ( w ( θ ) ) e − i k θ d θ {\displaystyle {\widehat {c}}_{k}={\frac {1}{2\pi }}\int _{0}^{2\pi }\log(w(\theta ))e^{-ik\theta }\,d\theta } The second (or strong) Szegő theorem states that, if w ≥ 0 {\displaystyle w\geq 0} , then lim n → ∞ det T n ( w ) e ( n + 1 ) c ^ 0 = exp ( ∑ k = 1 ∞ k | c ^ k | 2 ) . {\displaystyle \lim _{n\to \infty }{\frac {\det T_{n}(w)}{e^{(n+1){\widehat {c}}_{0}}}}=\exp \left(\sum _{k=1}^{\infty }k\left|{\widehat {c}}_{k}\right|^{2}\right).} == See also == Trigonometric moment problem Verblunsky's theorem == References ==
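The convergence of the determinant ratio det Tₙ(w)/det Tₙ₋₁(w) to the geometric mean of w can be checked numerically for the symbol w(θ) = 2 + cos θ (a NumPy sketch; the closed form (2 + √3)/2 for this geometric mean is a standard integral evaluation assumed here):

```python
import numpy as np

# Symbol w(theta) = 2 + cos(theta): tridiagonal Toeplitz matrix
# with c_0 = 2 and c_{+-1} = 1/2.
n = 200
T = 2.0 * np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

ratio = np.linalg.det(T) / np.linalg.det(T[:-1, :-1])

# Geometric mean of w, from the integral on a uniform grid;
# its closed form here is (2 + sqrt(3)) / 2.
theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
geometric_mean = np.exp(np.mean(np.log(2.0 + np.cos(theta))))

assert abs(geometric_mean - (2.0 + np.sqrt(3.0)) / 2.0) < 1e-9
assert abs(ratio - geometric_mean) < 1e-6
```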
Wikipedia:Szolem Mandelbrojt#0
Szolem Mandelbrojt (10 January 1899 – 23 September 1983) was a Polish-French mathematician who specialized in mathematical analysis. He was a professor at the Collège de France from 1938 to 1972, where he held the Chair of Analytical Mechanics and Celestial Mechanics. == Biography == Szolem Mandelbrojt was born on 10 January 1899 in Warsaw, Poland, into a Jewish family of Lithuanian descent. He was initially educated in Warsaw, then in 1919 he moved to Kharkov, Ukraine (then USSR) and spent a year as a student of the Russian mathematician Sergei Bernstein. A year later, he emigrated to France and settled in Paris. In subsequent years, he attended the seminars of Jacques Hadamard, Henri Lebesgue, Émile Picard, and others. In 1923, he received a doctorate from the University of Paris on the analytic continuation of the Taylor series. Hadamard was his Ph.D. advisor. In 1924 Mandelbrojt was awarded a Rockefeller Fellowship in the United States. In May 1926 he married Gladys Manuelle Grunwald (born 28 June 1904 in Paris). From 1926 to 1927, he spent a year as an assistant professor at the Rice Institute (now Rice University) in Houston, Texas. In 1928 he returned to France, having received French citizenship in 1927, and was appointed an assistant professor at the University of Lille. The following year he became a full professor at the University of Clermont-Ferrand. In December 1934 Mandelbrojt co-founded the Nicolas Bourbaki group of mathematicians, of which he was a member until World War II. He succeeded Hadamard at Collège de France in 1938 and took up the Chair of Analytical Mechanics and Celestial Mechanics. Mandelbrojt helped several members of his family emigrate from Poland to France in 1936. One of them, his nephew Benoit Mandelbrot, was to discover the Mandelbrot set and coin the word fractal in the 1970s.
In 1939 he fought for France when the country was invaded by the Nazis, then in 1940, along with many scientists helped by Louis Rapkine and the Rockefeller Foundation, Mandelbrojt relocated to the United States, taking up a position at the Rice Institute. In 1944 he joined the scientific committee of the Free French Forces in London, England. In 1945 Mandelbrojt moved back to France and resumed his professional activities at Collège de France, where he remained until his retirement in 1972. In his retirement year he was elected a member of the French Academy of Sciences. Szolem Mandelbrojt died at the age of 84 in Paris, France, on 23 September 1983. == Research == Even though Mandelbrojt was an early member of the Bourbaki group, and he did take part in a number of Bourbaki gatherings until the outbreak of the war, his main research interests were actually quite remote from abstract algebra. As evidenced by his publications (see next), he focused on complex analysis and harmonic analysis, with an emphasis on Dirichlet series, lacunary series, and entire functions. Rather than a Bourbakist, he is perhaps more accurately described as a follower of G. H. Hardy. Together with Norbert Wiener and Torsten Carleman, he can be viewed as a moderate modernizer of classical Fourier analysis. Shmuel Agmon, Jean-Pierre Kahane, Yitzhak Katznelson, Paul Malliavin and George Piranian are among his students. == Selected works == === Books === Hadamard, Jacques; Mandelbrojt, Szolem (1926). La série de Taylor et son prolongement analytique (2nd ed.). Gauthier-Villars. Mandelbrojt, Szolem (1951). "General theorems of closure". Rice Institute Pamphlet (Monograph published as a special issue). XIV (4): 225–352. Mandelbrojt, Szolem (1958). "Composition Theorems". Rice Institute Pamphlet (Monograph published as a special issue). 45 (3). Mandelbrojt, Szolem (1967). Fonctions entières et transformées de Fourier. Applications. Mathematical Society of Japan.
Mandelbrojt, Szolem (1972) [1969]. Dirichlet series. Principles and methods [Séries de Dirichlet. Principes et méthodes]. Reidel. === Lecture notes === Mandelbrojt, Szolem (1927). "Modern researches on the singularities of functions defined by Taylor's series; lectures delivered at the Rice Institute during the academic year 1926-27". Rice Institute Pamphlet (12 articles). 14 (4). Mandelbrojt, Szolem (1935). Séries de Fourier et classes quasi-analytiques de fonctions. Leçons professées à l'Institut Henri Poincaré et à la Faculté des sciences de Clermont-Ferrand. Gauthier-Villars. Mandelbrojt, Szolem (1942). "Analytic Functions and Classes of Infinitely Differentiable Functions. A series of lectures delivered at the Rice Institute during the academic year 1940-41". Rice Institute Pamphlet (17 articles). 29 (1). Mandelbrojt, Szolem (1944). "Dirichlet series. Lectures delivered at the Rice Institute during the academic year 1942-43". Rice Institute Pamphlet (10 articles). 31 (4). Mandelbrojt, Szolem (1952). Séries adhérentes. Régularisation des suites. Applications. Leçons professées au Collège de France et au Rice Institute. Gauthier-Villars. === Articles === Mandelbrojt, M. S. (1932). Les singularités des fonctions analytiques représentées par une série de Taylor (PDF). Mémorial des Sciences Mathématiques. Vol. 54. Gauthier-Villars. Mandelbrojt, Szolem (1936). "Séries lacunaires". Actualités Scientifiques et Industrielles. 305 (Exposés sur la théorie des fonctions, no. 2). Hermann. Mandelbrojt, Szolem (1938). "La régularisation des fonctions". Actualités Scientifiques et Industrielles. 733 (Exposés sur la théorie des fonctions, no. 13). Hermann. Mandelbrojt, Szolem; Ulrich, Floyd Edward (1942). "On a generalization of the problem of quasi-analyticity". Trans. Amer. Math. Soc. 52 (2): 265–282. doi:10.1090/S0002-9947-1942-0007015-4. MR 0007015. Mandelbrojt, Szolem (1944). "Quasi-analyticity and analytic continuation—a general principle". Trans. Amer. Math. Soc. 
55: 96–131. doi:10.1090/S0002-9947-1944-0009635-1. MR 0009635. Mandelbrojt, Szolem; MacLane, Gerald R. (1947). "On functions holomorphic in a strip region, and an extension of Watson's problem". Trans. Amer. Math. Soc. 61 (3): 454–467. doi:10.1090/S0002-9947-1947-0020142-5. MR 0020142. Mandelbrojt, Szolem (1948). "Analytic continuation and infinitely differentiable functions". Bull. Amer. Math. Soc. 54 (3): 239–248. doi:10.1090/S0002-9904-1948-08963-7. MR 0023877. Chandrasekharan, K.; Mandelbrojt, Szolem (1959). "On solutions of Riemann's functional equation". Bull. Amer. Math. Soc. 65 (6): 358–362. doi:10.1090/S0002-9904-1959-10372-4. MR 0111727. Mandelbrojt, Szolem (1966). "Les taubériens généraux de Norbert Wiener". Bull. Amer. Math. Soc. 72 (1): 48–51. doi:10.1090/S0002-9904-1966-11461-1. MR 0184008. Mandelbrojt, Szolem (1967). "Exponentielle associée à un ensemble; transformées de Fourier généralisées". Annales de l'Institut Fourier. 17 (1): 325–351. doi:10.5802/aif.259. MR 0257360. === Thesis === Mandelbrojt, Szolem (1923). Sur les séries de Taylor qui présentent des lacunes (Ph.D. thesis). Paris-Sorbonne University. == Notes == == References == Comité national français de mathématiciens (1981). Szolem Mandelbrojt. Selecta. Gauthier-Villars. == External links == O'Connor, John J.; Robertson, Edmund F., "Szolem Mandelbrojt", MacTutor History of Mathematics Archive, University of St Andrews Szolem Mandelbrojt at the Mathematics Genealogy Project. Szolem Mandelbrojt at Collège de France. Szolem Mandelbrojt at Hathi Trust Digital Library.
Wikipedia:Sébastien Bubeck#0
Sébastien Bubeck (born April 16, 1985) is a French-American computer scientist and mathematician. He was Microsoft's Vice President of Applied Research and led the Machine Learning Foundations group at Microsoft Research Redmond. Bubeck was formerly a professor at Princeton University and a researcher at the University of California, Berkeley. He is known for his contributions to online learning and optimization and, more recently, for the study of deep neural networks, in particular transformer models. Since 2024, he has worked for OpenAI. == Work == Bubeck's work spans a wide variety of topics in machine learning, theoretical computer science and artificial intelligence. Some of his most notable contributions include developing minimax rates for multi-armed bandits and linear bandits, developing an optimal algorithm for bandit convex optimization, and solving long-standing problems in k-server and metrical task systems. With regard to the mathematical theory of neural networks, Bubeck has both introduced and proved the law of robustness, which links the number of parameters of a neural network and its regularity properties. Bubeck has also made contributions to convex optimization, network analysis, and information theory. Bubeck's papers have over 25,000 citations to date. Prior to joining Microsoft Research, Bubeck was an assistant professor at Princeton University in the Department of Operations Research and Financial Engineering. He received his PhD from the Lille 1 University of Science and Technology, and also studied at the École Normale Supérieure de Cachan. Bubeck is the author of the book Convex optimization: Algorithms and complexity (2015).
He has also been on the editorial board of several scientific journals and conferences, including the Journal of the ACM and Neural Information Processing Systems (NeurIPS), and was program committee chair for the 2018 Conference on Learning Theory (COLT). In 2023, Bubeck and his collaborators published a paper that claimed to observe "sparks of artificial general intelligence" in an early version of GPT-4, a large language model developed by OpenAI. The paper presented examples of GPT-4 performing tasks across various domains and modalities, such as mathematics, coding, vision, medicine, and law. The paper sparked wide interest and debate in the scientific community and the popular media, as it challenged the conventional understanding of learning and cognition in AI systems. Bubeck also investigated the potential use of GPT-4 as an AI chatbot for medicine in a paper that evaluated the strengths, weaknesses, and ethical issues of relying on such a tool for medical purposes. In October 2024, Bubeck left Microsoft to join OpenAI. == Honors and awards == Bubeck has received numerous honors and awards for his work, including the Alfred P. Sloan Research Fellowship in Computer Science in 2015, and Best Paper Awards at the Conference on Learning Theory (COLT) in 2016, Neural Information Processing Systems (NeurIPS) in 2018 and 2021 and in the ACM Symposium on Theory of Computing (STOC) 2023. He has also received the Jacques Neveu prize for the best French PhD in Probability/Statistics, the runner-up prize in AfIA's 2011 French AI thesis awards, and one of the two second prizes in the 2010 Gilles Kahn prize for a French PhD in computer science. == Selected publications == Minimax policies for adversarial and stochastic bandits (2009), with Jean-Yves Audibert. Best arm identification in multi-armed bandits (2010), with Jean-Yves Audibert and Rémi Munos. Kernel-based methods for bandit convex optimization (2017), with Yin Tat Lee and Ronen Eldan.
A universal law of robustness via isoperimetry (2020), with Mark Sellke. K-server via multiscale entropic regularization (2018), with Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. Competitively chasing convex bodies (2019), with Yin Tat Lee, Yuanzhi Li, and Mark Sellke. Regret analysis of stochastic and nonstochastic multi-armed bandit problems (2012), with Nicolò Cesa-Bianchi. == References ==
|
Wikipedia:Søren Galatius#0
|
Søren Galatius (born 1 August 1976) is a Danish mathematician who works as a professor of mathematics at the University of Copenhagen. He works in algebraic topology, where one of his most important results concerns the homology of the automorphisms of free groups. He is also known for his joint work with Oscar Randal-Williams on moduli spaces of manifolds, comprising several papers. == Life == Galatius was born in Randers, Denmark. He earned his PhD from Aarhus University in 2004 under the supervision of Ib Madsen. He then joined the Stanford University faculty, first with a temporary position as a Szegő Assistant Professor and then two years later with a tenure-track position, eventually becoming full professor in 2011. He relocated to the University of Copenhagen in 2016. == Recognition == In 2010, Galatius won the Silver Medal of the Royal Danish Academy of Sciences and Letters. In 2012, he became one of the inaugural fellows of the American Mathematical Society. He was an invited speaker at the 2014 International Congress of Mathematicians, speaking about his joint work with Oscar Randal-Williams. In 2017, he won an Elite Research Prize from the Danish Government for his work. In 2022 he was awarded the Clay Research Award jointly with Oscar Randal-Williams. == Selected publications == Galatius, Søren; Tillmann, Ulrike; Madsen, Ib; Weiss, Michael (2009). "The homotopy type of the cobordism category". Acta Mathematica. 202 (2): 195–239. arXiv:math/0605249. doi:10.1007/s11511-009-0036-9. MR 2506750. Galatius, Søren (2011). "Stable homology of automorphism groups of free groups". Annals of Mathematics. 173 (2): 705–768. arXiv:math/0610216. doi:10.4007/annals.2011.173.2.3. S2CID 54829457. Galatius, Søren; Randal-Williams, Oscar (2018). "Homological stability for moduli spaces of high dimensional manifolds. I". Journal of the American Mathematical Society. 31: 215–268. arXiv:1403.2334. doi:10.1090/jams/884. MR 3718454. S2CID 199452925. 
Galatius, Søren; Randal-Williams, Oscar (2017). "Homological stability for moduli spaces of high dimensional manifolds. II". Annals of Mathematics. 186: 127–204. arXiv:1601.00232. doi:10.4007/annals.2017.186.1.4. Galatius, Søren; Randal-Williams, Oscar (2014). "Stable moduli spaces of high-dimensional manifolds". Acta Mathematica. 186 (2): 257–377. arXiv:1201.3527. doi:10.1007/s11511-014-0112-7. S2CID 119170153. == References == == External links == Official website
|
Wikipedia:Sławomir Kołodziej#0
|
Kołodziej (Polish pronunciation: [kɔˈwɔdʑɛi̯]) is a Polish surname meaning "wheelwright". Notable people with the surname include: Dariusz Kołodziej (born 1982), Polish footballer Janusz A. Kołodziej (born 1959), Polish politician Janusz Kołodziej (born 1984), Polish speedway rider Miriam Kolodziejová (born 1997), Czech tennis player Paweł Kołodziej (born 1980), Polish boxer Piast Kołodziej (c. 740–861 AD), Polish semi-legendary figure Ross Kolodziej (born 1978), American football player Sławomir Kołodziej (born 1961), Polish mathematician Władysław Kołodziej (1897–1978), pioneer of modern Paganism in Poland == See also == All pages with titles containing Kolodziej
|
Wikipedia:T. A. Springer#0
|
Tonny Albert Springer (13 February 1926 – 7 December 2011) was a mathematician at Utrecht University who worked on linear algebraic groups, Hecke algebras, and complex reflection groups, and who introduced Springer representations and the Springer resolution. Springer began his undergraduate studies in 1945 at Leiden University and remained there for his graduate work in mathematics, earning his PhD in 1951 under Hendrik Kloosterman with the thesis Over symplectische Transformaties. As a postdoc Springer spent the academic year 1951/1952 at the University of Nancy and then returned to Leiden University, where he was employed until 1955. In 1955 he accepted a lectureship at Utrecht University, where he became professor ordinarius in 1959 and continued in that position until 1991, when he retired as professor emeritus. Springer's visiting professorships included many institutions: the University of Göttingen (1963), the Institute for Advanced Study (1961/1962, 1969, 1983), IHES (1964, 1973, 1975, 1983), the Tata Institute of Fundamental Research (1968, 1980), UCLA (1965/1966), the Australian National University, the University of Sydney, the University of Rome Tor Vergata, the University of Basel, the Erwin Schrödinger Institute in Vienna, and the University of Paris VI. In 1964 Springer was elected to the Royal Netherlands Academy of Arts and Sciences. In 2006 in Madrid he was an invited speaker at the International Congress of Mathematicians with a lecture on Some results on compactifications of semisimple groups. (At the 1962 ICM in Stockholm he made a short contribution, Twisted composition algebras, but was not an invited speaker.) == Publications == Springer, Tonny A. (1998), Jordan Algebras and Algebraic Groups, Classics in Mathematics, Springer-Verlag, ISBN 3-540-63632-3. Reprint of the 1973 edition. Springer, Tonny A.; Veldkamp, Ferdinand D.
(2000), Octonions, Jordan Algebras, and Exceptional Groups, Springer Monographs in Mathematics, Berlin: Springer, ISBN 3-540-66337-1 Springer, Tonny A. (1998), Linear algebraic groups (2nd ed.), Birkhäuser, ISBN 978-0-8176-4021-7; 1st edition. 1981. Springer, Tonny A. (1977), Invariant theory, Lecture Notes in Mathematics, vol. 585, Springer-Verlag == References == Profile Springer's home page. T. A. Springer at the Mathematics Genealogy Project
|
Wikipedia:T. O. Engset#0
|
Tore Olaus Engset (May 8, 1865 – October 1943 in Oslo) was a Norwegian mathematician and engineer who did pioneering work in the field of telephone traffic queuing theory. Tore Olaus Engset was born in Stranda Municipality in Møre og Romsdal, Norway. After he finished school at the age of 18, Engset was admitted to the telegraph school in Stavanger in 1883. He received his certificate within a year and took employment. In his spare time he continued his education, graduating in 1892 and then pursuing university studies. Engset received an M.Sc. in physics and mathematics in 1894 at the University of Oslo, after which he worked at Televerket as an office worker and traffic analyst, and as director general from 1921 to 1922 and from 1930 to 1935. He developed the Engset formula in 1915, before the breakthroughs of A. K. Erlang from 1917. His 1915 manuscript "Om beregningen av vælgere i et automatisk telefonsystem" was not, however, published until 1918. That work was translated into German as "Die Wahrscheinlichkeitsrechnung zur Bestimmung der Wählerzahl in automatischen Fernsprechämtern", in Elektrotechnische Zeitschrift, Heft 31, 1918. An English translation appeared as Arne Myskja, "On the Calculation of Switches in an Automatic Telephone System", in Telektronikk, 94(2):99–142, 1998. He also published a work on atomic physics (1927): "Die Bahnen und die Lichtstrahlung der Wasserstoffelektronen. Ergänzende Betrachtungen über Bahnformen und Strahlungsfrequenzen" (3 parts) in Annalen der Physik, 82 (1927) 1017; 83 (1927) 903; 84 (1927) 880. == References == == Bibliography == Arne Myskja: T. Engset in New Light. The 14th Nordic Teletraffic Seminar (NTS-14), Lyngby, Denmark, August 18–20, 1998. Arne Myskja: The Engset Report of 1915. Summary and Comments. Telektronikk, vol. 94, no. 2, pp. 143–153, 1998. Arne Myskja, Ola Espvik: Tore Olaus Engset - The man behind the formula. Tapir Academic Press, Trondheim, Norway, 2002. ISBN 82-519-1828-6.
|
Wikipedia:T. R. Ramadas#0
|
Trivandrum Ramakrishnan "T. R." Ramadas (born 30 March 1955) is an Indian mathematician who specializes in algebraic and differential geometry, and mathematical physics. He was awarded the Shanti Swarup Bhatnagar Prize for Science and Technology in 1998, the highest science award in India, in the mathematical sciences category. He studied engineering at IIT Kanpur, then joined TIFR as a graduate student in physics, finally changing to mathematics after his interactions with M. S. Narasimhan. He is currently a professor at Chennai Mathematical Institute, Chennai, Tamil Nadu. == Selected publications == "The "Harder-Narasimhan Trace" and Unitarity of the KZ/Hitchin Connection: genus 0", Ann. of Math. 169, 1–39 (2009). (With V.B. Mehta) "Moduli of vector bundles, Frobenius splitting, and invariant theory", Ann. of Math. 144, 269–313 (1996). "Factorisation of generalised theta functions II", Topology 35, 641–654 (1996). (With M.S. Narasimhan) "Factorisation of generalised theta functions I", Invent. Math. 114, 565–624 (1993). (With I.M. Singer and J. Weitsman) "Some comments on Chern Simons gauge theory", Commun. Math. Phys. 126, 409–420 (1989). (With P.K. Mitter) "The two-dimensional O(N) nonlinear σ model: renormalisation and effective actions", Commun. Math. Phys. 122, 575–596 (1989). (With M.S. Narasimhan) "Geometry of SU(2) gauge fields", Commun. Math. Phys. 67, 121–136 (1979). == References ==
|
Wikipedia:Table of Newtonian series#0
|
In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence a n {\displaystyle a_{n}} written in the form f ( s ) = ∑ n = 0 ∞ ( − 1 ) n ( s n ) a n = ∑ n = 0 ∞ ( − s ) n n ! a n {\displaystyle f(s)=\sum _{n=0}^{\infty }(-1)^{n}{s \choose n}a_{n}=\sum _{n=0}^{\infty }{\frac {(-s)_{n}}{n!}}a_{n}} where ( s n ) {\displaystyle {s \choose n}} is the binomial coefficient and ( s ) n {\displaystyle (s)_{n}} is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus. == List == The generalized binomial theorem gives ( 1 + z ) s = ∑ n = 0 ∞ ( s n ) z n = 1 + ( s 1 ) z + ( s 2 ) z 2 + ⋯ . {\displaystyle (1+z)^{s}=\sum _{n=0}^{\infty }{s \choose n}z^{n}=1+{s \choose 1}z+{s \choose 2}z^{2}+\cdots .} A proof for this identity can be obtained by showing that it satisfies the differential equation ( 1 + z ) d ( 1 + z ) s d z = s ( 1 + z ) s . {\displaystyle (1+z){\frac {d(1+z)^{s}}{dz}}=s(1+z)^{s}.} The digamma function: ψ ( s + 1 ) = − γ − ∑ n = 1 ∞ ( − 1 ) n n ( s n ) . {\displaystyle \psi (s+1)=-\gamma -\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n}}{s \choose n}.} The Stirling numbers of the second kind are given by the finite sum { n k } = 1 k ! ∑ j = 0 k ( − 1 ) k − j ( k j ) j n . {\displaystyle \left\{{\begin{matrix}n\\k\end{matrix}}\right\}={\frac {1}{k!}}\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}j^{n}.} This formula is a special case of the kth forward difference of the monomial xn evaluated at x = 0: Δ k x n = ∑ j = 0 k ( − 1 ) k − j ( k j ) ( x + j ) n . {\displaystyle \Delta ^{k}x^{n}=\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}(x+j)^{n}.} A related identity forms the basis of the Nörlund–Rice integral: ∑ k = 0 n ( n k ) ( − 1 ) n − k s − k = n ! 
s ( s − 1 ) ( s − 2 ) ⋯ ( s − n ) = Γ ( n + 1 ) Γ ( s − n ) Γ ( s + 1 ) = B ( n + 1 , s − n ) , s ∉ { 0 , … , n } {\displaystyle \sum _{k=0}^{n}{n \choose k}{\frac {(-1)^{n-k}}{s-k}}={\frac {n!}{s(s-1)(s-2)\cdots (s-n)}}={\frac {\Gamma (n+1)\Gamma (s-n)}{\Gamma (s+1)}}=B(n+1,s-n),s\notin \{0,\ldots ,n\}} where Γ ( x ) {\displaystyle \Gamma (x)} is the Gamma function and B ( x , y ) {\displaystyle B(x,y)} is the Beta function. The trigonometric functions have umbral identities: ∑ n = 0 ∞ ( − 1 ) n ( s 2 n ) = 2 s / 2 cos π s 4 {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}{s \choose 2n}=2^{s/2}\cos {\frac {\pi s}{4}}} and ∑ n = 0 ∞ ( − 1 ) n ( s 2 n + 1 ) = 2 s / 2 sin π s 4 {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}{s \choose 2n+1}=2^{s/2}\sin {\frac {\pi s}{4}}} The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial ( s ) n {\displaystyle (s)_{n}} . The first few terms of the sin series are s − ( s ) 3 3 ! + ( s ) 5 5 ! − ( s ) 7 7 ! + ⋯ {\displaystyle s-{\frac {(s)_{3}}{3!}}+{\frac {(s)_{5}}{5!}}-{\frac {(s)_{7}}{7!}}+\cdots } which can be recognized as resembling the Taylor series for sin x, with (s)n standing in the place of xn. In analytic number theory it is of interest to sum ∑ k = 0 B k z k , {\displaystyle \!\sum _{k=0}B_{k}z^{k},} where B are the Bernoulli numbers. Employing the generating function its Borel sum can be evaluated as ∑ k = 0 B k z k = ∫ 0 ∞ e − t t z e t z − 1 d t = ∑ k = 1 z ( k z + 1 ) 2 . {\displaystyle \sum _{k=0}B_{k}z^{k}=\int _{0}^{\infty }e^{-t}{\frac {tz}{e^{tz}-1}}\,dt=\sum _{k=1}{\frac {z}{(kz+1)^{2}}}.} The general relation gives the Newton series ∑ k = 0 B k ( x ) z k ( 1 − s k ) s − 1 = z s − 1 ζ ( s , x + z ) , {\displaystyle \sum _{k=0}{\frac {B_{k}(x)}{z^{k}}}{\frac {1-s \choose k}{s-1}}=z^{s-1}\zeta (s,x+z),} where ζ {\displaystyle \zeta } is the Hurwitz zeta function and B k ( x ) {\displaystyle B_{k}(x)} the Bernoulli polynomial. 
The series does not converge; the identity holds formally. Another identity is 1 Γ ( x ) = ∑ k = 0 ∞ ( x − a k ) ∑ j = 0 k ( − 1 ) k − j Γ ( a + j ) ( k j ) , {\displaystyle {\frac {1}{\Gamma (x)}}=\sum _{k=0}^{\infty }{x-a \choose k}\sum _{j=0}^{k}{\frac {(-1)^{k-j}}{\Gamma (a+j)}}{k \choose j},} which converges for x > a {\displaystyle x>a} . This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent) f ( x ) = ∑ k = 0 ( x − a h k ) ∑ j = 0 k ( − 1 ) k − j ( k j ) f ( a + j h ) . {\displaystyle f(x)=\sum _{k=0}{{\frac {x-a}{h}} \choose k}\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}f(a+jh).} == See also == Binomial transform List of factorial and binomial topics Nörlund–Rice integral Carlson's theorem == References == Philippe Flajolet and Robert Sedgewick, "Mellin transforms and asymptotics: Finite differences and Rice's integrals", Theoretical Computer Science 144 (1995) pp 101–124.
|
Wikipedia:Tadashi Nakayama (mathematician)#0
|
Tadashi Nakayama or Tadasi Nakayama (中山 正, Nakayama Tadashi, July 26, 1912, Tokyo Prefecture – June 5, 1964, Nagoya) was a mathematician who made important contributions to representation theory. == Career == He received his degrees from Tokyo University and Osaka University and held permanent positions at Osaka University and Nagoya University. He had visiting positions at Princeton University, Illinois University, and Hamburg University. Nakayama's lemma, Nakayama algebras, Nakayama's conjecture and Murnaghan–Nakayama rule are named after him. == Selected works == Nakayama, Tadasi (1939), "On Frobeniusean algebras. I", Annals of Mathematics, Second Series, 40 (3), Annals of Mathematics: 611–633, Bibcode:1939AnMat..40..611N, doi:10.2307/1968946, JSTOR 1968946, MR 0000016 Nakayama, Tadasi (1941), "On Frobeniusean algebras. II" (PDF), Annals of Mathematics, Second Series, 42 (1), Annals of Mathematics: 1–21, doi:10.2307/1968984, JSTOR 1968984, MR 0004237 Tadasi Nakayama. A note on the elementary divisor theory in non-commutative domains. Bull. Amer. Math. Soc. 44 (1938) 719–723. MR1563855 doi:10.1090/S0002-9904-1938-06850-4 Tadasi Nakayama. A remark on representations of groups. Bull. Amer. Math. Soc. 44 (1938) 233–235. MR1563716 doi:10.1090/S0002-9904-1938-06723-7 Tadasi Nakayama. A remark on the sum and the intersection of two normal ideals in an algebra. Bull. Amer. Math. Soc. 46 (1940) 469–472. MR0001967 doi:10.1090/S0002-9904-1940-07235-0 Tadasi Nakayama and Junji Hashimoto. On a problem of G. Birkhoff . Proc. Amer. Math. Soc. 1 (1950) 141–142. MR0035279 doi:10.1090/S0002-9939-1950-0035279-X Tadasi Nakayama. Remark on the duality for noncommutative compact groups . Proc. Amer. Math. Soc. 2 (1951) 849–854. MR0045131 doi:10.1090/S0002-9939-1951-0045131-2 Tadasi Nakayama. Orthogonality relation for Frobenius- and quasi-Frobenius-algebras . Proc. Amer. Math. Soc. 3 (1952) 183–195. MR0049876 doi:10.2307/2032255 Tadasi Nakayama. Galois theory of simple rings . Trans. 
Amer. Math. Soc. 73 (1952) 276–292. MR0049875 doi:10.1090/S0002-9947-1952-0049875-3 Masatosi Ikeda and Tadasi Nakayama. On some characteristic properties of quasi-Frobenius and regular rings . Proc. Amer. Math. Soc. 5 (1954) 15–19. MR0060489 doi:10.1090/S0002-9939-1954-0060489-9 == References == "Obituary: Tadasi Nakayama", Nagoya Mathematical Journal, 27: i–vii. (1 plate), 1966, ISSN 0027-7630, MR 0191789 == External links == O'Connor, John J.; Robertson, Edmund F., "Tadashi Nakayama", MacTutor History of Mathematics Archive, University of St Andrews Tadasi Nakayama at the Mathematics Genealogy Project https://www.math.uni-bielefeld.de/~sek/collect/nakayama.html
|
Wikipedia:Tadeusz Iwaniec#0
|
Tadeusz Iwaniec (born October 9, 1947 in Elbląg) is a Polish-American mathematician, and since 1996 John Raymond French Distinguished Professor of Mathematics at Syracuse University. He and mathematician Henryk Iwaniec are twin brothers. == Awards and honors == Iwaniec was awarded the Prize of the President of the Polish Academy of Sciences in 1980, the Alfred Jurzykowski Award in Mathematics in 1997, the Prix 2001 Institut Henri-Poincaré Gauthier-Villars, and the 2009 Sierpiński Medal of the Polish Mathematical Society and Warsaw University. In 1998 he was elected a foreign member of the Accademia di Scienze Fisiche e Matematiche, Italy, and in 2012 a foreign member of the Finnish Academy of Science and Letters. == References ==
|
Wikipedia:Taivo Arak#0
|
Taivo Arak (2 November 1946, Tallinn – 17 October 2007, Stockholm) was an Estonian mathematician, specializing in probability theory. == Biography == In 1969 he graduated from Leningrad State University. There he received in 1972 his Russian candidate degree (Ph.D.) under I. A. Ibragimov. In 1983 Arak defended his dissertation for his Russian doctorate (higher doctoral degree similar to habilitation). From 1972 to 1981 he worked at the Tallinn University of Technology. From 1981 he worked at the Institute of Cybernetics of the Academy of Sciences of the Estonian SSR. In 1986 he was an Invited Speaker at the International Congress of Mathematicians in Berkeley, California. Most of his research dealt with the theory of probability. == Awards == Markov Prize (1983) - for the series of papers "Равномерные предельные теоремы для сумм независимых случайных величин" (Uniform limit theorems for sums of independent random variables). == Selected publications == with Andrei Yuryevich Zaitsev: Uniform limit theorems for sums of independent random variables. Proceedings of the Steklov Institute of Mathematics, Vol. 174. American Mathematical Soc. 1988. ISBN 9780821831182. with Donatas Surgailis: Arak, T.; Surgailis, D. (February 1989). "Markov fields with polygonal realizations". Probability Theory and Related Fields. 80 (4): 543–579. doi:10.1007/BF00318906. S2CID 120932428. with D. Surgailis: Grigelionis, Bronius (1990). "Markov random graphs and polygonal fields with Y-shaped nodes". Probability theory and mathematical statistics. Proc. 5th Vilnius conference, vol. 1. pp. 57–67. ISBN 9067641286. with Peter Clifford and D. Surgailis: Arak, T.; Clifford, P.; Surgailis, D. (1993). "Point-based polygonal models for random graphs". Advances in Applied Probability. 25 (2): 348–372. doi:10.2307/1427657. JSTOR 1427657. S2CID 120107892. == References == == External links == Арак Тайво Викторович, ras.ru Arak Taivo Viktorovich, list of publications. mathnet.ru
|
Wikipedia:Taj Haider#0
|
Taj Haider, SI (Urdu: تاج حيدر; 8 March 1942 – 8 April 2025) was a left-wing politician, nationalist, playwright, mathematician, versatile scholar, and Marxist intellectual. He was one of the founding members of the Pakistan People's Party (PPP) and was the general-secretary of the PPP from 2023 to 2025, after the office was vacated by Nayyar Hussain Bukhari. A mathematician and scientist by profession, Haider provided vital leadership in the formative years of the clandestine atomic bomb projects in the 1970s. He was also noted for writing political plays for the Pakistan Television (PTV) from 1979 to 1985. == Biography == === Education === Haider was born on 8 March 1942 in Kotah, Rajasthan, British India. His family migrated to the Dominion of Pakistan following the partition of India in 1947. After graduating from a local high school, Haider enrolled at Karachi University in 1959. He studied mathematics there and graduated with a BSc (hons) in mathematics in 1962. In 1965, he earned his MSc in mathematics from the same institution and opted to teach mathematics at a local college, later moving to Karachi University. During his career at Karachi University, Haider primarily taught and focused on ordinary differential equations and topics in multivariable calculus. === PPP and political activism === Attending the 1967 socialist convention, Haider was one of the founding members of the Pakistan Peoples Party (PPP) and committed himself as a vehement supporter of the left-oriented philosophy of Zulfiqar Ali Bhutto. In the 1970s, he played a vital role in formulating the public policy concerning the atomic bomb projects. On multiple occasions, he provided his expertise on taking a moral stance on nuclear weapons initiatives at diplomatic conventions. On nuclear weapons development, Haider stated that "there was a need to aggressively project the peaceful intent of Pakistan's atomic bomb program."
Haider dissociated himself from politics but remained a member of the Pakistan Mathematical Society, and shifted towards writing political dramas at the Pakistan Television (PTV) in 1979. The PTV aired various political dramas written by Haider until 1985, when he renewed his association with the PPP. Between 1990 and 2000, he contributed to PPP-initiated industrial projects such as the establishment of the Heavy Mechanical Complex (HMC), the Hub Dam and various other social programmes. He was elected to the Senate of Pakistan in 1995. In 2001, Haider returned to his literary activities after rejoining the PTV, and penned two political drama serials for the PTV which were aired in 2003. In 2004, he returned to politics in opposition to President of Pakistan Pervez Musharraf over the issue of nuclear proliferation. He bitterly criticised the United States over the sanctions on KRL and was one of the noted politicians expressing discontent with the US, along with Raza Rabbani, in 2004. In the nuclear proliferation case, Haider publicly defended Abdul Qadeer Khan and condemned Information Minister Rashid Ahmad's statement acquitting former Prime Minister Benazir Bhutto. Ultimately, he called for a parliamentary inquiry into the issue, and questioned the involvement of President General Pervez Musharraf in the proliferation case. In 2006, Haider received the PTV Award for Best Playwright (Serial), which he accepted in a televised ceremony. === Writing and philosophy === Haider wrote extensively on nuclear policy issues, left-wing ideas, and literary and political philosophy. His more recent writings included support for social democracy in the country and a balance of power among state institutions. In literary and political circles, he wrote critical articles against military dictatorship, specifically the policies enforced by the conservative President General Zia-ul-Haq throughout the 1980s.
Haider opposed the ethnically based politics of the leader of the Muttahida Qaumi Movement (MQM), Altaf Hussain, based in Karachi, reportedly stating on one occasion: "We were not Mohajirs but Urdu-speaking citizens of this province and this country. Our mother-tongue was the official and national language of Pakistan and it would be wrong and degrading to consider ourselves as lesser citizens or Mohajirs." == Death == Haider was admitted to the intensive care unit of Karachi's Ziauddin Hospital while battling cancer. He died on 8 April 2025, at the age of 83. == Honors and awards == Sitara-e-Imtiaz (Star of Excellence) Award by the President of Pakistan (2013). 13th PTV Awards, Best Playwright (Serial) award (2006) Selected articles Haider, Taj. "CTBT Security Perspectives", Dawn Newspapers, 27 March 2000. Haider, Taj. "Setting the PPP record straight", Express Tribune, 2013. Haider, Taj. "Why the PPP is boycotting the presidential election", 16 July 2013 Television plays Jinhein Raaste Main Khabar Hui Lab-e-Darya == See also == Pakistan Peoples Party Left-wing politics Pakistan Academy of Letters Pakistan Mathematical Society Pakistan and its Nuclear Deterrent Program == References == == External links == Pakistan Peoples Party
|
Wikipedia:Tak (function)#0
|
In computer science, the Tak function is a recursive function, named after Ikuo Takeuchi. It is defined as follows: τ ( x , y , z ) = { τ ( τ ( x − 1 , y , z ) , τ ( y − 1 , z , x ) , τ ( z − 1 , x , y ) ) if y < x z otherwise {\displaystyle \tau (x,y,z)={\begin{cases}\tau (\tau (x-1,y,z),\tau (y-1,z,x),\tau (z-1,x,y))&{\text{if }}y<x\\z&{\text{otherwise}}\end{cases}}} This function is often used as a benchmark for languages with optimization for recursion. == tak() vs. tarai() == The original definition by Takeuchi was as follows: tarai is short for たらい回し (tarai mawashi, "to pass around") in Japanese. John McCarthy named this function tak() after Takeuchi. However, in certain later references, the y somehow got turned into the z. This is a small but significant difference, because the original version benefits significantly from lazy evaluation. Though written in exactly the same manner as the others, the Haskell code below runs much faster. One can easily accelerate this function via memoization, yet lazy evaluation still wins. The best known way to optimize tarai is to use a mutually recursive helper function as follows. Here is an efficient implementation of tarai() in C: Note the additional check for (x <= y) before z (the third argument) is evaluated, avoiding unnecessary recursive evaluation. == References == == External links == Weisstein, Eric W. "TAK Function". MathWorld. TAK Function
|
Wikipedia:Takao Hayashi#0
|
Takao Hayashi (born 1949) is a Japanese mathematics educator and historian of mathematics specializing in Indian mathematics. Hayashi was born in Niigata, Japan. He obtained a Bachelor of Science degree from Tohoku University, Sendai, Japan in 1974, a Master of Arts degree from Tohoku University, Sendai, Japan in 1976 and a postgraduate degree from Kyoto University, Japan in 1979. He secured the Doctor of Philosophy degree from Brown University, USA in 1985 under the guidance of David Pingree. He was a researcher at the Mehta Research Institute for Mathematics and Mathematical Physics, Allahabad, India during 1982–1983, and a lecturer at Kyoto Women's College during 1985–1987. He joined Doshisha University, Kyoto as a lecturer in history of science in 1986 and was promoted to professor in 1995. He has also worked in various universities in Japan in different capacities. == Publications == Hayashi has a large number of research publications relating to the history of Indian mathematics. He has also contributed chapters to several encyclopedic publications. The books he has published include: The Bakhshali Manuscript: An Ancient Indian Mathematical Treatise, Egbert Forsten Publishing, 1995 (jointly with S. R. Sarma, Takanori Kusuba and Michio Yano), Gaṇitasārakaumudī: The Moonlight of the Essence of Mathematics by Thakkura Pherū, Manohar Publishers and Distributors, 2009 Kuṭṭā̄kāraśiromaṇi of Devarāja: Sanskrit Text with English Translation, Indian National Science Academy, 2012 Gaṇitamañjarī of Gaṇeśa, Indian National Science Academy, 2013 (jointly with Clemency Montelle, K. Ramasubramanian) Bhāskara-prabhā, Springer Singapore, 2018 == Awards/Prizes == The awards and prizes conferred on Hayashi include: The Salomon Reinach Foundation Prize, Institut de France (2001) Kuwabara Prize, the History of Mathematics Society of Japan (2005) Publication Prize, Mathematical Society of Japan (2005) == References ==
|
Wikipedia:Tamer Başar#0
|
Mustafa Tamer Başar (born January 19, 1946) is a control and game theorist who is the Swanlund Endowed Chair and Center for Advanced Study Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, USA. He is also the Director of the Center for Advanced Study (since 2014). == Education == Tamer Başar received a B.S. in Electrical Engineering from Boğaziçi University (formerly known as Robert College) at Bebek, in Istanbul, Turkey, in 1969, and M.S., M.Phil., and Ph.D. degrees in engineering and applied science from Yale University, in 1970, 1971 and 1972, respectively. == Academic life == He joined the Department of Electrical and Computer Engineering at the University of Illinois at Urbana–Champaign in 1981. He was the founding president of the International Society of Dynamic Games during 1990–1994, the president of the IEEE Control Systems Society in 2000, and the president of the American Automatic Control Council during 2010–2011. He received the Medal of Science of Turkey in 1993, the H.W. Bode Lecture Prize of the IEEE Control Systems Society in 2004, the Giorgio Quazza Medal of the International Federation of Automatic Control in 2005, the Richard E. Bellman Control Heritage Award in 2006, the Isaacs Award of the International Society of Dynamic Games in 2010, and the IEEE Control Systems Award in 2014. He was elected as a member of the National Academy of Engineering in 2000 in Electronics, Communication & Information Systems Engineering and Industrial, Manufacturing & Operational Systems Engineering for the development of dynamic game theory and its application to robust control of systems with uncertainty. He is a Fellow of IEEE, IFAC, and SIAM.
== Honorary degrees and chairs == He has been awarded Honorary Doctor of Science degrees and Honorary Professorships from: Honorary Professorship, Shandong University, Jinan, China, 2019 Honorary Chair Professorship from Tsinghua University, Beijing, China in 2014 Honorary Doctorate (Doctor Honoris Causa) from Boğaziçi University, Istanbul, Turkey in 2012 Honorary Doctorate from the National Academy of Sciences of Azerbaijan in 2011 Honorary Professorship from Northeastern University, Shenyang, China in 2008 Honorary Doctorate (Doctor Honoris Causa) from Doğuş University, Istanbul, Turkey in 2007 Swanlund Endowed Chair Professorship at UIUC in 2007 == Research areas == His research interests include optimal, robust, and nonlinear control; large-scale systems; dynamic games; stochastic control; estimation theory; stochastic processes; and mathematical economics. == Awards == AAA&S Member (2023) IEEE Control Systems Award (2014) Honorary Chair Professorship from Tsinghua University, Beijing, China (2014) Honorary Doctorate (Doctor Honoris Causa) from Boğaziçi University, Istanbul (2012) SIAM Fellow (2012) Honorary Doctorate from the National Academy of Sciences of Azerbaijan (2011) Isaacs Award of ISDG (2010, 2014) Honorary Professorship from Northeastern University, Shenyang, China (2008) Swanlund Endowed Chair at UIUC (2007) Honorary Doctorate (Doctor Honoris Causa) from Doğuş University, Istanbul (2007) Richard E. Bellman Control Heritage Award (2006) Giorgio Quazza Medal of IFAC (2005) Outstanding Service Award of IFAC (2005) IFAC Fellow (2005) Center for Advanced Study Professorship at UIUC (2005) Hendrik Wade Bode Lecture Prize of the IEEE Control Systems Society (2004) Tau Beta Pi Daniel C. Drucker Eminent Faculty Award of the College of Engineering of UIUC (2004) Elected to the National Academy of Engineering (of the USA) (2000) IEEE Millennium Medal (2000) Fredric G. and Elizabeth H. 
Nearing Distinguished Professorship at UIUC (1998) Axelby Outstanding Paper Award (1995) Distinguished Member Award of the IEEE Control Systems Society (1993) Medal of Science of Turkey (1993) IEEE Fellow (1983) == See also == List of game theorists List of members of the National Academy of Engineering (Electronics) == References ==
|
Wikipedia:Tan Eng Chye#0
|
Tan Eng Chye (simplified Chinese: 陈永财; traditional Chinese: 陳永財; pinyin: Chén Yǒngcái) is a Singaporean mathematician and university administrator who has been serving as the third president of the National University of Singapore since 2018. Prior to his presidency, he served as the deputy president of academic affairs and provost at the National University of Singapore. == Education == Tan attended Raffles Institution between 1974 and 1979 before graduating from the National University of Singapore in 1985 with a Bachelor of Science (First Class Honours) degree in mathematics. He later went on to obtain his PhD from Yale University in 1989, under the guidance of Roger Howe. == Career == He joined NUS as a faculty member in the Department of Mathematics in 1985, as a Senior Tutor. In June 2003, he was appointed Dean of the Faculty of Science, a post he held until March 2007. Until 2017, he served as NUS' Deputy President (Academic Affairs) and Provost. Tan Eng Chye's research interests are representation theory of Lie groups and Lie algebras, invariant theory and algebraic combinatorics. In collaboration with Roger Howe, he has written a graduate-level textbook on non-Abelian harmonic analysis. He has also been active in promoting mathematics, having established the Singapore Mathematical Society Enrichment Programmes in 1994, revamped the Singapore Mathematical Olympiad in 1995 to allow more participation from students, and initiated a series of project teaching workshops for teachers in 1998. He served as president of the Singapore Mathematical Society from 2001 to 2005 and President of the South East Asian Mathematical Society from 2004 to 2005. === 2018–present: NUS presidency === On 28 July 2017, Tan was named as the next president of NUS, taking over from Tan Chorh Chuan. He assumed office at the start of 2018.
Along with the appointment, he was also appointed to A*STAR's board, taking the seat meant for the university's president. In 2020, NUS raised US$300 million through its first green bond. In the same year, it established a research institute called the Asian Institute of Digital Finance along with the Monetary Authority of Singapore and National Research Foundation. In 2020, Tan said in an interview that he had plans for NUS to "tear down structures that inhibit interdisciplinarity", with Professor Joanne Roberts of Yale-NUS College commenting that there were similarities between Yale-NUS and Tan's plans. On 22 September 2020, NUS unveiled its plans for an interdisciplinary college, the College of Humanities and Sciences, allowing students to take courses from both the Faculty of Science and the Faculty of Arts and Social Sciences. The new college admitted its first intake in 2021. As part of a broader plan to introduce interdisciplinary colleges, in 2021, Tan announced that Yale-NUS College would be closed by 2025, with the 2021 intake of freshmen being the last. The college would also be merged with NUS' University Scholars Programme to offer a new curriculum. The decision was unilaterally made by NUS, and came as a surprise to Yale-NUS' students and faculty, NUS' faculty, and Yale. More than 10,000 people had signed a petition calling for the reversal of the decision. Questions about the decision filed in the Singapore Parliament by various members of parliament were answered on 13 September 2021.
Tan was conferred the title of Knight of the French Order of the Legion of Honour on 5 July 2022, in recognition of his distinguished contributions in education and research. == Selected works == Howe, Roger; Tan, Eng-Chye (1992). Non-Abelian harmonic analysis. Applications of SL(2,R). Universitext. Springer-Verlag, New York. doi:10.1007/978-1-4613-9200-2. ISBN 0-387-97768-6. Howe, Roger E.; Tan, Eng-Chye (1993). "Homogeneous Functions on Light Cones: the Infinitesimal Structure of some Degenerate Principal Series Representations". Bulletin of the American Mathematical Society. 28: 1–75. arXiv:math/9301214. doi:10.1090/S0273-0979-1993-00360-4. Aslaksen, Helmer; Tan, Eng-Chye; Zhu, Chen-Bo (1995). "Invariant theory of special orthogonal groups". Pacific Journal of Mathematics. 168 (2): 207–215. doi:10.2140/pjm.1995.168.207. Li, Jian-Shu; Paul, Annegret; Tan, Eng-Chye; Zhu, Chen-Bo (2003). "The explicit duality correspondence of (Sp(p,q),O∗(2n))". Journal of Functional Analysis. 200 (1): 71–100. doi:10.1016/S0022-1236(02)00079-4. Howe, Roger; Tan, Eng-Chye; Willenbring, Jeb F. (2005). "Stable branching rules for classical symmetric pairs". Transactions of the American Mathematical Society. 357 (4): 1601–1627. doi:10.1090/S0002-9947-04-03722-5. Howe, Roger; Jackson, Steven; Lee, Soo Teck; Tan, Eng-Chye; Willenbring, Jeb (2009). "Toric degeneration of branching algebras". Advances in Mathematics. 220 (6): 1809–1841. doi:10.1016/j.aim.2008.11.010. == References == == External links == Tan Eng Chye Personal Web Page The Mathematics Genealogy Project – Eng-Chye Tan National University of Singapore President Biography
|
Wikipedia:Tan Lei#0
|
Tan Lei (Chinese: 谭蕾; 18 March 1963 – 1 April 2016) was a mathematician specialising in complex dynamics and functions of complex numbers. She is best known for her contributions to the study of the Mandelbrot set and Julia set. == Career == After gaining her PhD in Mathematics in 1986 at University of Paris-Sud, Orsay, Tan worked as an assistant researcher in Geneva. She then conducted postdoctoral projects at the Max Planck Institute for Mathematics and University of Bremen until 1989, when she was made a lecturer at Ecole Normale Superieure de Lyon in France. Tan held a research position at University of Warwick from 1995 to 1999, before becoming a senior lecturer at Cergy-Pontoise University. She was made professor at University of Angers in 2009. == Mathematical work == Tan obtained important results about the Julia and Mandelbrot sets, in particular investigating their fractality and the similarities between the two. For example, she showed that at the Misiurewicz points these sets are asymptotically similar through scaling and rotation. She constructed examples of polynomials whose Julia sets are homeomorphic to the Sierpiński carpet and which are disconnected. She contributed to other areas of complex dynamics. She also wrote some surveys and popularisation work around her research topics. == Legacy == A conference in Tan's memory was held in Beijing, China, in May 2016. == Publications == === Thesis === Tan, Lei (1986). Accouplements des polynômes quadratiques complexes. Comptes Rendus de l'Académie des Sciences (PhD). Vol. 302. Paris. === Books === Tan, Lei, ed. (2000). The Mandelbrot Set, Theme and Variations. London Mathematical Society Lecture Note Series. Vol. 274. Cambridge: Cambridge University Press. ISBN 9780521774765. === Articles === == References ==
|
Wikipedia:Tang Shunzhi#0
|
Tang Shunzhi (traditional Chinese: 唐順之; simplified Chinese: 唐顺之; pinyin: Táng Shùnzhī) was a Chinese engineer, mathematician, statesman, and martial artist in the Ming dynasty. == Biography == He was born in Wujin District, Nanzhili Province. At first he was educated at home; he then began preparing for the state examinations. At this time, he became interested in mathematics, especially the works of Islamic algebraists. In 1529, he successfully passed the capital's huishi exam and received the degree of gongsheng. He was offered a position at the Hanlin Imperial Academy, but Tang chose to serve in the military department. Later, he received the position of Right Censor-in-Chief (右僉都御史). In 1533 he became a member of Huanling, where he organized archival records. However, due to illness, he left public service for some time. After recovering, he returned to the imperial court. He then received the post of Governor of Fengyang County (in modern Anhui Province) to strengthen the fight against pirates. During this service, Tang Shunzhi died in Tongzhou, having previously achieved success in suppressing the pirates. Posthumously, he was given the name Xiangwen. == Mathematics == Tang Shunzhi wrote several works on methods for measuring the elements of a circle. He wrote five books: Gougu Cewang Lun (勾股測望論, “Considerations Concerning Measurement at Distances of the Major and Minor Legs”), Gougu Rong Fangyuan Lun (勾股容方圓論, “Discourse on the Circle and the Square, what the larger and smaller legs contain"), Fenfa Lun (分法論, "Reflections on Methods of Distribution"), Liufen Lun (六分論, "Reflections on Division by Six"), Hushi Lun (弧矢論, “Judgements about the arc and chord”), of which the last one is the most important. == References == === Citations === === Sources === "唐順之/Tang Shunzhi". China Biographical Database Project (CBDB). Harvard University. "Tang, Shun zhi (1507-1560)". Catalogue Général. Bibliothèque nationale de France.
|
Wikipedia:Tangent half-angle formula#0
|
In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle. == Formulae == The tangent of half an angle is the stereographic projection of the circle through the point at angle π {\textstyle \pi } radians onto the line through the angles ± π 2 {\textstyle \pm {\frac {\pi }{2}}} . Tangent half-angle formulae include tan 1 2 ( η ± θ ) = tan 1 2 η ± tan 1 2 θ 1 ∓ tan 1 2 η tan 1 2 θ = sin η ± sin θ cos η + cos θ = − cos η − cos θ sin η ∓ sin θ , {\displaystyle {\begin{aligned}\tan {\tfrac {1}{2}}(\eta \pm \theta )&={\frac {\tan {\tfrac {1}{2}}\eta \pm \tan {\tfrac {1}{2}}\theta }{1\mp \tan {\tfrac {1}{2}}\eta \,\tan {\tfrac {1}{2}}\theta }}={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}=-{\frac {\cos \eta -\cos \theta }{\sin \eta \mp \sin \theta }}\,,\end{aligned}}} with simpler formulae when η is known to be 0, π/2, π, or 3π/2 because sin(η) and cos(η) can be replaced by simple constants. In the reverse direction, the formulae include sin α = 2 tan 1 2 α 1 + tan 2 1 2 α cos α = 1 − tan 2 1 2 α 1 + tan 2 1 2 α tan α = 2 tan 1 2 α 1 − tan 2 1 2 α . {\displaystyle {\begin{aligned}\sin \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\cos \alpha &={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\tan \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.\end{aligned}}} == Proofs == === Algebraic proofs === Using the angle addition and subtraction formulae for both the sine and cosine one obtains sin ( a + b ) + sin ( a − b ) = 2 sin a cos b cos ( a + b ) + cos ( a − b ) = 2 cos a cos b . 
{\displaystyle {\begin{aligned}\sin(a+b)+\sin(a-b)&=2\sin a\cos b\\[15mu]\cos(a+b)+\cos(a-b)&=2\cos a\cos b\,.\end{aligned}}} Setting a = 1 2 ( η + θ ) {\textstyle a={\tfrac {1}{2}}(\eta +\theta )} and b = 1 2 ( η − θ ) {\displaystyle b={\tfrac {1}{2}}(\eta -\theta )} and substituting yields sin η + sin θ = 2 sin 1 2 ( η + θ ) cos 1 2 ( η − θ ) cos η + cos θ = 2 cos 1 2 ( η + θ ) cos 1 2 ( η − θ ) . {\displaystyle {\begin{aligned}\sin \eta +\sin \theta =2\sin {\tfrac {1}{2}}(\eta +\theta )\,\cos {\tfrac {1}{2}}(\eta -\theta )\\[15mu]\cos \eta +\cos \theta =2\cos {\tfrac {1}{2}}(\eta +\theta )\,\cos {\tfrac {1}{2}}(\eta -\theta )\,.\end{aligned}}} Dividing the sum of sines by the sum of cosines gives sin η + sin θ cos η + cos θ = tan 1 2 ( η + θ ) . {\displaystyle {\frac {\sin \eta +\sin \theta }{\cos \eta +\cos \theta }}=\tan {\tfrac {1}{2}}(\eta +\theta )\,.} Also, a similar calculation starting with sin ( a + b ) − sin ( a − b ) {\displaystyle \sin(a+b)-\sin(a-b)} and cos ( a + b ) − cos ( a − b ) {\displaystyle \cos(a+b)-\cos(a-b)} gives − cos η − cos θ sin η − sin θ = tan 1 2 ( η + θ ) . {\displaystyle -{\frac {\cos \eta -\cos \theta }{\sin \eta -\sin \theta }}=\tan {\tfrac {1}{2}}(\eta +\theta )\,.} Furthermore, using double-angle formulae and the Pythagorean identity 1 + tan 2 α = 1 / cos 2 α {\textstyle 1+\tan ^{2}\alpha =1{\big /}\cos ^{2}\alpha } gives sin α = 2 sin 1 2 α cos 1 2 α = 2 sin 1 2 α cos 1 2 α / cos 2 1 2 α 1 + tan 2 1 2 α = 2 tan 1 2 α 1 + tan 2 1 2 α {\displaystyle \sin \alpha =2\sin {\tfrac {1}{2}}\alpha \cos {\tfrac {1}{2}}\alpha ={\frac {2\sin {\tfrac {1}{2}}\alpha \,\cos {\tfrac {1}{2}}\alpha {\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}} cos α = cos 2 1 2 α − sin 2 1 2 α = ( cos 2 1 2 α − sin 2 1 2 α ) / cos 2 1 2 α 1 + tan 2 1 2 α = 1 − tan 2 1 2 α 1 + tan 2 1 2 α . 
{\displaystyle \cos \alpha =\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha ={\frac {\left(\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha \right){\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.} Taking the quotient of the formulae for sine and cosine yields tan α = 2 tan 1 2 α 1 − tan 2 1 2 α . {\displaystyle \tan \alpha ={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.} === Geometric proofs === Applying the formulae derived above to the rhombus figure on the right, it is readily shown that tan 1 2 ( a + b ) = sin 1 2 ( a + b ) cos 1 2 ( a + b ) = sin a + sin b cos a + cos b . {\displaystyle \tan {\tfrac {1}{2}}(a+b)={\frac {\sin {\tfrac {1}{2}}(a+b)}{\cos {\tfrac {1}{2}}(a+b)}}={\frac {\sin a+\sin b}{\cos a+\cos b}}.} In the unit circle, application of the above shows that t = tan 1 2 φ {\textstyle t=\tan {\tfrac {1}{2}}\varphi } . By similarity of triangles, t sin φ = 1 1 + cos φ . {\displaystyle {\frac {t}{\sin \varphi }}={\frac {1}{1+\cos \varphi }}.} It follows that t = sin φ 1 + cos φ = sin φ ( 1 − cos φ ) ( 1 + cos φ ) ( 1 − cos φ ) = 1 − cos φ sin φ . {\displaystyle t={\frac {\sin \varphi }{1+\cos \varphi }}={\frac {\sin \varphi (1-\cos \varphi )}{(1+\cos \varphi )(1-\cos \varphi )}}={\frac {1-\cos \varphi }{\sin \varphi }}.} == The tangent half-angle substitution in integral calculus == In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable t {\displaystyle t} . These identities are known collectively as the tangent half-angle formulae because of the definition of t {\displaystyle t} . These identities can be useful in calculus for converting rational functions in sine and cosine to functions of t in order to find their antiderivatives. 
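The formulae expressing sin α, cos α, and tan α through t = tan(½α) can be spot-checked numerically. A minimal sketch in Python (an editor's illustration; the function names are invented):

```python
import math

def half_angle_sin(alpha):
    # sin(alpha) expressed through t = tan(alpha/2)
    t = math.tan(alpha / 2)
    return 2 * t / (1 + t * t)

def half_angle_cos(alpha):
    # cos(alpha) expressed through t = tan(alpha/2)
    t = math.tan(alpha / 2)
    return (1 - t * t) / (1 + t * t)

def half_angle_tan(alpha):
    # tan(alpha) expressed through t = tan(alpha/2)
    t = math.tan(alpha / 2)
    return 2 * t / (1 - t * t)

# Check against the library functions at a few generic angles
# (avoiding odd multiples of pi, where tan(alpha/2) is undefined).
for alpha in (0.3, 1.0, 2.5, -1.2):
    assert math.isclose(half_angle_sin(alpha), math.sin(alpha), rel_tol=1e-12)
    assert math.isclose(half_angle_cos(alpha), math.cos(alpha), rel_tol=1e-12)
    assert math.isclose(half_angle_tan(alpha), math.tan(alpha), rel_tol=1e-12)
```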
Geometrically, the construction goes like this: for any point (cos φ, sin φ) on the unit circle, draw the line passing through it and the point (−1, 0). This line crosses the y-axis at some point y = t. One can show using simple geometry that t = tan(φ/2). The equation for the drawn line is y = (1 + x)t. The equation for the intersection of the line and circle is then a quadratic equation involving t. The two solutions to this equation are (−1, 0) and (cos φ, sin φ). This allows us to write the latter as rational functions of t (solutions are given below). The parameter t represents the stereographic projection of the point (cos φ, sin φ) onto the y-axis with the center of projection at (−1, 0). Thus, the tangent half-angle formulae give conversions between the stereographic coordinate t on the unit circle and the standard angular coordinate φ. Then we have sin φ = 2 t 1 + t 2 , cos φ = 1 − t 2 1 + t 2 , tan φ = 2 t 1 − t 2 cot φ = 1 − t 2 2 t , sec φ = 1 + t 2 1 − t 2 , csc φ = 1 + t 2 2 t , {\displaystyle {\begin{aligned}&\sin \varphi ={\frac {2t}{1+t^{2}}},&&\cos \varphi ={\frac {1-t^{2}}{1+t^{2}}},\\[8pt]&\tan \varphi ={\frac {2t}{1-t^{2}}}&&\cot \varphi ={\frac {1-t^{2}}{2t}},\\[8pt]&\sec \varphi ={\frac {1+t^{2}}{1-t^{2}}},&&\csc \varphi ={\frac {1+t^{2}}{2t}},\end{aligned}}} and e i φ = 1 + i t 1 − i t , e − i φ = 1 − i t 1 + i t . {\displaystyle e^{i\varphi }={\frac {1+it}{1-it}},\qquad e^{-i\varphi }={\frac {1-it}{1+it}}.} Both this expression of e i φ {\displaystyle e^{i\varphi }} and the expression t = tan ( φ / 2 ) {\displaystyle t=\tan(\varphi /2)} can be solved for φ {\displaystyle \varphi } . Equating these gives the arctangent in terms of the natural logarithm arctan t = − i 2 ln 1 + i t 1 − i t . {\displaystyle \arctan t={\frac {-i}{2}}\ln {\frac {1+it}{1-it}}.} In calculus, the tangent half-angle substitution is used to find antiderivatives of rational functions of sin φ and cos φ.
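As a worked illustration (an editor's sketch, not part of the article): with t = tan(φ/2), so that dφ = 2 dt/(1 + t²) and cos φ = (1 − t²)/(1 + t²), the integral ∫₀^{π/2} dφ/(2 + cos φ) becomes the rational integral ∫₀¹ 2 dt/(3 + t²), and numerical quadrature confirms the agreement:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Original integrand in phi, over [0, pi/2]
direct = simpson(lambda p: 1 / (2 + math.cos(p)), 0.0, math.pi / 2)

# After t = tan(phi/2): integrand 2/(3 + t^2), over [tan 0, tan(pi/4)] = [0, 1]
substituted = simpson(lambda t: 2 / (3 + t * t), 0.0, 1.0)

# Closed form: (2/sqrt(3)) * arctan(t/sqrt(3)), evaluated from 0 to 1
exact = (2 / math.sqrt(3)) * math.atan(1 / math.sqrt(3))

assert math.isclose(direct, substituted, rel_tol=1e-9)
assert math.isclose(direct, exact, rel_tol=1e-9)
```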
Differentiating t = tan 1 2 φ {\displaystyle t=\tan {\tfrac {1}{2}}\varphi } gives d t d φ = 1 2 sec 2 1 2 φ = 1 2 ( 1 + tan 2 1 2 φ ) = 1 2 ( 1 + t 2 ) {\displaystyle {\frac {dt}{d\varphi }}={\tfrac {1}{2}}\sec ^{2}{\tfrac {1}{2}}\varphi ={\tfrac {1}{2}}(1+\tan ^{2}{\tfrac {1}{2}}\varphi )={\tfrac {1}{2}}(1+t^{2})} and thus d φ = 2 d t 1 + t 2 . {\displaystyle d\varphi ={{2\,dt} \over {1+t^{2}}}.} === Hyperbolic identities === One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by (cosh ψ, sinh ψ). Projecting this onto y-axis from the center (−1, 0) gives the following: t = tanh 1 2 ψ = sinh ψ cosh ψ + 1 = cosh ψ − 1 sinh ψ {\displaystyle t=\tanh {\tfrac {1}{2}}\psi ={\frac {\sinh \psi }{\cosh \psi +1}}={\frac {\cosh \psi -1}{\sinh \psi }}} with the identities sinh ψ = 2 t 1 − t 2 , cosh ψ = 1 + t 2 1 − t 2 , tanh ψ = 2 t 1 + t 2 , coth ψ = 1 + t 2 2 t , sech ψ = 1 − t 2 1 + t 2 , csch ψ = 1 − t 2 2 t , {\displaystyle {\begin{aligned}&\sinh \psi ={\frac {2t}{1-t^{2}}},&&\cosh \psi ={\frac {1+t^{2}}{1-t^{2}}},\\[8pt]&\tanh \psi ={\frac {2t}{1+t^{2}}},&&\coth \psi ={\frac {1+t^{2}}{2t}},\\[8pt]&\operatorname {sech} \,\psi ={\frac {1-t^{2}}{1+t^{2}}},&&\operatorname {csch} \,\psi ={\frac {1-t^{2}}{2t}},\end{aligned}}} and e ψ = 1 + t 1 − t , e − ψ = 1 − t 1 + t . {\displaystyle e^{\psi }={\frac {1+t}{1-t}},\qquad e^{-\psi }={\frac {1-t}{1+t}}.} Finding ψ in terms of t leads to following relationship between the inverse hyperbolic tangent artanh {\displaystyle \operatorname {artanh} } and the natural logarithm: 2 artanh t = ln 1 + t 1 − t . {\displaystyle 2\operatorname {artanh} t=\ln {\frac {1+t}{1-t}}.} The hyperbolic tangent half-angle substitution in calculus uses d ψ = 2 d t 1 − t 2 . 
{\displaystyle d\psi ={{2\,dt} \over {1-t^{2}}}\,.} == The Gudermannian function == Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of t, just permuted. If we identify the parameter t in both cases we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if t = tan 1 2 φ = tanh 1 2 ψ {\displaystyle t=\tan {\tfrac {1}{2}}\varphi =\tanh {\tfrac {1}{2}}\psi } then φ = 2 arctan ( tanh 1 2 ψ ) ≡ gd ψ . {\displaystyle \varphi =2\arctan {\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )}\equiv \operatorname {gd} \psi .} where gd(ψ) is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projecting the unit circle and standard hyperbola onto the y-axis) give a geometric interpretation of this function. == Rational values and Pythagorean triples == Starting with a Pythagorean triangle with side lengths a, b, and c that are positive integers and satisfy a² + b² = c², it follows immediately that each interior angle of the triangle has rational values for sine and cosine, because these are just ratios of side lengths. Thus each of these angles has a rational value for its half-angle tangent, using tan φ/2 = sin φ / (1 + cos φ). The reverse is also true. If there are two positive angles that sum to 90°, each with a rational half-angle tangent, and the third angle is a right angle then a triangle with these interior angles can be scaled to a Pythagorean triangle. If the third angle is not required to be a right angle, but is the angle that makes the three positive angles sum to 180° then the third angle will necessarily have a rational number for its half-angle tangent when the first two do (using angle addition and subtraction formulas for tangents) and the triangle can be scaled to a Heronian triangle.
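The forward correspondence can be made explicit: if tan(φ/2) = p/q in lowest terms with 0 < p < q, then sin φ = 2pq/(p² + q²) and cos φ = (q² − p²)/(p² + q²), so (2pq, q² − p², p² + q²) is a Pythagorean triple. A small sketch (an editor's illustration; the function name is invented):

```python
import math

def triple_from_half_tangent(p, q):
    """Pythagorean triple whose angle phi satisfies tan(phi/2) = p/q (0 < p < q)."""
    a = 2 * p * q        # side opposite phi: numerator of sin(phi)
    b = q * q - p * p    # adjacent side: numerator of cos(phi)
    c = q * q + p * p    # hypotenuse: common denominator
    return a, b, c

a, b, c = triple_from_half_tangent(1, 2)   # tan(phi/2) = 1/2
assert (a, b, c) == (4, 3, 5)
assert a * a + b * b == c * c

# The half-angle tangent is recovered from the triple via tan(phi/2) = sin/(1 + cos)
phi = math.atan2(a, b)
assert math.isclose(math.tan(phi / 2), 1 / 2, rel_tol=1e-12)
```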
Generally, if K is a subfield of the complex numbers then tan φ/2 ∈ K ∪ {∞} implies that {sin φ, cos φ, tan φ, sec φ, csc φ, cot φ} ⊆ K ∪ {∞}. == See also == List of trigonometric identities Half-side formula == External links == Tangent Of Halved Angle at Planetmath == References ==
|
Wikipedia:Tanguy Rivoal#0
|
Tanguy Rivoal (born 1972) is a French mathematician specializing in number theory and related fields. He is known for his work on transcendental numbers, special functions, and Diophantine approximation. He currently holds the position of Directeur de recherche (Research Director) at the Centre National de la Recherche Scientifique (CNRS) and is affiliated with the Université Grenoble Alpes. == Education == Rivoal obtained his Ph.D. from the Université de Caen Normandie in 2001 under the supervision of Francesco Amoroso. His dissertation was titled Propriétés diophantiennes de la fonction zêta de Riemann aux entiers impairs (Diophantine properties of the Riemann zeta function at odd integers). == Research == Rivoal's research focuses on several areas of mathematics, including Diophantine approximation, Padé approximation, arithmetic Gevrey series, values of the Gamma function, transcendental number theory, and E-functions. His notable contributions include the proof that there is at least one irrational number among the nine numbers ζ(5), ζ(7), ζ(9), ζ(11), ..., ζ(21), where ζ is the Riemann zeta function. Together with Keith Ball, Rivoal proved that an infinite number of values of ζ at odd integers are linearly independent over Q {\displaystyle \mathbb {Q} } , for which he was elected an Honorary Fellow of the Hardy-Ramanujan Society. They also proved that there exists an odd number j such that 1, ζ(3), and ζ(j) are linearly independent over Q {\displaystyle \mathbb {Q} } where 2 < j < 170, a specific case of the more general folklore conjecture stating that π, ζ(3), ζ(5), ζ(7), ζ(9), ..., are algebraically independent over Q {\displaystyle \mathbb {Q} } , which is a consequence of Grothendieck's period conjecture for mixed Tate motives.
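As a small numerical aside (an editor's illustration, not from the article): the values whose irrationality is at stake can be approximated directly from the Dirichlet series ζ(s) = Σ_{n≥1} n^{−s}, convergent for s > 1; the irrationality results themselves are, of course, proved by entirely different means.

```python
def zeta(s, terms=200000):
    """Direct partial sum of the Dirichlet series for zeta(s), s > 1.

    The tail beyond N terms is bounded by N^(1-s)/(s-1), so for s = 3
    and N = 200000 the truncation error is about 1e-11.
    """
    return sum(n ** -s for n in range(1, terms + 1))

# zeta(3) is Apery's constant, proved irrational by Apery; Rivoal's theorem
# guarantees that infinitely many of zeta(5), zeta(7), ... are irrational too.
assert abs(zeta(3) - 1.2020569031595942) < 1e-9
assert abs(zeta(5) - 1.0369277551433699) < 1e-9
```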
== See also == Apéry's constant Apéry's theorem Particular values of the Riemann zeta function Wadim Zudilin == References == == External links == Tanguy Rivoal's homepage Tanguy Rivoal's list of published works Tanguy Rivoal at the Mathematics Genealogy Project
|
Wikipedia:Tannery's theorem#0
|
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations. It is named after Jules Tannery. == Statement == Let S n = ∑ k = 0 ∞ a k ( n ) {\displaystyle S_{n}=\sum _{k=0}^{\infty }a_{k}(n)} and suppose that lim n → ∞ a k ( n ) = b k {\displaystyle \lim _{n\to \infty }a_{k}(n)=b_{k}} . If | a k ( n ) | ≤ M k {\displaystyle |a_{k}(n)|\leq M_{k}} and ∑ k = 0 ∞ M k < ∞ {\displaystyle \sum _{k=0}^{\infty }M_{k}<\infty } , then lim n → ∞ S n = ∑ k = 0 ∞ b k {\displaystyle \lim _{n\to \infty }S_{n}=\sum _{k=0}^{\infty }b_{k}} . == Proofs == Tannery's theorem follows directly from Lebesgue's dominated convergence theorem applied to the sequence space ℓ 1 {\displaystyle \ell ^{1}} . An elementary proof can also be given. == Example == Tannery's theorem can be used to prove that the binomial limit and the infinite series characterizations of the exponential e x {\displaystyle e^{x}} are equivalent. Note that lim n → ∞ ( 1 + x n ) n = lim n → ∞ ∑ k = 0 n ( n k ) x k n k . {\displaystyle \lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }\sum _{k=0}^{n}{n \choose k}{\frac {x^{k}}{n^{k}}}.} Define a k ( n ) = ( n k ) x k n k {\displaystyle a_{k}(n)={n \choose k}{\frac {x^{k}}{n^{k}}}} . We have that | a k ( n ) | ≤ | x | k k ! {\displaystyle |a_{k}(n)|\leq {\frac {|x|^{k}}{k!}}} and that ∑ k = 0 ∞ | x | k k ! = e | x | < ∞ {\displaystyle \sum _{k=0}^{\infty }{\frac {|x|^{k}}{k!}}=e^{|x|}<\infty } , so Tannery's theorem can be applied and lim n → ∞ ∑ k = 0 ∞ ( n k ) x k n k = ∑ k = 0 ∞ lim n → ∞ ( n k ) x k n k = ∑ k = 0 ∞ x k k ! = e x . {\displaystyle \lim _{n\to \infty }\sum _{k=0}^{\infty }{n \choose k}{\frac {x^{k}}{n^{k}}}=\sum _{k=0}^{\infty }\lim _{n\to \infty }{n \choose k}{\frac {x^{k}}{n^{k}}}=\sum _{k=0}^{\infty }{\frac {x^{k}}{k!}}=e^{x}.} == References ==
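The exponential example above can also be observed numerically (an editor's sketch; the function names are invented): for fixed x, the sums S_n = Σ_k C(n,k) x^k/n^k equal (1 + x/n)^n and drift toward Σ_k x^k/k! = e^x as n grows, exactly as Tannery's theorem predicts.

```python
import math

def binomial_sum(x, n):
    """S_n = sum_{k=0}^{n} C(n,k) x^k / n^k, accumulated term by term
    via the ratio a_k / a_{k-1} = ((n-k+1)/k) * (x/n) to avoid huge
    intermediate integers."""
    term, total = 1.0, 1.0               # k = 0 term
    for k in range(1, n + 1):
        term *= (n - k + 1) / k * x / n
        total += term
    return total

def exp_series(x, terms=60):
    """Partial sum of sum_k x^k / k! -- the k-wise limit in Tannery's theorem."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

x = 1.5
# S_n really is (1 + x/n)^n ...
assert math.isclose(binomial_sum(x, 50), (1 + x / 50) ** 50, rel_tol=1e-10)
# ... and, as the theorem predicts, S_n -> sum_k x^k/k! = e^x as n -> infinity:
for n, tol in ((10, 0.5), (1000, 0.01), (100000, 1e-4)):
    assert abs(binomial_sum(x, n) - exp_series(x)) < tol
```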
|
Wikipedia:Tantrasamgraha#0
|
Tantrasamgraha, or Tantrasangraha, (literally, A Compilation of the System) is an important astronomical treatise written by Nilakantha Somayaji, an astronomer/mathematician belonging to the Kerala school of astronomy and mathematics. The treatise was completed in 1501 CE. It consists of 432 verses in Sanskrit divided into eight chapters. Tantrasamgraha has spawned a few commentaries: Tantrasamgraha-vyakhya of anonymous authorship and Yuktibhāṣā authored by Jyeshtadeva in about 1550 CE. Tantrasangraha, together with its commentaries, brings forth the depth of the mathematical accomplishments of the Kerala school of astronomy and mathematics, in particular the achievements of the remarkable mathematician of the school Sangamagrama Madhava. In his Tantrasangraha, Nilakantha revised Aryabhata's model for the planets Mercury and Venus. According to George G Joseph, his equation of the centre for these planets remained the most accurate until the time of Johannes Kepler in the 17th century. It was C.M. Whish, a civil servant of the East India Company, who brought the existence of Tantrasamgraha to the attention of western scholarship through a paper published in 1835. The other books mentioned by C.M. Whish in his paper were Yuktibhāṣā of Jyeshtadeva, Karanapaddhati of Puthumana Somayaji and Sadratnamala of Sankara Varman. == Author and date of Tantrasamgraha == Nilakantha Somayaji, the author of Tantrasamgraha, was a Nambudiri belonging to the Gargya gotra and a resident of Trikkantiyur, near Tirur in central Kerala. The name of his Illam was Kelallur. He studied under Damodara, son of Paramesvara. The first and the last verses in Tantrasamgraha contain chronograms specifying the dates, in the form of Kali days, of the commencement and of the completion of the book. These work out to dates in 1500–01. == Synopsis of the book == A brief account of the contents of Tantrasamgraha is presented below.
A descriptive account of the contents is available in Bharatheeya Vijnana/Sastra Dhara. Full details of the contents are available in an edition of Tantrasamgraha published in the Indian Journal of History of Science. Chapter 1 (Madhyama-prakaranam): The purpose of the astronomical computation, civil and sidereal day measurements, lunar month, solar month, intercalary month, revolutions of the planets, theory of intercalation, planetary revolution in circular orbits, computation of kali days, mathematical operations like addition, subtraction, multiplication, division, squaring and determining square root, fractions, positive and negative numbers, computation of mean planets, correction for longitude, longitudinal time, positions of the planets at the beginning of Kali era, planetary apogees in degrees. (40 slokas) Chapter 2 (Sphuta-prakaranam (On true planets)): Computation of risings, and arcs, construction of a circle of diameter equal to the side of a given square, computation of the circumference without the use of square and roots, sum of series, sum of the series of natural numbers, of squares of numbers, of cubes of numbers, processes relating to Rsines and arcs, computation of the arc of a given Rsine, computation of the circumference of a circle, derivation of Rsines for given Rversed sine and arc, computation of Rsine and arcs, accurate computation of the 24 ordained Rsines, sectional Rsines and Rsine differences, sum of Rsine differences, summation of Rsine differences, computation of the arc of an Rsine according to Madhava, computation of Rsine and Rversed sine at desired point without the aid of the ordained Rsines, rules relating to triangles, rules relating to cyclic quadrilaterals, rules relating to the hypotenuse of a quadrilateral, computation of the diameter from the area of the cyclic quadrilateral, surface area of a sphere, computation of the desired Rsine, the ascensional difference, sun's daily motion in minutes of arc, application of 
ascensional difference to true planets, measure of day and night on applying ascensional difference, conversion of the arc of Rsine of the ascensional difference, etc. (59 slokas) Chapter 3 (Chhaya-prakaranam (Treatise on shadow)): Deals with various problems related to the sun's position on the celestial sphere, including the relationships of its expressions in the three systems of coordinates, namely ecliptic, equatorial and horizontal coordinates. (116 slokas) Chapter 4 (Chandragrahana-prakaranam (Treatise on the lunar eclipse)): Diameter of the Earth's shadow in minutes, Moon's latitude and Moon's rate of motion, probability of an eclipse, total eclipse and rationale of the explanation given for total eclipse, half duration and first and last contacts, points of contacts and points of release in eclipse, and their method of calculation, visibility of the contact in the eclipse at sunrise and sunset, contingency of the invisibility of an eclipse, possibility of the deflection, deflection due to latitude and that due to declination. (53 slokas) Chapter 5 (Ravigrahana-prakaranam (Treatise on the solar eclipse)): Possibility of a solar eclipse, minutes of parallax in latitude of the sun, minutes of parallax in latitude of the moon, maximum measure of the eclipse, middle of the eclipse, time of first contact and last contact, half duration and times of submergence and emergence, reduction to observation of computed eclipse, mid eclipse, non-prediction of an eclipse. (63 slokas) Chapter 6 (Vyatipata-prakaranam (On vyatipata)): Deals with the complete deviation of the longitudes of the sun and the moon. (24 slokas) Chapter 7 (Drikkarma-prakaranam (On visibility computation)): Discusses the rising and setting of the moon and planets. (15 slokas) Chapter 8 (Sringonnati-prakaranam (On elevation of the lunar cusps)): Examines the size of the part of the moon which is illuminated by the sun and gives a graphical representation of it.
(40 slokas) == Some noteworthy features of Tantrasamgraha == "A remarkable synthesis of Indian spherical astronomical knowledge occurs in a passage in Tantrasamgraha." In astronomy, the spherical triangle formed by the zenith, the celestial north pole and the Sun is called the astronomical triangle. Its sides and two of its angles are important astronomical quantities. The sides are 90° – φ where φ is the observer's terrestrial latitude, 90° – δ where δ is the Sun's declination and 90° – a where a is the Sun's altitude above the horizon. The important angles are the angle at the zenith which is the Sun's azimuth and the angle at the north pole which is the Sun's hour angle. The problem is to compute two of these elements when the other three elements are specified. There are precisely ten different possibilities and Tantrasamgraha contains discussions of all these possibilities with complete solutions one by one in one place. "The spherical triangle is handled as systematically here as in any modern textbook." The terrestrial latitude of an observer's position is equal to the zenith distance of the Sun at noon on the equinoctial day. The effect of solar parallax on zenith distance was known to Indian astronomers right from Aryabhata. But it was Nilakantha Somayaji who first discussed the effect of solar parallax on the observer's latitude. Tantrasamgraha gives the magnitude of this correction and also a correction due to the finite size of the Sun. In his Aryabhatiyabhasya, a commentary on Aryabhata's Aryabhatiya, Nilakantha developed a computational system for a partially heliocentric planetary model in which Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Most astronomers of the Kerala school who followed him accepted this planetary model. 
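One of the ten cases of the astronomical triangle, computing the Sun's altitude a from the observer's latitude φ, the declination δ, and the hour angle H, corresponds to the standard spherical-trigonometry relation sin a = sin φ sin δ + cos φ cos δ cos H. A modern rendering of that single case (an editor's sketch using today's notation, not Nilakantha's own procedure):

```python
import math

def solar_altitude(lat_deg, decl_deg, hour_angle_deg):
    """Altitude of the Sun from observer latitude, solar declination, and
    hour angle, via the spherical law of cosines applied to the triangle
    formed by the zenith, the celestial north pole, and the Sun."""
    phi = math.radians(lat_deg)
    delta = math.radians(decl_deg)
    H = math.radians(hour_angle_deg)
    sin_a = (math.sin(phi) * math.sin(delta)
             + math.cos(phi) * math.cos(delta) * math.cos(H))
    return math.degrees(math.asin(sin_a))

# On the equator at an equinox (declination 0), the noon Sun is at the zenith:
assert math.isclose(solar_altitude(0, 0, 0), 90.0)
# At latitude 45 N on an equinox, the noon altitude is 90 - 45 = 45 degrees:
assert math.isclose(solar_altitude(45, 0, 0), 45.0)
# Six hours from noon (H = 90) on an equinox, the Sun sits on the horizon:
assert math.isclose(solar_altitude(45, 0, 90), 0.0, abs_tol=1e-9)
```

The noon case (H = 0, altitude 90° − |φ − δ|) is the relation the article mentions: latitude equals the Sun's zenith distance at noon on the equinoctial day.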
== Conference on 500 years of Tantrasamgraha == A Conference to celebrate the 500th Anniversary of Tantrasangraha was organised by the Department of Theoretical Physics, University of Madras, in collaboration with the Inter-University Centre of the Indian Institute of Advanced Study, Shimla, during 11–13 March 2000, at Chennai. The Conference turned out to be an important occasion for highlighting and reviewing the recent work on the achievements in Mathematics and Astronomy of the Kerala school and the new perspectives in History of Science, which are emerging from these studies. A compilation of the important papers presented at this Conference has also been published. == Other works of the same author == The following is a brief description of the other works by Nilakantha Somayaji. Jyotirmimamsa Golasara : Description of basic astronomical elements and procedures Siddhantadarpana : A short work in 32 slokas enunciating the astronomical constants with reference to the Kalpa and specifying his views on astronomical concepts and topics. Candrachayaganita : A work in 32 verses on the methods for the calculation of time from the measurement of the shadow of the gnomon cast by the moon and vice versa. Aryabhatiya-bhashya : Elaborate commentary on Aryabhatiya. Siddhantadarpana-vyakhya : Commentary on his own Siddhantadarpana. Chandrachhayaganita-vyakhya : Commentary on his own Chandrachhayaganita. Sundaraja-prasnottara : Nilakantha's answers to questions posed by Sundaraja, a Tamil Nadu-based astronomer. Grahanadi-grantha : Rationale of the necessity of correcting old astronomical constants by observations. Grahapariksakrama : Description of the principles and methods for verifying astronomical computations by regular observations. == References == == Further reading == Ramasubramanian, K (1998). "Model of planetary motion in the works of Kerala astronomers". Bulletin of the Astronomical Society of India. 26 (11–31): 11. Bibcode:1998BASI...26...11R. Roy, Ranjan
(December 1990). "The discovery of the series formula for π by Leibniz, Gregory and Nilakantha" (PDF). Mathematics Magazine. 63 (5). Mathematical Association of America: 291–306. doi:10.2307/2690896. JSTOR 2690896.
|
Wikipedia:Tapering (mathematics)#0
|
In mathematics, physics, and theoretical computer graphics, tapering is a kind of shape deformation. Just as an affine transformation, such as scaling or shearing, is a first-order model of shape deformation, tapering is a higher-order deformation, as are twisting and bending. Tapering can be thought of as non-constant scaling by a given tapering function. The resultant deformations can be linear or nonlinear. To create a nonlinear taper, instead of scaling in x and y for all z with constants, as in

q = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & 1 \end{bmatrix} p,

let a and b be functions of z, so that

q = \begin{bmatrix} a(p_{z}) & 0 & 0 \\ 0 & b(p_{z}) & 0 \\ 0 & 0 & 1 \end{bmatrix} p.

An example of a linear taper is a(z) = \alpha_0 + \alpha_1 z, and of a quadratic taper a(z) = \alpha_0 + \alpha_1 z + \alpha_2 z^2. As another example, if the parametric equation of a cube were given by f(t) = (x(t), y(t), z(t)), a nonlinear taper could be applied so that the cube's volume slowly decreases (or tapers) as the function moves in the positive z direction. For the given cube, an example of a nonlinear taper along z would be the function T(z) = 1/(a + bz) applied to the cube's equation such that f(t) = (T(z)x(t), T(z)y(t), T(z)z(t)), for some real constants a and b. == See also == 3D projection == References == == External links == [1], Computer Graphics Notes. University of Toronto. (See: Tapering). [2], 3D Transformations. Brown University. (See: Nonlinear deformations). [3], ScienceWorld article on Tapering in Image Synthesis.
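The z-dependent scaling matrix above translates directly into code; a minimal sketch (NumPy assumed, names illustrative), applying a(p_z) to x and b(p_z) to y for each point:

```python
import numpy as np

def taper(points, a, b):
    """Taper along z: scale x by a(z) and y by b(z) per point.

    points: (N, 3) array of [x, y, z] rows; a, b: callables of z."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    tapered = points.copy()
    tapered[:, 0] *= a(z)   # x' = a(p_z) * x
    tapered[:, 1] *= b(z)   # y' = b(p_z) * y
    return tapered          # z is left unchanged

# Linear taper a(z) = b(z) = 1 - 0.5 z: the cross-section at z = 1
# shrinks to half its width and depth, while the base at z = 0 is
# unchanged.
pts = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
out = taper(pts, lambda z: 1 - 0.5 * z, lambda z: 1 - 0.5 * z)
# out[1] == [0.5, 0.5, 1.0]
```

Passing different callables for a and b gives an asymmetric taper, and a quadratic polynomial in z gives the quadratic taper of the text.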
|
Wikipedia:Taqi ad-Din Muhammad ibn Ma'ruf#0
|
Taqi ad-Din Muhammad ibn Ma'ruf ash-Shami al-Asadi (Arabic: تقي الدين محمد بن معروف الشامي; Ottoman Turkish: تقي الدين محمد بن معروف الشامي السعدي; Turkish: Takiyüddin; 1526–1585) was an Ottoman polymath active in Cairo and Istanbul. He was the author of more than ninety books on a wide variety of subjects, including astronomy, clocks, engineering, mathematics, mechanics, optics, and natural philosophy. In 1574 the Ottoman Sultan Murad III invited Taqi ad-Din to build an observatory in the Ottoman capital, Istanbul. Taqi ad-Din constructed instruments such as an armillary sphere and mechanical clocks that he used to observe the Great Comet of 1577. He also used European celestial and terrestrial globes that were delivered to Istanbul in gift exchanges. His major work from the use of his observatory is titled "The tree of ultimate knowledge [in the end of time or the world] in the Kingdom of the Revolving Spheres: The astronomical tables of the King of Kings [Murad III]" (Sidrat al-muntah al-afkar fi malkūt al-falak al-dawār – al-zij al-Shāhinshāhi). The work was prepared according to the results of the observations carried out in Egypt and Istanbul in order to correct and complete Ulugh Beg's 15th-century work, the Zij-i Sultani. The first 40 pages of the work dealt with calculations, followed by discussions of astronomical clocks, heavenly circles, and information on three eclipses which he observed in Cairo and Istanbul. As a polymath, Taqi al-Din wrote numerous books on astronomy, mathematics, mechanics, and theology. His methods for finding the coordinates of stars were reportedly so precise that he obtained better measurements than his contemporaries Tycho Brahe and Nicolaus Copernicus. Brahe is also thought to have been aware of Taqi al-Din's work. Taqi ad-Din also described a steam turbine with the practical application of rotating a spit in 1551. He worked on and created astronomical clocks for his observatory.
Taqi ad-Din also wrote a book on optics, in which he determined the light emitted from objects, proved the law of reflection observationally, and worked on refraction. == Biography == Taqī al-Dīn was born in Damascus in 1526 according to most sources. His ethnicity has been described as Arab, Kurdish and Turkish. In his treatise titled "Rayḥānat al-rūḥ", Taqī al-Dīn himself claimed descent from the Ayyubids, tracing his lineage back to the Ayyubid prince Nasir al-Din Mankarus ibn Nasih al-Din Khumartekin, who ruled Abu Qubays in Syria during the 12th century. The Encyclopaedia of Islam makes no mention of his ethnicity, simply calling him "...the most important astronomer of Ottoman Turkey". Taqi ad-Din's education started in theology, but he went on to develop an interest in the rational sciences, which he began to study in Damascus and Cairo. During that time he studied alongside his father Maʿruf Efendi. Al-Dīn went on to teach at various madaris and served as a qadi, or judge, in Palestine, Damascus, and Cairo. He stayed in Egypt and Damascus for some time, producing work in astronomy and mathematics that would eventually become important. He became chief astronomer to the Sultan in 1571, a year after he came to Istanbul, replacing Mustafa ibn Ali al-Muwaqqit. Taqī al-Dīn maintained strong bonds with the ulama and statesmen. He passed on information to Sultan Murad III, who had an interest in astronomy but also in astrology, that the Ulugh Beg Zij contained particular observational errors. Al-Dīn suggested that those errors could be fixed if new observations were made, and that an observatory should be created in Istanbul to make this easier. Murad III became the patron of the first observatory in Istanbul and preferred that construction of the new observatory begin immediately.
Since Murad III was the patron, he assisted with finances for the project. Taqī al-Dīn continued his studies at the Galata Tower while this was going on. His studies continued until 1577 at the nearly complete observatory, which was called Dar al-Rasad al-Jadid. This new observatory contained a library that held books covering astronomy and mathematics. The observatory, built in the higher part of Tophane in Istanbul, was made of two separate buildings, one big and one small. Al-Dīn possessed some of the instruments used in the old Islamic observatories. He had those instruments reproduced and also created new instruments for observational purposes. The staff at the new observatory consisted of sixteen people: eight observers or rasids, four clerks, and four assistants. Taqī al-Dīn approached his observations creatively, finding new answers to astronomical problems through the new strategies and new equipment he devised. He went on to create trigonometric tables based on decimal fractions. These tables placed the obliquity of the ecliptic at 23° 28' 40". The value accepted today is 23° 27', showing that al-Dīn's instruments and methods were more precise. Al-Dīn used a new method to calculate solar parameters and determined the magnitude of the annual movement of the sun's apogee to be 63 seconds. The value known today is 61 seconds; Copernicus came up with 24 seconds and Tycho Brahe with 45 seconds, so al-Dīn was more accurate than both. The main purpose of the observatory was to cater to the needs of the astronomers and provide a library and workshop where they could design and produce instruments. This observatory became one of the largest in the Islamic world. It was completed in 1579 and ran until January 22, 1580, when it was destroyed.
Some say religious arguments were the reason why it was destroyed, but it really came down to political problems. A report by the grand vizier Sinan Pasha to Sultan Murad III describes how the Sultan and the vizier attempted to keep Taqī Ad-Dīn away from the ulama, because it seemed that they wanted to take him to trial for heresy. The vizier informs the sultan that Taqī Ad-Dīn wanted to go to Syria regardless of the sultan's orders, and warns that if Taqī Ad-Dīn went there, he might be noticed by the ulama, who would take him to trial. Despite Taqī al-Dīn's originality, his influence seems to have been limited. Only a small number of copies of his works survive, so they were not able to reach a wide audience, and the known commentaries on them are very few. However, one of his works and part of a library that he owned reached western Europe fairly quickly, through the manuscript-collecting efforts of Jacob Golius, a Dutch professor of Arabic and mathematics at Leiden University who traveled to Istanbul in the early seventeenth century. In 1629 Golius wrote a letter to Constantijn Huygens describing how he had seen Taqī Ad-Dīn's work on optics in Istanbul but, despite all his efforts, had been unable to obtain it from his friends. He must have succeeded in acquiring it later, since Taqī al-Dīn's work on optics eventually reached the Bodleian Library as Marsh 119, originally part of the Golius collection. According to Salomon Schweigger, the chaplain of Habsburg ambassador Johann Joachim von Sinzendorf, Taqi al-Din was a charlatan who deceived Sultan Murad III and had him spend enormous resources. At the age of 59, after authoring more than ninety books, Taqī al-Dīn died in 1585.
== The Constantinople Observatory == Taqī al-Dīn was both the founder and director of the Constantinople Observatory, also known as the Istanbul Observatory. This observatory is frequently said to be one of Taqī al-Dīn's most important contributions to sixteenth-century Islamic and Ottoman astronomy, and is known as one of the largest observatories in Islamic history. It is often compared to Tycho Brahe's Uraniborg Observatory, which was said to have housed the best instruments of its time in Europe; indeed, Brahe and Taqī al-Dīn have frequently been compared for their work in sixteenth-century astronomy. The founding of the Constantinople Observatory began when Taqī al-Dīn returned to Istanbul in 1570, after spending 20 years in Egypt developing his astronomical and mathematical knowledge. Shortly after his return, Sultan Selīm II appointed Taqī al-Dīn as the head astronomer (Müneccimbaşı), following the death of the previous head astronomer Muṣṭafā ibn ҁAlī al-Muwaqqit in 1571. During the early years of his position as head astronomer, Taqī al-Dīn worked in both the Galata Tower and a building overlooking Tophane. While working in these buildings, he began to gain the support and trust of many important Turkish officials. These newfound relationships led to an imperial edict in 1569 from Sultan Murad III, which called for the construction of the Constantinople Observatory. This observatory became home to many important books and instruments; it had sixteen assistants who helped with the making of scientific instruments, as well as many renowned scholars of the time. While not much is known of the architectural characteristics of the building, there are many depictions of the scholars and astronomical instruments present in the observatory.
It was from this observatory that Taqī al-Dīn observed the Great Comet of 1577; Murad III thought of the comet as a bad omen for the war with the Safavids (he also blamed Taqī al-Dīn for the plague that was occurring at the time). Due to political conflict, this observatory was short-lived. It was closed in 1579 and was demolished entirely by the state on 22 January 1580, only 11 short years after the imperial edict which called for its construction. === Politics === The rise and fall of Taqī al-Dīn and his observatory depended on the political issues that surrounded him. Due to his father's occupation as a professor at the Damascene college of law, Taqī al-Dīn spent much of his life in Syria and Egypt. During his trips to Istanbul he was able to make connections with many scholar-jurists. He was also able to use the private library of the Grand Vizier of the time, Semiz Ali Pasha. He then began working under Sultan Murad III's new Grand Vizier's private mentor, Sokollu. Continuing the research on observations of the heavens that he had pursued in Egypt, Taqī al-Dīn used the Galata Tower and Sokollu's private residence. Although Murad III was the one who commanded the observatory to be built, it was actually Sokollu who brought the idea to him, knowing of his interest in science. The Sultan ultimately provided Taqī al-Dīn with everything he needed, from financial assistance for the physical buildings to intellectual assistance, making sure he had easy access to the many types of books he would need. When the Sultan decided to create the observatory, he saw it as a way to show off the power of his monarchy beyond merely backing it financially. Murad III showed his power by bringing Taqī al-Dīn and some of the most accomplished men in the field of astronomy together to work towards one goal, and not only having them work well together but also making progress in the field.
Murad III made sure that there was proof of his accomplishments by having his court historiographer Seyyid Lokman keep very detailed records of the work going on at the observatory. Seyyid Lokman wrote that his sultan's monarchy was much more powerful than others in Iraq, Persia, and Anatolia. He also claimed that Murad III was above other monarchs because the results of the observatory were new to the world and replaced many others. === Instruments used at the Observatory === Taqī al-Dīn used a variety of instruments to aid in his work at the observatory. Some were instruments already in use by European astronomers, while others he invented himself; he not only operated many previously created instruments and techniques, but also developed numerous new ones. Of these novel inventions, the automatic-mechanical clock is regarded as one of the most important developed in the Constantinople Observatory. Three of the instruments were first described by Ptolemy: an armillary sphere, a model of celestial bodies with rings that represent longitude and latitude; a parallactic ruler, also known as a triquetrum, used to calculate the altitudes of celestial bodies; and an astrolabe, which measures the inclined position of celestial bodies. Two were created by Muslim astronomers: a mural quadrant, a type of mural instrument for measuring angles from 0 to 90 degrees, and an azimuthal quadrant. The remaining instruments were created by Taqi al-Din for his own work: a parallel ruler; a ruler quadrant or wooden quadrant, an instrument with two holes for the measurement of apparent diameters and eclipses; a mechanical clock with a train of cogwheels which helped measure the true ascension of the stars; and the muşabbaha bi'l-menatık, an instrument with chords to determine the equinoxes, invented to replace the equinoctial armillary.
A Sunaydi ruler, which was apparently a special type of instrument of an auxiliary nature, the function of which was explained by Alaeddin el-Mansur. == Contributions == === Clock mechanics === ==== Rise of clock use in the Ottoman Empire ==== Before the sixteenth century, European mechanical clocks were not in high demand in the Ottoman Empire. This lack of demand was brought on by their extremely high prices and by the fact that the population, who needed clocks chiefly to calculate the times of prayer, did not require such precision: hourglasses, water clocks, and sundials were more than enough to meet their needs. It was not until around 1547 that the Ottomans started creating a high demand for them. Initially this was started by the gifts brought by the Austrians, but it would end up creating a market for the clocks. European clockmakers began to create clocks designed to the tastes and needs of the Ottoman people, both by showing the phases of the moon and by utilizing Ottoman numbers. ==== Taqī al-Dīn's work ==== Due to this high demand for mechanical clocks, Taqī al-Dīn was asked by the Grand Vizier to create a clock that would show exactly when the call to prayer was. This led him to write his first book on the construction of mechanical clocks, "al-Kawakib al-Durriya fi Bengamat al-Dawriyya" (The Brightest Stars for the Construction of Mechanical Clocks), in 1563 A.D., which he used throughout his research at the short-lived observatory. He believed that it would be advantageous to bring a "true hermetic and distilled perception of the motion of the heavenly bodies." In order to get a better understanding of how clocks ran, Taqī al-Dīn took the time to gain knowledge from many European clockmakers, as well as going into the treasury of Semiz Ali Pasha and learning anything he could from the many clocks he owned. ==== Types of clocks examined ==== Of the clocks in the Grand Vizier's treasury, Taqī al-Dīn examined three different types.
Those three were weight-driven, spring-driven, and driven by lever escapement. He wrote of these three types of clocks but also made comments on pocket watches and astronomical ones. As Chief Astronomer, Taqī al-Dīn created a mechanical astronomical clock, made to permit more precise measurements at the Constantinople observatory. As stated above, the creation of this clock is considered one of the most important astronomical developments of the sixteenth century. Taqī al-Dīn constructed a mechanical clock with three dials which show the hours, minutes, and seconds, with each minute consisting of five seconds. After this clock it is not known whether Taqī al-Dīn's work on mechanical clocks was ever continued, given that much of the clockmaking in the Ottoman Empire after that time was taken over by Europeans. === Steam === In 1551 Taqī al-Dīn described a self-rotating spit that is important in the history of the steam turbine. In Al-Turuq al-samiyya fi al-alat al-ruhaniyya (The Sublime Methods of Spiritual Machines) al-Dīn describes this machine as well as some practical applications for it. The spit is rotated by directing steam onto the vanes, which then turn the wheel at the end of the axle. Al-Dīn also described four water-raising machines. The first two are animal-driven water pumps; the third and fourth are both driven by a paddle wheel, the third being a slot-rod pump and the fourth a six-cylinder pump. The vertical pistons of the final machine are operated by cams and trip-hammers, run by the paddle wheel. The descriptions of these machines predate many of the more modern engines. The screw pump, for example, that al-Dīn describes predates Agricola, whose description of the rag and chain pump was published in 1556. The two-pump engine, which was first described by al-Jazarī, was also the basis of the steam engine.
== Important works == === Astronomy === Sidrat muntahā al-afkār fī malakūt al-falak al-dawwār (al-Zīj al-Shāhinshāhī): this is said to be one of Taqī al-Dīn's most important works in astronomy. He completed this book on the basis of his observations in both Egypt and Istanbul. The purpose of this work was to improve, correct, and ultimately complete Zīj-i Ulugh Beg, a project devised in Samarkand and furthered in the Constantinople Observatory. The first 40 pages of his writing focus on trigonometric calculations, with emphasis on trigonometric functions such as sine, cosine, tangent, and cotangent. Jarīdat al-durar wa kharīdat al-fikar is a zīj that is said to be Taqī al-Dīn's second most important work in astronomy. This work contains the first recorded use of decimal fractions in trigonometric functions. He also gives the parts of degrees of curves and angles in decimal fractions with precise calculations. Dustūr al-tarjīḥ li-qawāҁid al-tasṭīḥ is another important work by Taqī al-Dīn, which focuses on the projection of a sphere onto a plane, among other geometric topics. Taqī al-Din is also credited as the author of Rayḥānat al-rūḥ fī rasm al-sāҁāt ҁalā mustawī al-suṭūḥ, which discusses sundials and their characteristics drawn on a marble surface. === Clocks and mechanics === al-Kawākib al-durriyya fī waḍҁ al-bankāmāt al-dawriyya was written by Taqī al-Dīn in 1559 and addressed mechanical-automatic clocks. This work is considered the first written work on mechanical-automatic clocks in the Islamic and Ottoman world. In this book, he credits Alī Pasha as a contributor for allowing him to use and study his private library and collection of European mechanical clocks. al-Ṭuruq al-saniyya fī al-ālāt al-rūḥāniyya is a second book on mechanics by Taqī al-Dīn that emphasizes the geometrical-mechanical structure of clocks, a topic previously observed and studied by Banū Mūsā and Ismail al-Jazari (Abū al-ҁIzz al-Jazarī).
=== Physics and optics === Nawr ḥadīqat al-abṣar wa-nūr ḥaqīqat al-Anẓar was a work of Taqī al-Dīn that discussed physics and optics. This book discussed the structure of light, the relationship between light and color, as well as diffusion and global refraction. == See also == Inventions in the Muslim world Islamic astronomy Islamic science == Notes == == Further reading == Ben-Zaken, Avner. "The Revolving Planets and the Revolving Clocks: Circulating Mechanical Objects in the Mediterranean", History of Science, xlix (2010), pp. 125-148. Ben-Zaken, Avner. Cross-Cultural Scientific Exchanges in the Eastern Mediterranean 1560-1660 (Johns Hopkins University Press, 2010), pp. 8-47. King, David A. (2000). "Taḳī al-Dīn". In Bearman, P. J.; Bianquis, Th.; Bosworth, C. E.; van Donzel, E. & Heinrichs, W. P. (eds.). The Encyclopaedia of Islam, Second Edition. Volume X: T–U. Leiden: E. J. Brill. pp. 132–133. ISBN 978-90-04-11211-7. King, David A. (1986). A Survey of the Scientific manuscripts in the Egyptian National Library. Vol. 5. Winona Lake, IN, USA: American Research Center in Egypt. pp. 171–2. Hassan, Ahmad Y (1976). Taqi al-Din and Arabic Mechanical Engineering. Institute for the History of Arabic Science, Aleppo University. Gautier, Antoine (December 2005). "L'âge d'or de l'astronomie ottomane". L'Astronomie. 119. Tekeli, Sevim. (2002). 16'ıncı yüzyılda Osmanlılarda saat ve Takiyüddin'in "mekanik saat konstrüksüyonuna dair en parlak yıldızlar = The clocks in Ottoman Empire in 16th century and Taqi al Din's the brightest stars for the construction of the mechanical clocks. Second edition, Ankara: T. C. Kültür Bakanlıgi. Unat, Yavuz, "Time in The Sky of Istanbul, Taqî al Dîn al-Râsid's Observatory", Art and Culture Magazine, Time in Art, Winter 2004/Issue 11, pp. 86–103. == External links == Fazlıoğlu, İhsan (2007). "Taqī al-Dīn Abū Bakr Muḥammad ibn Zayn al-Dīn Maʿrūf al-Dimashqī al-Ḥanafī". In Thomas Hockey; et al. (eds.). 
The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 1122–3. ISBN 978-0-387-31022-0. (PDF version)
|
Wikipedia:Tara E. Brendle#0
|
Tara Elise Brendle is an American mathematician who works in geometric group theory, at the intersection of algebra and low-dimensional topology. In particular, she studies mapping class groups of surfaces, including braid groups, and their relationship to automorphism groups of free groups and arithmetic groups. She is a professor of mathematics and head of mathematics at the University of Glasgow. == Education and career == Brendle received her B.S. in mathematics, magna cum laude, from Haverford College in 1995. At Haverford, she won All Middle-Atlantic Conference honors in 1992 for her volleyball playing, and won honorable mention in the 1995 Alice T. Schafer Prize for Excellence in Mathematics by an Undergraduate Woman of the Association for Women in Mathematics for her undergraduate research in knot theory. She received her M.A. in mathematics from Columbia University in 1996 and went on to complete her Ph.D. at Columbia under the supervision of Joan Birman in 2002. After receiving her Ph.D. from Columbia, Brendle was a National Science Foundation VIGRE Assistant Professor at Cornell University and an assistant professor at Louisiana State University. She moved to her present position at the University of Glasgow in 2008. == Recognition == Brendle became a member of the Young Academy of Scotland in 2014. She was elected a Fellow of the American Mathematical Society in the 2020 class, "for contributions to topology and geometry, for expository lectures, and for service to the profession aimed at the full participation of women in mathematics." She became a Fellow of the Royal Society of Edinburgh in 2021, and in the same year won the Senior Whitehead Prize "for her fundamental work in geometric group theory, concentrating on the study of groups arising in low-dimensional topology, and for her exemplary record of work in support of mathematics and mathematicians". == References ==
|
Wikipedia:Tarmo Soomere#0
|
Tarmo Soomere (born 11 October 1957 in Tallinn) is an Estonian marine scientist and mathematician. Since 2014, he has been the president of the Estonian Academy of Sciences. In March 2021 Soomere announced his candidacy for the 2021 Estonian presidential election. == Awards == 2002 Estonian National Research Award (for engineering sciences) 2005 People of the Year (of the newspaper Postimees) 2007 Baltic Assembly Prize for Literature, the Arts and Science == References ==
|
Wikipedia:Taro Morishima#0
|
Taro Morishima (森嶋 太郎, Morishima Tarō, 1903 – 1989) was a Japanese mathematician specializing in algebra who attended the University of Tokyo in Japan. Morishima published at least thirteen papers, including his work on Fermat's Last Theorem; a collected works volume was published in 1990 after his death. He also corresponded several times with American mathematician H. S. Vandiver. == Morishima's Theorem on FLT == Let m be a prime number not exceeding 31. Let p be prime, and let x, y, z be integers such that x^p + y^p + z^p = 0. Assume that p does not divide the product xyz. Then p^2 must divide m^(p−1) − 1. == Review == Granville wrote that Morishima's proof could not be accepted. [1] == References == == External links == Collected papers at Queen's University
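The divisibility condition in the theorem above is easy to test computationally; a minimal sketch (the function name is illustrative, not from Morishima):

```python
def satisfies_morishima_condition(p, m):
    """Check the criterion: does p**2 divide m**(p-1) - 1?

    Equivalently, m**(p-1) ≡ 1 (mod p**2) — the Wieferich-type
    condition that, by the theorem, must hold for every prime m <= 31
    if the first case of Fermat's Last Theorem failed for exponent p."""
    return pow(m, p - 1, p * p) == 1  # fast modular exponentiation

# 1093 is a Wieferich prime, so the condition holds for m = 2:
print(satisfies_morishima_condition(1093, 2))  # True
print(satisfies_morishima_condition(7, 2))     # False: 2**6 = 64 ≡ 15 (mod 49)
```

Since the condition fails for almost all primes p (Wieferich-type primes are rare), criteria of this kind rule out the first case of Fermat's Last Theorem for nearly all exponents.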
|
Wikipedia:Tasneem Muhammad Shah#0
|
Tasneem Muhammad Shah (Urdu: تسنیم محمد شاه) is a Pakistani scientist and mathematician who is a professor at Preston University. Previously, he was a professor and chairman of the Department of Mathematics at the Air University. Shah was born in Pakistan and moved to Islamabad for his studies. He attended Quaid-i-Azam University, where he received his undergraduate degree in mathematics. He did his MSc and M.Phil. at Quaid-i-Azam University, followed by his DPhil in mathematics from the University of Oxford. His doctoral thesis was written on "Analysis of Multi-Grid Methods, design, theory and development of Algo.". After serving at KRL for more than 27 years, Shah provided training and teaching at KRL in the field of fluid dynamics. He established the Department of Cryptology at NUST, where he served as its first director. He also established the first IT university in the private corporate sector and became the first rector of KASBIT, Karachi. He served as Dean of the Faculty of Computer Sciences at Bahria University and established an Integrated Scientific and Industrial Software house for contract research (ISIS). He is the author of 14 research papers. He is also working on the establishment of an HEC project, the "National Institute of Vacuum Science and Technology". == References ==
|
Wikipedia:Tatjana Stykel#0
|
Tatjana Stykel is a Russian mathematician who works as a professor of computational mathematics in the Institute of Mathematics of the University of Augsburg in Germany. Her research interests include numerical linear algebra, control theory, and differential-algebraic systems of equations. == Education and career == Stykel earned bachelor's and master's degrees from Novosibirsk State University in 1994 and 1996. After postgraduate study at the Humboldt University of Berlin and Chemnitz University of Technology, she earned a doctorate (Dr. rer. nat.) from Technische Universität Berlin in 2002, and a habilitation from the same university in 2008. Her doctoral dissertation, Analysis and Numerical Solution of Generalized Lyapunov Equations, was supervised by Volker Mehrmann. After completing her doctorate, she was a postdoctoral researcher at the University of Calgary, and then a researcher and guest professor at Technische Universität Berlin from 2003 until 2011, when she took her current position in Augsburg. == Recognition == In 2003, Stykel was one of the Second Prize winners of the Leslie Fox Prize for Numerical Analysis. She won the Richard von Mises Prize of the Gesellschaft für Angewandte Mathematik und Mechanik in 2007. == References == == External links == Home page Tatjana Stykel publications indexed by Google Scholar
|
Wikipedia:Tatsujiro Shimizu#0
|
Tatsujiro Shimizu (清水 辰次郎, Shimizu Tatsujirō, 7 April 1897 – 8 November 1992) was a Japanese mathematician working in the field of complex analysis. He was the founder of the Japanese Association of Mathematical Sciences. == Life and career == Shimizu graduated from the Department of Mathematics, School of Science, Tokyo Imperial University in 1924, and stayed there working as a staff member. In 1932 he moved to Osaka Imperial University and became a professor. He made contributions to the establishment of the Department of Mathematics there. In 1949, Shimizu left Osaka and took up a professorship at Kobe University. After two years, he moved again to Osaka Prefectural University. From 1961 he was a professor at the Tokyo University of Science. In 1948, seeing the difficulty of publishing papers in mathematics, Shimizu started a new journal, Mathematica Japonicae, for papers in pure and applied mathematics in general, with his own funds. The journal served as the foundation of the Japanese Association of Mathematical Sciences. Shimizu remained active in mathematics into old age, giving talks at meetings of the Mathematical Society of Japan until he was 90 years old. He died in Uji City, Kyoto Prefecture, on November 8, 1992, at the age of 95. == Works == === Function theory === The first works of Shimizu treated topics of function theory, in particular the theory of meromorphic functions. A new form of the Nevanlinna characteristic generalised by him (and separately by Ahlfors) is now known as the Ahlfors–Shimizu characteristic. Additionally, with the idea of function groups, he attained a profound result on the construction of the Riemann surface of meromorphic functions. In 1931, as a pioneer in Japan responding to Fatou's study of the theory of iteration of algebraic functions, Shimizu published two papers introducing the subject in Japanese journals.
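In modern notation (a standard textbook formulation, not quoted from Shimizu's original papers), the Ahlfors–Shimizu characteristic of a meromorphic function f can be written as

```latex
T_0(r,f) = \int_0^r \frac{A(t,f)}{t}\,dt, \qquad
A(t,f) = \frac{1}{\pi} \int_{|z|\le t}
  \frac{|f'(z)|^2}{\bigl(1+|f(z)|^2\bigr)^2}\,dx\,dy,
```

where A(t,f) is the area, normalised by π, of the image of the disc |z| ≤ t on the Riemann sphere under f, counted with multiplicity. T₀(r,f) differs from the classical Nevanlinna characteristic T(r,f) only by a bounded term, which is why the two can be used interchangeably in value-distribution theory.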
=== Applied mathematics === After moving to Osaka in 1932, Shimizu became interested in the application of mathematical methods to science and technology. He worked broadly on the existence conditions of limit cycles, numerical analysis and applied analysis (including the solution of ordinary differential equations, numerical solutions and non-linear oscillations), computing machines and devices, as well as artificial intelligence (especially the solution of arithmetic problems by electronic computer). He continued research in these areas throughout his career. He was also involved in operations research and mathematics in the management sciences, as well as probability theory and mathematical statistics. == Notable students == Among his students is Shizuo Kakutani, Osaka University, 1941 == Books == Statistical Machine Computing Method (「統計機械計算法」) Practical Mathematics (「実用数学」) Non-linear Oscillation Theory (「非線形振動論」) Applied Mathematics (「応用数学」) == References ==
|
Wikipedia:Tatyana Shaposhnikova#0
|
Tatyana Olegovna Shaposhnikova (Russian: Татьяна Олеговна Шапошникова, born 1946) is a Russian-born Swedish mathematician. She is best known for her work in the theory of multipliers in function spaces, partial differential operators, and the history of mathematics, part of which was done jointly with Vladimir Maz'ya. She is also a translator of both scientific and literary texts. == Biography == === Academic career === T. O. Shaposhnikova graduated from Leningrad University in 1969. From 1969 to 1972 she was a graduate student at the same university. In 1973 she was awarded the Candidate of Sciences degree. From 1973 to 1990 she worked in the mathematics departments of a number of technical institutes in Leningrad, first as an assistant and then as an associate professor. She lost her job twice because of her contacts with active dissidents, and thus had to change employers. She emigrated to Sweden with her family in 1990. She worked as an associate professor (universitetslektor) at the Department of Mathematics of the University of Linköping from 1 July 1991 to September 2013, held a full professorship at the Department of Mathematics of the Ohio State University from 2004 to 2008, and from 2013 to 2018 held a part-time position at the Department of Mathematics of the Royal Institute of Technology. From 2010 to 2016 she was a member of the European Mathematical Society Ethics Committee. Currently she serves as a member of the editorial boards of the journals Complex Variables and Elliptic Equations and the Eurasian Mathematical Journal. === Honors === In March 2003 Shaposhnikova and Vladimir Maz'ya were awarded the Verdaguer Prize by the French Academy of Sciences for their work resulting in the first scientific biography of Jacques Hadamard. 
In May 2010 she was awarded the Thureus prize by the Royal Society of Sciences in Uppsala "for her outstanding contribution to the theory of partial differential equations and in particular to the theory of multipliers in function spaces". == Work == === Research activity === Shaposhnikova is the author of more than 70 research papers and of four books; her research mainly belongs to the following fields. ==== Function spaces ==== From 1979 on, the theory of multipliers in various spaces of differentiable functions has been the main research theme of her work. In 1995 she found conditions for the boundedness of singular integrals and pseudodifferential operators acting between pairs of Sobolev spaces. In 1989 she showed that multipliers in Bessel potential spaces are traces of multipliers belonging to a certain class of differentiable functions with a weighted mixed norm. A large part of her joint work with Vladimir Maz'ya on the theory of multipliers involves their analytic characterization, trace inequalities and relations between traces and extension of multipliers, relations of Sobolev multipliers and other function spaces, maximal subalgebras of multiplier spaces, estimates of their essential norm and compactness of multipliers. ==== Linear and non-linear PDEs ==== Based on her research on the theory of multipliers, T. Shaposhnikova gave various applications of this theory to the study of solutions of second-order linear and quasilinear elliptic partial differential equations and systems of such equations: this was a consequence of the fact that, in several cases, such solutions can be considered as multipliers in certain spaces of differentiable functions on a given domain (1986, 1987). She described the structure of composition operators in spaces of multipliers between Sobolev spaces and gave applications of those results to semilinear elliptic systems of equations (1987). 
She also showed that multipliers are naturally suited to dealing with the Lp coercivity of the Neumann problem (1989). Various other applications of multipliers, for example to the problem of higher regularity in single and double layer potential theory for Lipschitz domains, to the problem of regularity at the boundary in the Lp-theory of elliptic boundary value problems and to singular integral operators in Sobolev spaces, are summarized in the book (Maz'ya & Shaposhnikova 2009). ==== History of mathematics ==== Her prize-winning book on Jacques Hadamard, coauthored with V. Maz'ya, was published in 1998 jointly by the American Mathematical Society and the London Mathematical Society. An earlier work on the same subject was written by her jointly with E. Polishchuk (1990). Her recent activity in this field includes the paper (Shaposhnikova 2005) telling three stories of scientists who were forced to answer a mathematical question under rather trying circumstances. === Translation and editing activity === Shaposhnikova has translated and edited several mathematical monographs: notable among them are the works by Koshelev et al. (1975) and by Mikhlin (1979), the book on Sobolev spaces by Maz'ya (1985), and the books by Kresin & Maz'ya (2007) and by Maz'ya & Soloviev (2010). However, her translation work is not restricted to monographs: for example, she translated into Russian a play by Lars Gårding, titled "Mathematics, Life and Death", published in the mathematical journal Algebra i Analiz (Алгебра и анализ). Shaposhnikova began translating fiction while still living in Russia. In the 1970s she translated into Russian "The Voyage of the Dawn Treader", "The Silver Chair" and "The Screwtape Letters" by C. S. Lewis. These translations could not be published for ideological reasons and were distributed as samizdat: they first appeared as proper publications only in the mid-1990s, with new reprints appearing regularly. 
In 2005 she began translating Swedish children's books into Russian. Among them are "Kerstin and I" by Astrid Lindgren, "Mechanical Santa Claus" by Sven Nordqvist and two books of the "Loranga" series by Barbro Lindgren. == Selected publications == Maz'ya, V. G.; Shaposhnikova, T. O. (1979), "Traces and extensions of multipliers in the space Wlp", Uspekhi Matematicheskikh Nauk (in Russian), 34 (2 (206)): 205–206, MR 0535721, Zbl 0405.46026. Maz'ya, V. G.; Shaposhnikova, T. O. (1985), Theory of multipliers in spaces of differentiable functions, Monographs and Studies in Mathematics, vol. 23, Boston – London – Melbourne: Pitman Publishing Inc., pp. xiii+344, ISBN 978-0-273-08638-3, MR 0785568, Zbl 0645.46031. Shaposhnikova, T. O. (1986), "Bounded solutions of elliptic equations as multipliers in spaces of differentiable functions", Zapiski Nauchnykh Seminarov LOMI (in Russian), 149: 165–176, MR 0849306, Zbl 0601.35023. Shaposhnikova, T. O. (1986a), "Multiplicative properties of solutions of elliptic equations", Joint sessions of the Petrovskii Seminar on differential equations and mathematical problems of physics and of the Moscow Mathematical Society (ninth meeting, January 20–23, 1986), Uspekhi Matematicheskikh Nauk (in Russian), vol. 41, p. 209. Shaposhnikova, T. O. (1987), "On solvability of quasilinear elliptic equations in spaces of multipliers", Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika (in Russian), 31 (8): 74–81, MR 0917617, Zbl 0701.35041. Shaposhnikova, T. O. (1987a), "The superposition operator in classes of multipliers of Sobolev spaces", Seminar Analysis (in Russian), 1986–1987: 181–190, MR 0941610, Zbl 0647.47043. Shaposhnikova, T. O. (1987b), "On nonlinear differential operators in spaces of multipliers", Joint sessions of the Petrovskii Seminar on differential equations and mathematical problems of physics and of the Moscow Mathematical Society (tenth meeting, 19–22 January 1987), Uspekhi Matematicheskikh Nauk (in Russian), vol. 42, p. 158. 
Shaposhnikova, T. O. (1988), "On coercivity in Lp of the Neumann problem in a domain with nonsmooth boundary", Joint sessions of the Petrovskii Seminar on differential equations and mathematical problems of physics and of the Moscow Mathematical Society (eleventh session, 18–21 January 1988), Uspekhi Matematicheskikh Nauk (in Russian), vol. 42, p. 181. Shaposhnikova, T. O. (1989), "Multipliers in Bessel potential spaces as traces of multipliers in weighted classes", Trudy Tbilisskogo Matematicheskogo Instituta A. Razmadze (in Russian), 88: 59–63, MR 1031711, Zbl 0711.42016. Shaposhnikova, T. O. (1989a), "Traces of multipliers in the space of Bessel potentials", Matematicheskie Zametki (in Russian), 46 (3): 100–109, MR 1032913, Zbl 0694.46028. Shaposhnikova, T. O. (1989b), "Applications of multipliers in Sobolev spaces to Lp–coercivity of the Neumann problem", Doklady Akademii Nauk SSSR (in Russian), 305 (4): 786–789, MR 0998033, Zbl 0727.35042. Polishchuk, E.; Shaposhnikova, T. (1990), Jacques Hadamard. 1865-1963. (Жак Адамар. 1865-1963.), Nauchno-Bibliograficheskaya Literatura (Научно-библиографическая литература) (in Russian), Leningrad: Nauka, p. 255, ISBN 978-5-02-024506-8, Zbl 0718.01030. An earlier biographic work on Jacques Hadamard written by T. Shaposhnikova jointly with E. Polishchuk. Shaposhnikova, T. O. (1995), "On Continuity of Singular Integral Operators in Sobolev Spaces", Mathematica Scandinavica, 76: 85–97, doi:10.7146/math.scand.a-12526, MR 1345090, Zbl 0845.35145. Maz'ya, Vladimir; Shaposhnikova, Tatyana (1998), Jacques Hadamard, a Universal Mathematician, History of Mathematics, vol. 14, Providence, RI and London: American Mathematical Society and London Mathematical Society, pp. xxv+574, ISBN 978-0-8218-0841-2, MR 1611073, Zbl 0906.01031. 
There are also two revised and expanded editions: the French translation Maz'ya, Vladimir; Shaposhnikova, Tatyana (January 2005) [1998], Jacques Hadamard, un mathématicien universel, Sciences & Histoire (in French), Paris: EDP Sciences, p. 554, ISBN 978-2-86883-707-3, and the (further revised and expanded) Russian translation Мазья, В. Г.; Шапошникова, Т. О. (2008) [1998], Жак Адамар Легенда Математики (in Russian), Москва: ИздателЬство МЦНМО, p. 528, ISBN 978-5-94057-083-7 Shaposhnikova, Tatyana (September 2005), "Three high-stakes math exams", The Mathematical Intelligencer, 27 (3): 44–46, doi:10.1007/BF02985838, MR 2162991, S2CID 123024048, Zbl 1173.01323. Maz'ya, Vladimir G.; Shaposhnikova, Tatyana O. (2009) [1985], Theory of Sobolev multipliers. With applications to differential and integral operators, Grundlehren der Mathematischen Wissenschaft, vol. 337, Heidelberg: Springer-Verlag, pp. xiii+609, ISBN 978-3-540-69490-8, MR 2457601, Zbl 1157.46001. Shaposhnikova, Tatyana (2010), "Jacques Hadamard – En universell matematiker och renässansmänniska", Årsbok 2010 (in Swedish), Uppsala: Kungl. Vetenskaps-Societeten i Uppsala, pp. 65–72, ISSN 0348-7849. The "Thuréusföredrag hållet vid prisutdelningsceremonin i Gustavianum" i.e. the Thuréus speech prof. T. Shaposhnikova gave on 31 August 2010 on the occasion of the ceremony for the 2010 prizes awarding by the Royal Society of Sciences in Uppsala. Published in the Society's yearbook, it includes a biography and a description of her research work, which motivated the award: the main theme of the speech though, as the title says (its English translation reads as:-"Jacques Hadamard – A universal mathematician and Renaissance man"), is a biographical sketch of Jacques Hadamard. 
== See also == Function space Multiplier (operator theory) Partial differential equation Potential theory Sobolev space == Notes == == References == === Biographical references === European Mathematical Society (2010), The Founding of the Ethics Committee (PDF), retrieved 30 May 2020. The document on the founding of the committee at the Home page of the European Mathematical Society, including a list of the former members. French Academy of Sciences (2009), Prix Verdaguer (PDF) (in French), archived from the original (PDF) on 31 May 2011, retrieved 8 May 2011. A list of the winners of the Verdaguer Prize in PDF format, including short motivations for the awarding. Shaposhnikova, Tatyana (24 January 2015), Curriculum vitae of Tatyana Shaposhnikova, Department of Mathematics of the Royal Institute of Technology, retrieved 2 February 2016. Sundelöf, Lars-Olof (2010), "Presentation av priser och belönigar år 2010", Årsbok 2010 (in Swedish), Uppsala: Kungl. Vetenskaps-Societeten i Uppsala, pp. 37–45, ISSN 0348-7849. The "Presentation of prizes and awards" speech given by the Secretary of the Royal Society of Sciences in Uppsala, written in the "yearbook 2010", on the occasion of the awarding of the Society prizes to prof. T. Shaposhnikova and to other 2010 winners. === References pertaining to her work === Koshelev, A. I.; Krasnosel'skij, M. A.; Mikhlin, S. G.; Rakovshchik, L. S.; Stet'senko, V. Ya.; Zabrejko, P. P. (1975), Integral equations. A reference text, Monographs and Texts on Pure and Applied Mathematics, Leyden, The Netherlands: Noordhoff International Publishing, pp. XIX+443, ISBN 978-90-286-0393-6, Zbl 0293.45001. Gårding, Lars (2000), "A philosophical dialog. Mathematics, life, and death", Algebra i Analiz (in Russian), 12 (5): 215–224, MR 1812949, Zbl 1076.00502. Gårding, Lars (2001), "A philosophical dialog. 
Mathematics, life, and death", Algebra i Analiz (in Russian), 13 (3): 229–239, MR 1850196, Zbl 1009.00503. Gårding, Lars (2009), "Von Neumann with the Devil", Algebra i Analiz (in Russian), 21 (5): 222–226, MR 2604570, Zbl 1220.00005 Kresin, Gershon; Maz'ya, Vladimir G. (2007), Sharp Real-Part Theorems. A Unified Approach, Lecture Notes in Mathematics, vol. 1903, Berlin–Heidelberg–New York City: Springer-Verlag, pp. xvi+140, ISBN 978-3-540-69573-8, MR 2298774, Zbl 1117.30001. Lewis, Clive Staples (1991), "Покоритель зари, или Плавание на край света (The Voyage of the Dawn Treader)", Хроники Нарнии (The Chronicles of Narnia) (in Russian), Москва: Космополис, pp. 375–482, ISBN 978-5-7008-0015-0. Lewis, Clive Staples (1991a), "Серебряное кресло (The Silver Chair)", Хроники Нарнии (The Chronicles of Narnia) (in Russian), Москва: Космополис, pp. 483–580, ISBN 978-5-7008-0015-0. Lewis, Clive Staples (1991b), Письма Баламута (The Screwtape Letters) (in Russian), Москва: Гнозис Прогресс, p. 432, ISBN 978-5-01-003665-2. Lindgren, Astrid (2008), Черстин и я (Kerstin and I), Внеклассное чтение (in Russian), Москва: АСТ: Астрель, p. 192, ISBN 978-5-17-053236-0. Lindgren, Barbro (2009), Лоранга, Мазарин и Дартаньян (Loranga, Mazarin and D'Artagnan) (in Russian), Москва: Самокат, p. 144, ISBN 978-5-902326-71-7. Maz'ya, Vladimir G. (1985), Sobolev Spaces, Berlin–Heidelberg–New York City: Springer-Verlag, pp. xix+486, ISBN 978-3-540-13589-0, MR 0817985, Zbl 0692.46023 (also published with ISBN 0-387-13589-8). Maz'ya, Vladimir G.; Soloviev, Alexander A. (2010), Boundary Integral Equations on Contours with Peaks, Operator Theory: Advances and Applications, vol. 196, Basel: Birkhäuser Verlag, pp. vii+342, ISBN 978-3-0346-0170-2, MR 2584276, Zbl 1179.45001. Mikhlin, S. G. (1979), Approximation on a rectangular grid with application to finite element methods and other problems, Mechanics: analysis, vol. 
4, The Hague: Martinus Nijhoff Publishers, pp. xi+224, ISBN 978-90-286-0008-9, MR 0545652, Zbl 0466.65053 Nordqvist, Sven (2009), Механический дед мороз, Петсон и Финдус, Москва: Мир Детства Медиа, p. 120, ISBN 978-5-9743-0129-2.
|
Wikipedia:Taylor series#0
|
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century. The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which become generally more accurate as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk). == Definition == The Taylor series of a real or complex-valued function f (x), that is infinitely differentiable at a real or complex number a, is the power series f ( a ) + f ′ ( a ) 1 ! ( x − a ) + f ″ ( a ) 2 ! ( x − a ) 2 + ⋯ = ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n . {\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots =\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}.} Here, n! denotes the factorial of n. The function f(n)(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself and (x − a)0 and 0! 
are both defined to be 1. This series can be written by using sigma notation, as in the right side formula. With a = 0, the Maclaurin series takes the form: f ( 0 ) + f ′ ( 0 ) 1 ! x + f ″ ( 0 ) 2 ! x 2 + ⋯ = ∑ n = 0 ∞ f ( n ) ( 0 ) n ! x n . {\displaystyle f(0)+{\frac {f'(0)}{1!}}x+{\frac {f''(0)}{2!}}x^{2}+\cdots =\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}x^{n}.} == Examples == The Taylor series of any polynomial is the polynomial itself. The Maclaurin series of 1/1 − x is the geometric series 1 + x + x 2 + x 3 + ⋯ . {\displaystyle 1+x+x^{2}+x^{3}+\cdots .} So, by substituting x for 1 − x, the Taylor series of 1/x at a = 1 is 1 − ( x − 1 ) + ( x − 1 ) 2 − ( x − 1 ) 3 + ⋯ . {\displaystyle 1-(x-1)+(x-1)^{2}-(x-1)^{3}+\cdots .} By integrating the above Maclaurin series, we find the Maclaurin series of ln(1 − x), where ln denotes the natural logarithm: − x − 1 2 x 2 − 1 3 x 3 − 1 4 x 4 − ⋯ . {\displaystyle -x-{\tfrac {1}{2}}x^{2}-{\tfrac {1}{3}}x^{3}-{\tfrac {1}{4}}x^{4}-\cdots .} The corresponding Taylor series of ln x at a = 1 is ( x − 1 ) − 1 2 ( x − 1 ) 2 + 1 3 ( x − 1 ) 3 − 1 4 ( x − 1 ) 4 + ⋯ , {\displaystyle (x-1)-{\tfrac {1}{2}}(x-1)^{2}+{\tfrac {1}{3}}(x-1)^{3}-{\tfrac {1}{4}}(x-1)^{4}+\cdots ,} and more generally, the corresponding Taylor series of ln x at an arbitrary nonzero point a is: ln a + 1 a ( x − a ) − 1 a 2 ( x − a ) 2 2 + ⋯ . {\displaystyle \ln a+{\frac {1}{a}}(x-a)-{\frac {1}{a^{2}}}{\frac {\left(x-a\right)^{2}}{2}}+\cdots .} The Maclaurin series of the exponential function ex is ∑ n = 0 ∞ x n n ! = x 0 0 ! + x 1 1 ! + x 2 2 ! + x 3 3 ! + x 4 4 ! + x 5 5 ! + ⋯ = 1 + x + x 2 2 + x 3 6 + x 4 24 + x 5 120 + ⋯ . 
{\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}&={\frac {x^{0}}{0!}}+{\frac {x^{1}}{1!}}+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\frac {x^{5}}{5!}}+\cdots \\&=1+x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+{\frac {x^{4}}{24}}+{\frac {x^{5}}{120}}+\cdots .\end{aligned}}} The above expansion holds because the derivative of ex with respect to x is also ex, and e0 equals 1. This leaves the terms (x − 0)n in the numerator and n! in the denominator of each term in the infinite sum. == History == The ancient Greek philosopher Zeno of Elea considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility; the result was Zeno's paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up by Archimedes, as it had been prior to Aristotle by the Presocratic Atomist Democritus. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result. Liu Hui independently employed a similar method a few centuries later. In the 14th century, the earliest examples of specific Taylor series (but not the general method) were given by Indian mathematician Madhava of Sangamagrama. Though no record of his work survives, writings of his followers in the Kerala school of astronomy and mathematics suggest that he found the Taylor series for the trigonometric functions of sine, cosine, and arctangent (see Madhava series). During the following two centuries his followers developed further series expansions and rational approximations. 
In late 1670, James Gregory was shown in a letter from John Collins several Maclaurin series ( sin x , {\textstyle \sin x,} cos x , {\textstyle \cos x,} arcsin x , {\textstyle \arcsin x,} and x cot x {\textstyle x\cot x} ) derived by Isaac Newton, and told that Newton had developed a general method for expanding functions in series. Newton had in fact used a cumbersome method involving long division of series and term-by-term integration, but Gregory did not know it and set out to discover a general method for himself. In early 1671 Gregory discovered something like the general Maclaurin series and sent a letter to Collins including series for arctan x , {\textstyle \arctan x,} tan x , {\textstyle \tan x,} sec x , {\textstyle \sec x,} ln sec x {\textstyle \ln \,\sec x} (the integral of tan {\displaystyle \tan } ), ln tan 1 2 ( 1 2 π + x ) {\textstyle \ln \,\tan {\tfrac {1}{2}}{{\bigl (}{\tfrac {1}{2}}\pi +x{\bigr )}}} (the integral of sec, the inverse Gudermannian function), arcsec ( 2 e x ) , {\textstyle \operatorname {arcsec} {\bigl (}{\sqrt {2}}e^{x}{\bigr )},} and 2 arctan e x − 1 2 π {\textstyle 2\arctan e^{x}-{\tfrac {1}{2}}\pi } (the Gudermannian function). However, thinking that he had merely redeveloped a method by Newton, Gregory never described how he obtained these series, and it can only be inferred that he understood the general method by examining scratch work he had scribbled on the back of another letter from 1671. In 1691–1692, Isaac Newton wrote down an explicit statement of the Taylor and Maclaurin series in an unpublished version of his work De Quadratura Curvarum. However, this work was never completed and the relevant sections were omitted from the portions published in 1704 under the title Tractatus de Quadratura Curvarum. It was not until 1715 that a general method for constructing these series for all functions for which they exist was finally published by Brook Taylor, after whom the series are now named. 
The Maclaurin series was named after Colin Maclaurin, a Scottish mathematician, who published a special case of the Taylor result in the mid-18th century. == Analytic functions == If f (x) is given by a convergent power series in an open disk centred at b in the complex plane (or an interval in the real line), it is said to be analytic in this region. Thus for x in this region, f is given by a convergent power series f ( x ) = ∑ n = 0 ∞ a n ( x − b ) n . {\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-b)^{n}.} Differentiating by x the above formula n times, then setting x = b gives: f ( n ) ( b ) n ! = a n {\displaystyle {\frac {f^{(n)}(b)}{n!}}=a_{n}} and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disk centered at b if and only if its Taylor series converges to the value of the function at each point of the disk. If f (x) is equal to the sum of its Taylor series for all x in the complex plane, it is called entire. The polynomials, exponential function ex, and the trigonometric functions sine and cosine, are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if x is far from b. That is, the Taylor series diverges at x if the distance between x and b is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point. Uses of the Taylor series for analytic functions include: The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included. Differentiation and integration of power series can be performed term by term and is hence particularly easy. 
An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available. The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm). Algebraic operations can be done readily on the power series representation; for instance, Euler's formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis. Approximations using the first few terms of a Taylor series can make otherwise unsolvable problems possible for a restricted domain; this approach is often used in physics. == Approximation error and convergence == Pictured is an accurate approximation of sin x around the point x = 0. The pink curve is a polynomial of degree seven: sin x ≈ x − x 3 3 ! + x 5 5 ! − x 7 7 ! . {\displaystyle \sin {x}\approx x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}.\!} The error in this approximation is no more than |x|9 / 9!. For a full cycle centered at the origin (−π < x < π) the error is less than 0.08215. In particular, for −1 < x < 1, the error is less than 0.000003. In contrast, also shown is a picture of the natural logarithm function ln(1 + x) and some of its Taylor polynomials around a = 0. These approximations converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. The error incurred in approximating a function by its nth-degree Taylor polynomial is called the remainder or residual and is denoted by the function Rn(x). Taylor's theorem can be used to obtain a bound on the size of the remainder. In general, Taylor series need not be convergent at all. 
In fact, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function f does converge, its limit need not be equal to the value of the function f (x). For example, the function f ( x ) = { e − 1 / x 2 if x ≠ 0 0 if x = 0 {\displaystyle f(x)={\begin{cases}e^{-1/x^{2}}&{\text{if }}x\neq 0\\[3mu]0&{\text{if }}x=0\end{cases}}} is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f (x) about x = 0 is identically zero. However, f (x) is not the zero function, so does not equal its Taylor series around the origin. Thus, f (x) is an example of a non-analytic smooth function. In real analysis, this example shows that there are infinitely differentiable functions f (x) whose Taylor series are not equal to f (x) even if they converge. By contrast, the holomorphic functions studied in complex analysis always possess a convergent Taylor series, and even the Taylor series of meromorphic functions, which might have singularities, never converge to a value different from the function itself. The complex function e−1/z2, however, does not approach 0 when z approaches 0 along the imaginary axis, so it is not continuous in the complex plane and its Taylor series is undefined at 0. More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma. As a result, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere. A function cannot be written as a Taylor series centred at a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable x; see Laurent series. 
For example, f (x) = e−1/x2 can be written as a Laurent series. === Generalization === There is a generalization of the Taylor series that does converge to the value of the function itself for any bounded continuous function on (0,∞), obtained by using the calculus of finite differences. Specifically, one has the following theorem, due to Einar Hille: for any t > 0, lim h → 0 + ∑ n = 0 ∞ t n n ! Δ h n f ( a ) h n = f ( a + t ) . {\displaystyle \lim _{h\to 0^{+}}\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}{\frac {\Delta _{h}^{n}f(a)}{h^{n}}}=f(a+t).} Here Δnh is the nth finite difference operator with step size h. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense the formula generalizes the usual Taylor series. In general, for any infinite sequence ai, the following power series identity holds: ∑ n = 0 ∞ u n n ! Δ n a i = e − u ∑ j = 0 ∞ u j j ! a i + j . {\displaystyle \sum _{n=0}^{\infty }{\frac {u^{n}}{n!}}\Delta ^{n}a_{i}=e^{-u}\sum _{j=0}^{\infty }{\frac {u^{j}}{j!}}a_{i+j}.} So in particular, f ( a + t ) = lim h → 0 + e − t / h ∑ j = 0 ∞ f ( a + j h ) ( t / h ) j j ! . {\displaystyle f(a+t)=\lim _{h\to 0^{+}}e^{-t/h}\sum _{j=0}^{\infty }f(a+jh){\frac {(t/h)^{j}}{j!}}.} The series on the right is the expected value of f (a + X), where X is a Poisson-distributed random variable that takes the value jh with probability e−t/h·(t/h)j/j!. Hence, f ( a + t ) = lim h → 0 + ∫ − ∞ ∞ f ( a + x ) d P t / h , h ( x ) . {\displaystyle f(a+t)=\lim _{h\to 0^{+}}\int _{-\infty }^{\infty }f(a+x)dP_{t/h,h}(x).} The law of large numbers implies that the identity holds. == List of Maclaurin series of some common functions == Several important Maclaurin series expansions follow. All these expansions are valid for complex arguments x. 
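As a numerical illustration of Hille's finite-difference formula from the generalization above, the Poisson-weighted sum e−t/h Σj f(a + jh)(t/h)j/j! can be evaluated directly; the sketch below is illustrative only (the choice f = sin, the step size h, and the cutoff jmax are not from the article).

```python
import math

def hille_approx(f, a, t, h, jmax=1000):
    # Evaluate e^{-t/h} * sum_j f(a + j*h) * (t/h)^j / j!, the
    # Poisson-weighted sum that tends to f(a + t) as h -> 0+.
    lam = t / h
    weight = math.exp(-lam)      # Poisson weight for j = 0
    total = weight * f(a)
    for j in range(1, jmax):
        weight *= lam / j        # update the weight iteratively to avoid overflow
        total += weight * f(a + j * h)
    return total

# With f = sin, a = 0, t = 1 and a small step h, the sum is close to sin(1).
approx = hille_approx(math.sin, 0.0, 1.0, h=0.01)
exact = math.sin(1.0)
```

Shrinking h further drives the sum closer to f(a + t), in line with the limit statement of the theorem.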
=== Exponential function === The exponential function e x {\displaystyle e^{x}} (with base e) has Maclaurin series e x = ∑ n = 0 ∞ x n n ! = 1 + x + x 2 2 ! + x 3 3 ! + ⋯ . {\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots .} It converges for all x. The exponential generating function of the Bell numbers is the exponential function of the predecessor of the exponential function: exp ( exp x − 1 ) = ∑ n = 0 ∞ B n n ! x n {\displaystyle \exp(\exp {x}-1)=\sum _{n=0}^{\infty }{\frac {B_{n}}{n!}}x^{n}} === Natural logarithm === The natural logarithm (with base e) has Maclaurin series ln ( 1 − x ) = − ∑ n = 1 ∞ x n n = − x − x 2 2 − x 3 3 − ⋯ , ln ( 1 + x ) = ∑ n = 1 ∞ ( − 1 ) n + 1 x n n = x − x 2 2 + x 3 3 − ⋯ . {\displaystyle {\begin{aligned}\ln(1-x)&=-\sum _{n=1}^{\infty }{\frac {x^{n}}{n}}=-x-{\frac {x^{2}}{2}}-{\frac {x^{3}}{3}}-\cdots ,\\\ln(1+x)&=\sum _{n=1}^{\infty }(-1)^{n+1}{\frac {x^{n}}{n}}=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-\cdots .\end{aligned}}} The last series is known as Mercator series, named after Nicholas Mercator (since it was published in his 1668 treatise Logarithmotechnia). Both of these series converge for | x | < 1 {\displaystyle |x|<1} . (In addition, the series for ln(1 − x) converges for x = −1, and the series for ln(1 + x) converges for x = 1.) === Geometric series === The geometric series and its derivatives have Maclaurin series 1 1 − x = ∑ n = 0 ∞ x n 1 ( 1 − x ) 2 = ∑ n = 1 ∞ n x n − 1 1 ( 1 − x ) 3 = ∑ n = 2 ∞ ( n − 1 ) n 2 x n − 2 . {\displaystyle {\begin{aligned}{\frac {1}{1-x}}&=\sum _{n=0}^{\infty }x^{n}\\{\frac {1}{(1-x)^{2}}}&=\sum _{n=1}^{\infty }nx^{n-1}\\{\frac {1}{(1-x)^{3}}}&=\sum _{n=2}^{\infty }{\frac {(n-1)n}{2}}x^{n-2}.\end{aligned}}} All are convergent for | x | < 1 {\displaystyle |x|<1} . These are special cases of the binomial series given in the next section. 
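The logarithm series above can be summed term by term; the following minimal sketch evaluates the Mercator series for ln(1 + x) inside its interval of convergence (the number of terms and the test point are arbitrary choices, not from the article).

```python
import math

def ln1p_series(x, n_terms=200):
    # Partial sum of the Mercator series ln(1+x) = x - x^2/2 + x^3/3 - ...,
    # convergent for -1 < x <= 1.
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, n_terms + 1))

approx = ln1p_series(0.5)    # well inside |x| < 1, so convergence is fast
exact = math.log1p(0.5)
```

For |x| > 1 the partial sums grow without bound, matching the stated interval of convergence.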
=== Binomial series === The binomial series is the power series ( 1 + x ) α = ∑ n = 0 ∞ ( α n ) x n {\displaystyle (1+x)^{\alpha }=\sum _{n=0}^{\infty }{\binom {\alpha }{n}}x^{n}} whose coefficients are the generalized binomial coefficients ( α n ) = ∏ k = 1 n α − k + 1 k = α ( α − 1 ) ⋯ ( α − n + 1 ) n ! . {\displaystyle {\binom {\alpha }{n}}=\prod _{k=1}^{n}{\frac {\alpha -k+1}{k}}={\frac {\alpha (\alpha -1)\cdots (\alpha -n+1)}{n!}}.} (If n = 0, this product is an empty product and has value 1.) It converges for | x | < 1 {\displaystyle |x|<1} for any real or complex number α. When α = −1, this is essentially the infinite geometric series mentioned in the previous section. The special cases α = 1/2 and α = −1/2 give the square root function and its inverse: ( 1 + x ) 1 2 = 1 + 1 2 x − 1 8 x 2 + 1 16 x 3 − 5 128 x 4 + 7 256 x 5 − ⋯ = ∑ n = 0 ∞ ( − 1 ) n − 1 ( 2 n ) ! 4 n ( n ! ) 2 ( 2 n − 1 ) x n , ( 1 + x ) − 1 2 = 1 − 1 2 x + 3 8 x 2 − 5 16 x 3 + 35 128 x 4 − 63 256 x 5 + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! 4 n ( n ! ) 2 x n . {\displaystyle {\begin{aligned}(1+x)^{\frac {1}{2}}&=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+{\frac {7}{256}}x^{5}-\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n-1}(2n)!}{4^{n}(n!)^{2}(2n-1)}}x^{n},\\(1+x)^{-{\frac {1}{2}}}&=1-{\frac {1}{2}}x+{\frac {3}{8}}x^{2}-{\frac {5}{16}}x^{3}+{\frac {35}{128}}x^{4}-{\frac {63}{256}}x^{5}+\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}}}x^{n}.\end{aligned}}} When only the linear term is retained, this simplifies to the binomial approximation. === Trigonometric functions === The usual trigonometric functions and their inverses have the following Maclaurin series: sin x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 = x − x 3 3 ! + x 5 5 ! − ⋯ for all x cos x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n = 1 − x 2 2 ! + x 4 4 ! − ⋯ for all x tan x = ∑ n = 1 ∞ B 2 n ( − 4 ) n ( 1 − 4 n ) ( 2 n ) ! 
x 2 n − 1 = x + x 3 3 + 2 x 5 15 + ⋯ for | x | < π 2 sec x = ∑ n = 0 ∞ ( − 1 ) n E 2 n ( 2 n ) ! x 2 n = 1 + x 2 2 + 5 x 4 24 + ⋯ for | x | < π 2 arcsin x = ∑ n = 0 ∞ ( 2 n ) ! 4 n ( n ! ) 2 ( 2 n + 1 ) x 2 n + 1 = x + x 3 6 + 3 x 5 40 + ⋯ for | x | ≤ 1 arccos x = π 2 − arcsin x = π 2 − ∑ n = 0 ∞ ( 2 n ) ! 4 n ( n ! ) 2 ( 2 n + 1 ) x 2 n + 1 = π 2 − x − x 3 6 − 3 x 5 40 − ⋯ for | x | ≤ 1 arctan x = ∑ n = 0 ∞ ( − 1 ) n 2 n + 1 x 2 n + 1 = x − x 3 3 + x 5 5 − ⋯ for | x | ≤ 1 , x ≠ ± i {\displaystyle {\begin{aligned}\sin x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}&&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots &&{\text{for all }}x\\[6pt]\cos x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}&&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots &&{\text{for all }}x\\[6pt]\tan x&=\sum _{n=1}^{\infty }{\frac {B_{2n}(-4)^{n}\left(1-4^{n}\right)}{(2n)!}}x^{2n-1}&&=x+{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\sec x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}&&=1+{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\arcsin x&=\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x+{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}+\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arccos x&={\frac {\pi }{2}}-\arcsin x\\&={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&={\frac {\pi }{2}}-x-{\frac {x^{3}}{6}}-{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arctan x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}x^{2n+1}&&=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm i\end{aligned}}} All angles are expressed in radians. The numbers Bk appearing in the expansions of tan x are the Bernoulli numbers. The Ek in the expansion of sec x are Euler numbers. 
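The differing convergence regions above can be observed numerically: the sine series converges rapidly for any fixed x, while the arctangent series converges only slowly at the endpoint x = 1. A sketch (function names are illustrative):

```python
import math

def sin_series(x, terms=20):
    # sin x = x - x^3/3! + x^5/5! - ...; successive terms satisfy
    # term_{n+1} = term_n * (-x^2) / ((2n+2)(2n+3))
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def arctan_series(x, terms=100000):
    # arctan x = x - x^3/3 + x^5/5 - ...
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

print(abs(sin_series(1.2) - math.sin(1.2)))   # near machine precision
print(abs(4 * arctan_series(1.0) - math.pi))  # ~1e-5: slow at x = 1
```

At x = 1 the arctangent series is the Leibniz formula for π/4, whose alternating-series error after N terms is on the order of 1/N.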
=== Hyperbolic functions === The hyperbolic functions have Maclaurin series closely related to the series for the corresponding trigonometric functions: sinh x = ∑ n = 0 ∞ x 2 n + 1 ( 2 n + 1 ) ! = x + x 3 3 ! + x 5 5 ! + ⋯ for all x cosh x = ∑ n = 0 ∞ x 2 n ( 2 n ) ! = 1 + x 2 2 ! + x 4 4 ! + ⋯ for all x tanh x = ∑ n = 1 ∞ B 2 n 4 n ( 4 n − 1 ) ( 2 n ) ! x 2 n − 1 = x − x 3 3 + 2 x 5 15 − 17 x 7 315 + ⋯ for | x | < π 2 arsinh x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! 4 n ( n ! ) 2 ( 2 n + 1 ) x 2 n + 1 = x − x 3 6 + 3 x 5 40 − ⋯ for | x | ≤ 1 artanh x = ∑ n = 0 ∞ x 2 n + 1 2 n + 1 = x + x 3 3 + x 5 5 + ⋯ for | x | ≤ 1 , x ≠ ± 1 {\displaystyle {\begin{aligned}\sinh x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}&&=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+\cdots &&{\text{for all }}x\\[6pt]\cosh x&=\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}&&=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+\cdots &&{\text{for all }}x\\[6pt]\tanh x&=\sum _{n=1}^{\infty }{\frac {B_{2n}4^{n}\left(4^{n}-1\right)}{(2n)!}}x^{2n-1}&&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\operatorname {arsinh} x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x-{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\operatorname {artanh} x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{2n+1}}&&=x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}+\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm 1\end{aligned}}} The numbers Bk appearing in the series for tanh x are the Bernoulli numbers. 
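Since the Bernoulli numbers Bk satisfy the recurrence Σ_{k=0}^{n−1} C(n+1, k) Bk = −(n+1) Bn, the coefficient formula for tanh x above can be checked with exact rational arithmetic. A sketch (helper names are illustrative):

```python
from fractions import Fraction as F
from math import comb, factorial

def bernoulli(m):
    # B_0 .. B_m from the recurrence sum_{k=0}^{n-1} C(n+1, k) B_k = -(n+1) B_n
    B = [F(1)]
    for n in range(1, m + 1):
        s = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append(F(-s, n + 1))
    return B

B = bernoulli(6)  # values 1, -1/2, 1/6, 0, -1/30, 0, 1/42
# coefficient of x^{2n-1} in tanh x: B_{2n} 4^n (4^n - 1) / (2n)!
coeffs = [B[2 * n] * F(4 ** n * (4 ** n - 1), factorial(2 * n))
          for n in range(1, 4)]
print(coeffs)  # coefficients 1, -1/3, 2/15: tanh x = x - x^3/3 + 2x^5/15 - ...
```

The same recurrence also yields the coefficients in the series for tan x, which uses the factor (−4)^n(1 − 4^n) instead.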
=== Polylogarithmic functions === The polylogarithms have these defining identities: Li 2 ( x ) = ∑ n = 1 ∞ 1 n 2 x n Li 3 ( x ) = ∑ n = 1 ∞ 1 n 3 x n {\displaystyle {\begin{aligned}{\text{Li}}_{2}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}x^{n}\\{\text{Li}}_{3}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{3}}}x^{n}\end{aligned}}} The Legendre chi functions are defined as follows: χ 2 ( x ) = ∑ n = 0 ∞ 1 ( 2 n + 1 ) 2 x 2 n + 1 χ 3 ( x ) = ∑ n = 0 ∞ 1 ( 2 n + 1 ) 3 x 2 n + 1 {\displaystyle {\begin{aligned}\chi _{2}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{2}}}x^{2n+1}\\\chi _{3}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}} The inverse tangent integrals are defined as follows: Ti 2 ( x ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) 2 x 2 n + 1 Ti 3 ( x ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) 3 x 2 n + 1 {\displaystyle {\begin{aligned}{\text{Ti}}_{2}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{2}}}x^{2n+1}\\{\text{Ti}}_{3}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}} These functions are of great importance in statistical thermodynamics. === Elliptic functions === The complete elliptic integrals of the first kind K and of the second kind E can be defined as follows: 2 π K ( x ) = ∑ n = 0 ∞ [ ( 2 n ) ! ] 2 16 n ( n ! ) 4 x 2 n 2 π E ( x ) = ∑ n = 0 ∞ [ ( 2 n ) ! ] 2 ( 1 − 2 n ) 16 n ( n !
) 4 x 2 n {\displaystyle {\begin{aligned}{\frac {2}{\pi }}K(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{16^{n}(n!)^{4}}}x^{2n}\\{\frac {2}{\pi }}E(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{(1-2n)16^{n}(n!)^{4}}}x^{2n}\end{aligned}}} The Jacobi theta functions are fundamental to the theory of elliptic modular functions, and they have these Taylor series: ϑ 00 ( x ) = 1 + 2 ∑ n = 1 ∞ x n 2 ϑ 01 ( x ) = 1 + 2 ∑ n = 1 ∞ ( − 1 ) n x n 2 {\displaystyle {\begin{aligned}\vartheta _{00}(x)&=1+2\sum _{n=1}^{\infty }x^{n^{2}}\\\vartheta _{01}(x)&=1+2\sum _{n=1}^{\infty }(-1)^{n}x^{n^{2}}\end{aligned}}} The regular partition number sequence P(n) has the generating function: ϑ 00 ( x ) − 1 / 6 ϑ 01 ( x ) − 2 / 3 [ ϑ 00 ( x ) 4 − ϑ 01 ( x ) 4 16 x ] − 1 / 24 = ∑ n = 0 ∞ P ( n ) x n = ∏ k = 1 ∞ 1 1 − x k {\displaystyle \vartheta _{00}(x)^{-1/6}\vartheta _{01}(x)^{-2/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{-1/24}=\sum _{n=0}^{\infty }P(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{k}}}} The strict partition number sequence Q(n) has the generating function: ϑ 00 ( x ) 1 / 6 ϑ 01 ( x ) − 1 / 3 [ ϑ 00 ( x ) 4 − ϑ 01 ( x ) 4 16 x ] 1 / 24 = ∑ n = 0 ∞ Q ( n ) x n = ∏ k = 1 ∞ 1 1 − x 2 k − 1 {\displaystyle \vartheta _{00}(x)^{1/6}\vartheta _{01}(x)^{-1/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{1/24}=\sum _{n=0}^{\infty }Q(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{2k-1}}}} == Calculation of Taylor series == Several methods exist for the calculation of Taylor series of a large number of functions. One can attempt to use the definition of the Taylor series, though this often requires generalizing the form of the coefficients according to a readily apparent pattern. Alternatively, one can use manipulations such as substitution, multiplication or division, addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series.
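The substitution method just mentioned can be mechanised with exact rational arithmetic: represent each series by its list of coefficients, multiply truncated polynomials, and substitute one series into the other. The sketch below (helper names are illustrative) composes ln(1 + u) with u = cos x − 1, anticipating the first worked example that follows:

```python
from fractions import Fraction as F

def poly_mul(p, q, deg):
    # multiply coefficient lists (index = power), truncating beyond deg
    r = [F(0)] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= deg:
                r[i + j] += a * b
    return r

DEG = 7
# u = cos x - 1 = -x^2/2 + x^4/24 - x^6/720 + O(x^8)
u = [F(0), F(0), F(-1, 2), F(0), F(1, 24), F(0), F(-1, 720), F(0)]
# ln(1 + u) = u - u^2/2 + u^3/3 - ...; u starts at degree 2,
# so three terms already determine everything up to degree 7
result = [F(0)] * (DEG + 1)
power = [F(1)] + [F(0)] * DEG            # running power u^n
for n in range(1, 4):
    power = poly_mul(power, u, DEG)      # now holds u^n
    result = [c + F((-1) ** (n + 1), n) * p for c, p in zip(result, power)]
print(result)  # coefficients 0, 0, -1/2, 0, -1/12, 0, -1/45, 0
```

The nonzero entries reproduce the expansion ln(cos x) = −x²/2 − x⁴/12 − x⁶/45 + O(x⁸) derived by hand in the first example below.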
In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. Particularly convenient is the use of computer algebra systems to calculate Taylor series. === First example === In order to compute the 7th degree Maclaurin polynomial for the function f ( x ) = ln ( cos x ) , x ∈ ( − π 2 , π 2 ) , {\displaystyle f(x)=\ln(\cos x),\quad x\in {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )},} one may first rewrite the function as f ( x ) = ln ( 1 + ( cos x − 1 ) ) , {\displaystyle f(x)={\ln }{\bigl (}1+(\cos x-1){\bigr )},} the composition of two functions x ↦ ln ( 1 + x ) {\displaystyle x\mapsto \ln(1+x)} and x ↦ cos x − 1. {\displaystyle x\mapsto \cos x-1.} The Taylor series for the natural logarithm is (using big O notation) ln ( 1 + x ) = x − x 2 2 + x 3 3 + O ( x 4 ) {\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}+O{\left(x^{4}\right)}} and for the cosine function cos x − 1 = − x 2 2 + x 4 24 − x 6 720 + O ( x 8 ) . {\displaystyle \cos x-1=-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-{\frac {x^{6}}{720}}+O{\left(x^{8}\right)}.} The first several terms from the second series can be substituted into each term of the first series. Because the first term in the second series has degree 2, three terms of the first series suffice to give a 7th-degree polynomial: f ( x ) = ln ( 1 + ( cos x − 1 ) ) = ( cos x − 1 ) − 1 2 ( cos x − 1 ) 2 + 1 3 ( cos x − 1 ) 3 + O ( ( cos x − 1 ) 4 ) = − x 2 2 − x 4 12 − x 6 45 + O ( x 8 ) . {\displaystyle {\begin{aligned}f(x)&=\ln {\bigl (}1+(\cos x-1){\bigr )}\\&=(\cos x-1)-{\tfrac {1}{2}}(\cos x-1)^{2}+{\tfrac {1}{3}}(\cos x-1)^{3}+O{\left((\cos x-1)^{4}\right)}\\&=-{\frac {x^{2}}{2}}-{\frac {x^{4}}{12}}-{\frac {x^{6}}{45}}+O{\left(x^{8}\right)}.\end{aligned}}\!} Since the cosine is an even function, the coefficients for all the odd powers are zero. === Second example === Suppose we want the Taylor series at 0 of the function g ( x ) = e x cos x . 
{\displaystyle g(x)={\frac {e^{x}}{\cos x}}.\!} The Taylor series for the exponential function is e x = 1 + x + x 2 2 ! + x 3 3 ! + x 4 4 ! + ⋯ , {\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots ,} and the series for cosine is cos x = 1 − x 2 2 ! + x 4 4 ! − ⋯ . {\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots .} Assume the series for their quotient is e x cos x = c 0 + c 1 x + c 2 x 2 + c 3 x 3 + c 4 x 4 + ⋯ {\displaystyle {\frac {e^{x}}{\cos x}}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots } Multiplying both sides by the denominator cos x {\displaystyle \cos x} and then expanding it as a series yields e x = ( c 0 + c 1 x + c 2 x 2 + c 3 x 3 + c 4 x 4 + ⋯ ) ( 1 − x 2 2 ! + x 4 4 ! − ⋯ ) = c 0 + c 1 x + ( c 2 − c 0 2 ) x 2 + ( c 3 − c 1 2 ) x 3 + ( c 4 − c 2 2 + c 0 4 ! ) x 4 + ⋯ {\displaystyle {\begin{aligned}e^{x}&=\left(c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots \right)\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots \right)\\[5mu]&=c_{0}+c_{1}x+\left(c_{2}-{\frac {c_{0}}{2}}\right)x^{2}+\left(c_{3}-{\frac {c_{1}}{2}}\right)x^{3}+\left(c_{4}-{\frac {c_{2}}{2}}+{\frac {c_{0}}{4!}}\right)x^{4}+\cdots \end{aligned}}} Comparing the coefficients of g ( x ) cos x {\displaystyle g(x)\cos x} with the coefficients of e x , {\displaystyle e^{x},} c 0 = 1 , c 1 = 1 , c 2 − 1 2 c 0 = 1 2 , c 3 − 1 2 c 1 = 1 6 , c 4 − 1 2 c 2 + 1 24 c 0 = 1 24 , … . {\displaystyle c_{0}=1,\ \ c_{1}=1,\ \ c_{2}-{\tfrac {1}{2}}c_{0}={\tfrac {1}{2}},\ \ c_{3}-{\tfrac {1}{2}}c_{1}={\tfrac {1}{6}},\ \ c_{4}-{\tfrac {1}{2}}c_{2}+{\tfrac {1}{24}}c_{0}={\tfrac {1}{24}},\ \ldots .} The coefficients c i {\displaystyle c_{i}} of the series for g ( x ) {\displaystyle g(x)} can thus be computed one at a time, amounting to long division of the series for e x {\displaystyle e^{x}} and cos x {\displaystyle \cos x} : e x cos x = 1 + x + x 2 + 2 3 x 3 + 1 2 x 4 + ⋯ . 
{\displaystyle {\frac {e^{x}}{\cos x}}=1+x+x^{2}+{\tfrac {2}{3}}x^{3}+{\tfrac {1}{2}}x^{4}+\cdots .} === Third example === Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. In order to expand (1 + x)ex as a Taylor series in x, we use the known Taylor series of function ex: e x = ∑ n = 0 ∞ x n n ! = 1 + x + x 2 2 ! + x 3 3 ! + x 4 4 ! + ⋯ . {\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots .} Thus, ( 1 + x ) e x = e x + x e x = ∑ n = 0 ∞ x n n ! + ∑ n = 0 ∞ x n + 1 n ! = 1 + ∑ n = 1 ∞ x n n ! + ∑ n = 0 ∞ x n + 1 n ! = 1 + ∑ n = 1 ∞ x n n ! + ∑ n = 1 ∞ x n ( n − 1 ) ! = 1 + ∑ n = 1 ∞ ( 1 n ! + 1 ( n − 1 ) ! ) x n = 1 + ∑ n = 1 ∞ n + 1 n ! x n = ∑ n = 0 ∞ n + 1 n ! x n . {\displaystyle {\begin{aligned}(1+x)e^{x}&=e^{x}+xe^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}\\&=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=1}^{\infty }{\frac {x^{n}}{(n-1)!}}=1+\sum _{n=1}^{\infty }\left({\frac {1}{n!}}+{\frac {1}{(n-1)!}}\right)x^{n}\\&=1+\sum _{n=1}^{\infty }{\frac {n+1}{n!}}x^{n}\\&=\sum _{n=0}^{\infty }{\frac {n+1}{n!}}x^{n}.\end{aligned}}} == Taylor series as definitions == Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere, and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series. Taylor series are used to define functions and "operators" in diverse areas of mathematics. 
In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may extend analytic functions to sets of matrices and operators, such as the matrix exponential or matrix logarithm. In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution. == Taylor series in several variables == The Taylor series may also be generalized to functions of more than one variable with T ( x 1 , … , x d ) = ∑ n 1 = 0 ∞ ⋯ ∑ n d = 0 ∞ ( x 1 − a 1 ) n 1 ⋯ ( x d − a d ) n d n 1 ! ⋯ n d ! ( ∂ n 1 + ⋯ + n d f ∂ x 1 n 1 ⋯ ∂ x d n d ) ( a 1 , … , a d ) = f ( a 1 , … , a d ) + ∑ j = 1 d ∂ f ( a 1 , … , a d ) ∂ x j ( x j − a j ) + 1 2 ! ∑ j = 1 d ∑ k = 1 d ∂ 2 f ( a 1 , … , a d ) ∂ x j ∂ x k ( x j − a j ) ( x k − a k ) + 1 3 ! ∑ j = 1 d ∑ k = 1 d ∑ l = 1 d ∂ 3 f ( a 1 , … , a d ) ∂ x j ∂ x k ∂ x l ( x j − a j ) ( x k − a k ) ( x l − a l ) + ⋯ {\displaystyle {\begin{aligned}T(x_{1},\ldots ,x_{d})&=\sum _{n_{1}=0}^{\infty }\cdots \sum _{n_{d}=0}^{\infty }{\frac {(x_{1}-a_{1})^{n_{1}}\cdots (x_{d}-a_{d})^{n_{d}}}{n_{1}!\cdots n_{d}!}}\,\left({\frac {\partial ^{n_{1}+\cdots +n_{d}}f}{\partial x_{1}^{n_{1}}\cdots \partial x_{d}^{n_{d}}}}\right)(a_{1},\ldots ,a_{d})\\&=f(a_{1},\ldots ,a_{d})+\sum _{j=1}^{d}{\frac {\partial f(a_{1},\ldots ,a_{d})}{\partial x_{j}}}(x_{j}-a_{j})+{\frac {1}{2!}}\sum _{j=1}^{d}\sum _{k=1}^{d}{\frac {\partial ^{2}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}}}(x_{j}-a_{j})(x_{k}-a_{k})\\&\qquad \qquad +{\frac {1}{3!}}\sum _{j=1}^{d}\sum _{k=1}^{d}\sum _{l=1}^{d}{\frac {\partial ^{3}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}\partial x_{l}}}(x_{j}-a_{j})(x_{k}-a_{k})(x_{l}-a_{l})+\cdots \end{aligned}}} For example, for a function f ( x , y ) {\displaystyle f(x,y)} that depends on two 
variables, x and y, the Taylor series to second order about the point (a, b) is f ( a , b ) + ( x − a ) f x ( a , b ) + ( y − b ) f y ( a , b ) + 1 2 ! ( ( x − a ) 2 f x x ( a , b ) + 2 ( x − a ) ( y − b ) f x y ( a , b ) + ( y − b ) 2 f y y ( a , b ) ) {\displaystyle f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)+{\frac {1}{2!}}{\Big (}(x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b){\Big )}} where the subscripts denote the respective partial derivatives. === Second-order Taylor series in several variables === A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as T ( x ) = f ( a ) + ( x − a ) T D f ( a ) + 1 2 ! ( x − a ) T { D 2 f ( a ) } ( x − a ) + ⋯ , {\displaystyle T(\mathbf {x} )=f(\mathbf {a} )+(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}Df(\mathbf {a} )+{\frac {1}{2!}}(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}\left\{D^{2}f(\mathbf {a} )\right\}(\mathbf {x} -\mathbf {a} )+\cdots ,} where D f (a) is the gradient of f evaluated at x = a and D2 f (a) is the Hessian matrix. Applying the multi-index notation the Taylor series for several variables becomes T ( x ) = ∑ | α | ≥ 0 ( x − a ) α α ! ( ∂ α f ) ( a ) , {\displaystyle T(\mathbf {x} )=\sum _{|\alpha |\geq 0}{\frac {(\mathbf {x} -\mathbf {a} )^{\alpha }}{\alpha !}}\left({\mathrm {\partial } ^{\alpha }}f\right)(\mathbf {a} ),} which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, with a full analogy to the single variable case. === Example === In order to compute a second-order Taylor series expansion around point (a, b) = (0, 0) of the function f ( x , y ) = e x ln ( 1 + y ) , {\displaystyle f(x,y)=e^{x}\ln(1+y),} one first computes all the necessary partial derivatives: f x = e x ln ( 1 + y ) f y = e x 1 + y f x x = e x ln ( 1 + y ) f y y = − e x ( 1 + y ) 2 f x y = f y x = e x 1 + y . 
{\displaystyle {\begin{aligned}f_{x}&=e^{x}\ln(1+y)\\[6pt]f_{y}&={\frac {e^{x}}{1+y}}\\[6pt]f_{xx}&=e^{x}\ln(1+y)\\[6pt]f_{yy}&=-{\frac {e^{x}}{(1+y)^{2}}}\\[6pt]f_{xy}&=f_{yx}={\frac {e^{x}}{1+y}}.\end{aligned}}} Evaluating these derivatives at the origin gives the Taylor coefficients f x ( 0 , 0 ) = 0 f y ( 0 , 0 ) = 1 f x x ( 0 , 0 ) = 0 f y y ( 0 , 0 ) = − 1 f x y ( 0 , 0 ) = f y x ( 0 , 0 ) = 1. {\displaystyle {\begin{aligned}f_{x}(0,0)&=0\\f_{y}(0,0)&=1\\f_{xx}(0,0)&=0\\f_{yy}(0,0)&=-1\\f_{xy}(0,0)&=f_{yx}(0,0)=1.\end{aligned}}} Substituting these values into the general formula T ( x , y ) = f ( a , b ) + ( x − a ) f x ( a , b ) + ( y − b ) f y ( a , b ) + 1 2 ! ( ( x − a ) 2 f x x ( a , b ) + 2 ( x − a ) ( y − b ) f x y ( a , b ) + ( y − b ) 2 f y y ( a , b ) ) + ⋯ {\displaystyle {\begin{aligned}T(x,y)=&f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)\\&{}+{\frac {1}{2!}}\left((x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b)\right)+\cdots \end{aligned}}} produces T ( x , y ) = 0 + 0 ( x − 0 ) + 1 ( y − 0 ) + 1 2 ( 0 ( x − 0 ) 2 + 2 ( x − 0 ) ( y − 0 ) + ( − 1 ) ( y − 0 ) 2 ) + ⋯ = y + x y − 1 2 y 2 + ⋯ {\displaystyle {\begin{aligned}T(x,y)&=0+0(x-0)+1(y-0)+{\frac {1}{2}}{\big (}0(x-0)^{2}+2(x-0)(y-0)+(-1)(y-0)^{2}{\big )}+\cdots \\&=y+xy-{\tfrac {1}{2}}y^{2}+\cdots \end{aligned}}} Since ln(1 + y) is analytic in |y| < 1, we have e x ln ( 1 + y ) = y + x y − 1 2 y 2 + ⋯ , | y | < 1. {\displaystyle e^{x}\ln(1+y)=y+xy-{\tfrac {1}{2}}y^{2}+\cdots ,\qquad |y|<1.} == Comparison with Fourier series == The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval [a,b]) as an infinite sum of trigonometric functions (sines and cosines). In this sense, the Fourier series is analogous to Taylor series, since the latter allows one to express a function as an infinite sum of powers.
Nevertheless, the two series differ from each other in several important respects: The finite truncations of the Taylor series of f (x) about the point x = a are all exactly equal to f at a. In contrast, the Fourier series is computed by integrating over an entire interval, so there is generally no such point where all the finite truncations of the series are exact. The computation of Taylor series requires the knowledge of the function on an arbitrarily small neighbourhood of a point, whereas the computation of the Fourier series requires knowing the function on its whole domain interval. In a certain sense one could say that the Taylor series is "local" and the Fourier series is "global". The Taylor series is defined for a function which has infinitely many derivatives at a single point, whereas the Fourier series is defined for any integrable function. In particular, the function could be nowhere differentiable. (For example, f (x) could be a Weierstrass function.) The convergence of both series has very different properties. Even if the Taylor series has positive convergence radius, the resulting series may not coincide with the function; but if the function is analytic then the series converges pointwise to the function, and uniformly on every compact subset of the convergence interval. Concerning the Fourier series, if the function is square-integrable then the series converges in quadratic mean, but additional requirements are needed to ensure pointwise or uniform convergence (for instance, if the function is periodic and of class C1 then the convergence is uniform). Finally, in practice one wants to approximate the function with a finite number of terms, say with a Taylor polynomial or a partial sum of the trigonometric series, respectively. In the case of the Taylor series the error is very small in a neighbourhood of the point where it is computed, while it may be very large at a distant point.
In the case of the Fourier series the error is distributed along the domain of the function. == See also == Asymptotic expansion Newton polynomial Padé approximant – best approximation by a rational function Puiseux series – Power series with rational exponents Approximation theory Function approximation == Notes == == References == == External links == "Taylor series", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Taylor Series". MathWorld.
|
Wikipedia:Teaching Mathematics and Its Applications#0
|
The Institute of Mathematics and its Applications (IMA) is the UK's chartered professional body for mathematicians and one of the UK's learned societies for mathematics (another being the London Mathematical Society). The IMA aims to advance mathematics and its applications, to promote and foster research and other enquiries directed towards the advancement, teaching and application of mathematics, to establish and maintain high standards of professional conduct for its members, and to promote, encourage and guide the development of education and training in mathematics. == History == In 1959, the need for a professional and learned body for mathematics and its applications was recognised independently by both Sir James Lighthill and a committee of the heads of the mathematics departments of several colleges of technology together with some interested mathematicians from universities, industry and government research establishments. After much discussion, the name and constitution of the institute were confirmed in 1963, and the IMA was approved as a company limited by guarantee on 23 April 1964. In 1990, the institute was incorporated as a royal charter company, and it was registered as a charity in 1993. == Governance == The institute is governed by a Council, made up of between 25 and 31 individuals including a president, three past presidents, elected and co-opted members, and honorary officers. === IMA president === The president normally serves a two-year term.
This is a list of the presidents of the IMA: 1964–1966: Sir James Lighthill FRS 1966–1967: Professor Sir Bryan Thwaites 1968–1969: Dr Peter Wakely FRS 1970–1971: Professor George Barnard 1972–1973: Professor Charles Coulson FRS 1974–1975: Sir Hermann Bondi FRS 1976–1977: HRH The Duke of Edinburgh 1978–1979: Dame Kathleen Ollerenshaw 1980–1981: Sir Samuel Edwards FRS 1982–1983: Dr Peter Trier 1984–1985: Sir Harry Pitt FRS 1986–1987: Professor Bob Churchhouse FRS 1988–1989: Professor Douglas Jones FRS 1990–1991: Sir Roy Harding 1992–1993: J H McDonnell 1993–1995: Professor Lord Julian Hunt FRS 1996–1997: Professor David Crighton FRS 1998–1999: Professor Henry Beker 2000–2001: Professor Stephen Reid 2002–2003: Professor John McWhirter FREng, FRS 2004–2005: Professor Tim Pedley FRS 2006–2007: Professor Peter Grindrod CBE 2008–2009: Professor David Abrahams 2010–2011: Professor Michael Walker OBE, FRS 2012–2013: Professor Robert MacKay FRS 2014–2015: Professor Dame Celia Hoyles 2016–2017: Professor Chris Linton 2018–2019: Professor Alistair Fitt 2020–2021: Professor Nira Chamberlain OBE 2022–2023: Professor Paul Glendinning 2024–Present: Professor Hannah Fry HonFREng === Honorary officers === In addition to the president, there are six honorary officer roles. == Membership == The IMA has 5,000 members, ten percent of whom live outside the United Kingdom. Forty percent of members are employed in education (schools through to universities) and sixty percent work in commercial and governmental organisations. The institute awards five grades of membership within three groups. === Corporate membership === Fellow (FIMA) Fellows are peer-reviewed by external reference and selected internally through election by the membership committee. Qualifications include a minimum of seven years' experience and a senior managerial or technical position involving the use of, or training in, mathematics.
A Fellow has made outstanding contributions to the development or application of mathematics. Member (MIMA) Members have an appropriate degree, a minimum period of three years' training and experience after graduation and a position of responsibility involving the application of mathematical knowledge or training in mathematics. === Leading to corporate membership === Associate Member (AMIMA) Associate Members hold a degree in mathematics, a joint degree in mathematics with another subject or a degree with a sufficient mathematical component such as would be expected in physics or engineering. Students Student Members are undertaking a course of study which will lead to a qualification that meets Associate Member requirements. === Non-professional membership === Affiliate No requirements are necessary for entry into this grade. == Professional status == In 1990 the institute was incorporated by royal charter and was subsequently granted the right to award Chartered Mathematician (CMath) status. The institute may also nominate individuals for the award of Chartered Scientist (CSci) under license from the Science Council. The institute can also award individuals Chartered Mathematics Teacher (CMathTeach) status. == Publications == === Mathematics Today === Mathematics Today is a general-interest mathematics publication aimed primarily at Institute members, published six times a year and containing articles, reviews, reports and other news on developments in mathematics and its applications. === Research journals === Eight research journals are published by Oxford University Press on behalf of the IMA.
IMA Journal of Applied Mathematics IMA Journal of Numerical Analysis Mathematical Medicine and Biology IMA Journal of Mathematical Control and Information IMA Journal of Management Mathematics Teaching Mathematics and its Applications Information and Inference: A Journal of the IMA Transactions of Mathematics and its Applications === Other publications === The IMA began publishing a podcast, Travels in a Mathematical World, on 4 October 2008. The IMA also publishes conference proceedings, monographs and special interest group newsletters. == Conferences == The institute runs 8–10 conferences most years. These are specialist meetings where new research is presented and discussed. == Education activities == The IMA runs a wide range of mathematical activities through the Higher Education Services Area and the Schools and Further Education Group committees. The IMA operates a Programme Approval Scheme, which provides an 'approval in principle' for degree courses that meet the educational requirements for Chartered Mathematician. For programmes to be approved, the IMA requires the programme to be an honours degree of at least three years' length, which meets the required mathematical content threshold of two-thirds. The programmes also need to meet the QAA benchmark for Mathematics and the Framework for Higher Education Qualifications. The IMA provides education grants of up to £600 to allow individuals from the UK working in schools or further/higher education to help with the attendance at or the organisation of a mathematics educational activity, such as attendance at a conference, expenses to cover a speaker coming into a school, or organising a session for a conference. The IMA also employs a university liaison officer to promote mathematics and the IMA to university students undertaking mathematics and to act as a means of support.
As part of this support the IMA runs the University Liaison Grants Scheme to provide university mathematical societies with grants of up to £400 to organise more activities and work more closely with the IMA. == Prizes == The councils of the IMA and the London Mathematical Society jointly award the Christopher Zeeman Medal, dedicated to recognising excellence in the communication of mathematics, and the David Crighton Award, dedicated to the recognition of service to mathematics and the wider mathematics community. The IMA, in cooperation with the British Applied Mathematics Colloquium (BAMC), awards the biennial IMA Lighthill-Thwaites Prize for early career applied mathematicians. The IMA awards the Leslie Fox Prize for Numerical Analysis, the Catherine Richards Prize for the best articles in Mathematics Today, the John Blake University Teaching Medal and the IMA Gold Medal for outstanding contribution to mathematics and its applications over the years. The IMA awards student-level prizes at most universities which offer mathematics around the UK. Each student prize is a year's membership of the IMA. == Branches == The IMA has branches in the regions London, East Midlands, Lancashire and the North West, West Midlands, West of England, Ireland and Scotland, which run local activities (like talks by well-known mathematicians). Its headquarters are in Southend-on-Sea, Essex. == Early Career Mathematicians Group == The Early Career Mathematicians Group of the IMA holds a series of conferences for mathematicians in the first 15 years of their career, among other activities. == Social networking == As well as all the conferences, meetings and group activities that are held across the country, the IMA operates groups on Facebook and LinkedIn, and has a Twitter feed.
== Interaction with other bodies == Along with the London Mathematical Society, the Royal Statistical Society, the Edinburgh Mathematical Society and the Operational Research Society, the IMA forms the Council for the Mathematical Sciences. The IMA is a member of the Joint Mathematical Council (JMC) and informs the deliberations of the Advisory Committee on Mathematics Education (ACME). The IMA has representatives on Bath University Court, Bradford University Court, Cranfield University Court, Engineering Technology Board and Engineering Council, Engineering and Physical Sciences Research Council, EPSRC Public Understanding of Science Committee, Heads of Departments of Mathematical Sciences, International Council for Industrial and Applied Mathematics, Joint Mathematical Council, LMS Computer Science Committee, LMS International Affairs Committee, LMS Women in Maths Committee, Maths, Stats & OR Network (part of the HEA), Parliamentary and Scientific Committee, Qualifications and Curriculum Authority, Science Council, Science Council Registration Authority, The Association of Management Sciences (TAMS) and the University of Wales, Swansea Court. == See also == List of Mathematical Societies Council for the Mathematical Sciences Leslie Fox Prize for Numerical Analysis == Notes == == External links == The Institute of Mathematics and its Applications The origins of the Institute Travels in a Mathematical World Podcast
|
Wikipedia:Tech City College#0
|
Tech City College (formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the creative application of maths and science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. == References == == External links == Tech City College official website
|
Wikipedia:Teixeira Mendes#0
|
Raimundo Teixeira Mendes (5 January 1855 – 28 June 1927) was a Brazilian philosopher and mathematician. He is credited with creating the national motto, "Order and Progress", as well as the national flag on which it appears. Teixeira Mendes was born in Caxias, Maranhão. == Comtean Positivism == Teixeira Mendes was heavily influenced by Comtism and is classed as a "Humanity Apostle" by Brazil's Religion of Humanity, known in Portuguese as the "Igreja Positivista do Brasil" ("Positivist Church of Brazil"). He led the Positivist Church from 1903 until his death. His Positivist viewpoint led him to oppose most wars and to believe in the eventual disappearance of nations. He also opposed Christian missionary work toward the indigenous Brazilians and instead favored a policy based on protection and gradual assimilation. He deemed their societies "fetishistic", but believed a gradual non-coercive assimilation was the way to turn them into Positivists. He died in Rio de Janeiro, aged 72. == References ==
|
Wikipedia:Ten Computational Canons#0
|
The Ten Computational Canons (traditional Chinese: 算經十書; simplified Chinese: 算经十书) was a collection of ten Chinese mathematical works dating from pre-Han dynasty to early Tang dynasty, compiled by the early Tang mathematician Li Chunfeng (602–670) in the 650s, as the official mathematical texts for imperial examinations in mathematics. In 1084 during the Northern Song dynasty, the text Shushu Jiyi was selected to be part of this collection, replacing Zhui Shu; Shushu Jiyi has therefore appeared in subsequent editions of the collection. The original Ten Computational Canons includes: Zhoubi Suanjing (Zhou Shadow Mathematical Classic) Jiuzhang Suanshu (The Nine Chapters on the Mathematical Art) Haidao Suanjing (The Sea Island Mathematical Classic) Sunzi Suanjing (The Mathematical Classic of Sun Zi) Zhang Qiujian Suanjing (The Mathematical Classic of Zhang Qiujian) Wucao Suanjing (Computational Canon of the Five Administrative Sections) Xiahou Yang Suanjing (The Mathematical Classic of Xiahou Yang) Wujing Suanshu (Computational Prescriptions of the Five Classics) Jigu Suanjing (Continuation of Ancient Mathematical Classic) Zhui Shu (Method of Interpolation) Tang dynasty examination law specified that Sunzi Suanjing and the Computational Canon of the Five Administrative Sections together required one year of study; The Nine Chapters on the Mathematical Art plus Haidao Suanjing three years; Jigu Suanjing three years; Zhui Shu four years; and Zhang Qiujian and Xia Houyang one year each. The government of the Song dynasty actively promoted the study of mathematics. There were two government xylograph editions of The Ten Computational Canons, in the years 1084 and 1213. The wide availability of these mathematical texts contributed to the flourishing of mathematics in the Song and Yuan dynasties, inspiring mathematicians such as Jia Xian, Qin Jiushao, Yang Hui, Li Zhi and Zhu Shijie. 
In the Ming dynasty during the reign of the Yongle Emperor, some of the Ten Canons were copied into the Yongle Encyclopedia. During the reign of the Qianlong Emperor in the Qing dynasty, scholar Dai Zhen made copies of the Zhoubi Suanjing, The Nine Chapters on the Mathematical Art, Haidao Suanjing, Sunzi Suanjing, Zhang Qiujian Suanjing, Computational Canon of the Five Administrative Sections, Xiahou Yang Suanjing, Computational Prescriptions of the Five Classics, Jigu Suanjing, and Shushu Jiyi from the Yongle Encyclopedia and transferred them into another encyclopedia, the Complete Library of the Four Treasuries. == References == Jean Claude Martzloff, A History of Chinese Mathematics, pp. 123–126. ISBN 3-540-33782-2.
|
Wikipedia:Tensor field#0
|
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in materials, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on the set X(M) of vector fields over a manifold M (a module over the smooth functions on M), we call A a tensor field on M. A tensor field, in common usage, is often referred to in the shorter form "tensor". For example, the Riemann curvature tensor refers to a tensor field, as it associates a tensor to each point of a Riemannian manifold. == Definition == Let M {\displaystyle M} be a manifold, for instance the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . Definition. A tensor field of type ( p , q ) {\displaystyle (p,q)} is a section T ∈ Γ ( M , V ⊗ p ⊗ ( V ∗ ) ⊗ q ) {\displaystyle T\ \in \ \Gamma (M,V^{\otimes p}\otimes (V^{*})^{\otimes q})} where V {\displaystyle V} is a vector bundle on M {\displaystyle M} , V ∗ {\displaystyle V^{*}} is its dual and ⊗ {\displaystyle \otimes } is the tensor product of vector bundles. Equivalently, it is a collection of elements T x ∈ V x ⊗ p ⊗ ( V x ∗ ) ⊗ q {\displaystyle T_{x}\in V_{x}^{\otimes p}\otimes (V_{x}^{*})^{\otimes q}} for every point x ∈ M {\displaystyle x\in M} , such that it constitutes a smooth map T : M → V ⊗ p ⊗ ( V ∗ ) ⊗ q {\displaystyle T:M\rightarrow V^{\otimes p}\otimes (V^{*})^{\otimes q}} . The elements T x {\displaystyle T_{x}} are called tensors. 
Often we take V = T M {\displaystyle V=TM} to be the tangent bundle of M {\displaystyle M} . == Geometric introduction == Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface. Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g {\displaystyle g} , such that given any two vectors v , w {\displaystyle v,w} at point x {\displaystyle x} , their inner product is g x ( v , w ) {\displaystyle g_{x}(v,w)} . The field g {\displaystyle g} could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix. In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates. == Via coordinate transitions == Following Schouten (1951) and McConnell (1957), the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. For example, coordinates belonging to the n-dimensional real coordinate space R n {\displaystyle \mathbb {R} ^{n}} may be subjected to arbitrary affine transformations: x k ↦ A j k x j + a k {\displaystyle x^{k}\mapsto A_{j}^{k}x^{j}+a^{k}} (with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions v k {\displaystyle v_{k}} that transforms under this affine transformation by the rule v k ↦ v i A k i . 
{\displaystyle v_{k}\mapsto v_{i}A_{k}^{i}.} The list of Cartesian coordinate basis vectors e k {\displaystyle \mathbf {e} _{k}} transforms as a covector, since under the affine transformation e k ↦ A k i e i {\displaystyle \mathbf {e} _{k}\mapsto A_{k}^{i}\mathbf {e} _{i}} . A contravariant vector is a system of functions v k {\displaystyle v^{k}} of the coordinates that, under such an affine transformation undergoes a transformation v k ↦ ( A − 1 ) j k v j . {\displaystyle v^{k}\mapsto (A^{-1})_{j}^{k}v^{j}.} This is precisely the requirement needed to ensure that the quantity v k e k {\displaystyle v^{k}\mathbf {e} _{k}} is an invariant object that does not depend on the coordinate system chosen. More generally, the coordinates of a tensor of valence (p,q) have p upper indices and q lower indices, with the transformation law being T i 1 ⋯ i p j 1 ⋯ j q ↦ A i 1 ′ i 1 ⋯ A i p ′ i p T i 1 ′ ⋯ i p ′ j 1 ′ ⋯ j q ′ ( A − 1 ) j 1 j 1 ′ ⋯ ( A − 1 ) j q j q ′ . {\displaystyle {T^{i_{1}\cdots i_{p}}}_{j_{1}\cdots j_{q}}\mapsto A_{i'_{1}}^{i_{1}}\cdots A_{i'_{p}}^{i_{p}}{T^{i'_{1}\cdots i'_{p}}}_{j'_{1}\cdots j'_{q}}(A^{-1})_{j_{1}}^{j'_{1}}\cdots (A^{-1})_{j_{q}}^{j'_{q}}.} The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.). A covector field is a function v k {\displaystyle v_{k}} of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field v k {\displaystyle v^{k}} transforms by the inverse Jacobian. == Tensor bundles == A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. 
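The invariance of the object v^k e_k under an affine change of frame, as described above, can be sketched numerically. This is an illustrative NumPy snippet, not part of the article; the variable names (`E`, `E_new`, etc.) are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

A = rng.standard_normal((n, n)) + n * np.eye(n)  # an invertible transformation A^k_j
E = np.eye(n)                                    # columns are the basis vectors e_k
v = rng.standard_normal(n)                       # contravariant components v^k

# basis vectors transform covariantly: e'_k = A^i_k e_i (columns of E @ A)
E_new = E @ A
# contravariant components transform with the inverse: v'^k = (A^{-1})^k_j v^j
v_new = np.linalg.solve(A, v)

# the geometric object v^k e_k is unchanged by the change of frame
assert np.allclose(E @ v, E_new @ v_new)
```

The two opposite transformation rules cancel exactly, which is the point of the covariant/contravariant distinction.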
The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector vm in Vm, where Vm is the vector space "at" m. Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction. We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) This construction then has guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ , {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*},} where V is the tangent space at that point and V∗ is the cotangent space. See also tangent bundle and cotangent bundle. Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of E ∗ ⊗ F {\displaystyle \scriptstyle E^{*}\otimes F} if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. 
This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are. == Notation == The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as T 0 1 ( M ) = T ( M ) = T M {\displaystyle T_{0}^{1}(M)=T(M)=TM} to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation T 0 1 ( V ) {\displaystyle T_{0}^{1}(V)} ; in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M. Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus, T n m ( M ) {\displaystyle {\mathcal {T}}_{n}^{m}(M)} are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set. == Tensor fields as multilinear forms == There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). First, we may consider the set of all smooth (C∞) vector fields on M, X ( M ) := T 0 1 ( M ) {\displaystyle {\mathfrak {X}}(M):={\mathcal {T}}_{0}^{1}(M)} (see the section on notation above) as a single space – a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring. 
As a motivating example, consider the space Ω 1 ( M ) = T 1 0 ( M ) {\displaystyle \Omega ^{1}(M)={\mathcal {T}}_{1}^{0}(M)} of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define ω ~ ( X ) ( p ) := ω ( p ) ( X ( p ) ) . {\displaystyle {\tilde {\omega }}(X)(p):=\omega (p)(X(p)).} Because of the pointwise nature of everything involved, the action of ω ~ {\displaystyle {\tilde {\omega }}} on X is a C∞(M)-linear map, that is, ω ~ ( f X ) ( p ) = ω ( p ) ( ( f X ) ( p ) ) = ω ( p ) ( f ( p ) X ( p ) ) = f ( p ) ω ( p ) ( X ( p ) ) = ( f ω ) ( p ) ( X ( p ) ) = ( f ω ~ ) ( X ) ( p ) {\displaystyle {\tilde {\omega }}(fX)(p)=\omega (p)((fX)(p))=\omega (p)(f(p)X(p))=f(p)\omega (p)(X(p))=(f\omega )(p)(X(p))=(f{\tilde {\omega }})(X)(p)} for any p in M and smooth function f. Thus we can regard covector fields not just as sections of the cotangent bundle, but also linear mappings of vector fields into functions. By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there). In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M as C∞(M)-multilinear maps defined on k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M). Now, given any arbitrary mapping T from a product of k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). 
Namely, the C∞(M)-module of tensor fields of type ( k , l ) {\displaystyle (k,l)} over M is canonically isomorphic to the C∞(M)-module of C∞(M)-multilinear forms Ω 1 ( M ) × … × Ω 1 ( M ) ⏟ l t i m e s × X ( M ) × … × X ( M ) ⏟ k t i m e s → C ∞ ( M ) . {\displaystyle \underbrace {\Omega ^{1}(M)\times \ldots \times \Omega ^{1}(M)} _{l\ \mathrm {times} }\times \underbrace {{\mathfrak {X}}(M)\times \ldots \times {\mathfrak {X}}(M)} _{k\ \mathrm {times} }\to C^{\infty }(M).} This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously. A frequent example application of this general rule is showing that the Levi-Civita connection, which is a mapping of smooth vector fields ( X , Y ) ↦ ∇ X Y {\displaystyle (X,Y)\mapsto \nabla _{X}Y} taking a pair of vector fields to a vector field, does not define a tensor field on M. This is because it is only R {\displaystyle \mathbb {R} } -linear in Y (in place of full C∞(M)-linearity, it satisfies the Leibniz rule, ∇ X ( f Y ) = ( X f ) Y + f ∇ X Y {\displaystyle \nabla _{X}(fY)=(Xf)Y+f\nabla _{X}Y} ). Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation. == Applications == The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity. In electromagnetism, the electric and magnetic fields are combined into an electromagnetic tensor field. Differential forms, used in defining integration on manifolds, are a type of tensor field. 
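The Leibniz-rule obstruction to C∞(M)-linearity can be checked symbolically in the simplest case, the flat covariant derivative (∇_X Y)^i = X^j ∂_j Y^i on the plane. This SymPy sketch is illustrative (the helper `nabla` and the function names are assumptions, not from the article):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
f = sp.Function('f')(x, y)
X = [sp.Function('X1')(x, y), sp.Function('X2')(x, y)]
Y = [sp.Function('Y1')(x, y), sp.Function('Y2')(x, y)]

def nabla(X, Y):
    """Flat covariant derivative: (nabla_X Y)^i = X^j dY^i/dx^j."""
    return [sum(Xj * sp.diff(Yi, cj) for Xj, cj in zip(X, coords)) for Yi in Y]

# directional derivative X(f) = X^j df/dx^j
Xf = sum(Xj * sp.diff(f, cj) for Xj, cj in zip(X, coords))

# Leibniz rule: nabla_X(f Y) = (Xf) Y + f nabla_X Y, so the map is not
# C^infinity(M)-linear in Y and hence not a tensor field
lhs = nabla(X, [f * Yi for Yi in Y])
rhs = [Xf * Yi + f * Di for Yi, Di in zip(Y, nabla(X, Y))]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```

The extra term (Xf)Y is exactly the failure of tensoriality described above.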
== Tensor calculus == In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus. Even to formulate such equations requires a fresh notion, the covariant derivative. This handles the formulation of variation of a tensor field along a vector field. The original absolute differential calculus notion, which was later called tensor calculus, led to the isolation of the geometric concept of connection. == Twisting by a line bundle == An extension of the tensor field idea incorporates an extra line bundle L on M. If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. (To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article. One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = 1/2. In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s. Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization. 
== Flat case == When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion. == Cocycles and chain rules == As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields. Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory. == Generalizations == === Tensor densities === The concept of a tensor field can be generalized by considering objects that transform differently. 
An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case is that of scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted det g {\displaystyle {\sqrt {\det g}}} . The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: det ( g ′ ) = ( det ∂ x ∂ x ′ ) 2 det ( g ) , {\displaystyle \det(g')=\left(\det {\frac {\partial x}{\partial x'}}\right)^{2}\det(g),} which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. 
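The weight +2 scaling of det g above can be verified numerically at a single point. A minimal NumPy sketch, assuming a random positive-definite metric and a random invertible Jacobian as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# a metric at a point: a symmetric positive-definite matrix g_ij
B = rng.standard_normal((n, n))
g = B @ B.T + n * np.eye(n)

# Jacobian of a coordinate change, J^i_j = dx^i / dx'^j
J = rng.standard_normal((n, n)) + n * np.eye(n)

# order-2 covariant transformation: g'_ij = J^k_i J^l_j g_kl
g_new = J.T @ g @ J

# det g picks up the square of the Jacobian determinant (weight +2 density)
assert np.isclose(np.linalg.det(g_new), np.linalg.det(J) ** 2 * np.linalg.det(g))
```

This is just the multiplicativity of determinants, det(JᵀgJ) = (det J)² det g, at work.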
Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see Density on a manifold. == See also == Bitensor – Tensorial object depending on two points in a manifold Jet bundle – Construction in differential topology Ricci calculus – Tensor index notation for tensor-based calculations Spinor field – Geometric structurePages displaying short descriptions of redirect targets == Notes == == References == O'neill, Barrett (1983). Semi-Riemannian Geometry With Applications to Relativity. Elsevier Science. ISBN 9780080570570. Frankel, T. (2012), The Geometry of Physics (3rd edition), Cambridge University Press, ISBN 978-1-107-60260-1. Lambourne [Open University], R.J.A. (2010), Relativity, Gravitation, and Cosmology, Cambridge University Press, Bibcode:2010rgc..book.....L, ISBN 978-0-521-13138-4. Lerner, R.G.; Trigg, G.L. (1991), Encyclopaedia of Physics (2nd Edition), VHC Publishers. McConnell, A. J. (1957), Applications of Tensor Analysis, Dover Publications, ISBN 9780486145020 {{citation}}: ISBN / Date incompatibility (help). McMahon, D. (2006), Relativity DeMystified, McGraw Hill (USA), ISBN 0-07-145545-0. C. Misner, K. S. Thorne, J. A. Wheeler (1973), Gravitation, W.H. Freeman & Co, ISBN 0-7167-0344-0{{citation}}: CS1 maint: multiple names: authors list (link). Parker, C.B. (1994), McGraw Hill Encyclopaedia of Physics (2nd Edition), McGraw Hill, ISBN 0-07-051400-3. Schouten, Jan Arnoldus (1951), Tensor Analysis for Physicists, Oxford University Press. Steenrod, Norman (5 April 1999). The Topology of Fibre Bundles. Princeton Mathematical Series. 
Vol. 14. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-00548-5. OCLC 40734875.
|
Wikipedia:Tensor operator#0
|
In pure and applied mathematics, quantum mechanics and computer graphics, a tensor operator generalizes the notion of operators which are scalars and vectors. A special class of these are spherical tensor operators which apply the notion of the spherical basis and spherical harmonics. The spherical basis closely relates to the description of angular momentum in quantum mechanics and spherical harmonic functions. The coordinate-free generalization of a tensor operator is known as a representation operator. == The general notion of scalar, vector, and tensor operators == In quantum mechanics, physical observables that are scalars, vectors, and tensors, must be represented by scalar, vector, and tensor operators, respectively. Whether something is a scalar, vector, or tensor depends on how it is viewed by two observers whose coordinate frames are related to each other by a rotation. Alternatively, one may ask how, for a single observer, a physical quantity transforms if the state of the system is rotated. Consider, for example, a system consisting of a molecule of mass M {\displaystyle M} , traveling with a definite center of mass momentum, p z ^ {\displaystyle p{\mathbf {\hat {z}} }} , in the z {\displaystyle z} direction. If we rotate the system by 90 ∘ {\displaystyle 90^{\circ }} about the y {\displaystyle y} axis, the momentum will change to p x ^ {\displaystyle p{\mathbf {\hat {x}} }} , which is in the x {\displaystyle x} direction. The center-of-mass kinetic energy of the molecule will, however, be unchanged at p 2 / 2 M {\displaystyle p^{2}/2M} . The kinetic energy is a scalar and the momentum is a vector, and these two quantities must be represented by a scalar and a vector operator, respectively. By the latter in particular, we mean an operator whose expected values in the initial and the rotated states are p z ^ {\displaystyle p{\mathbf {\hat {z}} }} and p x ^ {\displaystyle p{\mathbf {\hat {x}} }} . 
The kinetic energy on the other hand must be represented by a scalar operator, whose expected value must be the same in the initial and the rotated states. In the same way, tensor quantities must be represented by tensor operators. An example of a tensor quantity (of rank two) is the electrical quadrupole moment of the above molecule. Likewise, the octupole and hexadecapole moments would be tensors of rank three and four, respectively. Other examples of scalar operators are the total energy operator (more commonly called the Hamiltonian), the potential energy, and the dipole-dipole interaction energy of two atoms. Examples of vector operators are the momentum, the position, the orbital angular momentum, L {\displaystyle {\mathbf {L} }} , and the spin angular momentum, S {\displaystyle {\mathbf {S} }} . (Fine print: Angular momentum is a vector as far as rotations are concerned, but unlike position or momentum it does not change sign under space inversion, and when one wishes to provide this information, it is said to be a pseudovector.) Scalar, vector and tensor operators can also be formed by products of operators. For example, the scalar product L ⋅ S {\displaystyle {\mathbf {L} }\cdot {\mathbf {S} }} of the two vector operators, L {\displaystyle {\mathbf {L} }} and S {\displaystyle {\mathbf {S} }} , is a scalar operator, which figures prominently in discussions of the spin–orbit interaction. Similarly, the quadrupole moment tensor of our example molecule has the nine components Q i j = ∑ α q α ( 3 r α , i r α , j − r α 2 δ i j ) . 
{\displaystyle Q_{ij}=\sum _{\alpha }q_{\alpha }\left(3r_{\alpha ,i}r_{\alpha ,j}-r_{\alpha }^{2}\delta _{ij}\right).} Here, the indices i {\displaystyle i} and j {\displaystyle j} can independently take on the values 1, 2, and 3 (or x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} ) corresponding to the three Cartesian axes, the index α {\displaystyle \alpha } runs over all particles (electrons and nuclei) in the molecule, q α {\displaystyle q_{\alpha }} is the charge on particle α {\displaystyle \alpha } , and r α , i {\displaystyle r_{\alpha ,i}} is the i {\displaystyle i} -th component of the position of this particle. Each term in the sum is a tensor operator. In particular, the nine products r α , i r α , j {\displaystyle r_{\alpha ,i}r_{\alpha ,j}} together form a second rank tensor, formed by taking the outer product of the vector operator r α {\displaystyle {\mathbf {r} }_{\alpha }} with itself. == Rotations of quantum states == === Quantum rotation operator === The rotation operator about the unit vector n (defining the axis of rotation) through angle θ is U [ R ( θ , n ^ ) ] = exp ( − i θ ℏ n ^ ⋅ J ) {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right)} where J = (Jx, Jy, Jz) are the rotation generators (also the angular momentum matrices): J x = ℏ 2 ( 0 1 0 1 0 1 0 1 0 ) J y = ℏ 2 ( 0 i 0 − i 0 i 0 − i 0 ) J z = ℏ ( − 1 0 0 0 0 0 0 0 1 ) {\displaystyle J_{x}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}}\,\quad J_{y}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&i&0\\-i&0&i\\0&-i&0\end{pmatrix}}\,\quad J_{z}=\hbar {\begin{pmatrix}-1&0&0\\0&0&0\\0&0&1\end{pmatrix}}} and let R ^ = R ^ ( θ , n ^ ) {\displaystyle {\widehat {R}}={\widehat {R}}(\theta ,{\hat {\mathbf {n} }})} be a rotation matrix. 
According to the Rodrigues' rotation formula, the rotation operator then amounts to U [ R ( θ , n ^ ) ] = 1 1 − i sin θ ℏ n ^ ⋅ J − 1 − cos θ ℏ 2 ( n ^ ⋅ J ) 2 . {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=1\!\!1-{\frac {i\sin \theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} -{\frac {1-\cos \theta }{\hbar ^{2}}}({\hat {\mathbf {n} }}\cdot \mathbf {J} )^{2}.} An operator Ω ^ {\displaystyle {\widehat {\Omega }}} is invariant under a unitary transformation U if Ω ^ = U † Ω ^ U ; {\displaystyle {\widehat {\Omega }}={U}^{\dagger }{\widehat {\Omega }}U;} in this case for the rotation U ^ ( R ) {\displaystyle {\widehat {U}}(R)} , Ω ^ = U ( R ) † Ω ^ U ( R ) = exp ( i θ ℏ n ^ ⋅ J ) Ω ^ exp ( − i θ ℏ n ^ ⋅ J ) . {\displaystyle {\widehat {\Omega }}={U(R)}^{\dagger }{\widehat {\Omega }}U(R)=\exp \left({\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right){\widehat {\Omega }}\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right).} === Angular momentum eigenkets === The orthonormal basis set for total angular momentum is | j , m ⟩ {\displaystyle |j,m\rangle } , where j is the total angular momentum quantum number and m is the magnetic angular momentum quantum number, which takes values −j, −j + 1, ..., j − 1, j. 
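The spin-1 matrices and the Rodrigues-type expansion of the rotation operator given above can be checked numerically. This NumPy/SciPy sketch is illustrative (the axis and angle are arbitrary choices, and ħ is set to 1):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
s = 1 / np.sqrt(2)
# spin-1 angular momentum matrices in the m = -1, 0, +1 ordering used above
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1.0, 0.0, 1.0]).astype(complex)

# sanity check of the algebra: [Jx, Jy] = i hbar Jz
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * hbar * Jz)

theta = 0.7
nhat = np.array([1.0, 2.0, 2.0]) / 3.0          # a unit rotation axis
nJ = nhat[0] * Jx + nhat[1] * Jy + nhat[2] * Jz

# full exponential vs. the truncated (Rodrigues-type) expansion; they agree
# because (n.J)^3 = hbar^2 (n.J) for spin 1 (eigenvalues -hbar, 0, +hbar)
U_exp = expm(-1j * theta * nJ / hbar)
U_rod = (np.eye(3) - 1j * np.sin(theta) * nJ / hbar
         - (1 - np.cos(theta)) * (nJ @ nJ) / hbar**2)
assert np.allclose(U_exp, U_rod)
assert np.allclose(U_exp @ U_exp.conj().T, np.eye(3))  # unitary
```

The exponential series collapses to three terms precisely because every spin-1 component has eigenvalues −ħ, 0, +ħ.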
A general state within the j subspace | ψ ⟩ = ∑ m c j m | j , m ⟩ {\displaystyle |\psi \rangle =\sum _{m}c_{jm}|j,m\rangle } rotates to a new state by: | ψ ¯ ⟩ = U ( R ) | ψ ⟩ = ∑ m c j m U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =U(R)|\psi \rangle =\sum _{m}c_{jm}U(R)|j,m\rangle } Using the completeness condition: I = ∑ m ′ | j , m ′ ⟩ ⟨ j , m ′ | {\displaystyle I=\sum _{m'}|j,m'\rangle \langle j,m'|} we have | ψ ¯ ⟩ = I U ( R ) | ψ ⟩ = ∑ m m ′ c j m | j , m ′ ⟩ ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =IU(R)|\psi \rangle =\sum _{mm'}c_{jm}|j,m'\rangle \langle j,m'|U(R)|j,m\rangle } Introducing the Wigner D matrix elements: D ( R ) m ′ m ( j ) = ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle {D(R)}_{m'm}^{(j)}=\langle j,m'|U(R)|j,m\rangle } gives the matrix multiplication: | ψ ¯ ⟩ = ∑ m m ′ c j m D m ′ m ( j ) | j , m ′ ⟩ ⇒ | ψ ¯ ⟩ = D ( j ) | ψ ⟩ {\displaystyle |{\bar {\psi }}\rangle =\sum _{mm'}c_{jm}D_{m'm}^{(j)}|j,m'\rangle \quad \Rightarrow \quad |{\bar {\psi }}\rangle =D^{(j)}|\psi \rangle } For one basis ket: | j , m ¯ ⟩ = ∑ m ′ D ( R ) m ′ m ( j ) | j , m ′ ⟩ {\displaystyle |{\overline {j,m}}\rangle =\sum _{m'}{D(R)}_{m'm}^{(j)}|j,m'\rangle } For the case of orbital angular momentum, the eigenstates | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } of the orbital angular momentum operator L and solutions of Laplace's equation on a 3d sphere are spherical harmonics: Y ℓ m ( θ , ϕ ) = ⟨ θ , ϕ | ℓ , m ⟩ = ( 2 ℓ + 1 ) 4 π ( ℓ − m ) ! ( ℓ + m ) ! P ℓ m ( cos θ ) e i m ϕ {\displaystyle Y_{\ell }^{m}(\theta ,\phi )=\langle \theta ,\phi |\ell ,m\rangle ={\sqrt {{(2\ell +1) \over 4\pi }{(\ell -m)! \over (\ell +m)!}}}\,P_{\ell }^{m}(\cos {\theta })\,e^{im\phi }} where Pℓm is an associated Legendre polynomial, ℓ is the orbital angular momentum quantum number, and m is the orbital magnetic quantum number which takes the values −ℓ, −ℓ + 1, ... 
ℓ − 1, ℓ. The formalism of spherical harmonics has wide applications in applied mathematics, and is closely related to the formalism of spherical tensors, as shown below. Spherical harmonics are functions of the polar and azimuthal angles, θ and ϕ respectively, which can be conveniently collected into a unit vector n(θ, ϕ) pointing in the direction of those angles; in the Cartesian basis it is: n ^ ( θ , ϕ ) = cos ϕ sin θ e x + sin ϕ sin θ e y + cos θ e z {\displaystyle {\hat {\mathbf {n} }}(\theta ,\phi )=\cos \phi \sin \theta \mathbf {e} _{x}+\sin \phi \sin \theta \mathbf {e} _{y}+\cos \theta \mathbf {e} _{z}} So a spherical harmonic can also be written Y ℓ m = ⟨ n | ℓ m ⟩ {\displaystyle Y_{\ell }^{m}=\langle \mathbf {n} |\ell m\rangle } . Spherical harmonic states | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } rotate according to the inverse rotation matrix U ( R − 1 ) {\displaystyle U(R^{-1})} , while | n ^ ⟩ {\displaystyle |{\hat {\mathbf {n} }}\rangle } rotates by the initial rotation matrix U ^ ( R ) {\displaystyle {\widehat {U}}(R)} . 
| ℓ , m ¯ ⟩ = ∑ m ′ D m ′ m ( ℓ ) [ U ( R − 1 ) ] | ℓ , m ′ ⟩ , | n ^ ¯ ⟩ = U ( R ) | n ^ ⟩ {\displaystyle |{\overline {\ell ,m}}\rangle =\sum _{m'}D_{m'm}^{(\ell )}[U(R^{-1})]|\ell ,m'\rangle \,,\quad |{\overline {\hat {\mathbf {n} }}}\rangle =U(R)|{\hat {\mathbf {n} }}\rangle } == Rotation of tensor operators == We define the Rotation of an operator by requiring that the expectation value of the original operator A ^ {\displaystyle {\widehat {\mathbf {A} }}} with respect to the initial state be equal to the expectation value of the rotated operator with respect to the rotated state, ⟨ ψ ′ | A ′ ^ | ψ ′ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi '|{\widehat {A'}}|\psi '\rangle =\langle \psi |{\widehat {A}}|\psi \rangle } Now as, | ψ ⟩ → | ψ ′ ⟩ = U ( R ) | ψ ⟩ , ⟨ ψ | → ⟨ ψ ′ | = ⟨ ψ | U † ( R ) {\displaystyle |\psi \rangle ~\rightarrow ~|\psi '\rangle =U(R)|\psi \rangle \,,\quad \langle \psi |~\rightarrow ~\langle \psi '|=\langle \psi |U^{\dagger }(R)} we have, ⟨ ψ | U † ( R ) A ^ ′ U ( R ) | ψ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi |U^{\dagger }(R){\widehat {A}}'U(R)|\psi \rangle =\langle \psi |{\widehat {A}}|\psi \rangle } since, | ψ ⟩ {\displaystyle |\psi \rangle } is arbitrary, U † ( R ) A ^ ′ U ( R ) = A ^ {\displaystyle U^{\dagger }(R){\widehat {A}}'U(R)={\widehat {A}}} === Scalar operators === A scalar operator is invariant under rotations: U ( R ) † S ^ U ( R ) = S ^ {\displaystyle U(R)^{\dagger }{\widehat {S}}U(R)={\widehat {S}}} This is equivalent to saying a scalar operator commutes with the rotation generators: [ S ^ , J ^ ] = 0 {\displaystyle \left[{\widehat {S}},{\widehat {\mathbf {J} }}\right]=0} Examples of scalar operators include the energy operator: E ^ ψ = i ℏ ∂ ∂ t ψ {\displaystyle {\widehat {E}}\psi =i\hbar {\frac {\partial }{\partial t}}\psi } potential energy V (in the case of a central potential only) V ^ ( r , t ) ψ ( r , t ) = V ( r , t ) ψ ( r , t ) {\displaystyle {\widehat {V}}(r,t)\psi (\mathbf {r} ,t)=V(r,t)\psi 
(\mathbf {r} ,t)} kinetic energy T: T ^ ψ ( r , t ) = − ℏ 2 2 m ( ∇ 2 ψ ) ( r , t ) {\displaystyle {\widehat {T}}\psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}(\nabla ^{2}\psi )(\mathbf {r} ,t)} the spin–orbit coupling: L ^ ⋅ S ^ = L ^ x S ^ x + L ^ y S ^ y + L ^ z S ^ z . {\displaystyle {\widehat {\mathbf {L} }}\cdot {\widehat {\mathbf {S} }}={\widehat {L}}_{x}{\widehat {S}}_{x}+{\widehat {L}}_{y}{\widehat {S}}_{y}+{\widehat {L}}_{z}{\widehat {S}}_{z}\,.} === Vector operators === Vector operators (as well as pseudovector operators) are a set of 3 operators that can be rotated according to: U ( R ) † V ^ i U ( R ) = ∑ j R i j V ^ j {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{i}U(R)=\sum _{j}R_{ij}{\widehat {V}}_{j}} Any observable vector quantity of a quantum mechanical system should be independent of the choice of frame of reference. The transformation of the vector of expectation values, which applies for any wavefunction, ensures the above equality. In Dirac notation: ⟨ ψ ¯ | V ^ a | ψ ¯ ⟩ = ⟨ ψ | U ( R ) † V ^ a U ( R ) | ψ ⟩ = ∑ b R a b ⟨ ψ | V ^ b | ψ ⟩ {\displaystyle \langle {\bar {\psi }}|{\widehat {V}}_{a}|{\bar {\psi }}\rangle =\langle \psi |{U(R)}^{\dagger }{\widehat {V}}_{a}U(R)|\psi \rangle =\sum _{b}R_{ab}\langle \psi |{\widehat {V}}_{b}|\psi \rangle } where the RHS is due to the rotation transformation acting on the vector formed by expectation values. Since |Ψ⟩ is any quantum state, the same result follows: U ( R ) † V ^ a U ( R ) = ∑ b R a b V ^ b {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{a}U(R)=\sum _{b}R_{ab}{\widehat {V}}_{b}} Note that here, the term "vector" is used two different ways: kets such as |ψ⟩ are elements of abstract Hilbert spaces, while the vector operator is defined as a quantity whose components transform in a certain way under rotations. 
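The defining transformation rule for a vector operator can be checked numerically by taking V = J itself, which is a vector operator. In this sketch, R is the classical SO(3) rotation matrix built from the axis n and angle θ, and the Taylor-series `expm` helper is an assumed stand-in for a library matrix exponential:

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
# Spin-1 matrices, basis (m = -1, 0, +1)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1, 0, 1]).astype(complex)
J = [Jx, Jy, Jz]

def expm(A, terms=60):
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

theta = 0.9
n = np.array([0.0, 0.6, 0.8])                                 # unit rotation axis
K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # classical SO(3) matrix
U = expm(-1j * theta * (n[0]*Jx + n[1]*Jy + n[2]*Jz) / hbar)

# U(R)^dagger V_i U(R) = sum_j R_ij V_j, with V = J
ok = all(
    np.allclose(U.conj().T @ J[i] @ U, sum(R[i, j] * J[j] for j in range(3)))
    for i in range(3)
)
assert ok
```

The quantum adjoint action on the left and the classical rotation of components on the right agree exactly, which is what the displayed rule asserts.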
From the above relation for infinitesimal rotations and the Baker–Hausdorff lemma, by equating coefficients of order δ θ {\displaystyle \delta \theta } , one can derive the commutation relation with the rotation generator: [ V ^ a , J ^ b ] = ∑ c i ℏ ε a b c V ^ c {\displaystyle {\left[{\widehat {V}}_{a},{\widehat {J}}_{b}\right]=\sum _{c}i\hbar \varepsilon _{abc}{\widehat {V}}_{c}}} where εijk is the Levi-Civita symbol, which all vector operators must satisfy, by construction. The above commutator rule can also be used as an alternative definition for vector operators, which can be shown using the Baker–Hausdorff lemma. As the symbol εijk is a pseudotensor, pseudovector operators are invariant up to a sign: +1 for proper rotations and −1 for improper rotations. Since a set of operators can be shown to form a vector operator by its commutation relations with the angular momentum components (which are the generators of rotation), examples include: the position operator: r ^ ψ = r ψ {\displaystyle {\widehat {\mathbf {r} }}\psi =\mathbf {r} \psi } the momentum operator: p ^ ψ = − i ℏ ∇ ψ {\displaystyle {\widehat {\mathbf {p} }}\psi =-i\hbar \nabla \psi } and pseudovector operators include the orbital angular momentum operator: L ^ ψ = − i ℏ r × ∇ ψ {\displaystyle {\widehat {\mathbf {L} }}\psi =-i\hbar \mathbf {r} \times \nabla \psi } as well as the spin operator S, and hence the total angular momentum J ^ = L ^ + S ^ . 
{\displaystyle {\widehat {\mathbf {J} }}={\widehat {\mathbf {L} }}+{\widehat {\mathbf {S} }}\,.} ==== Scalar operators from vector operators ==== If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two vector operators, the dot product between the two vector operators can be defined as: V → ⋅ W → = ∑ i = 1 3 V i ^ W i ^ {\displaystyle {\vec {V}}\cdot {\vec {W}}=\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}} Under rotation of coordinates, the newly defined operator transforms as: U ( R ) † ( V → ⋅ W → ) U ( R ) = U ( R ) † ( ∑ i = 1 3 V i ^ W i ^ ) U ( R ) = ∑ i = 1 3 ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ i U ( R ) ) = ∑ i = 1 3 ( ∑ j = 1 3 R i j V ^ j ⋅ ∑ k = 1 3 R i k W ^ k ) {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)={U(R)}^{\dagger }\left(\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}\right)U(R)=\sum _{i=1}^{3}({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{i}U(R))=\sum _{i=1}^{3}\left(\sum _{j=1}^{3}R_{ij}{\widehat {V}}_{j}\cdot \sum _{k=1}^{3}R_{ik}{\widehat {W}}_{k}\right)} Rearranging terms and using transpose of rotation matrix as its inverse property: U ( R ) † ( V → ⋅ W → ) U ( R ) = ∑ k = 1 3 ∑ j = 1 3 ( ∑ i = 1 3 R j i T R i k ) V ^ j W ^ k = ∑ k = 1 3 ∑ j = 1 3 δ j , k V ^ j W ^ k = ∑ i = 1 3 V ^ i W ^ i {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)=\sum _{k=1}^{3}\sum _{j=1}^{3}\left(\sum _{i=1}^{3}R_{ji}^{T}R_{ik}\right){\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{k=1}^{3}\sum _{j=1}^{3}\delta _{j,k}{\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{i=1}^{3}{\widehat {V}}_{i}{\widehat {W}}_{i}} Where the RHS is the V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} operator originally defined. Since the dot product defined is invariant under rotation transformation, it is said to be a scalar operator. 
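Taking V = W = J, the scalar operator V·W is just J², which for spin 1 equals 2ħ² times the identity and commutes with every rotation. A minimal NumPy check (the Taylor-series `expm` helper is an assumed stand-in for a library matrix exponential):

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
# Spin-1 matrices, basis (m = -1, 0, +1)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1, 0, 1]).astype(complex)

def expm(A, terms=60):
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Dot product of the vector operator J with itself: J^2 = j(j+1) hbar^2 = 2 hbar^2 for j = 1
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
assert np.allclose(J2, 2 * hbar**2 * np.eye(3))

theta = 1.1
n = np.array([0.48, 0.6, 0.64])                         # unit rotation axis
U = expm(-1j * theta * (n[0]*Jx + n[1]*Jy + n[2]*Jz) / hbar)
# Scalar operator: invariant under rotation, U^dagger (V.W) U = V.W
assert np.allclose(U.conj().T @ J2 @ U, J2)
```
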
=== Spherical vector operators === A vector operator in the spherical basis is V = (V+1, V0, V−1) where the components are: V + 1 = − 1 2 ( V x + i V y ) V − 1 = 1 2 ( V x − i V y ) , V 0 = V z , {\displaystyle V_{+1}=-{\frac {1}{\sqrt {2}}}(V_{x}+iV_{y})\,\quad V_{-1}={\frac {1}{\sqrt {2}}}(V_{x}-iV_{y})\,,\quad V_{0}=V_{z}\,,} using J ± = J x ± i J y , {\textstyle J_{\pm }=J_{x}\pm iJ_{y}\,,} the various commutators with the rotation generators and ladder operators are: [ J z , V + 1 ] = + ℏ V + 1 [ J z , V 0 ] = 0 V 0 [ J z , V − 1 ] = − ℏ V − 1 [ J + , V + 1 ] = 0 [ J + , V 0 ] = 2 ℏ V + 1 [ J + , V − 1 ] = 2 ℏ V 0 [ J − , V + 1 ] = 2 ℏ V 0 [ J − , V 0 ] = 2 ℏ V − 1 [ J − , V − 1 ] = 0 {\displaystyle {\begin{aligned}\left[J_{z},V_{+1}\right]&=+\hbar V_{+1}\\[1ex]\left[J_{z},V_{0}\right]&=0V_{0}\\[1ex]\left[J_{z},V_{-1}\right]&=-\hbar V_{-1}\\[2ex]\left[J_{+},V_{+1}\right]&=0\\[1ex]\left[J_{+},V_{0}\right]&={\sqrt {2}}\hbar V_{+1}\\[1ex]\left[J_{+},V_{-1}\right]&={\sqrt {2}}\hbar V_{0}\\[2ex]\left[J_{-},V_{+1}\right]&={\sqrt {2}}\hbar V_{0}\\[1ex]\left[J_{-},V_{0}\right]&={\sqrt {2}}\hbar V_{-1}\\[1ex]\left[J_{-},V_{-1}\right]&=0\\[1ex]\end{aligned}}} which are of similar form of J z | 1 , + 1 ⟩ = + ℏ | 1 , + 1 ⟩ J z | 1 , 0 ⟩ = 0 | 1 , 0 ⟩ J z | 1 , − 1 ⟩ = − ℏ | 1 , − 1 ⟩ J + | 1 , + 1 ⟩ = 0 J + | 1 , 0 ⟩ = 2 ℏ | 1 , + 1 ⟩ J + | 1 , − 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , + 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , 0 ⟩ = 2 ℏ | 1 , − 1 ⟩ J − | 1 , − 1 ⟩ = 0 {\displaystyle {\begin{aligned}J_{z}|1,+1\rangle &=+\hbar |1,+1\rangle \\[1ex]J_{z}|1,0\rangle &=0|1,0\rangle \\[1ex]J_{z}|1,-1\rangle &=-\hbar |1,-1\rangle \\[2ex]J_{+}|1,+1\rangle &=0\\[1ex]J_{+}|1,0\rangle &={\sqrt {2}}\hbar |1,+1\rangle \\[1ex]J_{+}|1,-1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[2ex]J_{-}|1,+1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[1ex]J_{-}|1,0\rangle &={\sqrt {2}}\hbar |1,-1\rangle \\[1ex]J_{-}|1,-1\rangle &=0\\[1ex]\end{aligned}}} In the spherical basis, the generators of rotation are: J ± 1 = ∓ 1 2 J ± 
, J 0 = J z {\displaystyle J_{\pm 1}=\mp {\frac {1}{\sqrt {2}}}J_{\pm }\,,\quad J_{0}=J_{z}} From the transformation of operators and Baker Hausdorff lemma: U ( R ) † V ^ q U ( R ) = V ^ q + i θ ℏ [ n ^ ⋅ J → , V ^ q ] + ∑ k = 2 ∞ ( i θ ℏ [ n ^ ⋅ J → , . ] ) k k ! V ^ q = e x p ( i θ ℏ n ^ ⋅ A d J → ) V ^ q {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)={\widehat {V}}_{q}+i{\frac {\theta }{\hbar }}\left[{\hat {n}}\cdot {\vec {J}},{\widehat {V}}_{q}\right]+\sum _{k=2}^{\infty }{\frac {\left(i{\frac {\theta }{\hbar }}[{\hat {n}}\cdot {\vec {J}},.]\right)^{k}}{k!}}{\widehat {V}}_{q}=exp\left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot Ad_{\vec {J}}}\right){\widehat {V}}_{q}} compared to U ( R ) | j , k ⟩ = | j , k ⟩ − i θ ℏ n ^ ⋅ J → | j , k ⟩ + ∑ k = 2 ∞ ( − i θ ℏ n ^ ⋅ J → ) k k ! | j , k ⟩ = e x p ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ {\displaystyle U(R)|j,k\rangle =|j,k\rangle -i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}|j,k\rangle +\sum _{k=2}^{\infty }{\frac {\left(-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}\right)^{k}}{k!}}|j,k\rangle =exp\left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle } it can be argued that the commutator with operator replaces the action of operator on state for transformations of operators as compared with that of states: U ( R ) | j , k ⟩ = exp ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ j ′ , k ′ | j ′ , k ′ ⟩ ⟨ j ′ , k ′ | exp ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ k ′ D k ′ k ( j ) ( R ) | j , k ′ ⟩ {\displaystyle U(R)|j,k\rangle =\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{j',k'}|j',k'\rangle \langle j',k'|\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{k'}D_{k'k}^{(j)}(R)|j,k'\rangle } The rotation transformation in the spherical basis (originally written in the Cartesian basis) is then, due to similarity of commutation and operator shown above: U ( R ) † V ^ q U ( R ) = ∑ q ′ D q ′ q ( 1 ) ( R − 1 ) V 
^ q ′ {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)=\sum _{q'}{{D_{q'q}^{(1)}}(R^{-1})}{\widehat {V}}_{q'}} One can generalize the vector operator concept easily to tensorial operators, shown next. === Tensor operators === In general, a tensor operator is one that transforms according to a tensor: U ( R ) † T ^ p q r ⋯ a b c ⋯ U ( R ) = R p , α R q , β R r , γ ⋯ T ^ i j k ⋯ α β γ ⋯ R i , a − 1 R j , b − 1 R k , c − 1 ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }^{abc\cdots }U(R)=R_{p,\alpha }R_{q,\beta }R_{r,\gamma }\cdots {\widehat {T}}_{ijk\cdots }^{\alpha \beta \gamma \cdots }R_{i,a}^{-1}R_{j,b}^{-1}R_{k,c}^{-1}\cdots } where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or the vector components transform by R {\displaystyle R} . In the subsequent discussion surrounding tensor operators, the index notation regarding covariant/contravariant behavior is ignored entirely. Instead, contravariant components are implied by context. Hence for an n times contravariant tensor: U ( R ) † T ^ p q r ⋯ U ( R ) = R p i R q j R r k ⋯ T ^ i j k ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }U(R)=R_{pi}R_{qj}R_{rk}\cdots {\widehat {T}}_{ijk\cdots }} ==== Examples of tensor operators ==== The quadrupole moment operator, Q i j = ∑ α q α ( 3 r α i r α j − r α 2 δ i j ) {\displaystyle Q_{ij}=\sum _{\alpha }q_{\alpha }(3r_{\alpha i}r_{\alpha j}-r_{\alpha }^{2}\delta _{ij})} Components of two vector operators can be multiplied to give another tensor operator. 
T i j = V i W j {\displaystyle T_{ij}=V_{i}W_{j}} In general, a product of n vector operators will also give a tensor operator T p q r ⋯ k = V p ( 1 ) V q ( 2 ) V r ( 3 ) ⋯ V k ( n ) {\displaystyle T_{pqr\cdots k}=V_{p}^{(1)}V_{q}^{(2)}V_{r}^{(3)}\cdots V_{k}^{(n)}} or, T i 1 i 2 ⋯ j 1 j 2 ⋯ = V i 1 i 2 ⋯ W j 1 j 2 ⋯ {\displaystyle T_{i_{1}i_{2}\cdots j_{1}j_{2}\cdots }=V_{i_{1}i_{2}\cdots }W_{j_{1}j_{2}\cdots }} Note: In general, a tensor operator cannot be written as the tensor product of other tensor operators as given in the above example. ==== Tensor operator from vector operators ==== If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two three dimensional vector operators, then a rank 2 Cartesian dyadic tensor can be formed from the nine operators of the form T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , U ( R ) † T ^ i j U ( R ) = U ( R ) † ( V i ^ W j ^ ) U ( R ) = ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ j U ( R ) ) = ( ∑ l = 1 3 R i l V ^ l ⋅ ∑ k = 1 3 R j k W ^ k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)={U(R)}^{\dagger }({\hat {V_{i}}}{\hat {W_{j}}})U(R)=({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{j}U(R))=\left(\sum _{l=1}^{3}R_{il}{\hat {V}}_{l}\cdot \sum _{k=1}^{3}R_{jk}{\hat {W}}_{k}\right)} Rearranging terms, we get: U ( R ) † T ^ i j U ( R ) = ∑ k = 1 3 ∑ l = 1 3 ( R i l R j k T ^ l k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)=\sum _{k=1}^{3}\sum _{l=1}^{3}\left(R_{il}R_{jk}{\hat {T}}_{lk}\right)} The RHS of the equation is the change of basis equation for twice contravariant tensors, where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or the vector components transform by R {\displaystyle R} , which matches the transformation of vector operator components. 
Hence the operator tensor described forms a rank 2 tensor, in tensor representation, T ^ = V → ⊗ W → = ( V ^ i W ^ j ) ( e i ⊗ e j ) {\displaystyle {\hat {\mathbf {T} }}={\vec {V}}\otimes {\vec {W}}=({\hat {V}}_{i}{\hat {W}}_{j})(\mathbf {e} _{i}\otimes \mathbf {e} _{j})} Similarly, an n-times contravariant tensor operator can be formed by n vector operators. We observe that the subspace spanned by linear combinations of the rank two tensor components forms an invariant subspace, i.e., the subspace does not change under rotation, since the transformed components are themselves linear combinations of the tensor components. However, this subspace is not irreducible, i.e., it can be further divided into invariant subspaces under rotation; such a subspace is called reducible. In other words, there exist specific sets of different linear combinations of the components such that they transform into linear combinations of the same set under rotation. In the above example, we will show that the 9 independent tensor components can be divided into sets of 1, 3 and 5 combinations of operators that each form irreducible invariant subspaces. === Irreducible tensor operators === The subspace spanned by { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be divided into two subspaces; three independent antisymmetric components { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and six independent symmetric components { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} , defined as A ^ i j = 1 2 ( T ^ i j − T ^ j i ) {\displaystyle {\hat {A}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}-{\hat {T}}_{ji})} and S ^ i j = 1 2 ( T ^ i j + T ^ j i ) {\displaystyle {\hat {S}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}+{\hat {T}}_{ji})} . 
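That the dyadic transforms with two factors of R, and that its antisymmetric part mixes only among its own components, can both be verified numerically for T̂_ij = Ĵ_i Ĵ_j. The sketch below compares the quantum adjoint action (`rotate`) with the classical index rotation (`mix`); the `expm` helper and function names are illustrative, not from the article:

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
# Spin-1 matrices, basis (m = -1, 0, +1)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1, 0, 1]).astype(complex)
J = [Jx, Jy, Jz]

def expm(A, terms=60):
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

theta = 0.8
n = np.array([0.6, 0.0, 0.8])                                 # unit rotation axis
K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
U = expm(-1j * theta * (n[0]*Jx + n[1]*Jy + n[2]*Jz) / hbar)

T = [[J[i] @ J[j] for j in range(3)] for i in range(3)]       # dyadic, V = W = J
A = [[(T[i][j] - T[j][i]) / 2 for j in range(3)] for i in range(3)]

def rotate(comp):
    # quantum rotation of each operator component: U^dagger comp_ij U
    return [[U.conj().T @ comp[i][j] @ U for j in range(3)] for i in range(3)]

def mix(comp):
    # classical index rotation: sum_lk R_il R_jk comp_lk
    return [[sum(R[i, l] * R[j, k] * comp[l][k] for l in range(3) for k in range(3))
             for j in range(3)] for i in range(3)]

Trot, Tmix = rotate(T), mix(T)
Arot, Amix = rotate(A), mix(A)
ok_T = all(np.allclose(Trot[i][j], Tmix[i][j]) for i in range(3) for j in range(3))
ok_A = all(np.allclose(Arot[i][j], Amix[i][j]) for i in range(3) for j in range(3))
assert ok_T and ok_A
```

Since `mix` applied to the antisymmetric part is again a combination of antisymmetric components, the check `ok_A` confirms that the antisymmetric subspace is invariant under rotation.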
Using the { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} transformation under rotation formula, it can be shown that both { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} are transformed into a linear combination of members of their own sets. Although { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} is irreducible, the same cannot be said about { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} . The set of six independent symmetric components can be divided into five independent traceless symmetric components, while the invariant trace forms its own subspace. Hence, the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} are formed respectively by: One invariant trace of the tensor, t ^ = ∑ k = 1 3 T ^ k k {\displaystyle {\hat {t}}=\sum _{k=1}^{3}{\hat {T}}_{kk}} Three linearly independent antisymmetric components from: A ^ i j = 1 2 ( T ^ i j − T ^ j i ) {\displaystyle {\hat {A}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}-{\hat {T}}_{ji})} Five linearly independent traceless symmetric components from S ^ i j = 1 2 ( T ^ i j + T ^ j i ) − 1 3 t ^ δ i j {\displaystyle {\hat {S}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}+{\hat {T}}_{ji})-{\frac {1}{3}}{\hat {t}}\delta _{ij}} If T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} formed are represented by: One invariant scalar operator V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} Three linearly independent components from 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})} Five linearly independent components from 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}} From the above examples, the nine components { T ^ i j } {\displaystyle 
\{{\hat {T}}_{ij}\}} are split into subspaces formed by one, three and five components. These numbers add up to the number of components of the original tensor in a manner similar to the dimension of vector subspaces adding to the dimension of the space that is a direct sum of these subspaces. Similarly, every element of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be expressed in terms of a linear combination of components from its invariant subspaces: T ^ i j = 1 3 t ^ δ i j + A ^ i j + S ^ i j {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}{\hat {t}}\delta _{ij}+{\hat {A}}_{ij}+{\hat {S}}_{ij}} or T ^ i j = 1 3 ( V → ⋅ W → ) δ i j + ( 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) ) + ( 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j ) = T ( 0 ) + T ( 1 ) + T ( 2 ) {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})\right)+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}\right)=\mathbf {T} ^{(0)}+\mathbf {T} ^{(1)}+\mathbf {T} ^{(2)}} where: T ^ i j ( 0 ) = V ^ k W ^ k 3 δ i j {\displaystyle {\widehat {T}}_{ij}^{(0)}={\frac {{\widehat {V}}_{k}{\widehat {W}}_{k}}{3}}\delta _{ij}} T ^ i j ( 1 ) = 1 2 [ V ^ i W ^ j − V ^ j W ^ i ] = V ^ [ i W ^ j ] {\displaystyle {\widehat {T}}_{ij}^{(1)}={\frac {1}{2}}\left[{\widehat {V}}_{i}{\widehat {W}}_{j}-{\widehat {V}}_{j}{\widehat {W}}_{i}\right]={\widehat {V}}_{[i}{\widehat {W}}_{j]}} T ^ i j ( 2 ) = 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 V ^ k W ^ k δ i j = V ^ ( i W ^ j ) − T i j ( 0 ) {\displaystyle {\widehat {T}}_{ij}^{(2)}={\tfrac {1}{2}}\left({\widehat {V}}_{i}{\widehat {W}}_{j}+{\widehat {V}}_{j}{\widehat {W}}_{i}\right)-{\tfrac {1}{3}}{\widehat {V}}_{k}{\widehat {W}}_{k}\delta _{ij}={\widehat {V}}_{(i}{\widehat {W}}_{j)}-T_{ij}^{(0)}} In general, Cartesian tensors of rank greater than 1 are reducible. 
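The decomposition can be verified directly for the dyadic built from V = W = J: a NumPy sketch that reconstructs each T̂_ij from its trace, antisymmetric, and symmetric-traceless parts (for spin 1 the invariant trace is J² = 2ħ² times the identity):

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
# Spin-1 matrices, basis (m = -1, 0, +1)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1, 0, 1]).astype(complex)
J = [Jx, Jy, Jz]

T = [[J[i] @ J[j] for j in range(3)] for i in range(3)]  # T_ij = V_i W_j with V = W = J
trace = T[0][0] + T[1][1] + T[2][2]                      # invariant trace, here J^2

for i in range(3):
    for j in range(3):
        d = 1.0 if i == j else 0.0                       # Kronecker delta
        T0 = trace / 3 * d                               # scalar (trace) part
        T1 = (T[i][j] - T[j][i]) / 2                     # antisymmetric part
        T2 = (T[i][j] + T[j][i]) / 2 - trace / 3 * d     # symmetric traceless part
        assert np.allclose(T[i][j], T0 + T1 + T2)        # T = T(0) + T(1) + T(2)

assert np.allclose(trace, 2 * hbar**2 * np.eye(3))       # J^2 = j(j+1) hbar^2, j = 1
```
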
In quantum mechanics, this particular example bears a resemblance to the addition of two spin-one particles: each constituent space is 3 dimensional, so the 9 dimensional total space can be formed from spin 0, spin 1 and spin 2 systems, with spaces of dimension 1, 3 and 5 respectively. These three terms are irreducible, which means they cannot be decomposed further and still be tensors satisfying the defining transformation laws under which they must be invariant. Each of the irreducible representations T(0), T(1), T(2) ... transforms like angular momentum eigenstates according to the number of independent components. It is possible that a given tensor may have one or more of these components vanish. For example, the quadrupole moment tensor is already symmetric and traceless, and hence has only 5 independent components to begin with. === Spherical tensor operators === Spherical tensor operators are generally defined as operators with the following transformation rule, under rotation of coordinate system: T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}} The commutation relations can be found by expanding LHS and RHS as: U ( R ) † T ^ m ( j ) U ( R ) = ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) T ^ m ( j ) ( 1 − i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) = ∑ m ′ ⟨ j , m ′ | ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) | j , m ⟩ T ^ m ′ ( j ) {\displaystyle U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right){\widehat {T}}_{m}^{(j)}\left(1-{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)=\sum _{m'}\langle j,m'|\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)|j,m\rangle {\widehat {T}}_{m'}^{(j)}} 
Simplifying and keeping only the first-order terms, we get: [ n ^ ⋅ J → , T ^ m ( j ) ] = ∑ m ′ T ^ m ′ ( j ) ⟨ j , m ′ | J → ⋅ n ^ | j , m ⟩ {\displaystyle {[{\hat {n}}\cdot {\vec {J}}},{\widehat {T}}_{m}^{(j)}]=\sum _{m'}{\widehat {T}}_{m'}^{(j)}\langle j,m'|{\vec {J}}\cdot {\hat {n}}|j,m\rangle } For choices of n ^ = x ^ ± i y ^ {\displaystyle {\hat {n}}={\hat {x}}\pm i{\hat {y}}} or n ^ = z ^ {\displaystyle {\hat {n}}={\hat {z}}} , we get: [ J ± , T ^ m ( j ) ] = ℏ ( j ∓ m ) ( j ± m + 1 ) T ^ m ± 1 ( j ) [ J z , T ^ m ( j ) ] = ℏ m T ^ m ( j ) {\displaystyle {\begin{aligned}\left[J_{\pm },{\widehat {T}}_{m}^{(j)}\right]&=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}{\widehat {T}}_{m\pm 1}^{(j)}\\[1ex]\left[J_{z},{\widehat {T}}_{m}^{(j)}\right]&=\hbar m{\widehat {T}}_{m}^{(j)}\end{aligned}}} Note the similarity of the above to: J ± | j , m ⟩ = ℏ ( j ∓ m ) ( j ± m + 1 ) | j , m ± 1 ⟩ J z | j , m ⟩ = ℏ m | j , m ⟩ {\displaystyle {\begin{aligned}J_{\pm }|j,m\rangle &=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}|j,m\pm 1\rangle \\[1ex]J_{z}|j,m\rangle &=\hbar m|j,m\rangle \end{aligned}}} Since J x {\displaystyle J_{x}} and J y {\displaystyle J_{y}} are linear combinations of J ± {\displaystyle J_{\pm }} , the same correspondence holds for them by linearity. 
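The spherical components of J themselves form a spherical tensor with j = 1, so they provide a self-contained numerical test of these commutation relations. A NumPy sketch in the (m = −1, 0, +1) basis:

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)
# Spin-1 matrices, basis (m = -1, 0, +1)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]], dtype=complex)
Jz = hbar * np.diag([-1, 0, 1]).astype(complex)
Jp, Jm = Jx + 1j * Jy, Jx - 1j * Jy

def comm(A, B):
    return A @ B - B @ A

# Spherical components of J: a spherical tensor operator with j = 1
T = {+1: -(Jx + 1j * Jy) / np.sqrt(2), 0: Jz, -1: (Jx - 1j * Jy) / np.sqrt(2)}
j = 1

for m in (-1, 0, +1):
    assert np.allclose(comm(Jz, T[m]), hbar * m * T[m])     # [Jz, T_m] = hbar m T_m
    up = hbar * np.sqrt((j - m) * (j + m + 1)) * (T[m + 1] if m < j else 0 * Jz)
    dn = hbar * np.sqrt((j + m) * (j - m + 1)) * (T[m - 1] if m > -j else 0 * Jz)
    assert np.allclose(comm(Jp, T[m]), up)                  # [J+, T_m]
    assert np.allclose(comm(Jm, T[m]), dn)                  # [J-, T_m]
```
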
If only the commutation relations hold, then using the following relation, | j , m ⟩ → U ( R ) | j , m ⟩ = e x p ( − i θ ℏ n ^ ⋅ J → ) | j , m ⟩ = ∑ m ′ D m ′ m ( j ) ( R ) | j , m ′ ⟩ {\displaystyle |j,m\rangle \rightarrow U(R)|j,m\rangle =exp\left(-{i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,m\rangle =\sum _{m'}D_{m'm}^{(j)}(R)|j,m'\rangle } we find, due to the similarity of the action of J {\displaystyle J} on the state | j , m ⟩ {\displaystyle |j,m\rangle } and of the commutation relations on T ^ m ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}} , that: T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = e x p ( i θ ℏ n ^ ⋅ a d J → ) T ^ m ( j ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=exp\left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot ad_{\vec {J}}}\right){\widehat {T}}_{m}^{(j)}=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}} where the exponential form is given by the Baker–Hausdorff lemma. Hence, the above commutation relations and the transformation property are equivalent definitions of spherical tensor operators. It can also be shown that { a d J ^ i } {\displaystyle \{ad_{{\hat {J}}_{i}}\}} transform like a vector due to their commutation relations. In the following sections, the construction of spherical tensors is discussed; for example, since spherical vector operators have already been exhibited, they can be used to construct higher-order spherical tensor operators. In general, spherical tensor operators can be constructed from two perspectives. One way is to specify how spherical tensors transform under a physical rotation - a group theoretical definition. A rotated angular momentum eigenstate can be decomposed into a linear combination of the initial eigenstates: the coefficients in the linear combination consist of Wigner rotation matrix entries. 
Or by continuing the previous example of the second order dyadic tensor T = a ⊗ b, casting each of a and b into the spherical basis and substituting into T gives the spherical tensor operators of the second order. ==== Construction using Clebsch–Gordan coefficients ==== Combination of two spherical tensors A q 1 ( k 1 ) {\displaystyle A_{q_{1}}^{(k_{1})}} and B q 2 ( k 2 ) {\displaystyle B_{q_{2}}^{(k_{2})}} in the following manner involving the Clebsch–Gordan coefficients can be proved to give another spherical tensor of the form: T q ( k ) = ∑ q 1 , q 2 ⟨ k 1 , k 2 ; q 1 , q 2 | k 1 , k 2 ; k , q ⟩ A q 1 ( k 1 ) B q 2 ( k 2 ) {\displaystyle T_{q}^{(k)}=\sum _{q_{1},q_{2}}\langle k_{1},k_{2};q_{1},q_{2}|k_{1},k_{2};k,q\rangle A_{q_{1}}^{(k_{1})}B_{q_{2}}^{(k_{2})}} This equation can be used to construct higher order spherical tensor operators, for example, second order spherical tensor operators using two first order spherical tensor operators, say A and B, discussed previously: T ^ ± 2 ( 2 ) = a ^ ± 1 b ^ ± 1 T ^ ± 1 ( 2 ) = 1 2 ( a ^ ± 1 b ^ 0 + a ^ 0 b ^ ± 1 ) T ^ 0 ( 2 ) = 1 6 ( a ^ + 1 b ^ − 1 + a ^ − 1 b ^ + 1 + 2 a ^ 0 b ^ 0 ) {\displaystyle {\begin{aligned}{\widehat {T}}_{\pm 2}^{(2)}&={\widehat {a}}_{\pm 1}{\widehat {b}}_{\pm 1}\\[1ex]{\widehat {T}}_{\pm 1}^{(2)}&={\tfrac {1}{\sqrt {2}}}\left({\widehat {a}}_{\pm 1}{\widehat {b}}_{0}+{\widehat {a}}_{0}{\widehat {b}}_{\pm 1}\right)\\[1ex]{\widehat {T}}_{0}^{(2)}&={\tfrac {1}{\sqrt {6}}}\left({\widehat {a}}_{+1}{\widehat {b}}_{-1}+{\widehat {a}}_{-1}{\widehat {b}}_{+1}+2{\widehat {a}}_{0}{\widehat {b}}_{0}\right)\end{aligned}}} Using the infinitesimal rotation operator and its Hermitian conjugate, one can derive the commutation relation in the spherical basis: [ J a , T ^ q ( 2 ) ] = ∑ q ′ D ( J a ) q q ′ ( 2 ) T ^ q ′ ( 2 ) = ∑ q ′ ⟨ j = 2 , m = q | J a | j = 2 , m = q ′ ⟩ T ^ q ′ ( 2 ) {\displaystyle \left[J_{a},{\widehat {T}}_{q}^{(2)}\right]=\sum _{q'}{D(J_{a})}_{qq'}^{(2)}{\widehat {T}}_{q'}^{(2)}=\sum 
_{q'}\langle j{=}2,m{=}q|J_{a}|j{=}2,m{=}q'\rangle {\widehat {T}}_{q'}^{(2)}} and the finite rotation transformation in the spherical basis can be verified: U ( R ) † T ^ q ( 2 ) U ( R ) = ∑ q ′ D ( R ) q q ′ ( 2 ) ∗ T ^ q ′ ( 2 ) {\displaystyle {U(R)}^{\dagger }{\widehat {T}}_{q}^{(2)}U(R)=\sum _{q'}{{D(R)}_{qq'}^{(2)}}^{*}{\widehat {T}}_{q'}^{(2)}} ==== Using Spherical Harmonics ==== Define an operator by its spectrum: Υ l m | r ⟩ = r l Y l m ( θ , ϕ ) | r ⟩ = Υ l m ( r → ) | r ⟩ {\displaystyle \Upsilon _{l}^{m}|r\rangle =r^{l}Y_{l}^{m}(\theta ,\phi )|r\rangle =\Upsilon _{l}^{m}({\vec {r}})|r\rangle } Since for spherical harmonics under rotation: Y ℓ = k m = q ( n ) = ⟨ n | k , q ⟩ → U ( R ) † Y ℓ = k m = q ( n ) U ( R ) = Y ℓ = k m = q ( R n ) = ⟨ n | D ( R ) † | k , q ⟩ = ∑ q ′ D q ′ , q ( k ) ( R − 1 ) Y ℓ = k m = q ′ ( n ) {\displaystyle Y_{\ell =k}^{m=q}(\mathbf {n} )=\langle \mathbf {n} |k,q\rangle \rightarrow U(R)^{\dagger }Y_{\ell =k}^{m=q}(\mathbf {n} )U(R)=Y_{\ell =k}^{m=q}(R\mathbf {n} )=\langle \mathbf {n} |D(R)^{\dagger }|k,q\rangle =\sum _{q'}D_{q',q}^{(k)}(R^{-1})Y_{\ell =k}^{m=q'}(\mathbf {n} )} It can also be shown that: Υ l m ( r → ) → U ( R ) † Υ l m ( r → ) U ( R ) = ∑ m ′ D m ′ , m ( l ) ( R − 1 ) Υ l m ′ ( r → ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})\rightarrow U(R)^{\dagger }\Upsilon _{l}^{m}({\vec {r}})U(R)=\sum _{m'}D_{m',m}^{(l)}(R^{-1})\Upsilon _{l}^{m'}({\vec {r}})} Then Υ l m ( V → ) {\displaystyle \Upsilon _{l}^{m}({\vec {V}})} , where V → {\displaystyle {\vec {V}}} is a vector operator, also transforms in the same manner, i.e., it is a spherical tensor operator. The process involves expressing Υ l m ( r → ) = r l Y l m ( θ , ϕ ) = Υ l m ( x , y , z ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})=r^{l}Y_{l}^{m}(\theta ,\phi )=\Upsilon _{l}^{m}(x,y,z)} in terms of x, y and z and replacing x, y and z with the operators Vx, Vy and Vz, which form a vector operator. 
The resultant operator is hence a spherical tensor operator T ^ m ( l ) {\displaystyle {\hat {T}}_{m}^{(l)}} . This may include a constant factor arising from the normalization of the spherical harmonics, which is immaterial in the context of operators. The Hermitian adjoint of a spherical tensor may be defined as ( T † ) q ( k ) = ( − 1 ) k − q ( T − q ( k ) ) † . {\displaystyle (T^{\dagger })_{q}^{(k)}=(-1)^{k-q}(T_{-q}^{(k)})^{\dagger }.} There is some arbitrariness in the choice of the phase factor: any factor containing (−1)±q will satisfy the commutation relations. The above choice of phase has the advantages of being real and of ensuring that the tensor product of two commuting Hermitian operators is still Hermitian. Some authors define it with a different sign on q, without the k, or use only the floor of k. == Angular momentum and spherical harmonics == === Orbital angular momentum and spherical harmonics === Orbital angular momentum operators have the ladder operators: L ± = L x ± i L y {\displaystyle L_{\pm }=L_{x}\pm iL_{y}} which raise or lower the orbital magnetic quantum number mℓ by one unit. This has almost exactly the same form as the spherical basis, aside from constant multiplicative factors. === Spherical tensor operators and quantum spin === Spherical tensors can also be formed from algebraic combinations of the spin operators Sx, Sy, Sz, as matrices, for a spin system with total quantum number j = ℓ + s (and ℓ = 0). Spin operators have the ladder operators: S ± = S x ± i S y {\displaystyle S_{\pm }=S_{x}\pm iS_{y}} which raise or lower the spin magnetic quantum number ms by one unit. == Applications == Spherical bases have broad applications in pure and applied mathematics and physical sciences where spherical geometries occur. === Dipole radiative transitions in a single-electron atom (alkali) === The transition amplitude is proportional to matrix elements of the dipole operator between the initial and final states.
We use an electrostatic, spinless model for the atom and we consider the transition from the initial energy level Enℓ to the final level En′ℓ′. These levels are degenerate, since the energy does not depend on the magnetic quantum number m or m′. The wave functions have the form, ψ n ℓ m ( r , θ , ϕ ) = R n ℓ ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \psi _{n\ell m}(r,\theta ,\phi )=R_{n\ell }(r)Y_{\ell m}(\theta ,\phi )} The dipole operator is proportional to the position operator of the electron, so we must evaluate matrix elements of the form, ⟨ n ′ ℓ ′ m ′ | r | n ℓ m ⟩ {\displaystyle \langle n'\ell 'm'|\mathbf {r} |n\ell m\rangle } where the initial state is on the right and the final one on the left. The position operator r has three components, and the initial and final levels consist of 2ℓ + 1 and 2ℓ′ + 1 degenerate states, respectively. Therefore, if we wish to evaluate the intensity of a spectral line as it would be observed, we really have to evaluate 3(2ℓ′ + 1)(2ℓ + 1) matrix elements, for example, 3×3×5 = 45 in a 3d → 2p transition. This is actually an exaggeration, as we shall see, because many of the matrix elements vanish, but there are still many non-vanishing matrix elements to be calculated. A great simplification can be achieved by expressing the components of r, not with respect to the Cartesian basis, but with respect to the spherical basis.
First we define, r q = e ^ q ⋅ r {\displaystyle r_{q}={\hat {\mathbf {e} }}_{q}\cdot \mathbf {r} } Next, by inspecting a table of the Yℓm′s, we find that for ℓ = 1 we have, r Y 11 ( θ , ϕ ) = − r 3 8 π sin ( θ ) e i ϕ = 3 4 π ( − x + i y 2 ) r Y 10 ( θ , ϕ ) = r 3 4 π cos ( θ ) = 3 4 π z r Y 1 − 1 ( θ , ϕ ) = r 3 8 π sin ( θ ) e − i ϕ = 3 4 π ( x − i y 2 ) {\displaystyle {\begin{aligned}rY_{11}(\theta ,\phi )&=&&-r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left(-{\frac {x+iy}{\sqrt {2}}}\right)\\rY_{10}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{4\pi }}}\cos(\theta )&=&{\sqrt {\frac {3}{4\pi }}}z\\rY_{1-1}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{-i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left({\frac {x-iy}{\sqrt {2}}}\right)\end{aligned}}} where we have multiplied each Y1m by the radius r. On the right hand side we see the spherical components rq of the position vector r. The results can be summarized by, r Y 1 q ( θ , ϕ ) = 3 4 π r q {\displaystyle rY_{1q}(\theta ,\phi )={\sqrt {\frac {3}{4\pi }}}r_{q}} for q = 1, 0, −1, where q appears explicitly as a magnetic quantum number. This equation reveals a relationship between vector operators and the angular momentum value ℓ = 1, something we will have more to say about presently. Now the matrix elements become a product of a radial integral times an angular integral, ⟨ n ′ ℓ ′ m ′ | r q | n ℓ m ⟩ = ( ∫ 0 ∞ r 2 d r R n ′ ℓ ′ ∗ ( r ) r R n ℓ ( r ) ) ( 4 π 3 ∫ d Ω Y ℓ ′ m ′ ∗ ( θ , ϕ ) Y 1 q ( θ , ϕ ) Y ℓ m ( θ , ϕ ) ) {\displaystyle \langle n'\ell 'm'|r_{q}|n\ell m\rangle =\left(\int _{0}^{\infty }r^{2}drR_{n'\ell '}^{*}(r)rR_{n\ell }(r)\right)\left({\sqrt {\frac {4\pi }{3}}}\int d\Omega \,Y_{\ell 'm'}^{*}(\theta ,\phi )Y_{1q}(\theta ,\phi )Y_{\ell m}(\theta ,\phi )\right)} where the factor sin θ is contained in the solid-angle element dΩ. We see that all the dependence on the three magnetic quantum numbers (m′, q, m) is contained in the angular part of the integral.
Moreover, the angular integral can be evaluated by the three-Yℓm formula, whereupon it becomes proportional to the Clebsch–Gordan coefficient, ⟨ ℓ ′ m ′ | ℓ 1 m q ⟩ {\displaystyle \langle \ell 'm'|\ell 1mq\rangle } The radial integral is independent of the three magnetic quantum numbers (m′, q, m), and the trick we have just used does not help us to evaluate it. But it is only one integral, and after it has been done, all the other integrals can be evaluated just by computing or looking up Clebsch–Gordan coefficients. The selection rule m′ = q + m in the Clebsch–Gordan coefficient means that many of the integrals vanish, so we have exaggerated the total number of integrals that need to be done. But had we worked with the Cartesian components ri of r, this selection rule might not have been obvious. In any case, even with the selection rule, there may still be many nonzero integrals to be done (nine, in the case 3d → 2p). The example we have just given of simplifying the calculation of matrix elements for a dipole transition is really an application of the Wigner–Eckart theorem, which we take up later in this article. === Magnetic resonance === The spherical tensor formalism provides a common platform for treating coherence and relaxation in nuclear magnetic resonance. In NMR and EPR, spherical tensor operators are employed to express the quantum dynamics of particle spin, by means of an equation of motion for the density matrix entries, or to formulate dynamics in terms of an equation of motion in Liouville space. The Liouville space equation of motion governs the observable averages of spin variables. When relaxation is formulated using a spherical tensor basis in Liouville space, insight is gained because the relaxation matrix exhibits the cross-relaxation of spin observables directly.
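The Clebsch–Gordan machinery in this section can be checked symbolically. The sketch below assumes SymPy's `sympy.physics.quantum.cg.CG` class (argument order ⟨j₁m₁; j₂m₂|j₃m₃⟩); it verifies the coefficients entering the rank-2 operator T̂₀⁽²⁾ constructed earlier and the count of nine surviving angular factors in a 3d → 2p transition:

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

# Coefficients <1,q1; 1,q2 | 2,0> entering the rank-2 operator built from two
# rank-1 operators: T_0^(2) = (1/sqrt(6)) (a_{+1} b_{-1} + a_{-1} b_{+1} + 2 a_0 b_0)
assert CG(1, 1, 1, -1, 2, 0).doit() ** 2 == Rational(1, 6)
assert CG(1, -1, 1, 1, 2, 0).doit() ** 2 == Rational(1, 6)
assert CG(1, 0, 1, 0, 2, 0).doit() ** 2 == Rational(2, 3)  # = (2/sqrt(6))^2

# Angular factors <l'=1, m' | l=2, 1; m, q> for a 3d -> 2p dipole transition.
# Of the 3 * (2*1 + 1) * (2*2 + 1) = 45 matrix elements, only those obeying
# the selection rule m' = q + m are nonzero.
nonzero = sum(
    1
    for m in range(-2, 3)     # initial magnetic quantum number (l = 2)
    for q in (-1, 0, 1)       # spherical component of r
    for mp in (-1, 0, 1)      # final magnetic quantum number (l' = 1)
    if CG(2, m, 1, q, 1, mp).doit() != 0
)
assert nonzero == 9           # "nine, in the case 3d -> 2p"
```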
=== Image processing and computer graphics === == See also == Wigner–Eckart theorem Structure tensor Clebsch–Gordan coefficients for SU(3) == References == === Notes === === Sources === === Further reading === ==== Spherical harmonics ==== G.W.F. Drake (2006). Springer Handbook of Atomic, Molecular, and Optical Physics (2nd ed.). Springer. p. 57. ISBN 978-0-387-26308-3. F.A. Dahlen; J. Tromp (1998). Theoretical Global Seismology (2nd ed.). Princeton University Press. appendix C. ISBN 978-0-691-00124-1. D.O. Thompson; D.E. Chimenti (1997). Review of Progress in Quantitative Nondestructive Evaluation. Vol. 16. Springer. p. 1708. ISBN 978-0-306-45597-1. H. Paetz; G. Schieck (2011). Nuclear Physics with Polarized Particles. Lecture Notes in Physics. Vol. 842. Springer. p. 31. ISBN 978-3-642-24225-0. V. Devanathan (1999). Angular Momentum Techniques in Quantum Mechanics. Fundamental Theories of Physics. Vol. 108. Springer. pp. 34, 61. ISBN 978-0-7923-5866-4. V.D. Kleiman; R.N. Zare (1998). "5". A Companion to Angular Momentum. John Wiley & Sons. p. 112. ISBN 978-0-471-19249-7. ==== Angular momentum and spin ==== Devanathan, V. (2002). "Vectors and Tensors in Spherical Basis". Angular Momentum Techniques in Quantum Mechanics. Fundamental Theories of Physics. Vol. 108. pp. 24–33. doi:10.1007/0-306-47123-X_3. ISBN 978-0-306-47123-0. K.T. Hecht (2000). Quantum Mechanics. Graduate Texts in Contemporary Physics. Springer. ISBN 978-0-387-98919-8. ==== Condensed matter physics ==== J.A. Mettes; J.B. Keith; R.B. McClurg (2002). "Molecular Crystal Global Phase Diagrams: I. Method of Construction" (PDF). B. Henderson; R.H. Bartram (2005). Crystal-Field Engineering of Solid-State Laser Materials. Cambridge Studies in Modern Optics. Vol. 25. Cambridge University Press. p. 49. ISBN 978-0-521-01801-2. Edward U. Condon; Halis Odabaşı (1980). Atomic Structure. CUP Archive. ISBN 978-0-521-29893-3. Melinda J. Duer, ed. (2008). "3". Solid State NMR Spectroscopy: Principles and Applications.
John Wiley & Sons. p. 113. ISBN 978-0-470-99938-7. K.D. Bonin; V.V. Kresin (1997). "2". Electric-Dipole Polarizabilities of Atoms, Molecules and Clusters. World Scientific. pp. 14–15. ISBN 978-981-02-2493-6. A.E. McDermott; T. Polenova (2012). Solid State NMR Studies of Biopolymers. EMR Handbooks. John Wiley & Sons. p. 42. ISBN 978-1-118-58889-5. ==== Magnetic resonance ==== L.J. Mueller (2011). "Tensors and rotations in NMR". Concepts in Magnetic Resonance Part A. 38A (5): 221–235. doi:10.1002/cmr.a.20224. S2CID 8889942. M.S. Anwar (2004). "Spherical Tensor Operators in NMR" (PDF). P. Callaghan (1993). Principles of Nuclear Magnetic Resonance Microscopy. Oxford University Press. pp. 56–57. ISBN 978-0-19-853997-1. ==== Image processing ==== M. Reisert; H. Burkhardt (2009). S. Aja-Fernández (ed.). Tensors in Image Processing and Computer Vision. Springer. ISBN 978-1-84882-299-3. D.H. Laidlaw; J. Weickert (2009). Visualization and Processing of Tensor Fields: Advances and Perspectives. Mathematics and Visualization. Springer. ISBN 978-3-540-88378-4. M. Felsberg; E. Jonsson (2005). Energy Tensors: Quadratic, Phase Invariant Image Operators. Lecture Notes in Computer Science. Vol. 3663. Springer. pp. 493–500. E. König; S. Kremer (1979). "Tensor Operator Algebra for Point Groups". Magnetism Diagrams for Transition Metal Ions. Springer. pp. 13–20. doi:10.1007/978-1-4613-3003-5_3. ISBN 978-1-4613-3005-9. == External links == (2012) Clebsch-Gordon (sic) coefficients and the tensor spherical harmonics The tensor spherical harmonics (2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem Archived 2014-07-20 at the Wayback Machine Tensor operators M. Fowler (2008), Tensor Operators Tensor_Operators (2009) Tensor Operators and the Wigner Eckart Theorem The Wigner-Eckart theorem (2004) Rotational Transformations and Spherical Tensor Operators Tensor operators Evaluation of the matrix elements for radiative transitions D.K.
Ghosh (2013) Angular Momentum - III: Wigner-Eckart Theorem B. Baragiola (2002) Tensor Operators Spherical Tensors
Wikipedia:Tensor product of Hilbert spaces#0
In mathematics, and in particular functional analysis, the tensor product of Hilbert spaces is a way to extend the tensor product construction so that the result of taking a tensor product of two Hilbert spaces is another Hilbert space. Roughly speaking, the tensor product is the metric space completion of the ordinary tensor product. This is an example of a topological tensor product. The tensor product allows Hilbert spaces to be collected into a symmetric monoidal category. == Definition == Since Hilbert spaces have inner products, one would like to introduce an inner product, and thereby a topology, on the tensor product that arises naturally from the inner products on the factors. Let H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} be two Hilbert spaces with inner products ⟨ ⋅ , ⋅ ⟩ 1 {\displaystyle \langle \cdot ,\cdot \rangle _{1}} and ⟨ ⋅ , ⋅ ⟩ 2 , {\displaystyle \langle \cdot ,\cdot \rangle _{2},} respectively. Construct the tensor product of H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} as vector spaces as explained in the article on tensor products. We can turn this vector space tensor product into an inner product space by defining ⟨ ϕ 1 ⊗ ϕ 2 , ψ 1 ⊗ ψ 2 ⟩ = ⟨ ϕ 1 , ψ 1 ⟩ 1 ⟨ ϕ 2 , ψ 2 ⟩ 2 for all ϕ 1 , ψ 1 ∈ H 1 and ϕ 2 , ψ 2 ∈ H 2 {\displaystyle \left\langle \phi _{1}\otimes \phi _{2},\psi _{1}\otimes \psi _{2}\right\rangle =\left\langle \phi _{1},\psi _{1}\right\rangle _{1}\,\left\langle \phi _{2},\psi _{2}\right\rangle _{2}\quad {\mbox{for all }}\phi _{1},\psi _{1}\in H_{1}{\mbox{ and }}\phi _{2},\psi _{2}\in H_{2}} and extending by linearity. That this inner product is the natural one is justified by the identification of scalar-valued bilinear maps on H 1 × H 2 {\displaystyle H_{1}\times H_{2}} and linear functionals on their vector space tensor product. Finally, take the completion under this inner product. The resulting Hilbert space is the tensor product of H 1 {\displaystyle H_{1}} and H 2 . 
{\displaystyle H_{2}.} === Explicit construction === The tensor product can also be defined without appealing to the metric space completion. If H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are two Hilbert spaces, one associates to every simple tensor product x 1 ⊗ x 2 {\displaystyle x_{1}\otimes x_{2}} the rank one operator from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 {\displaystyle H_{2}} that maps a given x ∗ ∈ H 1 ∗ {\displaystyle x^{*}\in H_{1}^{*}} as x ∗ ↦ x ∗ ( x 1 ) x 2 . {\displaystyle x^{*}\mapsto x^{*}(x_{1})\,x_{2}.} This extends to a linear identification between H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} and the space of finite rank operators from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 . {\displaystyle H_{2}.} The finite rank operators are embedded in the Hilbert space H S ( H 1 ∗ , H 2 ) {\displaystyle HS(H_{1}^{*},H_{2})} of Hilbert–Schmidt operators from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 . {\displaystyle H_{2}.} The scalar product in H S ( H 1 ∗ , H 2 ) {\displaystyle HS(H_{1}^{*},H_{2})} is given by ⟨ T 1 , T 2 ⟩ = ∑ n ⟨ T 1 e n ∗ , T 2 e n ∗ ⟩ , {\displaystyle \langle T_{1},T_{2}\rangle =\sum _{n}\left\langle T_{1}e_{n}^{*},T_{2}e_{n}^{*}\right\rangle ,} where ( e n ∗ ) {\displaystyle \left(e_{n}^{*}\right)} is an arbitrary orthonormal basis of H 1 ∗ . {\displaystyle H_{1}^{*}.} Under the preceding identification, one can define the Hilbertian tensor product of H 1 {\displaystyle H_{1}} and H 2 , {\displaystyle H_{2},} that is isometrically and linearly isomorphic to H S ( H 1 ∗ , H 2 ) . 
{\displaystyle HS(H_{1}^{*},H_{2}).} === Universal property === The Hilbert tensor product H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} is characterized by the following universal property (Kadison & Ringrose 1997, Theorem 2.6.4): A weakly Hilbert-Schmidt mapping L : H 1 × H 2 → K {\displaystyle L:H_{1}\times H_{2}\to K} is defined as a bilinear map for which a real number d {\displaystyle d} exists, such that ∑ i , j = 1 ∞ | ⟨ L ( e i , f j ) , u ⟩ | 2 ≤ d 2 ‖ u ‖ 2 {\displaystyle \sum _{i,j=1}^{\infty }{\bigl |}\left\langle L(e_{i},f_{j}),u\right\rangle {\bigr |}^{2}\leq d^{2}\,\|u\|^{2}} for all u ∈ K {\displaystyle u\in K} and one (hence all) orthonormal bases e 1 , e 2 , … {\displaystyle e_{1},e_{2},\ldots } of H 1 {\displaystyle H_{1}} and f 1 , f 2 , … {\displaystyle f_{1},f_{2},\ldots } of H 2 . {\displaystyle H_{2}.} The universal property then states that there is a weakly Hilbert-Schmidt mapping p : H 1 × H 2 → H {\displaystyle p:H_{1}\times H_{2}\to H} such that, for every weakly Hilbert-Schmidt mapping L : H 1 × H 2 → K , {\displaystyle L:H_{1}\times H_{2}\to K,} there is a unique bounded operator T : H → K {\displaystyle T:H\to K} with L = T p ; {\displaystyle L=Tp;} this property is realized by H = H 1 ⊗ H 2 {\displaystyle H=H_{1}\otimes H_{2}} with p ( x , y ) = x ⊗ y . {\displaystyle p(x,y)=x\otimes y.} As with any universal property, this characterizes the tensor product H uniquely, up to isomorphism. The same universal property, with obvious modifications, also applies for the tensor product of any finite number of Hilbert spaces. It is essentially the same universal property shared by all definitions of tensor products, irrespective of the spaces being tensored: this implies that any category of spaces with a tensor product is a symmetric monoidal category, and Hilbert spaces are a particular example thereof. === Infinite tensor products === Two different definitions have historically been proposed for the tensor product of an arbitrary-sized collection { H n } n ∈ N {\textstyle \{H_{n}\}_{n\in N}} of Hilbert spaces. Von Neumann's traditional definition simply takes the "obvious" tensor product: to compute ⨂ n H n {\textstyle \bigotimes _{n}{H_{n}}} , first collect all simple tensors of the form ⨂ n ∈ N e n {\textstyle \bigotimes _{n\in N}{e_{n}}} such that ∏ n ∈ N ‖ e n ‖ < ∞ {\textstyle \prod _{n\in N}{\|e_{n}\|}<\infty } .
The latter describes a pre-inner product through the polarization identity, so take the closed span of such simple tensors modulo that inner product's isotropy subspaces. The space so obtained is almost never separable, in part because, in physical applications, "most" of the space describes impossible states. Modern authors typically use instead a definition due to Guichardet: to compute ⨂ n H n {\textstyle \bigotimes _{n}{H_{n}}} , first select a unit vector v n ∈ H n {\textstyle v_{n}\in H_{n}} in each Hilbert space, and then collect all simple tensors of the form ⨂ n ∈ N e n {\textstyle \bigotimes _{n\in N}{e_{n}}} , in which only finitely many e n {\textstyle e_{n}} are not v n {\textstyle v_{n}} . Then take the L 2 {\displaystyle L^{2}} completion of the span of these simple tensors. === Operator algebras === Let A i {\displaystyle {\mathfrak {A}}_{i}} be the von Neumann algebra of bounded operators on H i {\displaystyle H_{i}} for i = 1 , 2. {\displaystyle i=1,2.} Then the von Neumann tensor product of the von Neumann algebras is the strong completion of the set of all finite linear combinations of simple tensor products A 1 ⊗ A 2 {\displaystyle A_{1}\otimes A_{2}} where A i ∈ A i {\displaystyle A_{i}\in {\mathfrak {A}}_{i}} for i = 1 , 2. {\displaystyle i=1,2.} This is exactly equal to the von Neumann algebra of bounded operators of H 1 ⊗ H 2 . {\displaystyle H_{1}\otimes H_{2}.} Unlike for Hilbert spaces, one may take infinite tensor products of von Neumann algebras, and for that matter C*-algebras of operators, without defining reference states. This is one advantage of the "algebraic" method in quantum statistical mechanics. == Properties == If H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} have orthonormal bases { ϕ k } {\displaystyle \left\{\phi _{k}\right\}} and { ψ l } , {\displaystyle \left\{\psi _{l}\right\},} respectively, then { ϕ k ⊗ ψ l } {\displaystyle \left\{\phi _{k}\otimes \psi _{l}\right\}} is an orthonormal basis for H 1 ⊗ H 2 .
{\displaystyle H_{1}\otimes H_{2}.} In particular, the Hilbert dimension of the tensor product is the product (as cardinal numbers) of the Hilbert dimensions. == Examples and applications == The following examples show how tensor products arise naturally. Given two measure spaces X {\displaystyle X} and Y {\displaystyle Y} , with measures μ {\displaystyle \mu } and ν {\displaystyle \nu } respectively, one may look at L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} the space of functions on X × Y {\displaystyle X\times Y} that are square integrable with respect to the product measure μ × ν . {\displaystyle \mu \times \nu .} If f {\displaystyle f} is a square integrable function on X , {\displaystyle X,} and g {\displaystyle g} is a square integrable function on Y , {\displaystyle Y,} then we can define a function h {\displaystyle h} on X × Y {\displaystyle X\times Y} by h ( x , y ) = f ( x ) g ( y ) . {\displaystyle h(x,y)=f(x)g(y).} The definition of the product measure ensures that all functions of this form are square integrable, so this defines a bilinear mapping L 2 ( X ) × L 2 ( Y ) → L 2 ( X × Y ) . {\displaystyle L^{2}(X)\times L^{2}(Y)\to L^{2}(X\times Y).} Linear combinations of functions of the form f ( x ) g ( y ) {\displaystyle f(x)g(y)} are also in L 2 ( X × Y ) . {\displaystyle L^{2}(X\times Y).} It turns out that the set of linear combinations is in fact dense in L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} if L 2 ( X ) {\displaystyle L^{2}(X)} and L 2 ( Y ) {\displaystyle L^{2}(Y)} are separable. This shows that L 2 ( X ) ⊗ L 2 ( Y ) {\displaystyle L^{2}(X)\otimes L^{2}(Y)} is isomorphic to L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} and it also explains why we need to take the completion in the construction of the Hilbert space tensor product. 
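In finite dimensions no completion is needed, and the defining identity ⟨φ₁ ⊗ φ₂, ψ₁ ⊗ ψ₂⟩ = ⟨φ₁, ψ₁⟩₁ ⟨φ₂, ψ₂⟩₂ can be checked numerically: for coordinate vectors the tensor product is realized by the Kronecker product. A NumPy sketch, with the dimensions 3 and 4 chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_vec(n):
    # a random vector in C^n
    return rng.standard_normal(n) + 1j * rng.standard_normal(n)

phi1, psi1 = rand_vec(3), rand_vec(3)   # elements of H1 = C^3
phi2, psi2 = rand_vec(4), rand_vec(4)   # elements of H2 = C^4

# np.vdot conjugates its first argument, matching an inner product that is
# conjugate-linear in the first slot
lhs = np.vdot(np.kron(phi1, phi2), np.kron(psi1, psi2))
rhs = np.vdot(phi1, psi1) * np.vdot(phi2, psi2)
assert np.isclose(lhs, rhs)

# tensor products of orthonormal basis vectors give an orthonormal basis of C^12
basis = [np.kron(np.eye(3)[k], np.eye(4)[l]) for k in range(3) for l in range(4)]
gram = np.array([[np.vdot(b, c) for c in basis] for b in basis])
assert np.allclose(gram, np.eye(12))
```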
Similarly, we can show that L 2 ( X ; H ) {\displaystyle L^{2}(X;H)} , denoting the space of square integrable functions X → H , {\displaystyle X\to H,} is isomorphic to L 2 ( X ) ⊗ H {\displaystyle L^{2}(X)\otimes H} if this space is separable. The isomorphism maps f ( x ) ⊗ ϕ ∈ L 2 ( X ) ⊗ H {\displaystyle f(x)\otimes \phi \in L^{2}(X)\otimes H} to f ( x ) ϕ ∈ L 2 ( X ; H ) . {\displaystyle f(x)\phi \in L^{2}(X;H).} We can combine this with the previous example and conclude that L 2 ( X ) ⊗ L 2 ( Y ) {\displaystyle L^{2}(X)\otimes L^{2}(Y)} and L 2 ( X × Y ) {\displaystyle L^{2}(X\times Y)} are both isomorphic to L 2 ( X ; L 2 ( Y ) ) . {\displaystyle L^{2}\left(X;L^{2}(Y)\right).} Tensor products of Hilbert spaces arise often in quantum mechanics. If some particle is described by the Hilbert space H 1 , {\displaystyle H_{1},} and another particle is described by H 2 , {\displaystyle H_{2},} then the system consisting of both particles is described by the tensor product of H 1 {\displaystyle H_{1}} and H 2 . {\displaystyle H_{2}.} For example, the state space of a quantum harmonic oscillator is L 2 ( R ) , {\displaystyle L^{2}(\mathbb {R} ),} so the state space of two oscillators is L 2 ( R ) ⊗ L 2 ( R ) , {\displaystyle L^{2}(\mathbb {R} )\otimes L^{2}(\mathbb {R} ),} which is isomorphic to L 2 ( R 2 ) . {\displaystyle L^{2}\left(\mathbb {R} ^{2}\right).} Therefore, the two-particle system is described by wave functions of the form ψ ( x 1 , x 2 ) . {\displaystyle \psi \left(x_{1},x_{2}\right).} A more intricate example is provided by the Fock spaces, which describe a variable number of particles. == References == == Bibliography == Kadison, Richard V.; Ringrose, John R. (1997). Fundamentals of the theory of operator algebras. Vol. I. Graduate Studies in Mathematics. Vol. 15. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0819-1. MR 1468229.. Weidmann, Joachim (1980). Linear operators in Hilbert spaces. Graduate Texts in Mathematics. Vol. 68. 
Berlin, New York: Springer-Verlag. ISBN 978-0-387-90427-6. MR 0566954..
Wikipedia:Tensor product of quadratic forms#0
In mathematics, the tensor product V ⊗ W {\displaystyle V\otimes W} of two vector spaces V and W (over the same field) is a vector space to which is associated a bilinear map V × W → V ⊗ W {\displaystyle V\times W\rightarrow V\otimes W} that maps a pair ( v , w ) , v ∈ V , w ∈ W {\displaystyle (v,w),\ v\in V,w\in W} to an element of V ⊗ W {\displaystyle V\otimes W} denoted v ⊗ w {\displaystyle v\otimes w} . An element of the form v ⊗ w {\displaystyle v\otimes w} is called the tensor product of v and w. An element of V ⊗ W {\displaystyle V\otimes W} is a tensor, and the tensor product of two vectors is sometimes called an elementary tensor or a decomposable tensor. The elementary tensors span V ⊗ W {\displaystyle V\otimes W} in the sense that every element of V ⊗ W {\displaystyle V\otimes W} is a sum of elementary tensors. If bases are given for V and W, a basis of V ⊗ W {\displaystyle V\otimes W} is formed by all tensor products of a basis element of V and a basis element of W. The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from V × W {\displaystyle V\times W} into another vector space Z factors uniquely through a linear map V ⊗ W → Z {\displaystyle V\otimes W\to Z} (see the section below titled 'Universal property'), i.e. the bilinear map is associated to a unique linear map from the tensor product V ⊗ W {\displaystyle V\otimes W} to Z. Tensor products are used in many application areas, including physics and engineering. For example, in general relativity, the gravitational field is described through the metric tensor, which is a tensor field with one tensor at each point of the space-time manifold, and each belonging to the tensor product of the cotangent space at the point with itself. == Definitions and constructions == The tensor product of two vector spaces is a vector space that is defined up to an isomorphism. There are several equivalent ways to define it. 
Most consist of defining explicitly a vector space that is called a tensor product, and, generally, the equivalence proof results almost immediately from the basic properties of the vector spaces that are so defined. The tensor product can also be defined through a universal property; see § Universal property, below. As for every universal property, all objects that satisfy the property are isomorphic through a unique isomorphism that is compatible with the universal property. When this definition is used, the other definitions may be viewed as constructions of objects satisfying the universal property and as proofs that there are objects satisfying the universal property, that is that tensor products exist. === From bases === Let V and W be two vector spaces over a field F, with respective bases B V {\displaystyle B_{V}} and B W {\displaystyle B_{W}} . The tensor product V ⊗ W {\displaystyle V\otimes W} of V and W is a vector space that has as a basis the set of all v ⊗ w {\displaystyle v\otimes w} with v ∈ B V {\displaystyle v\in B_{V}} and w ∈ B W {\displaystyle w\in B_{W}} . This definition can be formalized in the following way (this formalization is rarely used in practice, as the preceding informal definition is generally sufficient): V ⊗ W {\displaystyle V\otimes W} is the set of the functions from the Cartesian product B V × B W {\displaystyle B_{V}\times B_{W}} to F that have a finite number of nonzero values. The pointwise operations make V ⊗ W {\displaystyle V\otimes W} a vector space. The function that maps ( v , w ) {\displaystyle (v,w)} to 1 and the other elements of B V × B W {\displaystyle B_{V}\times B_{W}} to 0 is denoted v ⊗ w {\displaystyle v\otimes w} . The set { v ⊗ w ∣ v ∈ B V , w ∈ B W } {\displaystyle \{v\otimes w\mid v\in B_{V},w\in B_{W}\}} is then straightforwardly a basis of V ⊗ W {\displaystyle V\otimes W} , which is called the tensor product of the bases B V {\displaystyle B_{V}} and B W {\displaystyle B_{W}} . 
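The formalization above — finitely supported functions on B_V × B_W, with v ⊗ w the function that is 1 at (v, w) and 0 elsewhere — can be sketched directly in code; the basis labels and helper names below are purely illustrative:

```python
# Model the tensor product of V and W as finitely supported functions
# B_V x B_W -> F, stored as dicts that omit the zero values.

def basis_tensor(v, w):
    # the function sending (v, w) to 1 and every other pair to 0
    return {(v, w): 1.0}

def add(f, g):
    # pointwise addition, dropping zeros to keep the support finite
    h = dict(f)
    for key, val in g.items():
        h[key] = h.get(key, 0.0) + val
        if h[key] == 0.0:
            del h[key]
    return h

def scale(c, f):
    # pointwise scalar multiplication
    return {key: c * val for key, val in f.items()} if c != 0 else {}

# a generic element is a finite linear combination of the basis tensors
t = add(scale(2.0, basis_tensor("v1", "w1")), scale(-3.0, basis_tensor("v2", "w2")))
assert t == {("v1", "w1"): 2.0, ("v2", "w2"): -3.0}
```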
We can equivalently define V ⊗ W {\displaystyle V\otimes W} to be the set of bilinear forms on V × W {\displaystyle V\times W} that are nonzero at only a finite number of elements of B V × B W {\displaystyle B_{V}\times B_{W}} . To see this, given ( x , y ) ∈ V × W {\displaystyle (x,y)\in V\times W} and a bilinear form B : V × W → F {\displaystyle B:V\times W\to F} , we can decompose x {\displaystyle x} and y {\displaystyle y} in the bases B V {\displaystyle B_{V}} and B W {\displaystyle B_{W}} as: x = ∑ v ∈ B V x v v and y = ∑ w ∈ B W y w w , {\displaystyle x=\sum _{v\in B_{V}}x_{v}\,v\quad {\text{and}}\quad y=\sum _{w\in B_{W}}y_{w}\,w,} where only a finite number of x v {\displaystyle x_{v}} 's and y w {\displaystyle y_{w}} 's are nonzero, and find by the bilinearity of B {\displaystyle B} that: B ( x , y ) = ∑ v ∈ B V ∑ w ∈ B W x v y w B ( v , w ) {\displaystyle B(x,y)=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,B(v,w)} Hence, we see that the value of B {\displaystyle B} for any ( x , y ) ∈ V × W {\displaystyle (x,y)\in V\times W} is uniquely and totally determined by the values that it takes on B V × B W {\displaystyle B_{V}\times B_{W}} . This lets us extend the maps v ⊗ w {\displaystyle v\otimes w} defined on B V × B W {\displaystyle B_{V}\times B_{W}} as before into bilinear maps v ⊗ w : V × W → F {\displaystyle v\otimes w:V\times W\to F} , by letting: ( v ⊗ w ) ( x , y ) := ∑ v ′ ∈ B V ∑ w ′ ∈ B W x v ′ y w ′ ( v ⊗ w ) ( v ′ , w ′ ) = x v y w . 
{\displaystyle (v\otimes w)(x,y):=\sum _{v'\in B_{V}}\sum _{w'\in B_{W}}x_{v'}y_{w'}\,(v\otimes w)(v',w')=x_{v}\,y_{w}.} Then we can express any bilinear form B {\displaystyle B} as a (potentially infinite) formal linear combination of the v ⊗ w {\displaystyle v\otimes w} maps according to: B = ∑ v ∈ B V ∑ w ∈ B W B ( v , w ) ( v ⊗ w ) {\displaystyle B=\sum _{v\in B_{V}}\sum _{w\in B_{W}}B(v,w)(v\otimes w)} making these maps similar to a Schauder basis for the vector space Hom ( V , W ; F ) {\displaystyle {\text{Hom}}(V,W;F)} of all bilinear forms on V × W {\displaystyle V\times W} . To instead have it be a proper Hamel basis, it only remains to add the requirement that B {\displaystyle B} is nonzero at only a finite number of elements of B V × B W {\displaystyle B_{V}\times B_{W}} , and consider the subspace of such maps instead. In either construction, the tensor product of two vectors is defined from their decomposition on the bases. More precisely, taking the basis decompositions of x ∈ V {\displaystyle x\in V} and y ∈ W {\displaystyle y\in W} as before: x ⊗ y = ( ∑ v ∈ B V x v v ) ⊗ ( ∑ w ∈ B W y w w ) = ∑ v ∈ B V ∑ w ∈ B W x v y w v ⊗ w . {\displaystyle {\begin{aligned}x\otimes y&={\biggl (}\sum _{v\in B_{V}}x_{v}\,v{\biggr )}\otimes {\biggl (}\sum _{w\in B_{W}}y_{w}\,w{\biggr )}\\[5mu]&=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,v\otimes w.\end{aligned}}} This definition is quite clearly derived from the coefficients of B ( v , w ) {\displaystyle B(v,w)} in the expansion by bilinearity of B ( x , y ) {\displaystyle B(x,y)} using the bases B V {\displaystyle B_{V}} and B W {\displaystyle B_{W}} , as done above. It is then straightforward to verify that with this definition, the map ⊗ : ( x , y ) ↦ x ⊗ y {\displaystyle {\otimes }:(x,y)\mapsto x\otimes y} is a bilinear map from V × W {\displaystyle V\times W} to V ⊗ W {\displaystyle V\otimes W} satisfying the universal property that any construction of the tensor product satisfies (see below).
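In coordinates, the last double sum says that the coefficient array of x ⊗ y is the outer product of the coordinate vectors of x and y; a small NumPy check of this formula and of the bilinearity of ⊗, with arbitrarily chosen basis sizes:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])        # coordinates x_v of x in the basis B_V
y = np.array([4.0, -5.0])            # coordinates y_w of y in the basis B_W

coeffs = np.outer(x, y)              # entry (v, w) holds x_v * y_w
assert coeffs.shape == (3, 2)
assert coeffs[1, 0] == x[1] * y[0]   # the coefficient of the basis tensor v_1 (x) w_0

# bilinearity of (x, y) -> x (x) y in each argument
x2 = np.array([0.5, 0.0, -1.0])
assert np.allclose(np.outer(x + x2, y), np.outer(x, y) + np.outer(x2, y))
assert np.allclose(np.outer(2.0 * x, y), 2.0 * np.outer(x, y))
```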
If arranged into a rectangular array, the coordinate vector of x ⊗ y {\displaystyle x\otimes y} is the outer product of the coordinate vectors of x {\displaystyle x} and y {\displaystyle y} . Therefore, the tensor product is a generalization of the outer product, that is, an abstraction of it beyond coordinate vectors. A limitation of this definition of the tensor product is that, if one changes bases, a different tensor product is defined. However, the decomposition on one basis of the elements of the other basis defines a canonical isomorphism between the two tensor products of vector spaces, which allows identifying them. Also, unlike the two following alternative definitions, this definition cannot be extended into a definition of the tensor product of modules over a ring. === As a quotient space === A construction of the tensor product that is basis independent can be obtained in the following way. Let V and W be two vector spaces over a field F. One considers first a vector space L that has the Cartesian product V × W {\displaystyle V\times W} as a basis. That is, the basis elements of L are the pairs ( v , w ) {\displaystyle (v,w)} with v ∈ V {\displaystyle v\in V} and w ∈ W {\displaystyle w\in W} . To get such a vector space, one can define it as the vector space of the functions V × W → F {\displaystyle V\times W\to F} that have a finite number of nonzero values, identifying ( v , w ) {\displaystyle (v,w)} with the function that takes the value 1 on ( v , w ) {\displaystyle (v,w)} and 0 otherwise. Let R be the linear subspace of L that is spanned by the relations that the tensor product must satisfy.
More precisely, R is spanned by the elements of one of the forms: ( v 1 + v 2 , w ) − ( v 1 , w ) − ( v 2 , w ) , ( v , w 1 + w 2 ) − ( v , w 1 ) − ( v , w 2 ) , ( s v , w ) − s ( v , w ) , ( v , s w ) − s ( v , w ) , {\displaystyle {\begin{aligned}(v_{1}+v_{2},w)&-(v_{1},w)-(v_{2},w),\\(v,w_{1}+w_{2})&-(v,w_{1})-(v,w_{2}),\\(sv,w)&-s(v,w),\\(v,sw)&-s(v,w),\end{aligned}}} where v , v 1 , v 2 ∈ V {\displaystyle v,v_{1},v_{2}\in V} , w , w 1 , w 2 ∈ W {\displaystyle w,w_{1},w_{2}\in W} and s ∈ F {\displaystyle s\in F} . Then, the tensor product is defined as the quotient space: V ⊗ W = L / R , {\displaystyle V\otimes W=L/R,} and the image of ( v , w ) {\displaystyle (v,w)} in this quotient is denoted v ⊗ w {\displaystyle v\otimes w} . It is straightforward to prove that the result of this construction satisfies the universal property considered below. (A very similar construction can be used to define the tensor product of modules.) === Universal property === In this section, the universal property satisfied by the tensor product is described. As for every universal property, two objects that satisfy the property are related by a unique isomorphism. It follows that this is a (non-constructive) way to define the tensor product of two vector spaces. In this context, the preceding constructions of tensor products may be viewed as proofs of existence of the tensor product so defined. A consequence of this approach is that every property of the tensor product can be deduced from the universal property, and that, in practice, one may forget the method that has been used to prove its existence. 
The "universal-property definition" of the tensor product of two vector spaces is the following (recall that a bilinear map is a function that is separately linear in each of its arguments): The tensor product of two vector spaces V and W is a vector space denoted as V ⊗ W {\displaystyle V\otimes W} , together with a bilinear map ⊗ : ( v , w ) ↦ v ⊗ w {\displaystyle {\otimes }:(v,w)\mapsto v\otimes w} from V × W {\displaystyle V\times W} to V ⊗ W {\displaystyle V\otimes W} , such that, for every bilinear map h : V × W → Z {\displaystyle h:V\times W\to Z} , there is a unique linear map h ~ : V ⊗ W → Z {\displaystyle {\tilde {h}}:V\otimes W\to Z} , such that h = h ~ ∘ ⊗ {\displaystyle h={\tilde {h}}\circ {\otimes }} (that is, h ( v , w ) = h ~ ( v ⊗ w ) {\displaystyle h(v,w)={\tilde {h}}(v\otimes w)} for every v ∈ V {\displaystyle v\in V} and w ∈ W {\displaystyle w\in W} ). === Linearly disjoint === Like the universal property above, the following characterization may also be used to determine whether or not a given vector space and given bilinear map form a tensor product. For example, it follows immediately that if X = C m {\displaystyle X=\mathbb {C} ^{m}} and Y = C n {\displaystyle Y=\mathbb {C} ^{n}} , where m {\displaystyle m} and n {\displaystyle n} are positive integers, then one may set Z = C m n {\displaystyle Z=\mathbb {C} ^{mn}} and define the bilinear map as T : C m × C n → C m n ( x , y ) = ( ( x 1 , … , x m ) , ( y 1 , … , y n ) ) ↦ ( x i y j ) j = 1 , … , n i = 1 , … , m {\displaystyle {\begin{aligned}T:\mathbb {C} ^{m}\times \mathbb {C} ^{n}&\to \mathbb {C} ^{mn}\\(x,y)=((x_{1},\ldots ,x_{m}),(y_{1},\ldots ,y_{n}))&\mapsto (x_{i}y_{j})_{\stackrel {i=1,\ldots ,m}{j=1,\ldots ,n}}\end{aligned}}} to form the tensor product of X {\displaystyle X} and Y {\displaystyle Y} . Often, this map T {\displaystyle T} is denoted by ⊗ {\displaystyle \,\otimes \,} so that x ⊗ y = T ( x , y ) . 
{\displaystyle x\otimes y=T(x,y).} As another example, suppose that C S {\displaystyle \mathbb {C} ^{S}} is the vector space of all complex-valued functions on a set S {\displaystyle S} with addition and scalar multiplication defined pointwise (meaning that f + g {\displaystyle f+g} is the map s ↦ f ( s ) + g ( s ) {\displaystyle s\mapsto f(s)+g(s)} and c f {\displaystyle cf} is the map s ↦ c f ( s ) {\displaystyle s\mapsto cf(s)} ). Let S {\displaystyle S} and T {\displaystyle T} be any sets and for any f ∈ C S {\displaystyle f\in \mathbb {C} ^{S}} and g ∈ C T {\displaystyle g\in \mathbb {C} ^{T}} , let f ⊗ g ∈ C S × T {\displaystyle f\otimes g\in \mathbb {C} ^{S\times T}} denote the function defined by ( s , t ) ↦ f ( s ) g ( t ) {\displaystyle (s,t)\mapsto f(s)g(t)} . If X ⊆ C S {\displaystyle X\subseteq \mathbb {C} ^{S}} and Y ⊆ C T {\displaystyle Y\subseteq \mathbb {C} ^{T}} are vector subspaces then the vector subspace Z := span { f ⊗ g : f ∈ X , g ∈ Y } {\displaystyle Z:=\operatorname {span} \left\{f\otimes g:f\in X,g\in Y\right\}} of C S × T {\displaystyle \mathbb {C} ^{S\times T}} together with the bilinear map: X × Y → Z ( f , g ) ↦ f ⊗ g {\displaystyle {\begin{alignedat}{4}\;&&X\times Y&&\;\to \;&Z\\[0.3ex]&&(f,g)&&\;\mapsto \;&f\otimes g\\\end{alignedat}}} form a tensor product of X {\displaystyle X} and Y {\displaystyle Y} . == Properties == === Dimension === If V and W are vector spaces of finite dimension, then V ⊗ W {\displaystyle V\otimes W} is finite-dimensional, and its dimension is the product of the dimensions of V and W. This results from the fact that a basis of V ⊗ W {\displaystyle V\otimes W} is formed by taking all tensor products of a basis element of V and a basis element of W. 
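The coordinate description above can be checked numerically. As an illustration (NumPy is not part of the text, just a convenient vehicle), the outer product of coordinate vectors realizes the bilinear map T, and the array size exhibits the dimension formula:

```python
import numpy as np

# The bilinear map T(x, y) = (x_i y_j) from the text, realized on
# coordinate vectors: the coordinates of x ⊗ y form the outer product.
x = np.array([1.0, 2.0])           # x in a 2-dimensional space
y = np.array([3.0, 5.0, 7.0])      # y in a 3-dimensional space

t = np.outer(x, y)                 # 2×3 array (x_i y_j)
assert t.size == x.size * y.size   # dim(V ⊗ W) = dim V · dim W

# Bilinearity in the first argument: (x1 + x2) ⊗ y = x1 ⊗ y + x2 ⊗ y.
x2 = np.array([4.0, -1.0])
assert np.allclose(np.outer(x + x2, y), t + np.outer(x2, y))
```

The flattened array `t.ravel()` is the coordinate vector of x ⊗ y in the basis of pairwise products of basis elements.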
=== Associativity === The tensor product is associative in the sense that, given three vector spaces U , V , W {\displaystyle U,V,W} , there is a canonical isomorphism: ( U ⊗ V ) ⊗ W ≅ U ⊗ ( V ⊗ W ) , {\displaystyle (U\otimes V)\otimes W\cong U\otimes (V\otimes W),} that maps ( u ⊗ v ) ⊗ w {\displaystyle (u\otimes v)\otimes w} to u ⊗ ( v ⊗ w ) {\displaystyle u\otimes (v\otimes w)} . This allows omitting parentheses in the tensor product of more than two vector spaces or vectors. === Commutativity as vector space operation === The tensor product of two vector spaces V {\displaystyle V} and W {\displaystyle W} is commutative in the sense that there is a canonical isomorphism: V ⊗ W ≅ W ⊗ V , {\displaystyle V\otimes W\cong W\otimes V,} that maps v ⊗ w {\displaystyle v\otimes w} to w ⊗ v {\displaystyle w\otimes v} . On the other hand, even when V = W {\displaystyle V=W} , the tensor product of vectors is not commutative; that is v ⊗ w ≠ w ⊗ v {\displaystyle v\otimes w\neq w\otimes v} , in general. The map x ⊗ y ↦ y ⊗ x {\displaystyle x\otimes y\mapsto y\otimes x} from V ⊗ V {\displaystyle V\otimes V} to itself induces a linear automorphism that is called a braiding map. More generally and as usual (see tensor algebra), let V ⊗ n {\displaystyle V^{\otimes n}} denote the tensor product of n copies of the vector space V. For every permutation s of the first n positive integers, the map: x 1 ⊗ ⋯ ⊗ x n ↦ x s ( 1 ) ⊗ ⋯ ⊗ x s ( n ) {\displaystyle x_{1}\otimes \cdots \otimes x_{n}\mapsto x_{s(1)}\otimes \cdots \otimes x_{s(n)}} induces a linear automorphism of V ⊗ n → V ⊗ n {\displaystyle V^{\otimes n}\to V^{\otimes n}} , which is called a braiding map. == Tensor product of linear maps == Given a linear map f : U → V {\displaystyle f:U\to V} , and a vector space W, the tensor product: f ⊗ W : U ⊗ W → V ⊗ W {\displaystyle f\otimes W:U\otimes W\to V\otimes W} is the unique linear map such that: ( f ⊗ W ) ( u ⊗ w ) = f ( u ) ⊗ w . 
{\displaystyle (f\otimes W)(u\otimes w)=f(u)\otimes w.} The tensor product W ⊗ f {\displaystyle W\otimes f} is defined similarly. Given two linear maps f : U → V {\displaystyle f:U\to V} and g : W → Z {\displaystyle g:W\to Z} , their tensor product: f ⊗ g : U ⊗ W → V ⊗ Z {\displaystyle f\otimes g:U\otimes W\to V\otimes Z} is the unique linear map that satisfies: ( f ⊗ g ) ( u ⊗ w ) = f ( u ) ⊗ g ( w ) . {\displaystyle (f\otimes g)(u\otimes w)=f(u)\otimes g(w).} One has: f ⊗ g = ( f ⊗ Z ) ∘ ( U ⊗ g ) = ( V ⊗ g ) ∘ ( f ⊗ W ) . {\displaystyle f\otimes g=(f\otimes Z)\circ (U\otimes g)=(V\otimes g)\circ (f\otimes W).} In terms of category theory, this means that the tensor product is a bifunctor from the category of vector spaces to itself. If f and g are both injective or surjective, then the same is true for all above defined linear maps. In particular, the tensor product with a vector space is an exact functor; this means that every exact sequence is mapped to an exact sequence (tensor products of modules do not transform injections into injections, but they are right exact functors). By choosing bases of all vector spaces involved, the linear maps f and g can be represented by matrices. Then, depending on how the tensor v ⊗ w {\displaystyle v\otimes w} is vectorized, the matrix describing the tensor product f ⊗ g {\displaystyle f\otimes g} is the Kronecker product of the two matrices. 
For example, if V, X, W, and U above are all two-dimensional and bases have been fixed for all of them, and f and g are given by the matrices: A = [ a 1 , 1 a 1 , 2 a 2 , 1 a 2 , 2 ] , B = [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] , {\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}},\qquad B={\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}},} respectively, then the tensor product of these two matrices is: [ a 1 , 1 a 1 , 2 a 2 , 1 a 2 , 2 ] ⊗ [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] = [ a 1 , 1 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 1 , 2 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 2 , 1 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 2 , 2 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] ] = [ a 1 , 1 b 1 , 1 a 1 , 1 b 1 , 2 a 1 , 2 b 1 , 1 a 1 , 2 b 1 , 2 a 1 , 1 b 2 , 1 a 1 , 1 b 2 , 2 a 1 , 2 b 2 , 1 a 1 , 2 b 2 , 2 a 2 , 1 b 1 , 1 a 2 , 1 b 1 , 2 a 2 , 2 b 1 , 1 a 2 , 2 b 1 , 2 a 2 , 1 b 2 , 1 a 2 , 1 b 2 , 2 a 2 , 2 b 2 , 1 a 2 , 2 b 2 , 2 ] . {\displaystyle {\begin{aligned}{\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}}\otimes {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&={\begin{bmatrix}a_{1,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{1,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\[3pt]a_{2,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{2,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\\end{bmatrix}}\\&={\begin{bmatrix}a_{1,1}b_{1,1}&a_{1,1}b_{1,2}&a_{1,2}b_{1,1}&a_{1,2}b_{1,2}\\a_{1,1}b_{2,1}&a_{1,1}b_{2,2}&a_{1,2}b_{2,1}&a_{1,2}b_{2,2}\\a_{2,1}b_{1,1}&a_{2,1}b_{1,2}&a_{2,2}b_{1,1}&a_{2,2}b_{1,2}\\a_{2,1}b_{2,1}&a_{2,1}b_{2,2}&a_{2,2}b_{2,1}&a_{2,2}b_{2,2}\\\end{bmatrix}}.\end{aligned}}} The resultant rank is at most 4, and thus the resultant dimension is 4. Here, "rank" denotes the tensor rank, i.e., the number of requisite indices (while the matrix rank counts the number of degrees of freedom in the resulting array).
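The relation between the tensor product of maps and the Kronecker product can be verified directly; a minimal NumPy check (matrix sizes chosen arbitrarily, and NumPy used only as an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))    # matrix of f
B = rng.standard_normal((2, 2))    # matrix of g
u = rng.standard_normal(2)
w = rng.standard_normal(2)

# With the standard vectorization, u ⊗ w corresponds to np.kron(u, w),
# and the matrix of f ⊗ g is the Kronecker product of A and B, so
# (f ⊗ g)(u ⊗ w) = f(u) ⊗ g(w):
assert np.allclose(np.kron(A, B) @ np.kron(u, w), np.kron(A @ u, B @ w))

# The trace is multiplicative: Tr(A ⊗ B) = Tr(A) · Tr(B).
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
```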
Tr A ⊗ B = Tr A × Tr B {\displaystyle \operatorname {Tr} A\otimes B=\operatorname {Tr} A\times \operatorname {Tr} B} . A dyadic product is the special case of the tensor product between two vectors of the same dimension. == General tensors == For non-negative integers r and s a type ( r , s ) {\displaystyle (r,s)} tensor on a vector space V is an element of: T s r ( V ) = V ⊗ ⋯ ⊗ V ⏟ r ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ s = V ⊗ r ⊗ ( V ∗ ) ⊗ s . {\displaystyle T_{s}^{r}(V)=\underbrace {V\otimes \cdots \otimes V} _{r}\otimes \underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{s}=V^{\otimes r}\otimes \left(V^{*}\right)^{\otimes s}.} Here V ∗ {\displaystyle V^{*}} is the dual vector space (which consists of all linear maps f from V to the ground field K). There is a product map, called the (tensor) product of tensors: T s r ( V ) ⊗ K T s ′ r ′ ( V ) → T s + s ′ r + r ′ ( V ) . {\displaystyle T_{s}^{r}(V)\otimes _{K}T_{s'}^{r'}(V)\to T_{s+s'}^{r+r'}(V).} It is defined by grouping all occurring "factors" V together: writing v i {\displaystyle v_{i}} for an element of V and f i {\displaystyle f_{i}} for an element of the dual space: ( v 1 ⊗ f 1 ) ⊗ ( v 1 ′ ) = v 1 ⊗ v 1 ′ ⊗ f 1 . {\displaystyle (v_{1}\otimes f_{1})\otimes (v'_{1})=v_{1}\otimes v'_{1}\otimes f_{1}.} If V is finite dimensional, then picking a basis of V and the corresponding dual basis of V ∗ {\displaystyle V^{*}} naturally induces a basis of T s r ( V ) {\displaystyle T_{s}^{r}(V)} (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if F and G are two covariant tensors of orders m and n respectively (i.e. F ∈ T m 0 {\displaystyle F\in T_{m}^{0}} and G ∈ T n 0 {\displaystyle G\in T_{n}^{0}} ), then the components of their tensor product are given by: ( F ⊗ G ) i 1 i 2 ⋯ i m + n = F i 1 i 2 ⋯ i m G i m + 1 i m + 2 i m + 3 ⋯ i m + n . 
{\displaystyle (F\otimes G)_{i_{1}i_{2}\cdots i_{m+n}}=F_{i_{1}i_{2}\cdots i_{m}}G_{i_{m+1}i_{m+2}i_{m+3}\cdots i_{m+n}}.} Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let U be a tensor of type (1, 1) with components U β α {\displaystyle U_{\beta }^{\alpha }} , and let V be a tensor of type ( 1 , 0 ) {\displaystyle (1,0)} with components V γ {\displaystyle V^{\gamma }} . Then: ( U ⊗ V ) α β γ = U α β V γ {\displaystyle \left(U\otimes V\right)^{\alpha }{}_{\beta }{}^{\gamma }=U^{\alpha }{}_{\beta }V^{\gamma }} and: ( V ⊗ U ) μ ν σ = V μ U ν σ . {\displaystyle (V\otimes U)^{\mu \nu }{}_{\sigma }=V^{\mu }U^{\nu }{}_{\sigma }.} Tensors equipped with their product operation form an algebra, called the tensor algebra. === Evaluation map and tensor contraction === For tensors of type (1, 1) there is a canonical evaluation map: V ⊗ V ∗ → K {\displaystyle V\otimes V^{*}\to K} defined by its action on pure tensors: v ⊗ f ↦ f ( v ) . {\displaystyle v\otimes f\mapsto f(v).} More generally, for tensors of type ( r , s ) {\displaystyle (r,s)} , with r, s > 0, there is a map, called tensor contraction: T s r ( V ) → T s − 1 r − 1 ( V ) . {\displaystyle T_{s}^{r}(V)\to T_{s-1}^{r-1}(V).} (The copies of V {\displaystyle V} and V ∗ {\displaystyle V^{*}} on which this map is to be applied must be specified.) On the other hand, if V {\displaystyle V} is finite-dimensional, there is a canonical map in the other direction (called the coevaluation map): { K → V ⊗ V ∗ λ ↦ ∑ i λ v i ⊗ v i ∗ {\displaystyle {\begin{cases}K\to V\otimes V^{*}\\\lambda \mapsto \sum _{i}\lambda v_{i}\otimes v_{i}^{*}\end{cases}}} where v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} is any basis of V {\displaystyle V} , and v i ∗ {\displaystyle v_{i}^{*}} is its dual basis. This map does not depend on the choice of basis. 
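The component formula for products of tensors, and the contraction/evaluation map, are one-liners with `einsum`; a sketch under the obvious identification of tensors with multidimensional arrays (NumPy is an illustrative choice, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))    # covariant tensor of order 2
G = rng.standard_normal(3)         # covariant tensor of order 1

# Components of the tensor product are products of components:
# (F ⊗ G)_{ijk} = F_{ij} G_k.
FG = np.einsum('ij,k->ijk', F, G)
assert np.allclose(FG, F[:, :, None] * G[None, None, :])

# Contraction of a (1,1)-tensor U^a_b pairs the upper with the lower
# index; it is the evaluation map v ⊗ f ↦ f(v), i.e. the trace.
U = rng.standard_normal((3, 3))
assert np.isclose(np.einsum('aa->', U), np.trace(U))
```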
The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases. === Adjoint representation === The tensor product T s r ( V ) {\displaystyle T_{s}^{r}(V)} may be naturally viewed as a module for the Lie algebra E n d ( V ) {\displaystyle \mathrm {End} (V)} by means of the diagonal action: for simplicity let us assume r = s = 1 {\displaystyle r=s=1} , then, for each u ∈ E n d ( V ) {\displaystyle u\in \mathrm {End} (V)} , u ( a ⊗ b ) = u ( a ) ⊗ b − a ⊗ u ∗ ( b ) , {\displaystyle u(a\otimes b)=u(a)\otimes b-a\otimes u^{*}(b),} where u ∗ ∈ E n d ( V ∗ ) {\displaystyle u^{*}\in \mathrm {End} \left(V^{*}\right)} is the transpose of u, that is, in terms of the obvious pairing on V ⊗ V ∗ {\displaystyle V\otimes V^{*}} , ⟨ u ( a ) , b ⟩ = ⟨ a , u ∗ ( b ) ⟩ . {\displaystyle \langle u(a),b\rangle =\langle a,u^{*}(b)\rangle .} There is a canonical isomorphism T 1 1 ( V ) → E n d ( V ) {\displaystyle T_{1}^{1}(V)\to \mathrm {End} (V)} given by: ( a ⊗ b ) ( x ) = ⟨ x , b ⟩ a . {\displaystyle (a\otimes b)(x)=\langle x,b\rangle a.} Under this isomorphism, every u in E n d ( V ) {\displaystyle \mathrm {End} (V)} may be first viewed as an endomorphism of T 1 1 ( V ) {\displaystyle T_{1}^{1}(V)} and then viewed as an endomorphism of E n d ( V ) {\displaystyle \mathrm {End} (V)} . In fact it is the adjoint representation ad(u) of E n d ( V ) {\displaystyle \mathrm {End} (V)} . == Linear maps as tensors == Given two finite dimensional vector spaces U, V over the same field K, denote the dual space of U as U*, and the K-vector space of all linear maps from U to V as Hom(U,V). There is an isomorphism: U ∗ ⊗ V ≅ H o m ( U , V ) , {\displaystyle U^{*}\otimes V\cong \mathrm {Hom} (U,V),} defined by an action of the pure tensor f ⊗ v ∈ U ∗ ⊗ V {\displaystyle f\otimes v\in U^{*}\otimes V} on an element of U {\displaystyle U} , ( f ⊗ v ) ( u ) = f ( u ) v . 
{\displaystyle (f\otimes v)(u)=f(u)v.} Its "inverse" can be defined using a basis { u i } {\displaystyle \{u_{i}\}} and its dual basis { u i ∗ } {\displaystyle \{u_{i}^{*}\}} as in the section "Evaluation map and tensor contraction" above: { H o m ( U , V ) → U ∗ ⊗ V F ↦ ∑ i u i ∗ ⊗ F ( u i ) . {\displaystyle {\begin{cases}\mathrm {Hom} (U,V)\to U^{*}\otimes V\\F\mapsto \sum _{i}u_{i}^{*}\otimes F(u_{i}).\end{cases}}} This result implies: dim ( U ⊗ V ) = dim ( U ) dim ( V ) , {\displaystyle \dim(U\otimes V)=\dim(U)\dim(V),} which automatically gives the important fact that { u i ⊗ v j } {\displaystyle \{u_{i}\otimes v_{j}\}} forms a basis of U ⊗ V {\displaystyle U\otimes V} where { u i } , { v j } {\displaystyle \{u_{i}\},\{v_{j}\}} are bases of U and V. Furthermore, given three vector spaces U, V, W the tensor product is linked to the vector space of all linear maps, as follows: H o m ( U ⊗ V , W ) ≅ H o m ( U , H o m ( V , W ) ) . {\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathrm {Hom} (V,W)).} This is an example of adjoint functors: the tensor product is "left adjoint" to Hom. == Tensor products of modules over a ring == The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field: A ⊗ R B := F ( A × B ) / G , {\displaystyle A\otimes _{R}B:=F(A\times B)/G,} where now F ( A × B ) {\displaystyle F(A\times B)} is the free R-module generated by the cartesian product and G is the R-module generated by these relations. More generally, the tensor product can be defined even if the ring is non-commutative. In this case A has to be a right-R-module and B is a left-R-module, and instead of the last two relations above, the relation: ( a r , b ) ∼ ( a , r b ) {\displaystyle (ar,b)\sim (a,rb)} is imposed. If R is non-commutative, this is no longer an R-module, but just an abelian group. 
The universal property also carries over, slightly modified: the map φ : A × B → A ⊗ R B {\displaystyle \varphi :A\times B\to A\otimes _{R}B} defined by ( a , b ) ↦ a ⊗ b {\displaystyle (a,b)\mapsto a\otimes b} is a middle linear map (referred to as "the canonical middle linear map"); that is, it satisfies: φ ( a + a ′ , b ) = φ ( a , b ) + φ ( a ′ , b ) φ ( a , b + b ′ ) = φ ( a , b ) + φ ( a , b ′ ) φ ( a r , b ) = φ ( a , r b ) {\displaystyle {\begin{aligned}\varphi (a+a',b)&=\varphi (a,b)+\varphi (a',b)\\\varphi (a,b+b')&=\varphi (a,b)+\varphi (a,b')\\\varphi (ar,b)&=\varphi (a,rb)\end{aligned}}} The first two properties make φ a bilinear map of the abelian group A × B {\displaystyle A\times B} . For any middle linear map ψ {\displaystyle \psi } of A × B {\displaystyle A\times B} , a unique group homomorphism f of A ⊗ R B {\displaystyle A\otimes _{R}B} satisfies ψ = f ∘ φ {\displaystyle \psi =f\circ \varphi } , and this property determines φ {\displaystyle \varphi } within group isomorphism. See the main article for details. === Tensor product of modules over a non-commutative ring === Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by: A ⊗ R B := F ( A × B ) / G {\displaystyle A\otimes _{R}B:=F(A\times B)/G} where F ( A × B ) {\displaystyle F(A\times B)} is a free abelian group over A × B {\displaystyle A\times B} and G is the subgroup of F ( A × B ) {\displaystyle F(A\times B)} generated by relations: ∀ a , a 1 , a 2 ∈ A , ∀ b , b 1 , b 2 ∈ B , for all r ∈ R : ( a 1 , b ) + ( a 2 , b ) − ( a 1 + a 2 , b ) , ( a , b 1 ) + ( a , b 2 ) − ( a , b 1 + b 2 ) , ( a r , b ) − ( a , r b ) . {\displaystyle {\begin{aligned}&\forall a,a_{1},a_{2}\in A,\forall b,b_{1},b_{2}\in B,{\text{ for all }}r\in R:\\&(a_{1},b)+(a_{2},b)-(a_{1}+a_{2},b),\\&(a,b_{1})+(a,b_{2})-(a,b_{1}+b_{2}),\\&(ar,b)-(a,rb).\\\end{aligned}}} The universal property can be stated as follows. 
Let G be an abelian group with a map q : A × B → G {\displaystyle q:A\times B\to G} that is bilinear, in the sense that: q ( a 1 + a 2 , b ) = q ( a 1 , b ) + q ( a 2 , b ) , q ( a , b 1 + b 2 ) = q ( a , b 1 ) + q ( a , b 2 ) , q ( a r , b ) = q ( a , r b ) . {\displaystyle {\begin{aligned}q(a_{1}+a_{2},b)&=q(a_{1},b)+q(a_{2},b),\\q(a,b_{1}+b_{2})&=q(a,b_{1})+q(a,b_{2}),\\q(ar,b)&=q(a,rb).\end{aligned}}} Then there is a unique map q ¯ : A ⊗ B → G {\displaystyle {\overline {q}}:A\otimes B\to G} such that q ¯ ( a ⊗ b ) = q ( a , b ) {\displaystyle {\overline {q}}(a\otimes b)=q(a,b)} for all a ∈ A {\displaystyle a\in A} and b ∈ B {\displaystyle b\in B} . Furthermore, we can give A ⊗ R B {\displaystyle A\otimes _{R}B} a module structure under some extra conditions: If A is an (S,R)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is a left S-module, where s ( a ⊗ b ) := ( s a ) ⊗ b {\displaystyle s(a\otimes b):=(sa)\otimes b} . If B is an (R,S)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is a right S-module, where ( a ⊗ b ) s := a ⊗ ( b s ) {\displaystyle (a\otimes b)s:=a\otimes (bs)} . If A is an (S,R)-bimodule and B is an (R,T)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is an (S,T)-bimodule, where the left and right actions are defined in the same way as the previous two examples. If R is a commutative ring, then A and B are (R,R)-bimodules, where r a := a r {\displaystyle ra:=ar} and b r := r b {\displaystyle br:=rb} . By 3), we can conclude that A ⊗ R B {\displaystyle A\otimes _{R}B} is an (R,R)-bimodule. === Computing the tensor product === For vector spaces, the tensor product V ⊗ W {\displaystyle V\otimes W} is quickly computed since bases of V and W immediately determine a basis of V ⊗ W {\displaystyle V\otimes W} , as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, Z/nZ is not a free abelian group (Z-module). The tensor product with Z/nZ is given by: M ⊗ Z Z / n Z = M / n M . 
{\displaystyle M\otimes _{\mathbf {Z} }\mathbf {Z} /n\mathbf {Z} =M/nM.} More generally, given a presentation of some R-module M, that is, a number of generators m i ∈ M , i ∈ I {\displaystyle m_{i}\in M,i\in I} together with relations: ∑ j ∈ J a j i m i = 0 , a i j ∈ R , {\displaystyle \sum _{j\in J}a_{ji}m_{i}=0,\qquad a_{ij}\in R,} the tensor product can be computed as the following cokernel: M ⊗ R N = coker ( N J → N I ) {\displaystyle M\otimes _{R}N=\operatorname {coker} \left(N^{J}\to N^{I}\right)} Here N J = ⊕ j ∈ J N {\displaystyle N^{J}=\oplus _{j\in J}N} , and the map N J → N I {\displaystyle N^{J}\to N^{I}} is determined by sending some n ∈ N {\displaystyle n\in N} in the jth copy of N J {\displaystyle N^{J}} to a i j n {\displaystyle a_{ij}n} (in N I {\displaystyle N^{I}} ). Colloquially, this may be rephrased by saying that a presentation of M gives rise to a presentation of M ⊗ R N {\displaystyle M\otimes _{R}N} . This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of R-modules M 1 → M 2 {\displaystyle M_{1}\to M_{2}} , the tensor product: M 1 ⊗ R N → M 2 ⊗ R N {\displaystyle M_{1}\otimes _{R}N\to M_{2}\otimes _{R}N} is not usually injective. For example, tensoring the (injective) map given by multiplication with n, n : Z → Z with Z/nZ yields the zero map 0 : Z/nZ → Z/nZ, which is not injective. Higher Tor functors measure the defect of the tensor product being not left exact. All higher Tor functors are assembled in the derived tensor product. == Tensor product of algebras == Let R be a commutative ring. The tensor product of R-modules applies, in particular, if A and B are R-algebras. In this case, the tensor product A ⊗ R B {\displaystyle A\otimes _{R}B} is an R-algebra itself by putting: ( a 1 ⊗ b 1 ) ⋅ ( a 2 ⊗ b 2 ) = ( a 1 ⋅ a 2 ) ⊗ ( b 1 ⋅ b 2 ) . 
{\displaystyle (a_{1}\otimes b_{1})\cdot (a_{2}\otimes b_{2})=(a_{1}\cdot a_{2})\otimes (b_{1}\cdot b_{2}).} For example: R [ x ] ⊗ R R [ y ] ≅ R [ x , y ] . {\displaystyle R[x]\otimes _{R}R[y]\cong R[x,y].} A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as: A ⊗ R B ≅ B [ x ] / f ( x ) {\displaystyle A\otimes _{R}B\cong B[x]/f(x)} where now f is interpreted as the same polynomial, but with its coefficients regarded as elements of B. In the larger field B, the polynomial may become reducible, which brings in Galois theory. For example, if A = B is a Galois extension of R, then: A ⊗ R A ≅ A [ x ] / f ( x ) {\displaystyle A\otimes _{R}A\cong A[x]/f(x)} is isomorphic (as an A-algebra) to the A deg ( f ) {\displaystyle A^{\operatorname {deg} (f)}} . == Eigenconfigurations of tensors == Square matrices A {\displaystyle A} with entries in a field K {\displaystyle K} represent linear maps of vector spaces, say K n → K n {\displaystyle K^{n}\to K^{n}} , and thus linear maps ψ : P n − 1 → P n − 1 {\displaystyle \psi :\mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}} of projective spaces over K {\displaystyle K} . If A {\displaystyle A} is nonsingular then ψ {\displaystyle \psi } is well-defined everywhere, and the eigenvectors of A {\displaystyle A} correspond to the fixed points of ψ {\displaystyle \psi } . The eigenconfiguration of A {\displaystyle A} consists of n {\displaystyle n} points in P n − 1 {\displaystyle \mathbb {P} ^{n-1}} , provided A {\displaystyle A} is generic and K {\displaystyle K} is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors. 
Let A = ( a i 1 i 2 ⋯ i d ) {\displaystyle A=(a_{i_{1}i_{2}\cdots i_{d}})} be a d {\displaystyle d} -dimensional tensor of format n × n × ⋯ × n {\displaystyle n\times n\times \cdots \times n} with entries ( a i 1 i 2 ⋯ i d ) {\displaystyle (a_{i_{1}i_{2}\cdots i_{d}})} lying in an algebraically closed field K {\displaystyle K} of characteristic zero. Such a tensor A ∈ ( K n ) ⊗ d {\displaystyle A\in (K^{n})^{\otimes d}} defines polynomial maps K n → K n {\displaystyle K^{n}\to K^{n}} and P n − 1 → P n − 1 {\displaystyle \mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}} with coordinates: ψ i ( x 1 , … , x n ) = ∑ j 2 = 1 n ∑ j 3 = 1 n ⋯ ∑ j d = 1 n a i j 2 j 3 ⋯ j d x j 2 x j 3 ⋯ x j d for i = 1 , … , n {\displaystyle \psi _{i}(x_{1},\ldots ,x_{n})=\sum _{j_{2}=1}^{n}\sum _{j_{3}=1}^{n}\cdots \sum _{j_{d}=1}^{n}a_{ij_{2}j_{3}\cdots j_{d}}x_{j_{2}}x_{j_{3}}\cdots x_{j_{d}}\;\;{\mbox{for }}i=1,\ldots ,n} Thus each of the n {\displaystyle n} coordinates of ψ {\displaystyle \psi } is a homogeneous polynomial ψ i {\displaystyle \psi _{i}} of degree d − 1 {\displaystyle d-1} in x = ( x 1 , … , x n ) {\displaystyle \mathbf {x} =\left(x_{1},\ldots ,x_{n}\right)} . The eigenvectors of A {\displaystyle A} are the solutions of the constraint: rank ( x 1 x 2 ⋯ x n ψ 1 ( x ) ψ 2 ( x ) ⋯ ψ n ( x ) ) ≤ 1 {\displaystyle {\mbox{rank}}{\begin{pmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\psi _{1}(\mathbf {x} )&\psi _{2}(\mathbf {x} )&\cdots &\psi _{n}(\mathbf {x} )\end{pmatrix}}\leq 1} and the eigenconfiguration is given by the variety of the 2 × 2 {\displaystyle 2\times 2} minors of this matrix. == Other examples of tensor products == === Topological tensor products === Hilbert spaces generalize finite-dimensional vector spaces to arbitrary dimensions. There is an analogous operation, also called the "tensor product," that makes Hilbert spaces a symmetric monoidal category. It is essentially constructed as the metric space completion of the algebraic tensor product discussed above. 
However, it does not satisfy the obvious analogue of the universal property defining tensor products; the morphisms for that property must be restricted to Hilbert–Schmidt operators. In situations where the imposition of an inner product is inappropriate, one can still attempt to complete the algebraic tensor product, as a topological tensor product. However, such a construction is no longer uniquely specified: in many cases, there are multiple natural topologies on the algebraic tensor product. === Tensor product of graded vector spaces === Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition). === Tensor product of representations === Vector spaces endowed with an additional multiplicative structure are called algebras. The tensor product of such algebras is described by the Littlewood–Richardson rule. === Tensor product of quadratic forms === === Tensor product of multilinear forms === Given two multilinear forms f ( x 1 , … , x k ) {\displaystyle f(x_{1},\dots ,x_{k})} and g ( x 1 , … , x m ) {\displaystyle g(x_{1},\dots ,x_{m})} on a vector space V {\displaystyle V} over the field K {\displaystyle K} their tensor product is the multilinear form: ( f ⊗ g ) ( x 1 , … , x k + m ) = f ( x 1 , … , x k ) g ( x k + 1 , … , x k + m ) . {\displaystyle (f\otimes g)(x_{1},\dots ,x_{k+m})=f(x_{1},\dots ,x_{k})g(x_{k+1},\dots ,x_{k+m}).} This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product. 
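The definition above can be checked numerically; a sketch with NumPy (an illustrative choice, not part of the text), taking a bilinear form f (k = 2) and a linear form g (m = 1) on R²:

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal((2, 2))    # a bilinear form on R^2  (k = 2)
g = rng.standard_normal(2)         # a linear form on R^2    (m = 1)

# (f ⊗ g)(x1, x2, x3) = f(x1, x2) g(x3): the components are products
# of the components of f and g.
fg = np.einsum('ij,k->ijk', f, g)

x1, x2, x3 = rng.standard_normal((3, 2))
lhs = np.einsum('ijk,i,j,k->', fg, x1, x2, x3)
rhs = np.einsum('ij,i,j->', f, x1, x2) * (g @ x3)
assert np.isclose(lhs, rhs)

# Kronecker form: flattening fg gives the Kronecker product of the
# flattened component arrays.
assert np.allclose(fg.ravel(), np.kron(f.ravel(), g))
```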
=== Tensor product of sheaves of modules === === Tensor product of line bundles === === Tensor product of fields === === Tensor product of graphs === It should be mentioned that, though called "tensor product", this is not a tensor product of graphs in the above sense; actually it is the category-theoretic product in the category of graphs and graph homomorphisms. However it is actually the Kronecker tensor product of the adjacency matrices of the graphs. Compare also the section Tensor product of linear maps above. === Monoidal categories === The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects. == Quotient algebras == A number of important subspaces of the tensor algebra can be constructed as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and the universal enveloping algebra in general. The exterior algebra is constructed from the exterior product. Given a vector space V, the exterior product V ∧ V {\displaystyle V\wedge V} is defined as: V ∧ V := V ⊗ V / { v ⊗ v ∣ v ∈ V } . {\displaystyle V\wedge V:=V\otimes V{\big /}\{v\otimes v\mid v\in V\}.} When the underlying field of V does not have characteristic 2, then this definition is equivalent to: V ∧ V := V ⊗ V / { v 1 ⊗ v 2 + v 2 ⊗ v 1 ∣ ( v 1 , v 2 ) ∈ V 2 } . {\displaystyle V\wedge V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}+v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.} The image of v 1 ⊗ v 2 {\displaystyle v_{1}\otimes v_{2}} in the exterior product is usually denoted v 1 ∧ v 2 {\displaystyle v_{1}\wedge v_{2}} and satisfies, by construction, v 1 ∧ v 2 = − v 2 ∧ v 1 {\displaystyle v_{1}\wedge v_{2}=-v_{2}\wedge v_{1}} . 
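In characteristic not 2, the antisymmetrized tensor v1 ⊗ v2 − v2 ⊗ v1 represents v1 ∧ v2, and the defining identities can be checked on coordinate arrays (a sketch; the helper name `wedge` is ours, not standard):

```python
import numpy as np

def wedge(v1, v2):
    # Representative of v1 ∧ v2 inside V ⊗ V (characteristic ≠ 2).
    return np.outer(v1, v2) - np.outer(v2, v1)

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 3.0, 1.0])

assert np.allclose(wedge(v1, v2), -wedge(v2, v1))  # v1 ∧ v2 = −v2 ∧ v1
assert np.allclose(wedge(v1, v1), 0)               # v ∧ v = 0
```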
Similar constructions are possible for V ⊗ ⋯ ⊗ V {\displaystyle V\otimes \dots \otimes V} (n factors), giving rise to Λ n V {\displaystyle \Lambda ^{n}V} , the nth exterior power of V. The latter notion is the basis of differential n-forms. The symmetric algebra is constructed in a similar manner, from the symmetric product: V ⊙ V := V ⊗ V / { v 1 ⊗ v 2 − v 2 ⊗ v 1 ∣ ( v 1 , v 2 ) ∈ V 2 } . {\displaystyle V\odot V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}-v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.} More generally: Sym n V := V ⊗ ⋯ ⊗ V ⏟ n / ( ⋯ ⊗ v i ⊗ v i + 1 ⊗ ⋯ − ⋯ ⊗ v i + 1 ⊗ v i ⊗ … ) {\displaystyle \operatorname {Sym} ^{n}V:=\underbrace {V\otimes \dots \otimes V} _{n}{\big /}(\dots \otimes v_{i}\otimes v_{i+1}\otimes \dots -\dots \otimes v_{i+1}\otimes v_{i}\otimes \dots )} That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors. == Tensor product in programming == === Array programming languages === Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ○.× (for example A ○.× B or A ○.× B ○.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c). J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable. However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL). 
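For comparison (NumPy is not mentioned in the text), the same pattern in Python is the generalized outer product `np.multiply.outer`, which likewise chains to higher orders:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([10, 20, 30])
c = np.array([1, -1])

# Analogue of APL's  A ○.× B ○.× C  /  J's  a */ b */ c :
abc = np.multiply.outer(np.multiply.outer(a, b), c)
assert abc.shape == (2, 3, 2)
assert abc[1, 2, 0] == a[1] * b[2] * c[0]   # entries are products
```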
== See also == Dyadics – Second order tensor in vector algebra Extension of scalars Monoidal category – Category admitting tensor products Tensor algebra – Universal construction in multilinear algebra Tensor contraction – Operation in mathematics and physics Topological tensor product – Tensor product constructions for topological vector spaces == Notes == == References == Bourbaki, Nicolas (1989). Elements of mathematics, Algebra I. Springer-Verlag. ISBN 3-540-64243-9. Gowers, Timothy. "How to lose your fear of tensor products". Archived from the original on 7 May 2021. Grillet, Pierre A. (2007). Abstract Algebra. Springer Science+Business Media, LLC. ISBN 978-0387715674. Halmos, Paul (1974). Finite dimensional vector spaces. Springer. ISBN 0-387-90093-4. Hungerford, Thomas W. (2003). Algebra. Springer. ISBN 0387905189. Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001 Mac Lane, S.; Birkhoff, G. (1999). Algebra. AMS Chelsea. ISBN 0-8218-1646-2. Aguiar, M.; Mahajan, S. (2010). Monoidal functors, species and Hopf algebras. CRM Monograph Series Vol 29. ISBN 978-0-8218-4776-3. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. "Bibliography on the nonabelian tensor product of groups".
|
Wikipedia:Teo Mora#0
|
Ferdinando 'Teo' Mora is an Italian mathematician who, from 1990 until 2019, was a professor of algebra at the University of Genoa. == Life and work == Mora received his degree in mathematics from the University of Genoa in 1974. His publications span forty years; his notable contributions in computer algebra are the tangent cone algorithm and the extension of Buchberger's theory of Gröbner bases, and of the related algorithm, first to non-commutative polynomial rings and more recently to effective rings; less significant is his notion of the Gröbner fan, and marginal, relative to the other authors, his contribution to the FGLM algorithm. Mora is on the managing editorial board of the journal Applicable Algebra in Engineering, Communication and Computing, published by Springer, and was formerly also an editor of the Bulletin of the Iranian Mathematical Society. He is the author of the tetralogy Solving Polynomial Equation Systems: Solving Polynomial Equation Systems I: The Kronecker-Duval Philosophy, on equations in one variable Solving Polynomial Equation Systems II: Macaulay's Paradigm and Gröbner Technology, on equations in several variables Solving Polynomial Equation Systems III: Algebraic Solving, Solving Polynomial Equation Systems IV: Buchberger Theory and Beyond, on the Buchberger algorithm == Personal life == Mora lives in Genoa. He published a book trilogy in 1977-1978 (reprinted 2001-2003) called Storia del cinema dell'orrore on the history of horror films. Italian television said in 2014 that the books are an "authoritative guide with in-depth detailed descriptions and analysis." == See also == FGLM algorithm, Buchberger's algorithm Gröbner fan, Gröbner basis Algebraic geometry#Computational algebraic geometry, System of polynomial equations == References == == Notes == == Further reading == Teo Mora (1977). Storia del cinema dell'orrore. Vol. 1. Fanucci. ISBN 978-88-347-0800-2. "Second". and "third". volumes: ISBN 88-347-0850-4, ISBN 88-347-0897-0. Reprinted 2001.
George M Bergman (1978). "The diamond lemma for ring theory". Advances in Mathematics. 29 (2): 178–218. doi:10.1016/0001-8708(78)90010-5. F. Mora (1982). "An algorithm to compute the equations of tangent cones". Computer Algebra: EUROCAM '82, European Computer Algebra Conference, Marseilles, France, April 5-7, 1982. Lecture Notes in Computer Science. Vol. 144. pp. 158–165. doi:10.1007/3-540-11607-9_18. ISBN 978-3-540-11607-3. F. Mora (1986). "Groebner bases for non-commutative polynomial rings". Algebraic Algorithms and Error-Correcting Codes: 3rd International Conference, AAECC-3, Grenoble, France, July 15-19, 1985, Proceedings (PDF). Lecture Notes in Computer Science. Vol. 229. pp. 353–362. doi:10.1007/3-540-16776-5_740. ISBN 978-3-540-16776-1. David Bayer; Ian Morrison (1988). "Standard bases and geometric invariant theory I. Initial ideals and state polytopes". Journal of Symbolic Computation. 6 (2–3): 209–218. doi:10.1016/S0747-7171(88)80043-9. also in: Lorenzo Robbiano, ed. (1989). Computational Aspects of Commutative Algebra. Vol. 6. London: Academic Press. Teo Mora (1988). "Seven variations on standard bases". Gerhard Pfister; T.Mora; Carlo Traverso (1992). Christoph M Hoffmann (ed.). "An introduction to the tangent cone algorithm". Issues in Robotics and Nonlinear Geometry (Advances in Computing Research). 6: 199–270. T. Mora (1994). "An introduction to commutative and non-commutative Gröbner bases". Theoretical Computer Science. 134: 131–173. doi:10.1016/0304-3975(94)90283-6. Hans-Gert Gräbe (1995). "Algorithms in Local Algebra". Journal of Symbolic Computation. 19 (6): 545–557. doi:10.1006/jsco.1995.1031. Gert-Martin Greuel; G. Pfister (1996). "Advances and improvements in the theory of standard bases and syzygies". CiteSeerX 10.1.1.49.1231. M.Caboara, T.Mora (2002). "The Chen-Reed-Helleseth-Truong Decoding Algorithm and the Gianni-Kalkbrenner Gröbner Shape Theorem". Journal of Applicable Algebra. 13 (3): 209–232. doi:10.1007/s002000200097. 
S2CID 2505343. M.E. Alonso; M.G. Marinari; M.T. Mora (2003). "The Big Mother of All the Dualities, I: Möller Algorithm". Communications in Algebra. 31 (2): 783–818. CiteSeerX 10.1.1.57.7799. doi:10.1081/AGB-120017343. S2CID 120556267. Teo Mora (March 1, 2003). Solving Polynomial Equation Systems I: The Kronecker-Duval Philosophy. Encyclopedia of Mathematics and its Application. Vol. 88. Cambridge University Press. doi:10.1017/cbo9780511542831. ISBN 9780521811545. S2CID 118216321. T. Mora (2005). Solving Polynomial Equation Systems II: Macaulay's Paradigm and Gröbner Technology. Encyclopedia of Mathematics and its Applications. Vol. 99. Cambridge University Press. T. Mora (2015). Solving Polynomial Equation Systems III: Algebraic Solving. Encyclopedia of Mathematics and its Applications. Vol. 157. Cambridge University Press. T Mora (2016). Solving Polynomial Equation Systems IV: Buchberger Theory and Beyond. Encyclopedia of Mathematics and its Applications. Vol. 158. Cambridge University Press. ISBN 9781107109636. T. Mora (2015). "De Nugis Groebnerialium 4: Zacharias, Spears, Möller". Proceedings of the 2015 ACM on International Symposium on Symbolic and Algebraic Computation, ISSAC '15. pp. 283–290. doi:10.1145/2755996.2756640. ISBN 9781450334358. S2CID 14654596. Michela Ceria; Teo Mora (2016). "Buchberger–Weispfenning theory for effective associative rings". Journal of Symbolic Computation. 83: 112–146. arXiv:1611.08846. doi:10.1016/j.jsc.2016.11.008. S2CID 10363249. T Mora (2016). Solving Polynomial Equation Systems IV: Buchberger Theory and Beyond. Encyclopedia of Mathematics and its Applications. Vol. 158. Cambridge University Press. ISBN 9781107109636. 
== External links == Teo Mora and Michela Ceria, Do It Yourself: Buchberger and Janet bases over effective rings, Part 1: Buchberger Algorithm via Spear’s Theorem, Zacharias’ Representation, Weisspfenning Multiplication, Part 2: Moeller Lifting Theorem vs Buchberger Criteria, Part 3: What happens to involutive bases?. Invited talk at ICMS 2020 International Congress on Mathematical Software, Braunschweig, 13-16 July 2020
|
Wikipedia:Teofilo Bruni#0
|
Teofilo Bruni (Verona, 1569 – Vicenza, 1638) was an Italian mathematician and astronomer. == Life == Born in Verona, he was a Capuchin friar known by the name of Raffaele. He wrote mainly about mathematics and astronomy, and also published a book about clocks and other instruments based on mathematical concepts. == Works == Armonia astronomica e geometrica (in Italian). Venezia: Giovanni Varisco & Varisco Varisco. 1622. Novo planisferio, o Astrolabio universale (in Italian). Vicenza: Francesco Grossi. 1625. == References ==
|
Wikipedia:Teragon#0
|
Tarragon (Artemisia dracunculus), also known as estragon, is a species of perennial herb in the family Asteraceae. It is widespread in the wild across much of Eurasia and North America and is cultivated for culinary and medicinal purposes. One subspecies, Artemisia dracunculus var. sativa, is cultivated to use the leaves as an aromatic culinary herb. In some other subspecies, the characteristic aroma is largely absent. Informal names for distinguishing the variations include "French tarragon" (best for culinary use) and "Russian tarragon". Tarragon grows to 120–150 centimetres (4–5 feet) tall, with slender branches. The leaves are lanceolate, 2–8 cm (1–3 in) long and 2–10 mm (1⁄8–3⁄8 in) broad, glossy green, with an entire margin. The flowers are produced in small capitula 2–4 mm (1⁄16–3⁄16 in) diameter, each capitulum containing up to 40 yellow or greenish-yellow florets. French tarragon, however, seldom produces any flowers (or seeds). Some tarragon plants produce seeds that are generally sterile. Others produce viable seeds. Tarragon has rhizomatous roots that it uses to spread and readily reproduce. == Cultivation == French tarragon is the variety used for cooking in the kitchen and is not grown from seed, as the flowers are sterile; instead, it is propagated by root division. Russian tarragon (A. dracunculoides L.) can be grown from seed but is much weaker in flavor when compared to the French variety. However, Russian tarragon is a far more hardy and vigorous plant, spreading at the roots and growing over a meter tall. This tarragon actually prefers poor soils and happily tolerates drought and neglect. It is not as intensely aromatic and flavorsome as its French cousin, but it produces many more leaves from early spring onwards that are mild and good in salads and cooked food. Russian tarragon loses what flavor it has as it ages and is widely considered useless as a culinary herb, though it is sometimes used in crafts. 
The young stems in early spring can be cooked as an asparagus substitute. Horticulturists recommend that Russian tarragon be grown indoors from seed and planted in summer. The spreading plants can be divided easily. A better substitute for Russian tarragon is Mexican tarragon (Tagetes lucida), also known as Mexican mint marigold, Texas tarragon, or winter tarragon. It is much more reminiscent of French tarragon, with a hint of anise. Although not in the same genus as the other tarragons, Mexican tarragon has a more robust flavor than Russian tarragon that does not diminish significantly with age. It cannot, however, be grown as a perennial in cold climates. == Health == Tarragon has a flavor and odor profile reminiscent of anise due largely to the presence of estragole, a known carcinogen and teratogen in mice. Estragole concentration in fresh tarragon leaves is about 2900 mg/kg. However, a European Union investigation concluded that the danger of estragole is minimal. Research studying rat livers found a BMDL10 (approximately the dose that would cause a 10% increase in background tumor rate) of estragole to be 3.3–6.5 mg/kg body weight per day, which for an 80 kg human would be ~400 mg per day, or 130 g of fresh tarragon leaves per day. When used as a culinary herb, a typical quantity in a dish might be 5 g of fresh leaves. Estragole, along with the other oils that give tarragon its flavor, is highly volatile and vaporises as the leaf is dried, reducing both the health risk and the usability of the herb. Several other herbs, such as basil, also contain estragole. == Uses == === Culinary use === In Syria, fresh tarragon is eaten with white Syrian cheese, and also used with dishes such as shish barak and kibbeh labaniyeh. In Iran, tarragon is used as a side dish in sabzi khordan (fresh herbs), or in stews and Persian-style pickles, particularly khiar shoor (pickled cucumbers).
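The dose arithmetic in the Health section above can be checked directly; a back-of-the-envelope sketch using only the figures quoted in the text (the 80 kg body weight is the text's example):

```python
# Figures quoted in the text
bmdl10_low, bmdl10_high = 3.3, 6.5   # mg estragole per kg body weight per day
body_weight_kg = 80
estragole_mg_per_kg_leaf = 2900      # concentration in fresh tarragon leaves

# Daily BMDL10-equivalent dose for an 80 kg person: 264-520 mg/day,
# consistent with the "~400 mg per day" mid-range figure
daily_mg_low = bmdl10_low * body_weight_kg
daily_mg_high = bmdl10_high * body_weight_kg

# Fresh leaves needed to reach ~400 mg of estragole: about 138 g,
# in line with the ~130 g quoted
leaves_g = 400 / estragole_mg_per_kg_leaf * 1000
```

At a typical culinary quantity of 5 g of fresh leaves, the estragole intake (~14.5 mg) is well below this range.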
Tarragon is one of the four fines herbes of French cooking and is particularly suitable for chicken, fish, and egg dishes. Tarragon is the main flavoring component of Béarnaise sauce. Fresh, lightly bruised tarragon sprigs are steeped in vinegar to produce tarragon vinegar. Pounded with butter, it produces an excellent topping for grilled salmon or beef. Tarragon is used to flavor a popular carbonated soft drink in Armenia, Azerbaijan, Georgia (where it originally comes from), and, by extension, Russia, Ukraine and Kazakhstan. The drink, named Tarkhuna, is made out of sugar, carbonated water, and tarragon leaves which give it its signature green color. Tarragon is one of the main ingredients in Chakapuli, a Georgian national dish. In Slovenia, tarragon is used in a variation of the traditional nut roll sweet cake, called potica. In Hungary, a popular chicken soup is flavored with tarragon. == Chemistry == Gas chromatography/mass spectrometry analysis has revealed that A. dracunculus oil contains predominantly phenylpropanoids such as estragole (16.2%), methyl eugenol (35.8%), and trans-anethole (21.1%). The other major constituents were terpenes and terpenoids, including α-trans-ocimene (20.6%), limonene (12.4%), α-pinene (5.1%), allo-ocimene (4.8%), methyl eugenol (2.2%), β-pinene (0.8%), α-terpinolene (0.5%), bornyl acetate (0.5%) and bicyclogermacrene (0.5%). The organic compound capillin was initially isolated from Artemisia capillaris in 1956. cis-Pellitorin, an isobutyramide eliciting a pungent taste, has been isolated from the tarragon plant. == Name == The plant is commonly known as dragon in Swedish and Dutch. The use of Dragon for the herb or plant in German is outdated. The species name, dracunculus, means "little dragon", and the plant seems to be so named due to its coiled roots. See Artemisia for the genus name derivative. 
== References == == External links == Flora of Pakistan: Artemisia dracunculus "Tarragon" at Purdue Guide to Medicinal and Aromatic Plants Voigt, Chuck (9 January 2014). Propagating and Growing French Tarragon (PDF). Illinois Specialty Crops, Agritourism and Organic Conference. Springfield, IL.
|
Wikipedia:Tertiary ideal#0
|
In mathematics, a tertiary ideal is a two-sided ideal in a (possibly noncommutative) ring that cannot be expressed as a nontrivial intersection of a right fractional ideal with another ideal. Tertiary ideals generalize primary ideals to the case of noncommutative rings. Although primary decompositions do not exist in general for ideals in noncommutative rings, tertiary decompositions do, at least if the ring is Noetherian. Every primary ideal is tertiary. Tertiary ideals and primary ideals coincide for commutative rings. To any (two-sided) ideal I, a tertiary ideal called the tertiary radical can be associated, defined as t ( I ) = { r ∈ R | ∀ s ∉ I , ∃ x ∈ ( s ) x ∉ I and ( x ) ( r ) ⊂ I } . {\displaystyle t(I)=\{r\in R{\mbox{ }}|{\mbox{ }}\forall s\notin I,{\mbox{ }}\exists x\in (s){\mbox{ }}x\notin I{\text{ and }}(x)(r)\subset I\}.} Then t(I) always contains I. If R is a (not necessarily commutative) Noetherian ring and I a right ideal in R, then I has a unique irredundant decomposition into tertiary ideals I = T 1 ∩ ⋯ ∩ T n {\displaystyle I=T_{1}\cap \dots \cap T_{n}} . == References == Riley, J.A. (1962), "Axiomatic primary and tertiary decomposition theory", Trans. Amer. Math. Soc., 105 (2): 177–201, doi:10.1090/s0002-9947-1962-0141683-4 Tertiary ideal, Encyclopedia of Mathematics, Springer Online Reference Works. Behrens, Ernst-August (1972), Ring Theory, Verlag Academic Press, ISBN 9780080873572 Kurata, Yoshiki (1965), "On an additive ideal theory in a non-associative ring", Mathematische Zeitschrift, 88 (2): 129–135, doi:10.1007/BF01112095, S2CID 119531162
|
Wikipedia:Test of Mathematics for University Admission#0
|
The Test of Mathematics for University Admission (TMUA) is a test used by universities in the United Kingdom to assess the mathematical thinking and reasoning skills of students applying for undergraduate mathematics courses or courses featuring mathematics, such as computer science or economics. It is usually sat by students in the UK, but students applying from other countries must also take it if their chosen university requires it. A number of universities across the world accept the test as an optional part of their application process for mathematics-based courses. The TMUA exams from 2017 were paper-based; since 2024, however, the test has been administered on a computer, with applicants able to use a whiteboard notebook for their working. == History == The test was developed by Cambridge Assessment Admissions Testing and launched in 2016. It was designed to assess the key skills that students need to succeed on demanding university-level mathematics courses, and to assist university mathematics tutors in making admissions decisions. Durham University and Lancaster University began using the test in 2016, with the University of Warwick, the University of Sheffield and the University of Southampton recognising the test in 2017, and the London School of Economics and Political Science (LSE) and Cardiff University in 2018. Research indicates that the test has good predictive validity, with good correlation between candidates' scores in the test and their performance in their exams at the end of the first year of university study. There is also correlation between A-level Further Maths performance and performance in the test. == Changes == Before 2024, the test was administered by Cambridge Assessment Admissions Testing; since the 2024 round, it has been administered by Pearson VUE instead. Candidates now complete the exam at a Pearson test centre, whereas previously they would have sat the test at their school or at a registered test centre.
Under the new format, mathematical working is completed in booklets of laminated paper using whiteboard markers, and candidates may request new booklets when needed. == Test format and specification == The Test of Mathematics for University Admission is a 2-hour-and-30-minute test, to be completed without dictionaries or calculators. It comprises two papers, which are taken consecutively: Paper 1: Mathematical Thinking Paper 1 has 20 multiple-choice questions, with 75 minutes allowed to complete the paper. This paper assesses a candidate's ability to apply their knowledge of mathematics in new situations. It comprises a core set of ideas from Pure Mathematics. These ideas reflect those that would be met early on in a typical A Level Mathematics course: algebra, basic functions, sequences and series, coordinate geometry, trigonometry, exponentials and logarithms, differentiation, integration, graphs of functions. In addition, knowledge of the GCSE curriculum is assumed. Paper 2: Mathematical Reasoning Paper 2 has 20 multiple-choice questions, with 75 minutes allowed to complete the paper. The second paper assesses a candidate's ability to justify and interpret mathematical arguments and conjectures, and to deal with elementary concepts from logic. It assumes knowledge of the Paper 1 specification and, in addition, requires students to have some knowledge of the structure of proof and basic logical concepts. == Scoring == There is no pass or fail for the test; however, a higher score will generally improve the candidate's chances of being admitted to their university of choice. A candidate's score is the total number of correct answers given across both papers. As the test is multiple-choice, working is not marked. Each question has the same weighting, and no penalties are given for incorrect answers. Raw scores are converted to a scale of 1.0 to 9.0 (with 9.0 being the highest).
A score is also reported for each of the two papers (also on the 1.0 to 9.0 scale), but these are for candidate information only and do not form part of the formal test result. == Timing and results == Since 2024, the test has been made available twice a year, first in late October or early November and again in January (previously it was held only in October). Entry for the test typically opens in September and candidates must be registered by early October. Results are released in late November. Candidates can access their results online and share them with their chosen institutions. == Preparation == Students generally spend several weeks preparing for the TMUA exam. Various preparation materials are available for students wanting to get ready for the exam, such as textbooks, courses and online materials. Completing past papers is generally agreed to be a highly effective means of preparing for the test, as they are directly representative of what the exam is like. The past papers are freely available from the exam administrator and various other sources. Answer keys are also released alongside TMUA past papers. == References == == External links == http://www.admissionstesting.org Cambridge Assessment Admissions Testing
|
Wikipedia:Tetractys#0
|
The tetractys (Greek: τετρακτύς), or tetrad, or the tetractys of the decad is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row, which is the geometrical representation of the fourth triangular number. As a mystical symbol, it was very important to the secret worship of Pythagoreanism. There were four seasons, and the number was also associated with planetary motions and music. == Pythagorean symbol == The first four numbers symbolize the musica universalis and the Cosmos as: Monad – Unity Dyad – Power – Limit/Unlimited (peras/apeiron) Triad – Harmony Tetrad – Kosmos The four rows add up to ten, which was unity of a higher order (The Dekad). The Tetractys symbolizes the four classical elements—air, fire, water, and earth. The Tetractys represented the organization of space: the first row represented zero dimensions (a point) the second row represented one dimension (a line of two points) the third row represented two dimensions (a plane defined by a triangle of three points) the fourth row represented three dimensions (a tetrahedron defined by four points) A prayer of the Pythagoreans shows the importance of the Tetractys (sometimes called the "Mystic Tetrad"), as the prayer was addressed to it. Bless us, divine number, thou who generated gods and men! O holy, holy Tetractys, thou that containest the root and source of the eternally flowing creation! For the divine number begins with the profound, pure unity until it comes to the holy four; then it begets the mother of all, the all-comprising, all-bounding, the first-born, the never-swerving, the never-tiring holy ten, the keyholder of all. The Pythagorean oath also mentioned the Tetractys: By that pure, holy, four lettered name on high, nature's eternal fountain and supply, the parent of all souls that living be, by him, with faith find oath, I swear to thee. 
It is said that the Pythagorean musical system was based on the Tetractys as the rows can be read as the ratios of 4:3 (perfect fourth), 3:2 (perfect fifth), 2:1 (octave), forming the basic intervals of the Pythagorean scales. That is, Pythagorean scales are generated from combining pure fourths (in a 4:3 relation), pure fifths (in a 3:2 relation), and the simple ratios of the unison 1:1 and the octave 2:1. Note that the diapason, 2:1 (octave), and the diapason plus diapente, 3:1 (compound fifth or perfect twelfth), are consonant intervals according to the tetractys of the decad, but that the diapason plus diatessaron, 8:3 (compound fourth or perfect eleventh), is not. The Tetractys [also known as the decad] is an equilateral triangle formed from the sequence of the first ten numbers aligned in four rows. It is both a mathematical idea and a metaphysical symbol that embraces within itself—in seedlike form—the principles of the natural world, the harmony of the cosmos, the ascent to the divine, and the mysteries of the divine realm. So revered was this ancient symbol that it inspired ancient philosophers to swear by the name of the one who brought this gift to humanity. == Kabbalist symbol == In the work by anthropologist Raphael Patai entitled The Hebrew Goddess, the author argues that the tetractys and its mysteries influenced the early Kabbalah. A Hebrew tetractys has the letters of the Tetragrammaton inscribed on the ten positions of the tetractys, from right to left. It has been argued that the Kabbalistic Tree of Life, with its ten spheres of emanation, is in some way connected to the tetractys, but its form is not that of a triangle. The occultist Dion Fortune writes: The point is assigned to Kether; the line to Chokmah; the two-dimensional plane to Binah; consequently the three-dimensional solid naturally falls to Chesed. 
The relationship between geometrical shapes and the first four Sephirot is analogous to the geometrical correlations in the Tetraktys, shown above under Pythagorean symbol, and unveils the relevance of the Tree of Life to the Tetraktys. == Occurrence == The tetractys occurs (generally coincidentally) in the following: the baryon decuplet an archbishop's coat of arms the arrangement of bowling pins in ten-pin bowling the arrangement of billiard balls in ten-ball pool a Chinese checkers board the "Christmas Tree" formation in association football == In poetry == In English-language poetry, a tetractys is a syllable-counting form with five lines. The first line has one syllable, the second has two syllables, the third line has three syllables, the fourth line has four syllables, and the fifth line has ten syllables. A sample tetractys, "Mantrum", would look like this:

Your
fury
confuses
us all greatly.
Volatile, big-bodied tots are selfish.

The tetractys was created by Ray Stebbing, who said the following about his newly created form: "The tetractys could be Britain's answer to the haiku. Its challenge is to express a complete thought, profound or comic, witty or wise, within the narrow compass of twenty syllables." == See also == Pascal's triangle == References == == Further reading == von Franz, Marie-Louise. Number and Time: Reflections Leading Towards a Unification of Psychology and Physics. Rider & Company, London, 1974. ISBN 0-09-121020-8 Fideler, D. ed. The Pythagorean Sourcebook and Library Archived 2015-05-09 at the Wayback Machine. Phanes Press, 1987. The Theoretic Arithmetic of the Pythagoreans – Thomas Taylor == External links == Examples of Tetractys poems
|
Wikipedia:Tetraview#0
|
A tetraview is an attempt to graph a complex function of a complex variable, by a method invented by Davide P. Cervone. A graph of a real function of a real variable is the set of ordered pairs (x,y) such that y = f(x). This is the ordinary two-dimensional Cartesian graph studied in school algebra. Every complex number has both a real part and an imaginary part, so one complex variable is two-dimensional and a pair of complex variables is four-dimensional. A tetraview is an attempt to give a picture of a four-dimensional object using a two-dimensional representation—either on a piece of paper or on a computer screen, showing a still picture consisting of five views, one in the center and one at each corner. This is roughly analogous to picturing a three-dimensional object by giving a front view, a side view, and a view from above. A picture of a three-dimensional object is a projection of that object from three dimensions into two dimensions. A tetraview is a set of five projections, first from four dimensions into three dimensions, and then from three dimensions into two dimensions. A complex function w = f(z), where z = a + bi and w = c + di are complex numbers, has a graph in four-space (four-dimensional space) R4 consisting of all points (a, b, c, d) such that c + di = f(a + bi). To construct a tetraview, we begin with the four points (1,0,0,0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1), which are vertices of a spherical tetrahedron on the unit three-sphere S3 in R4. We project the four-dimensional graph onto the three-dimensional sphere along one of the four coordinate axes, and then give a two-dimensional picture of the resulting three-dimensional graph. This provides the four corner graphs. The graph in the center is a similar picture "taken" from the point of view of the origin. == External links == http://www.math.union.edu/~dpvc/professional/art/tetra-exp.html
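As an illustrative sketch (hypothetical code, not Cervone's actual software), one can sample the four-dimensional graph of w = z² and form one corner view; the projection here is simplified to an orthogonal projection (dropping one coordinate) rather than projection through the three-sphere:

```python
import numpy as np

def graph4d(f, n=5):
    """Sample points (a, b, c, d) with c + d*i = f(a + b*i) on an n-by-n grid."""
    pts = []
    for a in np.linspace(-1.0, 1.0, n):
        for b in np.linspace(-1.0, 1.0, n):
            w = f(complex(a, b))
            pts.append((a, b, w.real, w.imag))
    return np.array(pts)

def corner_view(points, axis):
    """One corner of the tetraview: drop coordinate `axis` (0..3) of R^4."""
    return np.delete(points, axis, axis=1)

pts = graph4d(lambda z: z * z)
view = corner_view(pts, 3)   # project out Im(w)
print(view.shape)            # (25, 3)
```

Each of the four corner views corresponds to one choice of `axis`; rendering `view` as a 3-D surface and then drawing it on screen gives the second projection, from three dimensions down to two.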
|
Wikipedia:Thabit number#0
|
In number theory, a Thabit number, Thâbit ibn Qurra number, or 321 number is an integer of the form 3 ⋅ 2 n − 1 {\displaystyle 3\cdot 2^{n}-1} for a non-negative integer n. The first few Thabit numbers are: 2, 5, 11, 23, 47, 95, 191, 383, 767, 1535, 3071, 6143, 12287, 24575, 49151, 98303, 196607, 393215, 786431, 1572863, ... (sequence A055010 in the OEIS) The 9th-century mathematician, physician, astronomer and translator Thābit ibn Qurra is credited as the first to study these numbers and their relation to amicable numbers. == Properties == The binary representation of the Thabit number 3·2n−1 is n+2 digits long, consisting of "10" followed by n 1s. The first few Thabit numbers that are prime (Thabit primes or 321 primes) are: 2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, ... (sequence A007505 in the OEIS) As of October 2023, there are 68 known prime Thabit numbers. Their n values are: 0, 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987, 234760, 414840, 584995, 702038, 727699, 992700, 1201046, 1232255, 2312734, 3136255, 4235414, 6090515, 11484018, 11731850, 11895718, 16819291, 17748034, 18196595, 18924988, 20928756, 22103376, ... (sequence A002235 in the OEIS) The primes for 234760 ≤ n ≤ 3136255 were found by the distributed computing project 321 search. In 2008, PrimeGrid took over the search for Thabit primes. It is still searching and has already found all currently known Thabit primes with n ≥ 4235414. It is also searching for primes of the form 3·2n+1; such primes are called Thabit primes of the second kind or 321 primes of the second kind. The first few Thabit numbers of the second kind are: 4, 7, 13, 25, 49, 97, 193, 385, 769, 1537, 3073, 6145, 12289, 24577, 49153, 98305, 196609, 393217, 786433, 1572865, ...
(sequence A181565 in the OEIS) The first few Thabit primes of the second kind are: 7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657, 221360928884514619393, ... (sequence A039687 in the OEIS) Their n values are: 1, 2, 5, 6, 8, 12, 18, 30, 36, 41, 66, 189, 201, 209, 276, 353, 408, 438, 534, 2208, 2816, 3168, 3189, 3912, 20909, 34350, 42294, 42665, 44685, 48150, 54792, 55182, 59973, 80190, 157169, 213321, 303093, 362765, 382449, 709968, 801978, 916773, 1832496, 2145353, 2291610, 2478785, 5082306, 7033641, 10829346, 16408818, ... (sequence A002253 in the OEIS) == Connection with amicable numbers == When both n {\displaystyle n} and n − 1 {\displaystyle n-1} yield Thabit primes (of the first kind), and 9 ⋅ 2 2 n − 1 − 1 {\displaystyle 9\cdot 2^{2n-1}-1} is also prime, a pair of amicable numbers can be calculated as follows: 2 n ( 3 ⋅ 2 n − 1 − 1 ) ( 3 ⋅ 2 n − 1 ) {\displaystyle 2^{n}(3\cdot 2^{n-1}-1)(3\cdot 2^{n}-1)} and 2 n ( 9 ⋅ 2 2 n − 1 − 1 ) {\displaystyle 2^{n}(9\cdot 2^{2n-1}-1)} form an amicable pair. For example, n = 2 {\displaystyle n=2} gives the Thabit prime 11, n − 1 = 1 {\displaystyle n-1=1} gives the Thabit prime 5, and the third term is 71. Then 2² = 4; 4 multiplied by 5 and 11 results in 220, whose proper divisors add up to 284, and 4 times 71 is 284, whose proper divisors add up to 220. The only known values of n {\displaystyle n} satisfying these conditions are 2, 4 and 7, corresponding to the Thabit primes 11, 47 and 383 given by n {\displaystyle n} , the Thabit primes 5, 23 and 191 given by n − 1 {\displaystyle n-1} , and the third terms 71, 1151 and 73727. The corresponding amicable pairs are (220, 284), (17296, 18416) and (9363584, 9437056). == Generalization == For integer b ≥ 2, a Thabit number base b is a number of the form (b+1)·bn − 1 for a non-negative integer n. Also, for integer b ≥ 2, a Thabit number of the second kind base b is a number of the form (b+1)·bn + 1 for a non-negative integer n. The Williams numbers are also a generalization of Thabit numbers. For integer b ≥ 2, a Williams number base b is a number of the form (b−1)·bn − 1 for a non-negative integer n.
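The rule relating Thabit primes to amicable pairs described above is easy to check numerically. A minimal sketch (the helper names are my own), using trial-division primality and sums of proper divisors:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def aliquot(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def thabit_amicable(n):
    """Thabit ibn Qurra's rule: if p = 3*2^(n-1) - 1, q = 3*2^n - 1 and
    r = 9*2^(2n-1) - 1 are all prime, then (2^n * p * q, 2^n * r) is amicable."""
    p = 3 * 2 ** (n - 1) - 1
    q = 3 * 2 ** n - 1
    r = 9 * 2 ** (2 * n - 1) - 1
    if is_prime(p) and is_prime(q) and is_prime(r):
        return 2 ** n * p * q, 2 ** n * r
    return None

print(thabit_amicable(2))      # (220, 284)
m, k = thabit_amicable(4)      # (17296, 18416)
assert aliquot(m) == k and aliquot(k) == m
```

For n = 3 the rule fails (9·2⁵ − 1 = 287 = 7 × 41 is composite), matching the statement that only n = 2, 4 and 7 are known to work.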
Also, for integer b ≥ 2, a Williams number of the second kind base b is a number of the form (b−1)·bn + 1 for a non-negative integer n. For integer b ≥ 2, a Thabit prime base b is a Thabit number base b that is also prime. Similarly, for integer b ≥ 2, a Williams prime base b is a Williams number base b that is also prime. Every prime p is a Thabit prime of the first kind base p, a Williams prime of the first kind base p+2, and a Williams prime of the second kind base p; if p ≥ 5, then p is also a Thabit prime of the second kind base p−2. It is a conjecture that for every integer b ≥ 2, there are infinitely many Thabit primes of the first kind base b, infinitely many Williams primes of the first kind base b, and infinitely many Williams primes of the second kind base b; also, for every integer b ≥ 2 that is not congruent to 1 modulo 3, there are infinitely many Thabit primes of the second kind base b. (If the base b is congruent to 1 modulo 3, then all Thabit numbers of the second kind base b are divisible by 3 (and greater than 3, since b ≥ 2), so there are no Thabit primes of the second kind base b.) The exponent of a Thabit prime of the second kind cannot be congruent to 1 mod 3 (except 1 itself), the exponent of a Williams prime of the first kind cannot be congruent to 4 mod 6, and the exponent of a Williams prime of the second kind cannot be congruent to 1 mod 6 (except 1 itself), since in those cases the corresponding polynomial in b is reducible. (If n ≡ 1 mod 3, then (b+1)·bn + 1 is divisible by b2 + b + 1; if n ≡ 4 mod 6, then (b−1)·bn − 1 is divisible by b2 − b + 1; and if n ≡ 1 mod 6, then (b−1)·bn + 1 is divisible by b2 − b + 1.) Otherwise, the corresponding polynomial in b is irreducible, so if the Bunyakovsky conjecture is true, then there are infinitely many bases b such that the corresponding number (for fixed exponent n satisfying the condition) is prime.
((b+1)·b^n − 1 is irreducible for all nonnegative integers n, so if the Bunyakovsky conjecture is true, then there are infinitely many bases b such that the corresponding number (for fixed exponent n) is prime.) Pierpont numbers 3^m·2^n + 1 are a generalization of Thabit numbers of the second kind 3·2^n + 1. == References == == External links == Weisstein, Eric W. "Thâbit ibn Kurrah Number". MathWorld. Weisstein, Eric W. "Thâbit ibn Kurrah Prime". MathWorld. Chris Caldwell, The Largest Known Primes Database at The Prime Pages A Thabit prime of the first kind base 2: (2+1)·2^11895718 − 1 A Thabit prime of the second kind base 2: (2+1)·2^10829346 + 1 A Williams prime of the first kind base 2: (2−1)·2^74207281 − 1 A Williams prime of the first kind base 3: (3−1)·3^1360104 − 1 A Williams prime of the second kind base 3: (3−1)·3^1175232 + 1 A Williams prime of the first kind base 10: (10−1)·10^383643 − 1 A Williams prime of the first kind base 113: (113−1)·113^286643 − 1 List of Williams primes PrimeGrid's 321 Prime Search, about the discovery of the Thabit prime of the first kind base 2: (2+1)·2^6090515 − 1
|
Wikipedia:Thakkar Pheru#0
|
Thakkar Pheru (IAST: Ṭhakkura Pherū) was the treasurer of the Khalji dynasty of Delhi, active between 1291 and 1347. Alauddin Khalji recruited Ṭhakkura Pherū, a Shrimal Jain, as an expert on coins, metals and gems. For the benefit of his son Hemapal, Pheru wrote several books on related subjects, including the Dravyaparīkṣa in 1318, based on his experience at the master mint, and the Ratnaparikṣa (Pkt. Rayaṇaparikkhā) in 1315, written "having seen with my own eyes the vast collection of gems … in the treasury of Alāʾ al-Dīn Khaljī." He remained continuously employed until the rule of Ghiasuddin Tughluq. He is also known for his mathematical work, the Ganitasārakaumudi. == References ==
|
Wikipedia:Thales's theorem#0
|
In geometry, Thales's theorem states that if A, B, and C are distinct points on a circle where the line AC is a diameter, the angle ∠ ABC is a right angle. Thales's theorem is a special case of the inscribed angle theorem and is mentioned and proved as part of the 31st proposition in the third book of Euclid's Elements. It is generally attributed to Thales of Miletus, but it is sometimes attributed to Pythagoras. == History == Babylonian mathematicians knew this theorem for special cases before Greek mathematicians proved it. Thales of Miletus (early 6th century BC) is traditionally credited with proving the theorem; however, even by the 5th century BC there was nothing extant of Thales' writing, and inventions and ideas were attributed to men of wisdom such as Thales and Pythagoras by later doxographers based on hearsay and speculation. Reference to Thales was made by Proclus (5th century AD), and by Diogenes Laërtius (3rd century AD) documenting Pamphila's (1st century AD) statement that Thales "was the first to inscribe in a circle a right-angle triangle". Thales was claimed to have traveled to Egypt and Babylonia, where he is supposed to have learned about geometry and astronomy and thence brought their knowledge to the Greeks, along the way inventing the concept of geometric proof and proving various geometric theorems. However, there is no direct evidence for any of these claims, and they were most likely invented as speculative rationalizations. Modern scholars believe that Greek deductive geometry as found in Euclid's Elements was not developed until the 4th century BC, and any geometric knowledge Thales may have had would have been observational. The theorem appears in Book III of Euclid's Elements (c. 
300 BC) as proposition 31: "In a circle the angle in the semicircle is right, that in a greater segment less than a right angle, and that in a less segment greater than a right angle; further the angle of the greater segment is greater than a right angle, and the angle of the less segment is less than a right angle." Dante Alighieri's Paradiso (canto 13, lines 101–102) refers to Thales's theorem in the course of a speech. == Proof == === First proof === The following facts are used: the sum of the angles in a triangle is equal to 180° and the base angles of an isosceles triangle are equal. Since OA = OB = OC, △OBA and △OBC are isosceles triangles, and by the equality of the base angles of an isosceles triangle, ∠ OBC = ∠ OCB and ∠ OBA = ∠ OAB. Let α = ∠ BAO and β = ∠ OBC. The three internal angles of the triangle △ABC are α, (α + β), and β. Since the sum of the angles of a triangle is equal to 180°, we have {\displaystyle {\begin{aligned}\alpha +(\alpha +\beta )+\beta &=180^{\circ }\\2\alpha +2\beta &=180^{\circ }\\2(\alpha +\beta )&=180^{\circ }\\\therefore \alpha +\beta &=90^{\circ }.\end{aligned}}} Q.E.D. === Second proof === The theorem may also be proven using trigonometry: Let O = (0, 0), A = (−1, 0), and C = (1, 0). Then B is a point on the unit circle (cos θ, sin θ). We will show that ∠ ABC is a right angle by proving that AB and BC are perpendicular — that is, the product of their slopes is equal to −1. 
We calculate the slopes for AB and BC: {\displaystyle {\begin{aligned}m_{AB}&={\frac {y_{B}-y_{A}}{x_{B}-x_{A}}}={\frac {\sin \theta }{\cos \theta +1}}\\[2pt]m_{BC}&={\frac {y_{C}-y_{B}}{x_{C}-x_{B}}}={\frac {-\sin \theta }{-\cos \theta +1}}\end{aligned}}} Then we show that their product equals −1: {\displaystyle {\begin{aligned}&m_{AB}\cdot m_{BC}\\[4pt]={}&{\frac {\sin \theta }{\cos \theta +1}}\cdot {\frac {-\sin \theta }{-\cos \theta +1}}\\[4pt]={}&{\frac {-\sin ^{2}\theta }{-\cos ^{2}\theta +1}}\\[4pt]={}&{\frac {-\sin ^{2}\theta }{\sin ^{2}\theta }}\\[4pt]={}&{-1}\end{aligned}}} Note the use of the Pythagorean trigonometric identity {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1.} === Third proof === Let △ABC be a triangle in a circle where AB is a diameter in that circle. Then construct a new triangle △ABD by mirroring △ABC over the line AB and then mirroring it again over the line perpendicular to AB which goes through the center of the circle. Since lines AC and BD are parallel, likewise for AD and CB, the quadrilateral ACBD is a parallelogram. Since lines AB and CD, the diagonals of the parallelogram, are both diameters of the circle and therefore have equal length, the parallelogram must be a rectangle. All angles in a rectangle are right angles. === Fourth proof === The theorem can be proved using vector algebra. Let's take the vectors {\displaystyle {\overrightarrow {AB}}} and {\displaystyle {\overrightarrow {CB}}}. 
These vectors satisfy {\displaystyle {\overrightarrow {AB}}={\overrightarrow {AO}}+{\overrightarrow {OB}}\qquad \qquad {\overrightarrow {CB}}={\overrightarrow {CO}}+{\overrightarrow {OB}}} and their dot product can be expanded as {\displaystyle {\overrightarrow {AB}}\cdot {\overrightarrow {CB}}=\left({\overrightarrow {AO}}+{\overrightarrow {OB}}\right)\cdot \left({\overrightarrow {CO}}+{\overrightarrow {OB}}\right)={\overrightarrow {AO}}\cdot {\overrightarrow {CO}}+({\overrightarrow {AO}}+{\overrightarrow {CO}})\cdot {\overrightarrow {OB}}+{\overrightarrow {OB}}\cdot {\overrightarrow {OB}}} but {\displaystyle {\overrightarrow {AO}}=-{\overrightarrow {CO}}\qquad \qquad {\overrightarrow {AO}}\cdot {\overrightarrow {CO}}=-R^{2}\qquad \qquad {\overrightarrow {OB}}\cdot {\overrightarrow {OB}}=R^{2}} and the dot product vanishes: {\displaystyle {\overrightarrow {AB}}\cdot {\overrightarrow {CB}}=-R^{2}+{\overrightarrow {0}}\cdot {\overrightarrow {OB}}+R^{2}=0} and then the vectors {\displaystyle {\overrightarrow {AB}}} and {\displaystyle {\overrightarrow {CB}}} are orthogonal and the angle ∠ ABC is a right angle. == Converse == For any triangle, and, in particular, any right triangle, there is exactly one circle containing all three vertices of the triangle. This circle is called the circumcircle of the triangle. One way of formulating Thales's theorem is: if the center of a triangle's circumcircle lies on the triangle then the triangle is right, and the center of its circumcircle lies on its hypotenuse. The converse of Thales's theorem is then: the center of the circumcircle of a right triangle lies on its hypotenuse. (Equivalently, a right triangle's hypotenuse is a diameter of its circumcircle.) 
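The dot-product argument above is easy to verify numerically. A small Python sketch, placing O at the origin and sweeping B around the unit circle (an illustration, not part of the original article):

```python
import math

# Numerical check of Thales's theorem: with O at the origin and the
# diameter running from A = (-1, 0) to C = (1, 0), every other point B
# on the unit circle gives perpendicular vectors AB and CB.
for k in range(1, 360):
    if k == 180:
        continue  # B would coincide with A; the angle is undefined there
    t = math.radians(k)
    B = (math.cos(t), math.sin(t))
    AB = (B[0] + 1.0, B[1])  # B - A
    CB = (B[0] - 1.0, B[1])  # B - C
    dot = AB[0] * CB[0] + AB[1] * CB[1]
    assert abs(dot) < 1e-12, (k, dot)
```

Symbolically the dot product is (cos θ + 1)(cos θ − 1) + sin²θ = cos²θ − 1 + sin²θ = 0, so the assertions hold up to floating-point rounding.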
=== Proof of the converse using geometry === This proof consists of 'completing' the right triangle to form a rectangle and noticing that the center of that rectangle is equidistant from the vertices and so is the center of the circumscribing circle of the original triangle. It uses two facts: adjacent angles in a parallelogram are supplementary (add to 180°), and the diagonals of a rectangle are equal and bisect each other. Let there be a right angle ∠ ABC, r a line parallel to BC passing through A, and s a line parallel to AB passing through C. Let D be the point of intersection of lines r and s. (It has not yet been proven that D lies on the circle.) The quadrilateral ABCD forms a parallelogram by construction (as opposite sides are parallel). Since in a parallelogram adjacent angles are supplementary (add to 180°) and ∠ ABC is a right angle (90°), the angles ∠ BAD, ∠ BCD, ∠ ADC are also right (90°); consequently ABCD is a rectangle. Let O be the point of intersection of the diagonals AC and BD. Then the point O, by the second fact above, is equidistant from A, B, and C. And so O is the center of the circumscribing circle, and the hypotenuse of the triangle (AC) is a diameter of the circle. === Alternate proof of the converse using geometry === Given a right triangle ABC with hypotenuse AC, construct a circle Ω whose diameter is AC. Let O be the center of Ω. Let D be the intersection of Ω and the ray OB. By Thales's theorem, ∠ ADC is right. But then D must equal B. (If D lies inside △ABC, ∠ ADC would be obtuse, and if D lies outside △ABC, ∠ ADC would be acute.) === Proof of the converse using linear algebra === This proof utilizes two facts: two lines form a right angle if and only if the dot product of their directional vectors is zero, and the square of the length of a vector is given by the dot product of the vector with itself. Let there be a right angle ∠ ABC and circle M with AC as a diameter. 
Let M's center lie on the origin, for easier calculation. Then we know A = −C, because the circle centered at the origin has AC as diameter, and (A − B) · (B − C) = 0, because ∠ ABC is a right angle. It follows {\displaystyle {\begin{aligned}0&=(A-B)\cdot (B-C)\\&=(A-B)\cdot (B+A)\\&=|A|^{2}-|B|^{2}.\\[4pt]\therefore \ |A|&=|B|.\end{aligned}}} This means that A and B are equidistant from the origin, i.e. from the center of M. Since A lies on M, so does B, and the circle M is therefore the triangle's circumcircle. The above calculations in fact establish that both directions of Thales's theorem are valid in any inner product space. == Generalizations and related results == As stated above, Thales's theorem is a special case of the inscribed angle theorem (the proof of which is quite similar to the first proof of Thales's theorem given above): Given three points A, B and C on a circle with center O, the angle ∠ AOC is twice as large as the angle ∠ ABC. A related result to Thales's theorem is the following: If AC is a diameter of a circle, then: If B is inside the circle, then ∠ ABC > 90° If B is on the circle, then ∠ ABC = 90° If B is outside the circle, then ∠ ABC < 90°. == Applications == === Constructing a tangent to a circle passing through a point === Thales's theorem can be used to construct the tangent to a given circle that passes through a given point. In the figure at right, given circle k with centre O and the point P outside k, bisect OP at H and draw the circle of radius OH with centre H. OP is a diameter of this circle, so the triangles connecting OP to the points T and T′ where the circles intersect are both right triangles. === Finding the centre of a circle === Thales's theorem can also be used to find the centre of a circle using an object with a right angle, such as a set square or rectangular sheet of paper larger than the circle. 
The angle is placed anywhere on its circumference (figure 1). The intersections of the two sides with the circumference define a diameter (figure 2). Repeating this with a different set of intersections yields another diameter (figure 3). The centre is at the intersection of the diameters. == See also == Synthetic geometry Inverse Pythagorean theorem == Notes == == References == Agricola, Ilka; Friedrich, Thomas (2008). Elementary Geometry. AMS. p. 50. ISBN 978-0-8218-4347-5. Heath, T.L. (1921). A History of Greek Mathematics: From Thales to Euclid. Vol. I. Oxford. pp. 131ff. == External links == Weisstein, Eric W. "Thales' Theorem". MathWorld. Munching on Inscribed Angles Thales's theorem explained, with interactive animation Demos of Thales's theorem by Michael Schreiber, The Wolfram Demonstrations Project.
|
Wikipedia:Thamsanqa Kambule#0
|
Thamsanqa Kambule (15 January 1921 – 7 August 2009) was a South African mathematician and educator. He was the first black professor at the University of the Witwatersrand, and was the first black person to be awarded honorary membership of the Actuarial Society of South Africa. He was awarded the Order of the Baobab in 2002 for his services to mathematics education. == Early life and education == Kambule was born in Aliwal North. His mother died when he was 18 months old, and his aunt was responsible for raising him. He did not attend school until he was 11 years old, when he joined the Anglican St Peter's School in Johannesburg. He completed a Teachers Diploma at Adams College in 1946 and a bachelor's degree at the University of South Africa in 1954. == Career == Kambule taught in Zambia and Malawi, as well as at several schools in South Africa, before being appointed Principal of Orlando High School in Soweto in 1958. He campaigned to ensure the children had the best education possible, despite the restrictions of the Bantu Education Act, 1953. Orlando High School had a library named after Robert Birley, a visiting professor at the University of the Witwatersrand. He led the Rand Bursary Fund, a support program that provided scholarships for pupils in need. The fund allowed more than 1,000 students to complete high school. His former pupils included Desmond Tutu and Jackie Selebi. In 1976, during the Soweto uprising, the schoolchildren revolted against being forced to learn in the Afrikaans language. An undetermined number of children were shot dead by police, and education in townships fell apart. Kambule resigned in 1977 to protest against the Department of Bantu Education, and became the head of Pace College. In 1978 he joined the University of the Witwatersrand, where he became the first black professor. He published a series of maths textbooks for non-specialist teachers. He retired in 1996 and promptly became the Principal of O R T Step College of Technology. 
He was awarded an honorary doctorate in 1997 and a doctorate of education in 2006. In 2002 he was awarded the Order of the Baobab by Thabo Mbeki. He became known as "the Rock" for his transparent principles. Kambule died on 7 August 2009. He was a much loved teacher, and his former students Siphiwe Nyanda, Felicia Mabuza-Suttle and Mokotedi Mpshe attended his memorial service. His student Trevor Mdaka was his doctor at the Unitas Hospital in Centurion. === Legacy === In 2017 the University of the Witwatersrand named their Mathematical Sciences Building after him. The Deep Learning Indaba presents an annual Thamsanqa Kambule Doctoral Dissertation Award. == References ==
|
Wikipedia:The Ancient Tradition of Geometric Problems#0
|
The Ancient Tradition of Geometric Problems is a book on ancient Greek mathematics, focusing on three problems now known to be impossible if one uses only the straightedge and compass constructions favored by the Greek mathematicians: squaring the circle, doubling the cube, and trisecting the angle. It was written by Wilbur Knorr (1945–1997), a historian of mathematics, and published in 1986 by Birkhäuser. Dover Publications reprinted it in 1993. == Topics == The Ancient Tradition of Geometric Problems studies the three classical problems of circle-squaring, cube-doubling, and angle trisection throughout the history of Greek mathematics, also considering several other problems studied by the Greeks in which a geometric object with certain properties is to be constructed, in many cases through transformations to other construction problems. The study runs from Plato and the story of the Delian oracle to the second century BC, when Archimedes and Apollonius of Perga flourished; Knorr suggests that the decline in Greek geometry after that time represented a shift in interest to other topics in mathematics rather than a decline in mathematics as a whole. Unlike the earlier work on this material by Thomas Heath, Knorr sticks to the source material as it is, reconstructing the motivation and lines of reasoning followed by the Greek mathematicians and their connections to each other, rather than adding justifications for the correctness of the constructions based on modern mathematical techniques. 
In modern times, the impossibility of solving the three classical problems by straightedge and compass, finally proven in the 19th century, has often been viewed as analogous to the foundational crisis of mathematics of the early 20th century, in which David Hilbert's program of reducing mathematics to a system of axioms and calculational rules struggled against logical inconsistencies in its axiom systems, intuitionist rejection of formalism and dualism, and Gödel's incompleteness theorems showing that no such axiom system could formalize all mathematical truths and remain consistent. However, Knorr argues in The Ancient Tradition of Geometric Problems that this point of view is anachronistic, and that the Greek mathematicians themselves were more interested in finding and classifying the mathematical tools that could solve these problems than they were in imposing artificial limitations on themselves and in the philosophical consequences of these limitations. When a geometric construction problem does not admit a compass-and-straightedge solution, then either the constraints on the problem or on the solution techniques can be relaxed, and Knorr argues that the Greeks did both. Constructions described by the book include the solution by Menaechmus of doubling the cube by finding the intersection points of two conic sections, several neusis constructions involving fitting a segment of a given length between two points or curves, and the use of the Quadratrix of Hippias for trisecting angles and squaring circles. 
Some specific theories on the authorship of Greek mathematics, put forward by the book, include the legitimacy of a letter on cube-doubling from Eratosthenes to Ptolemy III Euergetes, a distinction between Socratic-era sophist Hippias and the Hippias who invented the quadratrix, and a similar distinction between Aristaeus the Elder, a mathematician of the time of Euclid, and the Aristaeus who authored a book on solids (mentioned by Pappus of Alexandria), and whom Knorr places at the time of Apollonius. The book is heavily illustrated, and many endnotes provide sources for quotations, additional discussion, and references to related research. == Audience and reception == The book is written for a general audience, unlike a follow-up work published by Knorr, Textual Studies in Ancient and Medieval Geometry (1989), which is aimed at other experts in the close reading of Greek mathematical texts. Nevertheless, reviewer Alan Stenger calls The Ancient Tradition of Geometric Problems "very specialized and scholarly". Reviewer Colin R. Fletcher calls it "essential reading" for understanding the background and content of the Greek mathematical problem-solving tradition. In its historical scholarship, historian of mathematics Tom Whiteside writes that the book's occasionally speculative nature is justified by its fresh interpretations, well-founded conjectures, and deep knowledge of the subject. == References == == External links == The Ancient Tradition of Geometric Problems at the Internet Archive
|
Wikipedia:The Archimedeans#0
|
The Archimedeans are the mathematical society of the University of Cambridge, founded in 1935. It currently has over 2000 active members, many of them alumni, making it one of the largest student societies in Cambridge. The society hosts regular talks at the Centre for Mathematical Sciences, which in the past have featured many well-known speakers in the field of mathematics. It publishes two magazines, Eureka and QARCH. One of several aims of the society, as laid down in its constitution, is to encourage co-operation between the existing mathematical societies of individual Cambridge colleges, which at present are just the Adams Society of St John's College, Queens' College Mathematics Society and the Trinity Mathematical Society, but in the past have included many more. The society is mentioned in G. H. Hardy's essay A Mathematician's Apology. Past presidents of The Archimedeans include Michael Atiyah and Richard Taylor. == Activity == The main focus of the society's activities is the regular talks, which generally concern topics from mathematics or theoretical physics, and are accessible to students at an undergraduate level. Among the list of recent speakers are Fields medalists Michael Atiyah, Wendelin Werner and Alain Connes, as well as authors Ian Stewart and Simon Singh. Many of the speakers are international, and are hosted by The Archimedeans during their visit. After exams and University-wide project deadlines, the society is also known to organise social events. == Publications == Eureka is a mathematical journal that is published annually by The Archimedeans. It includes articles on a variety of topics in mathematics, written by students and academics from all over the world, as well as a short summary of the activities of the society, problem sets, puzzles, artwork and book reviews. 
The magazine has been published 65 times since 1939, and authors include many famous mathematicians and scientists such as Paul Erdős, Martin Gardner, Douglas Hofstadter, Godfrey Hardy, Béla Bollobás, John Conway, Stephen Hawking, Roger Penrose, Ian Stewart, Chris Budd, Fields Medallist Timothy Gowers and Nobel laureate Paul Dirac. The Archimedeans also publish QARCH, a magazine containing problem sets and solutions or partial solutions submitted by readers. It is published on an irregular basis and distributed free of charge. == References ==
|
Wikipedia:The Beauty of Fractals#0
|
The Beauty of Fractals is a 1986 book by Heinz-Otto Peitgen and Peter Richter which publicises the fields of complex dynamics, chaos theory and the concept of fractals. It is lavishly illustrated and, as a mathematics book, became an unusual success. The book includes a total of 184 illustrations, including 88 full-colour pictures of Julia sets. Although the format suggests a coffee table book, the discussion of the background of the presented images addresses some sophisticated mathematics which would not be found in popular science books. In 1987 the book won an award for distinguished technical communication. == Summary == The book starts with a general introduction to complex dynamics, chaos and fractals. In particular, the Feigenbaum scenario and its relation to Julia sets and the Mandelbrot set are discussed. The following special sections provide in-depth detail for the images shown: Verhulst Dynamics, Julia Sets and Their Computergraphical Generation, Sullivan's Classification of Critical Points, The Mandelbrot Set, External Angles and Hubbard Trees, Newton's Method for Complex Polynomials: Cayley's Problem, Newton's Method for Real Equations, A Discrete Volterra-Lotka System, Yang-Lee Zeros, Renormalization (Magnetism and Complex Boundaries). The book also includes invited contributions by Benoît Mandelbrot, Adrien Douady, Gert Eilenberger and Herbert W. Franke, which provide additional formality and some historically interesting detail. Benoît Mandelbrot gives a very personal account of his discovery of fractals in general and the fractal named after him in particular. Adrien Douady explains the solved and unsolved problems relating to the almost amusingly complex Mandelbrot set. == The images == Part of the text was originally conceived as a supplemented catalogue to the exhibition Frontiers of Chaos of the German Goethe-Institut, first seen in Europe and the United States. It described the context and meaning of these images. 
The images were created at the "Computer Graphics Laboratory Dynamical Systems" at the University of Bremen in 1984 and 1985. Dedicated software had to be developed to make the necessary computations, which at that time took hours of computer time to create a single image. For the exhibit and the book, the computed images had to be captured as photographs; digital image capturing and archiving were not feasible at that time. The book was cited and its images were reproduced in a number of publications. Some images were even used before the book was published. The cover article of the August 1985 edition of Scientific American showed some of the images and provided a reference to the book to be published. One particular image sequence of the book is the close-up series "seahorse valley". While the first publication of such a close-up series was the June 1984 cover article of the magazine Geo, The Beauty of Fractals provided the first such publication within a book. == Translations == Italian translation: La Bellezza dei Frattali, Bollati Boringhieri, Torino 1987, ISBN 88-339-0420-2 Japanese translation: Springer-Verlag, Tokyo 1988, ISBN 3-540-15851-0 Russian translation: Krasota Fractalov, Mir, Moscow 1993, ISBN 5-03-001296-6 Chinese translation: Z.-J. Jing and X.-S. Zhang, Science Publishers, Beijing 1994, ISBN 7-03-004188-7/TP 374 == References == == External links == Web page of the Center for Complex System and Visualization Web page of the book at Springer-Verlag
|
Wikipedia:The Erdős Distance Problem#0
|
The Erdős Distance Problem is a monograph on the Erdős distinct distances problem in discrete geometry: how can one place n points into d-dimensional Euclidean space so that the set of distances determined by the pairs of points is as small as possible? It was written by Julia Garibaldi, Alex Iosevich, and Steven Senger, and published in 2011 by the American Mathematical Society as volume 56 of the Student Mathematical Library. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. == Topics == The Erdős Distance Problem consists of twelve chapters and three appendices. After an introductory chapter describing the formulation of the problem by Paul Erdős and Erdős's proof that the number of distances is always at least proportional to n^(1/d), the next six chapters cover the two-dimensional version of the problem. They build on each other to describe successive improvements to the known results on the problem, reaching a lower bound proportional to n^(44/51) in Chapter 7. These results connect the problem to other topics including the Cauchy–Schwarz inequality, the crossing number inequality, the Szemerédi–Trotter theorem on incidences between points and lines, and methods from information theory. Subsequent chapters discuss variations of the problem: higher dimensions, other metric spaces for the plane, the number of distinct inner products between vectors, and analogous problems in spaces whose coordinates come from a finite field instead of the real numbers. == Audience and reception == Although the book is largely self-contained, it assumes a level of mathematical sophistication aimed at advanced university-level mathematics students. Exercises are included, making it possible to use it as a textbook for a specialized course. 
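The grid configurations behind Erdős's bounds are easy to experiment with. A short Python sketch (an illustration of the problem, not code from the book) counts the distinct distances among the points of a k × k integer grid, using squared distances so the arithmetic stays exact:

```python
from itertools import combinations

def distinct_distances(k):
    """Number of distinct distances among the k*k points of a k-by-k grid."""
    points = [(x, y) for x in range(k) for y in range(k)]
    # Distinct squared distances correspond exactly to distinct distances.
    squared = {(ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in combinations(points, 2)}
    return len(squared)

for k in (2, 3, 4, 5):
    print(k * k, "points:", distinct_distances(k), "distinct distances")
```

For instance, the 3 × 3 grid has 9 points but only 5 distinct distances (squared values 1, 2, 4, 5, 8), illustrating how grids achieve far fewer distances than generic point sets.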
Reviewer Michael Weiss suggests that the book is less successful than its authors hoped at reaching "readers at different levels of mathematical experience": the density of some of its material, needed to cover that material thoroughly, is incompatible with accessibility to beginning mathematicians. Weiss also complains about some minor mathematical errors in the book, which, however, do not interfere with its overall content. Much of the book's content, on the two-dimensional version of the problem, was made obsolete soon after its publication by new results of Larry Guth and Nets Katz, who proved that the number of distances in this case must be near-linear. Nevertheless, reviewer William Gasarch argues that this outcome should make the book more interesting to readers, not less, because it helps explain the barriers that Guth and Katz had to overcome in proving their result. Additionally, the techniques that the book describes have many uses in discrete geometry. == References ==
|
Wikipedia:The Fractal Dimension of Architecture#0
|
The Fractal Dimension of Architecture is a book that applies the mathematical concept of fractal dimension to the analysis of the architecture of buildings. It was written by Michael J. Ostwald and Josephine Vaughan, both of whom are architecture academics at the University of Newcastle (Australia); it was published in 2016 by Birkhäuser, as the first volume in their Mathematics and the Built Environment book series. == Topics == The book applies the box counting method for computing fractal dimension, via the ArchImage software system, to compute a fractal dimension from architectural drawings (elevations and floor plans) of buildings, drawn at multiple levels of detail. The book's findings suggest that the results are consistent enough to allow for comparisons from one building to another, as long as the general features of the images (such as margins, line thickness, and resolution), the parameters of the box counting algorithm, and the statistical processing of the results are carefully controlled. The first five chapters of the book introduce fractals and the fractal dimension, and explain the methodology used by the authors for this analysis, also applying the same analysis to classical fractal structures including the Apollonian gasket, Fibonacci word, Koch snowflake, Minkowski sausage, pinwheel tiling, terdragon, and Sierpiński triangle. The remaining six chapters explain the authors' choice of buildings to analyze, apply their methodology to 625 drawings from 85 homes, built between 1901 and 2007, and perform a statistical analysis of the results. The authors use this technique to study three main hypotheses, with a fractal structure of subsidiary hypotheses depending on them. These are: (1) that the decrease in the complexity of social family units over the period of study should have led to a corresponding decrease in the complexity of their homes, as measured by a reduction in the fractal dimension; (2) that distinctive genres and movements in architecture can be characterized by their fractal dimensions; and (3) that individual architects can also be characterized by the fractal dimensions of their designs. The first and third hypotheses are not convincingly supported by the analysis, but the results suggest further work in these directions. The second hypothesis, on distinctive fractal descriptions of genres and movements, does not appear to be true, leading the authors to weaker replacements for it. == Audience and reception == The book is aimed at architects and architecture students; its mathematical content is not deep, and it does not require much mathematical background of its readers. Reviewer Joel Haack suggests that it could also be used for general education courses in mathematics for liberal arts undergraduates. == Further reading == == References ==
|
Wikipedia:The Fractal Geometry of Nature#0
|
The Fractal Geometry of Nature is a 1982 book by the Franco-American mathematician Benoît Mandelbrot. == Overview == The Fractal Geometry of Nature is a revised and enlarged version of his 1977 book entitled Fractals: Form, Chance and Dimension, which in turn was a revised, enlarged, and translated version of his 1975 French book, Les Objets Fractals: Forme, Hasard et Dimension. American Scientist listed the book among its one hundred books of 20th-century science. As technology has improved, mathematically accurate, computer-drawn fractals have become more detailed. Early drawings were low-resolution black and white; later drawings were higher resolution and in color. Many examples were created by programmers working with Mandelbrot, primarily at IBM Research. These visualizations have added to the persuasiveness of the books and their impact on the scientific community. == See also == Chaos theory == References ==
|
Wikipedia:The Math(s) Fix#0
|
The Math(s) Fix: An Education Blueprint for the AI Age is a 2020 book by British technologist and entrepreneur Conrad Wolfram. It argues for a fundamental shift in the way mathematics is taught in schools, advocating a curriculum built around computational thinking and computer-based mathematics, so that classroom time develops 21st-century problem-solving skills with computers rather than manual hand-calculation techniques. == Sections == Part I: The Problem Maths v. Maths Why Should Everyone Learn Maths? Maths and Computation in Today's World The 4-Step Maths/Computational Thinking Process Hand Calculating: Not the Essence of Maths Part II: The Fix "Thinking" Outcomes Defining the Core Computational Subject New Subject, New Pedagogy? What to Deliver? How to Build It? Part III: Achieving Change Objections to Computer-Based Core Computational Learning Roadmap for Change The Beginning of the Story Is Computation for Everything? What's Surprised Me on This Journey So Far Call to Action == See also == Comparison of software calculators Comparison of TeX editors Computational thinking List of graphing software List of mathematical art software List of mathematical software List of numerical analysis programming languages List of numerical-analysis software List of numerical libraries List of open-source software for mathematics Mathematical modeling and simulations Mathematical notation Mathematical visualization Mathethon - computational mathematics competition Music and mathematics STEM Stephen Wolfram Wolfram Language Wolfram Mathematica Wolfram Research == External links == computerbasedmath.org/ Official page on Wolfram.com Conrad Wolfram TED Talk == References ==
|
Wikipedia:The Mathematical Gazette#0
|
The Mathematical Gazette is a triannual peer-reviewed academic journal published by Cambridge University Press on behalf of the Mathematical Association. It covers mathematics education with a focus on the 15–20 years age range. The journal was established in 1894 by Edward Mann Langley as the successor to the Reports of the Association for the Improvement of Geometrical Teaching. William John Greenstreet was its editor-in-chief for more than thirty years (1897–1930). Since 2000, the editor-in-chief has been Gerry Leversha. == Editors-in-chief == The following persons are or have been editor-in-chief: == Abstracting and indexing == The journal is abstracted and indexed in EBSCO databases, Emerging Sources Citation Index, Scopus, and zbMATH Open. == References == == External links == Official website
|