Wikipedia:Ralph Pass#0
As minor planet discoveries are confirmed, they are given a permanent number by the IAU's Minor Planet Center (MPC), and the discoverers can then submit names for them, following the IAU's naming conventions. The list below concerns those minor planets in the specified number range that have received names, and explains the meanings of those names. Official naming citations of newly named small Solar System bodies are approved and published in a bulletin by the IAU's Working Group for Small Bodies Nomenclature (WGSBN). Before May 2021, citations had been published in the MPC's Minor Planet Circulars for many decades. Recent citations can also be found on the JPL Small-Body Database (SBDB). Until his death in 2016, German astronomer Lutz D. Schmadel compiled these citations into the Dictionary of Minor Planet Names (DMP) and regularly updated the collection. Based on Paul Herget's The Names of the Minor Planets, Schmadel also researched the unclear origin of numerous asteroids, most of which had been named prior to World War II. This article incorporates text from this source, which is in the public domain: SBDB. New namings may only be added to this list after official publication, as the preannouncement of names is condemned. The WGSBN publishes a comprehensive guideline for the naming rules of non-cometary small Solar System bodies. == 7001–7100 == == 7101–7200 == == 7201–7300 == == 7301–7400 == == 7401–7500 == == 7501–7600 == == 7601–7700 == == 7701–7800 == == 7801–7900 == == 7901–8000 == == References ==
Wikipedia:Ralph Tambs Lyche#0
Ralph Tambs Lyche (6 September 1890 – 15 January 1991) was a Norwegian mathematician. He was born in Macon, Georgia, the son of a Norwegian father, Hans Tambs Lyche (1859–1898), and an American mother, Mary Rebecca Godden (1856–1938). He moved to Norway at the age of two. He finished his secondary education in Fredrikstad in 1908, and was hired as an assistant for Richard Birkeland at the Norwegian Institute of Technology in 1910. At the same time he studied at the Royal Frederick University, graduating with the cand.real. degree in 1916. He was hired as a docent in mathematics at the Norwegian Institute of Technology in 1918. He took his doctorate in Strasbourg in 1927 following a two-year fellowship there. In 1937 he was promoted to professor, a position he held until 1950. He was then a professor at the University of Oslo until his retirement in 1961, and a visiting professor at the University of Colorado, Boulder from 1961 to 1962. His fields were mathematical analysis, function theory, algebra and number theory. He penned about 60 mathematical works, as well as a few publications in botany; he was an amateur herbarist. He also became widely known for his mathematical textbooks, one for the upper secondary school (Matematikk for den høgre skolen) and another for technical colleges and universities (Lærebok i matematisk analyse). He was an editorial board member of the journal Nordisk Matematisk Tidsskrift from 1954 to 1960. He was a member of the Royal Norwegian Society of Sciences and Letters from 1927, and of the Norwegian Academy of Science and Letters from 1929. From 1946 to 1950 he was the secretary-general of the Royal Norwegian Society of Sciences and Letters, and he chaired the Norwegian Mathematical Society from 1953 to 1959 and the Norwegian Botanical Society from 1957 to 1959. He chaired the Student Society in Trondheim in 1920, and later gave speeches at political meetings there. He was a member of Clarté, affiliated with Mot Dag. 
He denounced communism after the Molotov–Ribbentrop Pact of 1939. During the martial law imposed in Trondheim in 1942 by the occupying Nazi authorities, he was imprisoned at Falstad concentration camp. He was one of the first prisoners there, with prisoner number 53. Unlike some others arrested during martial law he avoided execution, but he remained imprisoned from 9 March 1942 to 3 August 1943. Ralph Tambs Lyche was the father of solidarity activist Guri Tambs Lyche. His wife Elsa was a pioneer in maternal hygiene work. He died in January 1991, at the age of 100. == References == == External links == A Ralph Tambs Lyche personal archive exists at the NTNU University Library Dorabiblioteket
Wikipedia:Ralucca Gera#0
Ralucca Michelle Gera (née Muntean) is an American mathematician specializing in graph theory, including graph coloring, dominating sets, and spectral graph theory. Her interests also include personalized learning in mathematics education. She is a professor of mathematics at the Naval Postgraduate School. == Education and career == Gera was an undergraduate at Western Michigan University, graduating Phi Beta Kappa and with honors in mathematics in 1999. She remained at Western Michigan University for her doctoral studies, completing a PhD in 2005 under the supervision of Ping Zhang; her dissertation was Stratification and Domination in Graphs and Digraphs. She has been a faculty member at the Naval Postgraduate School since 2005, and became a full professor there in 2018. She served as Associate Provost for Graduate Education from 2018 to 2021, and at the same time served as the founding director of the Teaching and Learning Commons at the Naval Postgraduate School. == Recognition == In 2016, the Naval Postgraduate School gave Gera its Richard W. Hamming Excellence in Teaching Award. == References == == External links == Home page Ralucca Gera publications indexed by Google Scholar
Wikipedia:Ram Kishore Saxena#0
Ram Kishore Saxena, D.Sc., FNASc (born 11 November 1936), is an Indian mathematician and UGC emeritus professor at Jai Narain Vyas University, and former professor and head of its Department of Mathematics. == Published work == Saxena has published 356 research papers; under his supervision many scholars have completed PhD and post-doctoral research. He has also published several books. == References ==
Wikipedia:Raman Parimala#0
Raman Parimala (born 21 November 1948) is an Indian mathematician known for her contributions to algebra. She is the Arts & Sciences Distinguished Professor of Mathematics at Emory University. For many years, she was a professor at the Tata Institute of Fundamental Research (TIFR), Mumbai. She was on the Mathematical Sciences jury for the Infosys Prize from 2019 to 2022 and was on the Abel Prize selection committee from 2021 to 2023. == Background == Parimala was born and raised in Tamil Nadu, India. She studied at Saradha Vidyalaya Girls' High School and Stella Maris College in Chennai. She received her M.Sc. from Madras University (1970) and Ph.D. from the University of Mumbai (1976); her advisor was R. Sridharan from TIFR. In 1987, she won the highest science award in India, the Shanti Swarup Bhatnagar Prize. She is a fellow of the Indian National Science Academy (New Delhi). == Selected publications == 2021: "Local triviality for G-torsors", Gille, P.; Parimala, R.; Suresh, V., Math. Ann. 380, no. 1-2, 539–567. MR4263691 2018: "Local-global principle for reduced norms over function fields of p-adic curves", Parimala, R.; Preeti, R.; Suresh, V., Compos. Math. 154, no. 2, 410–458. MR3732207 2014: "Period-index and u-invariant questions for function fields over complete discretely valued fields", Parimala, R.; Suresh, V., Invent. Math. 197, no. 1, 215–235. MR3219517 2001: "Hermitian analogue of a theorem of Springer", Parimala, R.; Sridharan, R.; Suresh, V., Journal of Algebra. doi:10.1006/jabr.2001.8830 1998: "Classical groups and the Hasse principle", Bayer-Fluckiger, E.; Parimala, R., Annals of Mathematics. doi:10.2307/120961 1995: "Galois cohomology of the classical groups over fields of cohomological dimension ≤2", Bayer-Fluckiger, E.; Parimala, R., Invent. Math. 122, no. 2, 195–229. MR1358975 1990: "Real components of algebraic varieties and étale cohomology", Colliot-Thélène, J.-L.; Parimala, R., Invent. Math. 101, no. 1, 81–99. 
MR1055712 1982: "Quadratic spaces over polynomial extensions of regular rings of dimension 2", Parimala, R., Mathematische Annalen, vol. 261, pp. 287–292. doi:10.1007/BF01455449 1976: "Failure of a quadratic analogue of Serre's conjecture", Parimala, R., Bulletin of the AMS, vol. 82, pp. 962–964. MR0419427 == Honors == On National Science Day in 2020, Smriti Irani, head of the Ministry of Women and Child Development of the Government of India, announced the establishment of chairs at institutes across India in the names of Raman Parimala and ten other Indian women scientists. Parimala was an invited speaker at the International Congress of Mathematicians in Zurich in 1994, where she gave the talk "Study of quadratic forms — some connections with geometry" Archived 3 October 2016 at the Wayback Machine. She gave a plenary address, "Arithmetic of linear algebraic groups over two dimensional fields", at the Congress in Hyderabad in 2010. Fellow of the Indian Academy of Sciences Fellow of the Indian National Science Academy Bhatnagar Award in 1987 Honorary Membership of the London Mathematical Society in 2024 Honorary doctorate from the University of Lausanne in 1999 Srinivasa Ramanujan Birth Centenary Award in 2003 TWAS Prize for Mathematics (2005) Fellow of the American Mathematical Society (2012) == Notes == == External links == Raman Parimala at the Mathematics Genealogy Project Home page at Emory Archived 14 October 2016 at the Wayback Machine Parimala's biography in the Agnes Scott College database of women mathematicians
Wikipedia:Ramanujan Institute for Advanced Study in Mathematics#0
Ramanujan Institute for Advanced Study in Mathematics (RIASM) is the Department of Mathematics of the University of Madras. This name was adopted in 1967. == History == The University of Madras was incorporated in 1857, and the Department of Mathematics was an integral part of the university from its beginning. The department developed from its early years to become a centre of research in mathematics with the appointment of R. Vaidyanathaswamy as a Reader in Mathematics in 1927. The seeds of the Ramanujan Institute for Advanced Study in Mathematics were sown when the "Ramanujan Institute of Mathematics" was established by Alagappa Chettiar on 26 January 1950 as a memorial to the mathematician Srinivasa Ramanujan. It was governed by the Asoka Charitable Trust, Karaikudi, and was located at Krishna Vilas, Vepery, Chennai. The Ramanujan Institute of Mathematics was inaugurated by A. Lakshmanaswamy Mudaliar, Vice Chancellor of the University of Madras, with T. Vijayaraghavan, a student of G.H. Hardy, as Director of the Institute. The institute faced a financial crisis when, in 1956, the Asoka Charitable Trust expressed its inability to run the institute. However, at the request of Subrahmanyan Chandrasekhar, Jawaharlal Nehru took an initiative whereby the management of the institute came to be vested with the University of Madras, and the institute was taken over by the university in May 1957. The Asoka Charitable Trust started the Ramanujan Institute of Mathematics in January 1950, and as the Institute found itself in financial difficulties, the Government of India agreed in 1953–54 to meet the expenses of a chair for mathematics at the Institute, subject to the condition that the Trust would continue to spend the amount previously spent by it for the activity of the Institute. 
Grants of 18,000 for each year were given to the Institute for the years 1953–54 and 1954–55, but the audited accounts of the Institute revealed that the Trust was not fulfilling the condition of the Government grant and had instead built up a Reserve Fund. No Government grant was therefore paid in 1955–56. Towards the close of 1956, the Trust decided to close down the Institute, but as a result of discussions with the Government of India, initiated by the founder of the Trust and the Vice-Chancellor of the Madras University, and later carried on by the Vice-Chancellor, it was agreed that, in view of the difficulty of manning an Institute with the limited number of available professors of the requisite quality, the activities of the Institute would be continued in the Department of Mathematics of the University of Madras. It was decided to create a Ramanujan Professorship of Mathematics on a scale of Rs 1,000–1,500 with selection grade up to Rs 1,700 and to attach the existing permanent research staff of the Institute to the said Professor. The Government of India would give the necessary grants to the University to enable it to carry out this arrangement until such time as the University Grants Commission sanctioned an appropriate grant. During its short period of independent existence, the institute had a string of prominent mathematicians as visitors, including S.S. Pillai, the noted number theorist; V. Ganapathy Iyer, analyst; and Norbert Wiener. After the demise of T. Vijayaraghavan in 1955, C.T. Rajagopal took over as the Director of the Institute. From 1957 to 1966, the Department of Mathematics and the Ramanujan Institute of Mathematics functioned as independent bodies under the University of Madras. In 1967 the University Grants Commission (India) proposed to make the Department of Mathematics of the University of Madras into one of its Centres of Advanced Study. 
In the same year these two institutions were amalgamated to form a UGC Centre for Advanced Study in Mathematics, named the "Ramanujan Institute for Advanced Study in Mathematics" (RIASM). C.T. Rajagopal was appointed the first Director of the Ramanujan Institute for Advanced Study in Mathematics, and when he retired in 1969, the reins were taken over by T.S. Bhanumurthy. == Ramanujan Museum == Utilising a grant of Rs. 1 lakh received as UGC Special Assistance for Equipment, and with the help of the Vikram A. Sarabhai Community Science Centre, Ahmedabad, a Mathematical Laboratory was established in the institute. About 65 mathematical models were acquired under the scheme. These models were exhibited for the participants of several Refresher Courses conducted through the Academic Staff College of the University of Madras, at the Silver Jubilee conference of the Association of Mathematics Teachers of India held at Madras during 10–13 January 1991, and in the Science Exhibition held at the National Institute of Technology, Tiruchirapalli during 10–14 July 1991, and are lent for exhibition to several schools in and around Chennai. Later the institute received Rs. 2 lakh from the National Board for Higher Mathematics, Rs. 1 lakh from The Hindu newspaper, Rs. 9 lakh from the Ministry of Culture and Tourism, Government of India, and a matching grant of Rs. 9 lakh from the University of Madras. The amount was used to establish the Ramanujan Museum on the premises of the Ramanujan Institute for Advanced Study in Mathematics. A grant of one crore rupees was later sanctioned by the Ministry of Human Resource Development, Government of India, towards the establishment of a Ramanujan Museum and Research Centre. == Courses offered == M.Sc. (Mathematics) full-time MPhil (Mathematics) full-time PhD (Mathematics) full-time and part-time == Current research areas == == External links == Official website == References ==
Wikipedia:Ramanujan Mathematical Society#0
Ramanujan Mathematical Society is an Indian organisation formed with the aim of "promoting mathematics at all levels". The Society was founded in 1985 and registered in Tiruchirappalli, Tamil Nadu, India. Professor G. Shankaranarayanan was the first President, Professor R. Balakrishnan the first Secretary and Professor E. Sampathkumar the first Academic Secretary. The initial impetus for the formation of the Society was the deeply felt need for a new mathematical journal and the necessity of an organisation to launch and nourish the journal. == Publications == The publications of the Ramanujan Mathematical Society include the following: Mathematics Newsletter: A journal catering to the needs of students, research scholars, and teachers. The Newsletter was launched in 1991 with Professor R Balakrishnan as Chief Editor. Currently, Professor S Ponnusamy of IIT Madras is the Chief Editor. Journal of the Ramanujan Mathematical Society: The Journal was started in 1986 with Professor K S Padmanabhan as Editor-in-Chief. Initially it was a biannual journal; now it has four issues per year. The present Editor-in-Chief is Professor R Parimala of Emory University, Atlanta, United States, and the Managing Editor is Professor E Sampathkumar of the University of Mysore. Little Mathematical Treasures: This is envisaged as a series of books addressed to mathematically mature readers and to bright students. So far only one book has been published in this series: "Adventures in Iteration" by Dr Shilesh A Shirali. RMS Lecture Notes Series in Mathematics: This is a series consisting of monographs and proceedings of conferences. == Endowment Lectures == The Society organises the following endowment lectures every year. 
Professor W H Abdi Memorial Lecture: The lectures were started in 2000 and are sponsored by the Department of Mathematics, Cochin University of Science and Technology, of which Professor Wazir Hasan Abdi (1922–1999) was the Head during 1977–1982. Professor C S Venkataraman Memorial Lectures: The lectures, started in 1996, are sponsored by the Dr C S Venkataraman Memorial Trust, Thrissur, Kerala State. Professor M N Gopalan Endowment Lectures: The lectures, started in 2000, are sponsored by Professor M N Gopalan, Mysore. Prof J N Kapur Endowment Lectures: The lectures, started in 2002, are sponsored by Professor J N Kapur, New Delhi. New members are taken in based on their achievements and capabilities. == Executive committee == == References ==
Wikipedia:Ramanujan graph#0
In the mathematical field of spectral graph theory, a Ramanujan graph is a regular graph whose spectral gap is almost as large as possible (see extremal graph theory). Such graphs are excellent spectral expanders. As Murty's survey paper notes, Ramanujan graphs "fuse diverse branches of pure mathematics, namely, number theory, representation theory, and algebraic geometry". These graphs are indirectly named after Srinivasa Ramanujan; their name comes from the Ramanujan–Petersson conjecture, which was used in a construction of some of these graphs. == Definition == Let $G$ be a connected $d$-regular graph with $n$ vertices, and let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of the adjacency matrix of $G$ (the spectrum of $G$). Because $G$ is connected and $d$-regular, its eigenvalues satisfy $d = \lambda_1 > \lambda_2 \geq \cdots \geq \lambda_n \geq -d$. Define $\lambda(G) = \max_{i \neq 1} |\lambda_i| = \max(|\lambda_2|, \ldots, |\lambda_n|)$. A connected $d$-regular graph $G$ is a Ramanujan graph if $\lambda(G) \leq 2\sqrt{d-1}$. Many sources use an alternative definition $\lambda'(G) = \max_{|\lambda_i| < d} |\lambda_i|$ (whenever there exists some $\lambda_i$ with $|\lambda_i| < d$) to define Ramanujan graphs. In other words, this definition also allows $-d$ as an eigenvalue, in addition to the "small" eigenvalues. 
Since $\lambda_n = -d$ if and only if the graph is bipartite, we refer to graphs that satisfy this alternative definition but not the first definition as bipartite Ramanujan graphs. If $G$ is a Ramanujan graph, then $G \times K_2$ is a bipartite Ramanujan graph, so the existence of (non-bipartite) Ramanujan graphs is the stronger statement. As observed by Toshikazu Sunada, a regular graph is Ramanujan if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis. == Examples and constructions == === Explicit examples === The complete graph $K_{d+1}$ has spectrum $d, -1, -1, \dots, -1$, and thus $\lambda(K_{d+1}) = 1$, so it is a Ramanujan graph for every $d > 1$. The complete bipartite graph $K_{d,d}$ has spectrum $d, 0, 0, \dots, 0, -d$ and hence is a bipartite Ramanujan graph for every $d$. The Petersen graph has spectrum $3, 1, 1, 1, 1, 1, -2, -2, -2, -2$, so it is a 3-regular Ramanujan graph. The icosahedral graph is a 5-regular Ramanujan graph. A Paley graph of order $q$ is $\frac{q-1}{2}$-regular with all other eigenvalues being $\frac{-1 \pm \sqrt{q}}{2}$, making Paley graphs an infinite family of Ramanujan graphs. More generally, let $f(x)$ be a degree 2 or 3 polynomial over $\mathbb{F}_q$. Let $S = \{f(x) : x \in \mathbb{F}_q\}$ be the image of $f(x)$ as a multiset, and suppose $S = -S$. Then the Cayley graph for $\mathbb{F}_q$ with generators from $S$ is a Ramanujan graph. 
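The definition is straightforward to check numerically. As a sketch (the helper name `is_ramanujan` is my own, and NumPy is assumed to be available), the following computes $\lambda(G)$ for the Petersen graph and verifies the Ramanujan condition $\lambda(G) \leq 2\sqrt{d-1}$:

```python
import numpy as np

def is_ramanujan(adj):
    """Check the Ramanujan condition for a connected d-regular graph:
    lambda(G) = max(|lambda_2|, |lambda_n|) <= 2*sqrt(d-1)."""
    eig = np.sort(np.linalg.eigvalsh(adj))[::-1]  # lambda_1 >= ... >= lambda_n
    d = eig[0]                                    # for a connected d-regular graph, lambda_1 = d
    lam = max(abs(eig[1]), abs(eig[-1]))          # largest nontrivial eigenvalue magnitude
    return lam <= 2 * np.sqrt(d - 1) + 1e-9       # small tolerance for float error

# Petersen graph: outer 5-cycle (0..4), spokes, inner pentagram (5..9)
n = 10
A = np.zeros((n, n))
for i in range(5):
    for (u, v) in [(i, (i + 1) % 5),            # outer cycle
                   (i, i + 5),                  # spoke
                   (i + 5, 5 + (i + 2) % 5)]:   # inner pentagram
        A[u, v] = A[v, u] = 1

print(bool(is_ramanujan(A)))  # → True: spectrum is 3, 1 (x5), -2 (x4), so lambda(G) = 2 <= 2*sqrt(2)
```

Here $\lambda(G) = 2 \leq 2\sqrt{2} \approx 2.83$, matching the Petersen example above.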
Mathematicians are often interested in constructing infinite families of $d$-regular Ramanujan graphs for every fixed $d$. Such families are useful in applications. === Algebraic constructions === Several explicit constructions of Ramanujan graphs arise as Cayley graphs and are algebraic in nature. See Winnie Li's survey on Ramanujan's conjecture and other aspects of number theory relevant to these results. Lubotzky, Phillips and Sarnak, and independently Margulis, showed how to construct an infinite family of $(p+1)$-regular Ramanujan graphs whenever $p$ is a prime number with $p \equiv 1 \pmod 4$. Both proofs use the Ramanujan conjecture, which led to the name of Ramanujan graphs. Besides being Ramanujan graphs, these constructions satisfy some other properties; for example, their girth is $\Omega(\log_p(n))$, where $n$ is the number of nodes. Let us sketch the Lubotzky–Phillips–Sarnak construction. Let $q \equiv 1 \pmod 4$ be a prime not equal to $p$. By Jacobi's four-square theorem, there are $p+1$ solutions to the equation $p = a_0^2 + a_1^2 + a_2^2 + a_3^2$ where $a_0 > 0$ is odd and $a_1, a_2, a_3$ are even. To each such solution associate the $\operatorname{PGL}(2, \mathbb{Z}/q\mathbb{Z})$ matrix $$\tilde{\alpha} = \begin{pmatrix} a_0 + ia_1 & a_2 + ia_3 \\ -a_2 + ia_3 & a_0 - ia_1 \end{pmatrix}, \qquad i \text{ a fixed solution to } i^2 = -1 \bmod q.$$ If $p$ is not a quadratic residue modulo $q$, let $X^{p,q}$ be the Cayley graph of $\operatorname{PGL}(2, \mathbb{Z}/q\mathbb{Z})$ with these $p+1$ generators, and otherwise let $X^{p,q}$ be the Cayley graph of $\operatorname{PSL}(2, \mathbb{Z}/q\mathbb{Z})$ with the same generators. Then $X^{p,q}$ is a $(p+1)$-regular graph on $n = q(q^2-1)$ or $q(q^2-1)/2$ vertices, depending on whether or not $p$ is a quadratic residue modulo $q$. It is proved that $X^{p,q}$ is a Ramanujan graph. Morgenstern later extended the construction of Lubotzky, Phillips and Sarnak. His extended construction holds whenever $p$ is a prime power. Arnold Pizer proved that the supersingular isogeny graphs are Ramanujan, although they tend to have lower girth than the graphs of Lubotzky, Phillips, and Sarnak. Like the graphs of Lubotzky, Phillips, and Sarnak, the degrees of these graphs are always a prime number plus one. === Probabilistic examples === Adam Marcus, Daniel Spielman and Nikhil Srivastava proved the existence of infinitely many $d$-regular bipartite Ramanujan graphs for any $d \geq 3$. Later they proved that there exist bipartite Ramanujan graphs of every degree and every number of vertices. Michael B. Cohen showed how to construct these graphs in polynomial time. The initial work followed an approach of Bilu and Linial. 
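As a small numerical check on the count from Jacobi's four-square theorem used in the construction above, this brute-force sketch (the function name `lps_generators` is my own, not from the literature) enumerates the $p+1$ admissible solutions with $a_0 > 0$ odd and $a_1, a_2, a_3$ even for small primes $p \equiv 1 \pmod 4$:

```python
from itertools import product

def lps_generators(p):
    """Enumerate solutions (a0, a1, a2, a3) of p = a0^2 + a1^2 + a2^2 + a3^2
    with a0 > 0 odd and a1, a2, a3 even; there are exactly p + 1 of them."""
    bound = int(p ** 0.5)  # each |a_i| is at most sqrt(p)
    sols = []
    for a0, a1, a2, a3 in product(range(-bound, bound + 1), repeat=4):
        if (a0 * a0 + a1 * a1 + a2 * a2 + a3 * a3 == p
                and a0 > 0 and a0 % 2 == 1
                and a1 % 2 == 0 and a2 % 2 == 0 and a3 % 2 == 0):
            sols.append((a0, a1, a2, a3))
    return sols

print(len(lps_generators(5)))   # → 6, i.e. p + 1
print(len(lps_generators(13)))  # → 14, i.e. p + 1
```

Each solution then yields one of the $p+1$ generator matrices $\tilde{\alpha}$ of the Cayley graph.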
They considered an operation called a 2-lift that takes a $d$-regular graph $G$ with $n$ vertices and a sign on each edge, and produces a new $d$-regular graph $G'$ on $2n$ vertices. Bilu and Linial conjectured that there always exists a signing so that every new eigenvalue of $G'$ has magnitude at most $2\sqrt{d-1}$. This conjecture guarantees the existence of Ramanujan graphs with degree $d$ and $2^k(d+1)$ vertices for any $k$: simply start with the complete graph $K_{d+1}$ and iteratively take 2-lifts that retain the Ramanujan property. Using the method of interlacing polynomials, Marcus, Spielman, and Srivastava proved that Bilu and Linial's conjecture holds when $G$ is already a bipartite Ramanujan graph, which is enough to conclude the existence result. The sequel proved the stronger statement that a sum of $d$ random bipartite matchings is Ramanujan with non-vanishing probability. Hall, Puder and Sawin extended the original work of Marcus, Spielman and Srivastava to $r$-lifts. It is still an open problem whether there are infinitely many $d$-regular (non-bipartite) Ramanujan graphs for any $d \geq 3$. In particular, the problem is open for $d = 7$, the smallest case for which $d - 1$ is not a prime power and hence is not covered by Morgenstern's construction. == Ramanujan graphs as expander graphs == The constant $2\sqrt{d-1}$ in the definition of Ramanujan graphs is asymptotically sharp. 
More precisely, the Alon-Boppana bound states that for every $d$ and $\epsilon > 0$, there exists $n$ such that all $d$-regular graphs $G$ with at least $n$ vertices satisfy $\lambda(G) > 2\sqrt{d-1} - \epsilon$. This means that Ramanujan graphs are essentially the best possible expander graphs. Because they achieve the tight bound on $\lambda(G)$, the expander mixing lemma gives excellent bounds on the uniformity of the distribution of the edges in Ramanujan graphs, and random walks on these graphs have a logarithmic mixing time (in terms of the number of vertices): in other words, the random walk converges to the (uniform) stationary distribution very quickly. Therefore, the diameter of a Ramanujan graph is also bounded logarithmically in terms of the number of vertices. === Random graphs === Confirming a conjecture of Alon, Friedman showed that many families of random graphs are weakly Ramanujan. This means that for every $d$ and $\epsilon > 0$ and for sufficiently large $n$, a random $d$-regular $n$-vertex graph $G$ satisfies $\lambda(G) < 2\sqrt{d-1} + \epsilon$ with high probability. While this result shows that random graphs are close to being Ramanujan, it cannot be used to prove the existence of Ramanujan graphs. It is conjectured, though, that random graphs are Ramanujan with substantial probability (roughly 52%). In addition to direct numerical evidence, there is some theoretical support for this conjecture: the spectral gap of a $d$-regular graph seems to behave according to a Tracy-Widom distribution from random matrix theory, which would predict the same asymptotic. 
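A minimal numerical illustration of the Alon-Boppana phenomenon (a sketch assuming NumPy is available, using the simplest case $d = 2$, where $2\sqrt{d-1} = 2$): the second-largest adjacency eigenvalue of the cycle $C_n$ is $2\cos(2\pi/n)$, which approaches the bound $2$ from below as $n$ grows, so no 2-regular graph family can beat the bound asymptotically.

```python
import numpy as np

def second_eigenvalue_of_cycle(n):
    """Second-largest adjacency eigenvalue of the cycle C_n (a 2-regular graph).
    Alon-Boppana forces lambda_2 > 2*sqrt(d-1) - eps = 2 - eps for large n;
    for C_n it equals 2*cos(2*pi/n), which tends to 2."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return np.sort(np.linalg.eigvalsh(A))[-2]

for n in [6, 30, 300]:
    print(n, second_eigenvalue_of_cycle(n))  # creeps up toward the bound 2
```

This mirrors the general statement: for fixed $d$, large $d$-regular graphs cannot have $\lambda(G)$ much below $2\sqrt{d-1}$.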
In 2024, a preprint by Jiaoyang Huang, Theo McKenzie and Horng-Tzer Yau proved that a random $d$-regular graph satisfies $\lambda(G) \leq 2\sqrt{d-1}$ with probability approximately 69%, by proving that edge universality holds, that is, the extreme eigenvalues follow a Tracy-Widom distribution associated with the Gaussian Orthogonal Ensemble. == Applications of Ramanujan graphs == Expander graphs have many applications to computer science, number theory, and group theory; see, e.g., Lubotzky's survey on applications to pure and applied math, and Hoory, Linial, and Wigderson's survey, which focuses on computer science. Ramanujan graphs are in some sense the best expanders, and so they are especially useful in applications where expanders are needed. Importantly, the Lubotzky, Phillips, and Sarnak graphs can be traversed extremely quickly in practice, so they are practical for applications. Some example applications include: In an application to fast solvers for Laplacian linear systems, Lee, Peng, and Spielman relied on the existence of bipartite Ramanujan graphs of every degree in order to quickly approximate the complete graph. Lubetzky and Peres proved that the simple random walk exhibits the cutoff phenomenon on all Ramanujan graphs. This means that the random walk undergoes a phase transition from being completely unmixed to completely mixed in the total variation norm. This result strongly relies on the graph being Ramanujan, not just an expander; some good expanders are known not to exhibit cutoff. The Ramanujan graphs of Pizer have been proposed as the basis for post-quantum elliptic-curve cryptography. Ramanujan graphs can be used to construct expander codes, which are good error-correcting codes. == See also == Expander graph Alon-Boppana bound Expander mixing lemma Spectral graph theory == References == == Further reading == Giuliana Davidoff; Peter Sarnak; Alain Valette (2003). Elementary number theory, group theory and Ramanujan graphs. 
LMS student texts. Vol. 55. Cambridge University Press. ISBN 0-521-53143-8. OCLC 50253269. Sunada, Toshikazu (1986). "L-functions in geometry and some applications". In Shiohama, Katsuhiro; Sakai, Takashi; Sunada, Toshikazu (eds.). Curvature and Topology of Riemannian Manifolds: Proceedings of the 17th International Taniguchi Symposium held in Katata, Japan, August 26–31, 1985. Lecture Notes in Mathematics. Vol. 1201. Berlin: Springer. pp. 266–284. doi:10.1007/BFb0075662. ISBN 978-3-540-16770-9. MR 0859591. == External links == Survey paper by M. Ram Murty Survey paper by Alexander Lubotzky Survey paper by Hoory, Linial, and Wigderson
Wikipedia:Rami Grossberg#0
Rami Grossberg (Hebrew: רמי גרוסברג) is a full professor of mathematics at Carnegie Mellon University and works in model theory. == Work == Grossberg's work in the past few years has revolved around the classification theory of non-elementary classes. In particular, he has provided, in joint work with Monica VanDieren, a proof of an upward "Morley's Categoricity Theorem" (a version of Shelah's categoricity conjecture) for abstract elementary classes with the amalgamation property that are tame. In another work with VanDieren, they also initiated the study of tame abstract elementary classes. Tameness is both a crucial technical property in categoricity transfer proofs and an independent notion of interest in the area; it has been studied by Baldwin, Hyttinen, Lessmann, Kesälä, Kolesnikov, and Kueker, among others. Other results include a best approximation to the main gap conjecture for AECs (with Olivier Lessmann), identifying AECs with JEP, AP, no maximal models and tameness as the uncountable analog of Fraïssé's constructions (with VanDieren), a stability spectrum theorem, and the existence of Morley sequences for those classes (also with VanDieren). In addition to this work on the categoricity conjecture, more recently, with Boney and Vasey, new understanding of frames in AECs and forking (in the abstract elementary class setting) has been obtained. Some of Grossberg's work may be understood as part of the big project on Saharon Shelah's outstanding categoricity conjectures: Conjecture 1. (Categoricity for $L_{\omega_1,\omega}$). Let $\psi$ be a sentence. If $\psi$ is categorical in a cardinal $> \beth_{\omega_1}$, then $\psi$ is categorical in all cardinals $> \beth_{\omega_1}$. See Infinitary logic and Beth number. Conjecture 2. (Categoricity for AECs) See [1] and [2]. Let K be an AEC. 
There exists a cardinal μ(K) such that categoricity in a cardinal greater than μ(K) implies categoricity in all cardinals greater than μ(K). Furthermore, μ(K) is the Hanf number of K. Other examples of his results in pure model theory include: generalizing the Keisler–Shelah omitting types theorem for L ( Q ) {\displaystyle {\mathit {L(Q)}}} to successors of singular cardinals; with Shelah, introducing the notion of unsuper-stability for infinitary logics, and proving a nonstructure theorem, which is used to resolve a problem of Fuchs and Salce in the theory of modules; with Hart, proving a structure theorem for L ω 1 , ω {\displaystyle {\mathit {L}}_{\omega _{1},\omega }} , which resolves Morley's conjecture for excellent classes; and the notion of relative saturation and its connection to Shelah's conjecture for L ω 1 , ω {\displaystyle {\mathit {L}}_{\omega _{1},\omega }} . Examples of his results in applications to algebra include the finding that under the weak continuum hypothesis there is no universal object in the class of uncountable locally finite groups (answering a question of Macintyre and Shelah); with Shelah, showing that there is a jump in cardinality of the abelian group Extp(G, Z) at the first singular strong limit cardinal. == Personal life == In 1986, Grossberg attained his doctorate from the University of Jerusalem. He later married his former doctoral student and frequent collaborator, Monica VanDieren. == References == == External links == Rami Grossberg Rami Grossberg at the Mathematics Genealogy Project Rami Grossberg publications indexed by Google Scholar A survey of recent work on AECs
Wikipedia:Ramin Mahmudzade#0
Ramin Mahmudzade (Azerbaijani: Ramin Əlinazim oğlu Mahmudzadə; August 31, 1935, Baku – August 9, 2022) was a candidate of physical and mathematical sciences, associate professor, head of preparatory work for the All-Union and International Olympiads in informatics for schoolchildren, rector of the Public Institute "Mathematical Methods in Production" under the Baku "Knowledge" Society (1985–1990), head of the Education Informatization Resource Center (1999–2003), head of the UNESCO Focal Point in Azerbaijan (2002–2005), chairman of the "Information Technologies in Education" Methodological Council of the Ministry of Education, and an honored teacher; he was awarded the Shohrat Order in 2005. He is known as a scientist who prepared the first Azerbaijani specialists in the field of IT. He played a major role in the development of computer science (informatics), which was emerging in Azerbaijan in the second half of the 20th century. In this regard, he was also considered the first programmer of Azerbaijan. == Life == Ramin Mahmudzade was born on August 31, 1935, in Baku. He graduated from high school in Ukraine in 1953. In 1953, he entered the Dnepropetrovsk Mining Institute. In 1956, he entered the Faculty of Mechanics and Mathematics of Azerbaijan State University. In 1958, he continued his studies at Leningrad State University, graduating in 1961. From 1961 to 1968, he worked as an assistant at the Faculty of Mechanics and Mathematics of Azerbaijan State University. From 1968 to 1973, he was the head of a department at the Institute of Theoretical Chemical Problems of Azerbaijan SA. In 1973, he returned to Baku State University and taught at the Faculty of Applied Mathematics and Cybernetics. He first worked as a senior teacher; after defending his candidate's thesis, he worked as an associate professor and head of the department. In the early 1970s, he founded programming classes in the republic's secondary schools No. 164 and, later, No. 134.
In 1989, he headed the All-Union Pilot Project on the introduction of computers into the Republic's schools. He chaired the jury in almost all interschool informatics Olympiads across the country. From 1989 he headed the republic's team for the All-Union Olympiad in informatics, and from 1994 its team for the International Olympiad. Many IT companies and banks in Azerbaijan and in several other countries employ personnel who studied applied mathematics and economic cybernetics and came through Ramin Mahmudzade's school. From 1999 he chaired the "Information Technologies in Education" Methodological Council of the Ministry of Education. == References ==
Wikipedia:Ramiro Rampinelli#0
Ramiro Rampinelli, born Lodovico Rampinelli (1697 – 1759), was an Italian mathematician and physicist. He was a monk in the Olivetan Order. He had a decisive influence on the spread of mathematical analysis, algebra and mathematical physics in the best universities of Italy. He is one of the best known Italian scholars in the field of infinitesimal mathematics of the first half of the 18th century. == Biography == He was born in Brescia into the noble Rampinelli family and educated by the Jesuits; he learned the rudiments of mathematics from Giovan Battista Mazini. He studied first at the University of Bologna, where he was a disciple of Gabriele Manfredi, and took his monastic vows on 1 November 1722 at San Michele in Bosco. In 1727, after a brief stay at the Monastery of St. Helen in Venice, he entered the Abbey of St. Benedict in Padua, where he made the acquaintance of the best known professors of mathematics at the University of Padua, such as Marquess Giovanni Poleni and Count Jacopo Riccati; he formed a lasting friendship with the latter's family. In 1731 he was in Rome for a year, spending time with Celestino Galiani and Antonio Leprotti, studying subjects including architecture. After a period at the University of Naples Federico II, during which time he was always in contact with the best mathematicians, such as Nicola Antonio De Martino, he was assigned by his superiors to the University of Pavia for a year. He then returned to the University of Bologna in 1733, to teach mathematics. Here he completed his Istituzioni Fisiche con il metodo analitico. In 1740, after a stay at the monastery of St. Francis in Brescia, he transferred to the Olivetan monastery of San Vittore al Corso in Milan, where he was also mathematics tutor to the noblewoman Maria Gaetana Agnesi, who remembered him with gratitude in the preface to her Instituzioni Analitiche per la gioventù d'Italia. 
In 1747, the Senate of Milan appointed him (at double salary) to the chair in Mathematics and Physics at the University of Pavia. His expertise in river hydraulics also earned him the appointment as supervisor both for the construction of the Pavia-Milan canal and for the construction of the embankment to contain the Po River at Parpanese, in the Oltrepò Pavese. In 1758 his Lectiones opticæ Ramiri Rampinelii brixiani Congregationis Montis Oliveti monachi et in gymnasio Ticinensi Matheseos Professoris was published with the prestigious Brescia printer Bossini. This work on optics was to have been followed by Trigonometria and Applicazione dei principi matematici alla fisica pratica, but Rampinelli suffered a stroke on 10 April 1758. After a short period of recuperation in Brescia, he returned to the monastery of San Vittore al Corso in Milan, where, on 8 February 1759, he had a second stroke and died. Giordano Riccati wrote in a supplement to his eulogy dated 9 January 1760: In him were united doctrine and an indescribable modesty, and firm religious faith accompanied by all the moral and Christian virtues. His only thoughts were ever to fulfill the obligations of his own condition, and study his only innocent passion, by which he let himself be dominated, virtuously directing it outward in indefatigable service of his Religion and the Public. He dedicated himself willingly to others' benefit, and of benefits received, an indelible, grateful memory was preserved. == Works == Lectiones opticæ Ramiri Rampinelii brixiani Congregationis Montis Oliveti monachi et in gymnasio Ticinensi Matheseos Professoris. Brixiæ: excudebat Joannes Baptista Bossini. 1760. Other works by Rampinelli, said by contemporaries to be preserved in manuscript at the monastery of San Vittore in Milan, are now lost. 
Applicazione de' principi alla fisica pratica Trattato di trigonometria piana e sferica Istituzioni Fisiche con il metodo analitico Trattato di idrostatica (ad integrazione delle istituzioni fisiche) == References == == Sources and further reading == O'Connor, John J.; Robertson, Edmund F., "Ramiro Lodovico Rampinelli", MacTutor History of Mathematics Archive, University of St Andrews Excerpta Totius Italiae necnon Helvetiae literaturae Vol. III - 1759 C. G. Pozzi. "Elogio del P.D. Ramiro Rampinelli Bresciano". Giornale de' Letterati, Rome, 1760 F. Torricelli. "De Vita Rampinelli Epistola". in Lectiones Opticae. Brescia, 1760 A. Fabroni. Vitae Italorum doctrina excellentium. Vol. VIII. Pisa, 1781 F. Mandelli. Nuova raccolta di opuscoli scientifici e filosofici. Ed. A. Calogerà. Vol. XL. Venice, 1784 A. Brognoli. Elogi de' Bresciani per dottrina eccellenti nel secolo XVIII. Brescia, 1785 P. Verri. Memorie appartenenti alla vita ed agli studi di P. Frisi. Milan, 1787 A. F. Frisi. Elogio storico di Donna M. G. Agnesi Milanese. Milan: Galeazzi, 1799 V. Peroni. Biblioteca Bresciana. Vol. III. Brescia, 1821 P. Gambara. Ragionamenti di cose patrie. Vol. IV. Brescia, 1840 J. C. Poggendorf. Biographisch-literarisches Handwörterbuch zur Geschichte der exakten Wissenschaften. Vol. II. Leipzig, 1863 C. Cocchetti. Del movimento intellettuale nella provincia di Brescia. Brescia, 1880 U. Baldini. "L'insegnamento fisico matematico a Pavia alle soglie dell'età Teresiana". In Economia, istituzioni, cultura in Lombardia nell'età di M. Teresa. Vol. III. Milan: Il Mulino, 1980
Wikipedia:Range of a function#0
In mathematics, the range of a function may refer to either of two closely related concepts: the codomain of the function, or the image of the function. In some cases the codomain and the image of a function are the same set; such a function is called surjective or onto. For any non-surjective function f : X → Y , {\displaystyle f:X\to Y,} the codomain Y {\displaystyle Y} and the image Y ~ {\displaystyle {\tilde {Y}}} are different; however, a new function can be defined with the original function's image as its codomain, f ~ : X → Y ~ {\displaystyle {\tilde {f}}:X\to {\tilde {Y}}} where f ~ ( x ) = f ( x ) . {\displaystyle {\tilde {f}}(x)=f(x).} This new function is surjective. == Definitions == Given two sets X and Y, a binary relation f between X and Y is a function (from X to Y) if for every element x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called the domain and codomain of f, respectively. The image of the function f is the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y. == Usage == As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain. More modern books, if they use the word "range" at all, generally use it to mean what is now called the image. To avoid any confusion, a number of modern books don't use the word "range" at all. 
== Elaboration and example == Given a function f : X → Y {\displaystyle f\colon X\to Y} with domain X {\displaystyle X} , the range of f {\displaystyle f} , sometimes denoted ran ⁡ ( f ) {\displaystyle \operatorname {ran} (f)} or Range ⁡ ( f ) {\displaystyle \operatorname {Range} (f)} , may refer to the codomain or target set Y {\displaystyle Y} (i.e., the set into which all of the output of f {\displaystyle f} is constrained to fall), or to f ( X ) {\displaystyle f(X)} , the image of the domain of f {\displaystyle f} under f {\displaystyle f} (i.e., the subset of Y {\displaystyle Y} consisting of all actual outputs of f {\displaystyle f} ). The image of a function is always a subset of the codomain of the function. As an example of the two different usages, consider the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers R {\displaystyle \mathbb {R} } , but its image is the set of non-negative real numbers R + {\displaystyle \mathbb {R} ^{+}} , since x 2 {\displaystyle x^{2}} is never negative if x {\displaystyle x} is real. For this function, if we use "range" to mean codomain, it refers to R {\displaystyle \mathbb {R} } ; if we use "range" to mean image, it refers to R + {\displaystyle \mathbb {R} ^{+}} . For some functions, the image and the codomain coincide; these functions are called surjective or onto. For example, consider the function f ( x ) = 2 x , {\displaystyle f(x)=2x,} which inputs a real number and outputs its double. For this function, both the codomain and the image are the set of all real numbers, so the word range is unambiguous. Even in cases where the image and codomain of a function are different, a new function can be uniquely defined with its codomain as the image of the original function.
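The distinction illustrated by the squaring function can also be checked by exhaustion over a finite domain. The following sketch (an illustration in Python, not part of the article; the function and sets are chosen only for demonstration) computes the image explicitly and compares it with a larger codomain:

```python
# Illustrative sketch (not from the article): over a finite domain the image
# of a function can be computed by exhaustion and compared with a chosen
# codomain. The specific sets here are picked only for demonstration.

def image(f, domain):
    """Return the image f(X) = {f(x) : x in X}."""
    return {f(x) for x in domain}

domain = range(-3, 4)            # X = {-3, ..., 3}
codomain = set(range(-9, 10))    # a codomain Y that contains every output

img = image(lambda x: x * x, domain)

assert img == {0, 1, 4, 9}       # squaring never produces a negative value
assert img < codomain            # the image is a proper subset of the codomain
```

Restricting the codomain to `img` itself would make the corresponding function surjective, mirroring the construction of f̃ above.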
For example, as a function from the integers to the integers, the doubling function f ( n ) = 2 n {\displaystyle f(n)=2n} is not surjective because only the even integers are part of the image. However, a new function f ~ ( n ) = 2 n {\displaystyle {\tilde {f}}(n)=2n} whose domain is the integers and whose codomain is the even integers is surjective. For f ~ , {\displaystyle {\tilde {f}},} the word range is unambiguous. == See also == Bijection, injection and surjection Essential range == Notes and references == == Bibliography == Childs, Lindsay N. (2009). Childs, Lindsay N. (ed.). A Concrete Introduction to Higher Algebra. Undergraduate Texts in Mathematics (3rd ed.). Springer. doi:10.1007/978-0-387-74725-5. ISBN 978-0-387-74527-5. OCLC 173498962. Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley. ISBN 978-0-471-43334-7. OCLC 52559229. Hungerford, Thomas W. (1974). Algebra. Graduate Texts in Mathematics. Vol. 73. Springer. doi:10.1007/978-1-4612-6101-8. ISBN 0-387-90518-9. OCLC 703268. Rudin, Walter (1991). Functional Analysis (2nd ed.). McGraw Hill. ISBN 0-07-054236-8.
Wikipedia:Rank (graph theory)#0
In graph theory, a branch of mathematics, the rank of an undirected graph has two unrelated definitions. Let n equal the number of vertices of the graph. In the matrix theory of graphs the rank r of an undirected graph is defined as the rank of its adjacency matrix. Analogously, the nullity of the graph is the nullity of its adjacency matrix, which equals n − r. In the matroid theory of graphs the rank of an undirected graph is defined as the number n − c, where c is the number of connected components of the graph. Equivalently, the rank of a graph is the rank of the oriented incidence matrix associated with the graph. Analogously, the nullity of the graph is the nullity of its oriented incidence matrix, given by the formula m − n + c, where n and c are as above and m is the number of edges in the graph. The nullity is equal to the first Betti number of the graph. The sum of the rank and the nullity is the number of edges. == Examples == A sample graph and matrix: (corresponding to the four edges, e1–e4): In this example, the matrix theory rank of the matrix is 4, because its column vectors are linearly independent. == See also == Circuit rank Cycle rank Nullity (graph theory) == Notes == == References == Chen, Wai-Kai (1976), Applied Graph Theory, North Holland Publishing Company, ISBN 0-7204-2371-6. Hedetniemi, S. T., Jacobs, D. P., Laskar, R. (1989), Inequalities involving the rank of a graph. Journal of Combinatorial Mathematics and Combinatorial Computing, vol. 6, pp. 173–176. Bevis, Jean H., Blount, Kevin K., Davis, George J., Domke, Gayla S., Miller, Valerie A. (1997), The rank of a graph after vertex addition. Linear Algebra and its Applications, vol. 265, pp. 55–69.
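The matroid-theory definitions above can be sketched directly in code. The following illustration (not part of the article; a plain union-find is used to count connected components) computes the rank n − c and the nullity m − n + c of an undirected graph:

```python
# Illustrative sketch (not from the article): the matroid-theory rank n - c
# and nullity m - n + c of an undirected graph, where c is the number of
# connected components, computed with a simple union-find structure.

def graph_rank_nullity(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    c = len({find(v) for v in range(n)})  # number of connected components
    m = len(edges)
    return n - c, m - n + c               # (rank, nullity)

# A 4-cycle on vertices 0..3: one component, so rank 3 and nullity 1
# (the nullity counts the single independent cycle, i.e. the first Betti number).
rank, nullity = graph_rank_nullity(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert (rank, nullity) == (3, 1)
assert rank + nullity == 4  # rank + nullity equals the number of edges
```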
Wikipedia:Rank (linear algebra)#0
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by rank(A) or rk(A); sometimes the parentheses are not written, as in rank A. == Main definitions == In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A. A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank. The rank of a linear map or operator Φ {\displaystyle \Phi } is defined as the dimension of its image: rank ⁡ ( Φ ) := dim ⁡ ( img ⁡ ( Φ ) ) {\displaystyle \operatorname {rank} (\Phi ):=\dim(\operatorname {img} (\Phi ))} where dim {\displaystyle \dim } is the dimension of a vector space, and img {\displaystyle \operatorname {img} } is the image of a map. 
== Examples == The matrix [ 1 0 1 0 1 1 0 1 1 ] {\displaystyle {\begin{bmatrix}1&0&1\\0&1&1\\0&1&1\end{bmatrix}}} has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3. The matrix A = [ 1 1 0 2 − 1 − 1 0 − 2 ] {\displaystyle A={\begin{bmatrix}1&1&0&2\\-1&-1&0&-2\end{bmatrix}}} has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose A T = [ 1 − 1 1 − 1 0 0 2 − 2 ] {\displaystyle A^{\mathrm {T} }={\begin{bmatrix}1&-1\\1&-1\\0&0\\2&-2\end{bmatrix}}} of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank(A) = rank(AT). == Computing the rank of a matrix == === Rank from row echelon forms === A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows. 
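The ranks computed above can be verified mechanically. The sketch below (illustrative only, not part of the article; it uses exact rational arithmetic, so no floating-point issues arise) reduces a matrix toward row echelon form by elementary row operations and counts the pivots:

```python
from fractions import Fraction

# Illustrative sketch (not from the article): exact matrix rank via forward
# elimination to row echelon form, counting pivot positions.

def matrix_rank(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    rank, col, n_rows = 0, 0, len(A)
    n_cols = len(A[0]) if A else 0
    while rank < n_rows and col < n_cols:
        # Find a pivot in the current column, at or below the current row.
        pivot = next((r for r in range(rank, n_rows) if A[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        # Eliminate the entries below the pivot.
        for r in range(rank + 1, n_rows):
            factor = A[r][col] / A[rank][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[rank])]
        rank += 1
        col += 1
    return rank

# The two example matrices above:
assert matrix_rank([[1, 0, 1], [0, 1, 1], [0, 1, 1]]) == 2
assert matrix_rank([[1, 1, 0, 2], [-1, -1, 0, -2]]) == 1
# rank(A) = rank(A^T): the transpose of the second matrix also has rank 1.
assert matrix_rank([[1, -1], [1, -1], [0, 0], [2, -2]]) == 1
```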
For example, the matrix A given by A = [ 1 2 1 − 2 − 3 1 3 5 0 ] {\displaystyle A={\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}} can be put in reduced row-echelon form by using the following elementary row operations: [ 1 2 1 − 2 − 3 1 3 5 0 ] → 2 R 1 + R 2 → R 2 [ 1 2 1 0 1 3 3 5 0 ] → − 3 R 1 + R 3 → R 3 [ 1 2 1 0 1 3 0 − 1 − 3 ] → R 2 + R 3 → R 3 [ 1 2 1 0 1 3 0 0 0 ] → − 2 R 2 + R 1 → R 1 [ 1 0 − 5 0 1 3 0 0 0 ] . {\displaystyle {\begin{aligned}{\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}&\xrightarrow {2R_{1}+R_{2}\to R_{2}} {\begin{bmatrix}1&2&1\\0&1&3\\3&5&0\end{bmatrix}}\xrightarrow {-3R_{1}+R_{3}\to R_{3}} {\begin{bmatrix}1&2&1\\0&1&3\\0&-1&-3\end{bmatrix}}\\&\xrightarrow {R_{2}+R_{3}\to R_{3}} \,\,{\begin{bmatrix}1&2&1\\0&1&3\\0&0&0\end{bmatrix}}\xrightarrow {-2R_{2}+R_{1}\to R_{1}} {\begin{bmatrix}1&0&-5\\0&1&3\\0&0&0\end{bmatrix}}~.\end{aligned}}} The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix A is 2. === Computation === When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application. == Proofs that column rank = row rank == === Proof using row reduction === The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms.
Here is a variant of this proof: It is straightforward to show that neither the row rank nor the column rank is changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix equal the number of its nonzero entries. We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014). === Proof using linear combinations === Let A be an m × n matrix. Let the column rank of A be r, and let c1, ..., cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose ith column is formed from the coefficients giving the ith column of A as a linear combination of the r columns of C. In other words, R is the matrix which contains the multiples for the bases of the column space of A (which is C), which are then used to form A as a whole. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A.
This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see Rank factorization.) === Proof using orthogonality === Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, …, xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, …, Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr: 0 = c 1 A x 1 + c 2 A x 2 + ⋯ + c r A x r = A ( c 1 x 1 + c 2 x 2 + ⋯ + c r x r ) = A v , {\displaystyle 0=c_{1}A\mathbf {x} _{1}+c_{2}A\mathbf {x} _{2}+\cdots +c_{r}A\mathbf {x} _{r}=A(c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r})=A\mathbf {v} ,} where v = c1x1 + c2x2 + ⋯ + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v, c 1 x 1 + c 2 x 2 + ⋯ + c r x r = 0. {\displaystyle c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r}=0.} But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that c1 = c2 = ⋯ = cr = 0. It follows that Ax1, Ax2, …, Axr are linearly independent. Now, each Axi is obviously a vector in the column space of A. 
So, Ax1, Ax2, …, Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least as big as r. This proves that row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof. == Alternative definitions == In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F. === Dimension of image === Given the matrix A {\displaystyle A} , there is an associated linear mapping f : F n → F m {\displaystyle f:F^{n}\to F^{m}} defined by f ( x ) = A x . {\displaystyle f(x)=Ax.} The rank of A {\displaystyle A} is the dimension of the image of f {\displaystyle f} . This definition has the advantage that it can be applied to any linear map without need for a specific matrix. === Rank in terms of nullity === Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank–nullity theorem states that this definition is equivalent to the preceding one. === Column rank – dimension of column space === The rank of A is the maximal number of linearly independent columns c 1 , c 2 , … , c k {\displaystyle \mathbf {c} _{1},\mathbf {c} _{2},\dots ,\mathbf {c} _{k}} of A; this is the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A, which is in fact just the image of the linear map f associated to A). === Row rank – dimension of row space === The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A. === Decomposition rank === The rank of A is the smallest positive integer k such that A can be factored as A = C R {\displaystyle A=CR} , where C is an m × k matrix and R is a k × n matrix. 
In fact, for all integers k, the following are equivalent: the column rank of A is less than or equal to k, there exist k columns c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} of size m such that every column of A is a linear combination of c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} , there exist an m × k {\displaystyle m\times k} matrix C and a k × n {\displaystyle k\times n} matrix R such that A = C R {\displaystyle A=CR} (when k is the rank, this is a rank factorization of A), there exist k rows r 1 , … , r k {\displaystyle \mathbf {r} _{1},\ldots ,\mathbf {r} _{k}} of size n such that every row of A is a linear combination of r 1 , … , r k {\displaystyle \mathbf {r} _{1},\ldots ,\mathbf {r} _{k}} , the row rank of A is less than or equal to k. Indeed, the following equivalences are obvious: ( 1 ) ⇔ ( 2 ) ⇔ ( 3 ) ⇔ ( 4 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (2)\Leftrightarrow (3)\Leftrightarrow (4)\Leftrightarrow (5)} . For example, to prove (3) from (2), take C to be the matrix whose columns are c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} from (2). To prove (2) from (3), take c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} to be the columns of C. It follows from the equivalence ( 1 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (5)} that the row rank is equal to the column rank. As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details. 
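As a concrete instance of the decomposition characterization, the sketch below (illustrative only, not part of the article; the factors C and R are hand-chosen) verifies a rank factorization A = CR of the rank-2 example matrix from earlier in the article:

```python
# Illustrative sketch (not from the article): a rank factorization A = CR of
# the rank-2 example matrix. C holds a basis of the column space and R holds
# the combination coefficients; both are hand-chosen for this example.

A = [[1, 0, 1],
     [0, 1, 1],
     [0, 1, 1]]

C = [[1, 0],          # the first two columns of A: a basis of the column space
     [0, 1],
     [0, 1]]

R = [[1, 0, 1],       # column j of A is sum over k of C[:,k] * R[k][j];
     [0, 1, 1]]       # the third column of A is the first plus the second

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A factors as an (m x k) times (k x n) product with k = 2 = rank(A).
assert matmul(C, R) == A
```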
=== Rank in terms of singular values === The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition A = U Σ V ∗ {\displaystyle A=U\Sigma V^{*}} . === Determinantal rank – size of largest non-vanishing minor === The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix. A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent). 
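The determinantal characterization can be tested directly on small matrices. The brute-force sketch below (illustrative only, not part of the article; its running time grows exponentially, so it is practical only for tiny matrices) returns the largest order of a non-vanishing minor:

```python
from itertools import combinations

# Illustrative sketch (not from the article): determinantal rank by brute
# force -- the largest order of a square submatrix with non-zero determinant.

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def determinantal_rank(A):
    m, n = len(A), len(A[0])
    for order in range(min(m, n), 0, -1):
        for rows in combinations(range(m), order):
            for cols in combinations(range(n), order):
                if det([[A[i][j] for j in cols] for i in rows]) != 0:
                    return order  # a non-vanishing minor of this order exists
    return 0

# The rank-2 example matrix from earlier in the article: its 3x3 determinant
# vanishes, but a non-zero 2x2 minor exists, so the rank is 2.
assert determinantal_rank([[1, 0, 1], [0, 1, 1], [0, 1, 1]]) == 2
```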
=== Tensor rank – minimum number of simple tensors === The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c ⋅ r {\displaystyle c\cdot r} of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition. == Properties == We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above. The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is, rank ⁡ ( A ) ≤ min ( m , n ) . {\displaystyle \operatorname {rank} (A)\leq \min(m,n).} A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient. Only a zero matrix has rank zero. f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank). f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank). If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank). If B is any n × k matrix, then rank ⁡ ( A B ) ≤ min ( rank ⁡ ( A ) , rank ⁡ ( B ) ) . {\displaystyle \operatorname {rank} (AB)\leq \min(\operatorname {rank} (A),\operatorname {rank} (B)).} If B is an n × k matrix of rank n, then rank ⁡ ( A B ) = rank ⁡ ( A ) . {\displaystyle \operatorname {rank} (AB)=\operatorname {rank} (A).} If C is an l × m matrix of rank m, then rank ⁡ ( C A ) = rank ⁡ ( A ) . 
{\displaystyle \operatorname {rank} (CA)=\operatorname {rank} (A).} The rank of A is equal to r if and only if there exists an invertible m × m matrix X and an invertible n × n matrix Y such that X A Y = [ I r 0 0 0 ] , {\displaystyle XAY={\begin{bmatrix}I_{r}&0\\0&0\end{bmatrix}},} where Ir denotes the r × r identity matrix and the three zero matrices have the sizes r × (n − r), (m − r) × r and (m − r) × (n − r). Sylvester’s rank inequality: if A is an m × n matrix and B is n × k, then rank ⁡ ( A ) + rank ⁡ ( B ) − n ≤ rank ⁡ ( A B ) . {\displaystyle \operatorname {rank} (A)+\operatorname {rank} (B)-n\leq \operatorname {rank} (AB).} This is a special case of the next inequality. The inequality due to Frobenius: if AB, ABC and BC are defined, then rank ⁡ ( A B ) + rank ⁡ ( B C ) ≤ rank ⁡ ( B ) + rank ⁡ ( A B C ) . {\displaystyle \operatorname {rank} (AB)+\operatorname {rank} (BC)\leq \operatorname {rank} (B)+\operatorname {rank} (ABC).} Subadditivity: rank ⁡ ( A + B ) ≤ rank ⁡ ( A ) + rank ⁡ ( B ) {\displaystyle \operatorname {rank} (A+B)\leq \operatorname {rank} (A)+\operatorname {rank} (B)} when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer. The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.) If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices rank ⁡ ( A T A ) = rank ⁡ ( A A T ) = rank ⁡ ( A ) = rank ⁡ ( A T ) . {\displaystyle \operatorname {rank} (A^{\mathrm {T} }A)=\operatorname {rank} (AA^{\mathrm {T} })=\operatorname {rank} (A)=\operatorname {rank} (A^{\mathrm {T} }).} This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors x for which A T A x = 0. 
{\displaystyle A^{\mathrm {T} }A\mathbf {x} =0.} If this condition is fulfilled, we also have 0 = x T A T A x = | A x | 2 . {\displaystyle 0=\mathbf {x} ^{\mathrm {T} }A^{\mathrm {T} }A\mathbf {x} =\left|A\mathbf {x} \right|^{2}.} If A is a matrix over the complex numbers and A ¯ {\displaystyle {\overline {A}}} denotes the complex conjugate of A and A∗ the conjugate transpose of A (i.e., the adjoint of A), then rank ⁡ ( A ) = rank ⁡ ( A ¯ ) = rank ⁡ ( A T ) = rank ⁡ ( A ∗ ) = rank ⁡ ( A ∗ A ) = rank ⁡ ( A A ∗ ) . {\displaystyle \operatorname {rank} (A)=\operatorname {rank} ({\overline {A}})=\operatorname {rank} (A^{\mathrm {T} })=\operatorname {rank} (A^{*})=\operatorname {rank} (A^{*}A)=\operatorname {rank} (AA^{*}).} == Applications == One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions. In control theory, the rank of a matrix can be used to determine whether a linear system is controllable or observable. In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
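The Rouché–Capelli classification above translates directly into rank computations; a small sketch (NumPy's `matrix_rank` is used here as the rank oracle):

```python
import numpy as np

# the system x + 2y = 3, 2x + 4y = 6: the second equation is twice the first
A = np.array([[1., 2.],
              [2., 4.]])
b = np.array([3., 6.])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

if rank_Ab > rank_A:
    print("inconsistent")
elif rank_A == A.shape[1]:
    print("unique solution")
else:
    # consistent but rank-deficient: free parameters remain
    print(f"infinitely many solutions, {A.shape[1] - rank_A} free parameter(s)")
```

Here both ranks equal 1 while there are two variables, so the system has infinitely many solutions with one free parameter.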
== Generalization == There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from one another or may not exist. Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices. There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative. == Matrices as tensors == Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details. The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and this definition agrees with matrix rank as discussed here. == See also == Matroid rank Nonnegative rank (linear algebra) Rank (differential topology) Multicollinearity Linear dependence == Notes == == References == == Sources == Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0. Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4. Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing L3C. ISBN 978-1-944325-11-4. Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Roman, Steven (2005). Advanced Linear Algebra. Graduate Texts in Mathematics (2nd ed.).
Springer. ISBN 0-387-24766-1. Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5. == Further reading == Roger A. Horn and Charles R. Johnson (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6. Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors [1] and System of Equations [2] Mike Brookes: Matrix Reference Manual. [3]
Wikipedia:Rank factorization#0
In mathematics, given a field F {\displaystyle \mathbb {F} } , non-negative integers m , n {\displaystyle m,n} , and a matrix A ∈ F m × n {\displaystyle A\in \mathbb {F} ^{m\times n}} , a rank decomposition or rank factorization of A is a factorization of A of the form A = CF, where C ∈ F m × r {\displaystyle C\in \mathbb {F} ^{m\times r}} and F ∈ F r × n {\displaystyle F\in \mathbb {F} ^{r\times n}} , where r = rank ⁡ A {\displaystyle r=\operatorname {rank} A} is the rank of A {\displaystyle A} . == Existence == Every finite-dimensional matrix has a rank decomposition: Let A {\textstyle A} be an m × n {\textstyle m\times n} matrix whose column rank is r {\textstyle r} . Therefore, there are r {\textstyle r} linearly independent columns in A {\textstyle A} ; equivalently, the dimension of the column space of A {\textstyle A} is r {\textstyle r} . Let c 1 , c 2 , … , c r {\textstyle \mathbf {c} _{1},\mathbf {c} _{2},\ldots ,\mathbf {c} _{r}} be any basis for the column space of A {\textstyle A} and place them as column vectors to form the m × r {\textstyle m\times r} matrix C = [ c 1 c 2 ⋯ c r ] {\textstyle C={\begin{bmatrix}\mathbf {c} _{1}&\mathbf {c} _{2}&\cdots &\mathbf {c} _{r}\end{bmatrix}}} . Therefore, every column vector of A {\textstyle A} is a linear combination of the columns of C {\textstyle C} . To be precise, if A = [ a 1 a 2 ⋯ a n ] {\textstyle A={\begin{bmatrix}\mathbf {a} _{1}&\mathbf {a} _{2}&\cdots &\mathbf {a} _{n}\end{bmatrix}}} is an m × n {\textstyle m\times n} matrix with a j {\textstyle \mathbf {a} _{j}} as the j {\textstyle j} -th column, then a j = f 1 j c 1 + f 2 j c 2 + ⋯ + f r j c r , {\displaystyle \mathbf {a} _{j}=f_{1j}\mathbf {c} _{1}+f_{2j}\mathbf {c} _{2}+\cdots +f_{rj}\mathbf {c} _{r},} where f i j {\textstyle f_{ij}} 's are the scalar coefficients of a j {\textstyle \mathbf {a} _{j}} in terms of the basis c 1 , c 2 , … , c r {\textstyle \mathbf {c} _{1},\mathbf {c} _{2},\ldots ,\mathbf {c} _{r}} . 
This implies that A = C F {\textstyle A=CF} , where f i j {\textstyle f_{ij}} is the ( i , j ) {\textstyle (i,j)} -th element of F {\textstyle F} . == Non-uniqueness == If A = C 1 F 1 {\textstyle A=C_{1}F_{1}} is a rank factorization, taking C 2 = C 1 R {\textstyle C_{2}=C_{1}R} and F 2 = R − 1 F 1 {\textstyle F_{2}=R^{-1}F_{1}} gives another rank factorization for any invertible matrix R {\textstyle R} of compatible dimensions. Conversely, if A = C 1 F 1 = C 2 F 2 {\textstyle A=C_{1}F_{1}=C_{2}F_{2}} are two rank factorizations of A {\textstyle A} , then there exists an invertible matrix R {\textstyle R} such that C 1 = C 2 R {\textstyle C_{1}=C_{2}R} and F 1 = R − 1 F 2 {\textstyle F_{1}=R^{-1}F_{2}} . == Construction == === Rank factorization from reduced row echelon forms === In practice, we can construct one specific rank factorization as follows: we can compute B {\textstyle B} , the reduced row echelon form of A {\textstyle A} . Then C {\textstyle C} is obtained by removing from A {\textstyle A} all non-pivot columns (which can be determined by looking for columns in B {\textstyle B} which do not contain a pivot), and F {\textstyle F} is obtained by eliminating any all-zero rows of B {\textstyle B} . Note: For a full-rank square matrix (i.e. when n = m = r {\textstyle n=m=r} ), this procedure will yield the trivial result C = A {\textstyle C=A} and F = B = I n {\textstyle F=B=I_{n}} (the n × n {\textstyle n\times n} identity matrix). ==== Example ==== Consider the matrix A = [ 1 3 1 4 2 7 3 9 1 5 3 1 1 2 0 8 ] ∼ [ 1 0 − 2 0 0 1 1 0 0 0 0 1 0 0 0 0 ] = B . {\displaystyle A={\begin{bmatrix}1&3&1&4\\2&7&3&9\\1&5&3&1\\1&2&0&8\end{bmatrix}}\sim {\begin{bmatrix}1&0&-2&0\\0&1&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix}}=B{\text{.}}} B {\textstyle B} is in reduced echelon form.
Then C {\textstyle C} is obtained by removing the third column of A {\textstyle A} , the only one which is not a pivot column, and F {\textstyle F} by getting rid of the last row of zeroes from B {\textstyle B} , so C = [ 1 3 4 2 7 9 1 5 1 1 2 8 ] , F = [ 1 0 − 2 0 0 1 1 0 0 0 0 1 ] . {\displaystyle C={\begin{bmatrix}1&3&4\\2&7&9\\1&5&1\\1&2&8\end{bmatrix}}{\text{,}}\qquad F={\begin{bmatrix}1&0&-2&0\\0&1&1&0\\0&0&0&1\end{bmatrix}}{\text{.}}} It is straightforward to check that A = [ 1 3 1 4 2 7 3 9 1 5 3 1 1 2 0 8 ] = [ 1 3 4 2 7 9 1 5 1 1 2 8 ] [ 1 0 − 2 0 0 1 1 0 0 0 0 1 ] = C F . {\displaystyle A={\begin{bmatrix}1&3&1&4\\2&7&3&9\\1&5&3&1\\1&2&0&8\end{bmatrix}}={\begin{bmatrix}1&3&4\\2&7&9\\1&5&1\\1&2&8\end{bmatrix}}{\begin{bmatrix}1&0&-2&0\\0&1&1&0\\0&0&0&1\end{bmatrix}}=CF{\text{.}}} ==== Proof ==== Let P {\textstyle P} be an n × n {\textstyle n\times n} permutation matrix such that A P = ( C , D ) {\textstyle AP=(C,D)} in block partitioned form, where the columns of C {\textstyle C} are the r {\textstyle r} pivot columns of A {\textstyle A} . Every column of D {\textstyle D} is a linear combination of the columns of C {\textstyle C} , so there is a matrix G {\textstyle G} such that D = C G {\textstyle D=CG} , where the columns of G {\textstyle G} contain the coefficients of each of those linear combinations. So A P = ( C , C G ) = C ( I r , G ) {\textstyle AP=(C,CG)=C(I_{r},G)} , I r {\textstyle I_{r}} being the r × r {\textstyle r\times r} identity matrix. We will show now that ( I r , G ) = F P {\textstyle (I_{r},G)=FP} . Transforming A {\textstyle A} into its reduced row echelon form B {\textstyle B} amounts to left-multiplying by a matrix E {\textstyle E} which is a product of elementary matrices, so E A P = B P = E C ( I r , G ) {\textstyle EAP=BP=EC(I_{r},G)} , where E C = ( I r 0 ) {\textstyle EC={\begin{pmatrix}I_{r}\\0\end{pmatrix}}} . 
We then can write B P = ( I r G 0 0 ) {\textstyle BP={\begin{pmatrix}I_{r}&G\\0&0\end{pmatrix}}} , which allows us to identify ( I r , G ) = F P {\textstyle (I_{r},G)=FP} , i.e. the nonzero r {\textstyle r} rows of the reduced echelon form, with the same permutation on the columns as we did for A {\textstyle A} . We thus have A P = C F P {\textstyle AP=CFP} , and since P {\textstyle P} is invertible this implies A = C F {\textstyle A=CF} , and the proof is complete. === Singular value decomposition === If F ∈ { R , C } , {\displaystyle \mathbb {F} \in \{\mathbb {R} ,\mathbb {C} \},} then one can also construct a full-rank factorization of A {\textstyle A} via a singular value decomposition A = U Σ V ∗ = [ U 1 U 2 ] [ Σ r 0 0 0 ] [ V 1 ∗ V 2 ∗ ] = U 1 ( Σ r V 1 ∗ ) . {\displaystyle A=U\Sigma V^{*}={\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}}{\begin{bmatrix}\Sigma _{r}&0\\0&0\end{bmatrix}}{\begin{bmatrix}V_{1}^{*}\\V_{2}^{*}\end{bmatrix}}=U_{1}\left(\Sigma _{r}V_{1}^{*}\right).} Since U 1 {\textstyle U_{1}} is a full-column-rank matrix and Σ r V 1 ∗ {\textstyle \Sigma _{r}V_{1}^{*}} is a full-row-rank matrix, we can take C = U 1 {\textstyle C=U_{1}} and F = Σ r V 1 ∗ {\textstyle F=\Sigma _{r}V_{1}^{*}} . == Consequences == === rank(A) = rank(AT) === An immediate consequence of rank factorization is that the rank of A {\textstyle A} is equal to the rank of its transpose A T {\textstyle A^{\textsf {T}}} . Since the columns of A {\textstyle A} are the rows of A T {\textstyle A^{\textsf {T}}} , the column rank of A {\textstyle A} equals its row rank. Proof: To see why this is true, let us first define rank to mean column rank. Since A = C F {\textstyle A=CF} , it follows that A T = F T C T {\textstyle A^{\textsf {T}}=F^{\textsf {T}}C^{\textsf {T}}} . From the definition of matrix multiplication, this means that each column of A T {\textstyle A^{\textsf {T}}} is a linear combination of the columns of F T {\textstyle F^{\textsf {T}}} . 
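Both constructions above can be checked numerically on the example matrix; a short NumPy sketch that verifies the echelon-form factorization and then builds an alternative full-rank factorization from the SVD (the singular-value tolerance is an implementation choice):

```python
import numpy as np

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])

# C: the pivot columns of A; F: the nonzero rows of the reduced row echelon form
C = np.array([[1., 3., 4.],
              [2., 7., 9.],
              [1., 5., 1.],
              [1., 2., 8.]])
F = np.array([[1., 0., -2., 0.],
              [0., 1.,  1., 0.],
              [0., 0.,  0., 1.]])
assert np.array_equal(C @ F, A)          # the echelon-form factorization: A = CF

# an alternative full-rank factorization from the singular value decomposition
U, s, Vh = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()
r = int(np.sum(s > tol))                 # numerical rank, here 3
C_svd = U[:, :r]                         # U1: full column rank
F_svd = s[:r, None] * Vh[:r, :]          # Sigma_r @ V1*: full row rank
assert r == 3
assert np.allclose(C_svd @ F_svd, A)     # A = CF up to rounding
```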
Therefore, the column space of A T {\textstyle A^{\textsf {T}}} is contained within the column space of F T {\textstyle F^{\textsf {T}}} and, hence, rank ⁡ ( A T ) ≤ rank ⁡ ( F T ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(F^{\textsf {T}}\right)} . Now, F T {\textstyle F^{\textsf {T}}} is n × r {\textstyle n\times r} , so there are r {\textstyle r} columns in F T {\textstyle F^{\textsf {T}}} and, hence, rank ⁡ ( A T ) ≤ r = rank ⁡ ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq r=\operatorname {rank} \left(A\right)} . This proves that rank ⁡ ( A T ) ≤ rank ⁡ ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(A\right)} . Now apply the result to A T {\textstyle A^{\textsf {T}}} to obtain the reverse inequality: since ( A T ) T = A {\textstyle \left(A^{\textsf {T}}\right)^{\textsf {T}}=A} , we can write rank ⁡ ( A ) = rank ⁡ ( ( A T ) T ) ≤ rank ⁡ ( A T ) {\textstyle \operatorname {rank} \left(A\right)=\operatorname {rank} \left(\left(A^{\textsf {T}}\right)^{\textsf {T}}\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} . This proves rank ⁡ ( A ) ≤ rank ⁡ ( A T ) {\textstyle \operatorname {rank} \left(A\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} . We have, therefore, proved rank ⁡ ( A T ) ≤ rank ⁡ ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(A\right)} and rank ⁡ ( A ) ≤ rank ⁡ ( A T ) {\textstyle \operatorname {rank} \left(A\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} , so rank ⁡ ( A ) = rank ⁡ ( A T ) {\textstyle \operatorname {rank} \left(A\right)=\operatorname {rank} \left(A^{\textsf {T}}\right)} . == Notes == == References ==
Wikipedia:Rank-width#0
Rank-width is a graph width parameter used in graph theory and parameterized complexity, and defined using linear algebra. It is defined from hierarchical clusterings of the vertices of a given graph, which can be visualized as ternary trees having the vertices as their leaves. Removing any edge from such a tree disconnects it into two subtrees and partitions the vertices into two subsets. The graph edges that cross from one side of the partition to the other can be described by a biadjacency matrix; for the purposes of rank-width, this matrix is defined over the finite field GF(2) rather than using real numbers. The rank-width of a graph is the maximum of the ranks of the biadjacency matrices, for a clustering chosen to minimize this maximum. Rank-width is closely related to clique-width: k ≤ c ≤ 2 k + 1 − 1 {\displaystyle k\leq c\leq 2^{k+1}-1} , where c {\displaystyle c} is the clique-width and k {\displaystyle k} the rank-width. However, clique-width is NP-hard to compute for graphs of large clique-width, and its parameterized complexity is unknown. In contrast, testing whether the rank-width is at most a constant k {\displaystyle k} takes polynomial time, and even when the rank-width is not constant it can be approximated, with a constant approximation ratio, in polynomial time. For this reason, rank-width can be used as a more easily computed substitute for clique-width. An example of a family of graphs with high rank-width is provided by the square grid graphs. For an n × n {\displaystyle n\times n} grid graph, the rank-width is exactly n − 1 {\displaystyle n-1} . == References ==
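Since rank-width is defined over GF(2), the ranks of the biadjacency matrices must be computed with mod-2 arithmetic rather than over the reals; the helper below is an illustrative sketch (a hypothetical routine, not taken from the rank-width literature) showing that the choice of field matters:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination with XOR row ops."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for col in range(M.shape[1]):
        # find a pivot row at or below the current rank
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        # clear this column in every other row (addition mod 2 is XOR)
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# over the real numbers this matrix has rank 3, but over GF(2) the third
# row is the sum of the first two, so the GF(2) rank is only 2
print(gf2_rank([[1, 1, 0],
                [0, 1, 1],
                [1, 0, 1]]))  # 2
```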
Wikipedia:Raphael Yuster#0
Raphael "Raphy" Yuster (Hebrew: רפאל יוסטר) is an Israeli mathematician specializing in combinatorics and graph theory. He is a professor of mathematics at the University of Haifa. He is a recipient of the Nerode Prize for his work on color-coding,[A] and is also known for the Alon–Yuster conjecture relating the chromatic numbers of graphs to the number of disjoint copies of a smaller graph that can be found in a larger one, later proven by János Komlós, Gábor N. Sárközy, and Endre Szemerédi.[B] == Education and career == Yuster was a student at Tel Aviv University, where he received a bachelor's degree in 1989, a master's degree in 1991, and a Ph.D. in 1995. His doctoral dissertation, Non Constructive Graph Theoretic Proofs and Their Algorithmic Aspects, was supervised by Noga Alon. He has been a faculty member at the University of Haifa since 2004. == Recognition == With Noga Alon and Uri Zwick, Yuster was a recipient of the 2019 Nerode Prize, given for their work on color coding, an application of the probabilistic method to subgraph isomorphism.[A] His work with Zwick on sparse matrix multiplication received the 2023 European Symposium on Algorithms Test-of-Time Award.[C] == Selected publications == == See also == Rainbow coloring, the topic of several works by Yuster == References == == External links == Home page Raphael Yuster publications indexed by Google Scholar
Wikipedia:Raphaèle Herbin#0
Raphaèle Herbin is a French applied mathematician; she is known for her work on the finite volume method. Herbin has been a professor at Aix-Marseille University since 1995, and directs the Institut de Mathématiques de Marseille. She earned her doctorate in 1986 at Claude Bernard University Lyon 1, with the dissertation Approximation numérique d'inéquations variationnelles non linéaires par des méthodes de continuation supervised by Francis Conrad. Herbin is a co-author of the books Mesure, intégration, probabilités (Ellipses, 2013) and The gradient discretisation method (Springer, 2018). In 2017 the CNRS gave Herbin their CNRS medal of innovation. == References == == External links == Home page Raphaèle Herbin publications indexed by Google Scholar
Wikipedia:Raphaël Cerf#0
Raphaël Cerf is a French mathematician at Paris-Sud 11 University. For his contributions to probability theory, he won the Rollo Davidson Prize in 1999, and the EMS Prize in 2000. He was an Invited Speaker at the ICM in 2006 in Madrid. == Selected works == Raphaël Cerf (15 October 2004). "Le modèle d'Ising et la coexistence des phases". images.math.cnrs.fr. The Wulff Crystal in Ising and Percolation models. Springer, Lecture Notes in Mathematics 1878, École d’été de probabilités de Saint-Flour, no. 34, 2004 On Cramérs Theory in infinite dimensions. Société Mathématique de France, 2007 Large deviations for three dimensional supercritical percolation. Société Mathématique de France, 2000 == References == == External links == Raphaël Cerf at the Mathematics Genealogy Project Website at Paris-Sud 11 University
Wikipedia:Rate function#0
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. Such functions are used to formulate large deviation principles. A large deviation principle quantifies the asymptotic probability of rare events for a sequence of probabilities. A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér. == Definitions == Rate function An extended real-valued function I : X → [ 0 , + ∞ ] {\displaystyle I:X\to [0,+\infty ]} defined on a Hausdorff topological space X {\displaystyle X} is said to be a rate function if it is not identically + ∞ {\displaystyle +\infty } and is lower semi-continuous i.e. all the sub-level sets { x ∈ X ∣ I ( x ) ≤ c } for c ≥ 0 {\displaystyle \{x\in X\mid I(x)\leq c\}{\mbox{ for }}c\geq 0} are closed in X {\displaystyle X} . If, furthermore, they are compact, then I {\displaystyle I} is said to be a good rate function. A family of probability measures ( μ δ ) δ > 0 {\displaystyle (\mu _{\delta })_{\delta >0}} on X {\displaystyle X} is said to satisfy the large deviation principle with rate function I : X → [ 0 , + ∞ ) {\displaystyle I:X\to [0,+\infty )} (and rate 1 / δ {\displaystyle 1/\delta } ) if, for every closed set F ⊆ X {\displaystyle F\subseteq X} and every open set G ⊆ X {\displaystyle G\subseteq X} , lim sup δ ↓ 0 δ log ⁡ μ δ ( F ) ≤ − inf x ∈ F I ( x ) , (U) {\displaystyle \limsup _{\delta \downarrow 0}\delta \log \mu _{\delta }(F)\leq -\inf _{x\in F}I(x),\quad {\mbox{(U)}}} lim inf δ ↓ 0 δ log ⁡ μ δ ( G ) ≥ − inf x ∈ G I ( x ) . 
(L) {\displaystyle \liminf _{\delta \downarrow 0}\delta \log \mu _{\delta }(G)\geq -\inf _{x\in G}I(x).\quad {\mbox{(L)}}} If the upper bound (U) holds only for compact (instead of closed) sets F {\displaystyle F} , then ( μ δ ) δ > 0 {\displaystyle (\mu _{\delta })_{\delta >0}} is said to satisfy the weak large deviations principle (with rate 1 / δ {\displaystyle 1/\delta } and weak rate function I {\displaystyle I} ). === Remarks === The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that ( μ δ ) δ > 0 {\displaystyle (\mu _{\delta })_{\delta >0}} is said to converge weakly to μ {\displaystyle \mu } if, for every closed set F ⊆ X {\displaystyle F\subseteq X} and every open set G ⊆ X {\displaystyle G\subseteq X} , lim sup δ ↓ 0 μ δ ( F ) ≤ μ ( F ) , {\displaystyle \limsup _{\delta \downarrow 0}\mu _{\delta }(F)\leq \mu (F),} lim inf δ ↓ 0 μ δ ( G ) ≥ μ ( G ) . {\displaystyle \liminf _{\delta \downarrow 0}\mu _{\delta }(G)\geq \mu (G).} There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function". Rassoul-Agha & Seppäläinen (2015) uses the term "tight rate function" instead of "good rate function" due to the connection with exponential tightness of a family of measures. Regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak. == Properties == === Uniqueness === A natural question to ask, given the somewhat abstract setting of the general framework above, is whether the rate function is unique. 
This turns out to be the case: given a sequence of probability measures (μδ)δ>0 on X satisfying the large deviation principle for two rate functions I and J, it follows that I(x) = J(x) for all x ∈ X. === Exponential tightness === It is possible to convert a weak large deviation principle into a strong one if the measures converge sufficiently quickly. If the upper bound holds for compact sets F and the sequence of measures (μδ)δ>0 is exponentially tight, then the upper bound also holds for closed sets F. In other words, exponential tightness enables one to convert a weak large deviation principle into a strong one. === Continuity === Naïvely, one might try to replace the two inequalities (U) and (L) by the single requirement that, for all Borel sets S ⊆ X, lim δ ↓ 0 δ log ⁡ μ δ ( S ) = − inf x ∈ S I ( x ) . (E) {\displaystyle \lim _{\delta \downarrow 0}\delta \log \mu _{\delta }(S)=-\inf _{x\in S}I(x).\quad {\mbox{(E)}}} The equality (E) is far too restrictive, since many interesting examples satisfy (U) and (L) but not (E). For example, the measure μδ might be non-atomic for all δ, so the equality (E) could hold for S = {x} only if I were identically +∞, which is not permitted in the definition. However, the inequalities (U) and (L) do imply the equality (E) for so-called I-continuous sets S ⊆ X, those for which I ( S ∘ ) = I ( S ¯ ) , {\displaystyle I{\big (}{\stackrel {\circ }{S}}{\big )}=I{\big (}{\bar {S}}{\big )},} where S ∘ {\displaystyle {\stackrel {\circ }{S}}} and S ¯ {\displaystyle {\bar {S}}} denote the interior and closure of S in X respectively. In many examples, many sets/events of interest are I-continuous. For example, if I is a continuous function, then all sets S such that S ⊆ S ∘ ¯ {\displaystyle S\subseteq {\bar {\stackrel {\circ }{S}}}} are I-continuous; all open sets, for example, satisfy this containment. 
=== Transformation of large deviation principles === Given a large deviation principle on one space, it is often of interest to be able to construct a large deviation principle on another space. There are several results in this area: the contraction principle tells one how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function; the Dawson-Gärtner theorem tells one how a sequence of large deviation principles on a sequence of spaces passes to the projective limit. the tilted large deviation principle gives a large deviation principle for integrals of exponential functionals. exponentially equivalent measures have the same large deviation principles. == History and basic development == The notion of a rate function emerged in the 1930s with the Swedish mathematician Harald Cramér's study of a sequence of i.i.d. random variables (Zi)i∈ N {\displaystyle \mathbb {N} } . Namely, among some considerations of scaling, Cramér studied the behavior of the distribution of the average X n = 1 n ∑ i = 1 n Z i {\textstyle X_{n}={\frac {1}{n}}\sum _{i=1}^{n}Z_{i}} as n→∞. He found that the tails of the distribution of Xn decay exponentially as e−nλ(x) where the factor λ(x) in the exponent is the Legendre–Fenchel transform (a.k.a. the convex conjugate) of the cumulant-generating function Ψ Z ( t ) = log ⁡ E ⁡ e t Z . {\displaystyle \Psi _{Z}(t)=\log \operatorname {E} e^{tZ}.} For this reason this particular function λ(x) is sometimes called the Cramér function. The rate function defined above in this article is a broad generalization of this notion of Cramér's, defined more abstractly on a probability space, rather than the state space of a random variable. == See also == Extreme value theory Moderate deviation principle == References == Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. 
Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. xvi+396. ISBN 0-387-98406-2. MR1619036 den Hollander, Frank (2000). Large deviations. Fields Institute Monographs 14. Providence, RI: American Mathematical Society. p. x+143. ISBN 0-8218-1989-5. MR1739680 Rassoul-Agha, Firas; Seppäläinen, Timo (2015). A course on large deviations with an introduction to Gibbs measures. Graduate Studies in Mathematics 162. Providence, RI: American Mathematical Society. xiv+318. ISBN 978-0-8218-7578-0. MR3309619
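As a concrete instance of Cramér's construction described in the history section: for a standard Gaussian, Ψ(t) = t²/2, and its Legendre–Fenchel transform is λ(x) = x²/2. A crude grid-based numerical check of the transform (for illustration only; here the supremum is taken over a finite grid of t values rather than computed analytically):

```python
import numpy as np

def psi(t):
    """Cumulant-generating function of a standard Gaussian: log E[e^(tZ)] = t^2/2."""
    return t ** 2 / 2

def rate(x, ts=np.linspace(-10.0, 10.0, 200001)):
    """Legendre-Fenchel transform sup_t (t*x - psi(t)), evaluated on a grid."""
    return float(np.max(ts * x - psi(ts)))

# matches lambda(x) = x^2 / 2 for |x| well inside the grid
for x in (0.0, 1.0, 2.0):
    print(x, rate(x))
```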
Wikipedia:Radio#0
Radio is the technology of communicating using radio waves. Radio waves are electromagnetic waves of frequency between 3 hertz (Hz) and 300 gigahertz (GHz). They are generated by an electronic device called a transmitter connected to an antenna which radiates the waves. They can be received by other antennas connected to a radio receiver; this is the fundamental principle of radio communication. In addition to communication, radio is used for radar, radio navigation, remote control, remote sensing, and other applications. In radio communication, used in radio and television broadcasting, cell phones, two-way radios, wireless networking, and satellite communication, among numerous other uses, radio waves are used to carry information across space from a transmitter to a receiver, by modulating the radio signal (impressing an information signal on the radio wave by varying some aspect of the wave) in the transmitter. In radar, used to locate and track objects like aircraft, ships, spacecraft and missiles, a beam of radio waves emitted by a radar transmitter reflects off the target object, and the reflected waves reveal the object's location to a receiver that is typically colocated with the transmitter. In radio navigation systems such as GPS and VOR, a mobile navigation instrument receives radio signals from multiple navigational radio beacons whose position is known, and by precisely measuring the arrival time of the radio waves the receiver can calculate its position on Earth. In wireless radio remote control devices like drones, garage door openers, and keyless entry systems, radio signals transmitted from a controller device control the actions of a remote device. The existence of radio waves was first proven by German physicist Heinrich Hertz on 11 November 1886. 
In the mid-1890s, building on techniques physicists were using to study electromagnetic waves, Italian physicist Guglielmo Marconi developed the first apparatus for long-distance radio communication, sending a wireless Morse Code message to a recipient over a kilometer away in 1895, and the first transatlantic signal on 12 December 1901. The first commercial radio broadcast was transmitted on 2 November 1920, when the live returns of the Harding-Cox presidential election were broadcast by Westinghouse Electric and Manufacturing Company in Pittsburgh, under the call sign KDKA. The emission of radio waves is regulated by law, coordinated by the International Telecommunication Union (ITU), which allocates frequency bands in the radio spectrum for various uses. == Etymology == The word radio is derived from the Latin word radius, meaning "spoke of a wheel, beam of light, ray." It was first applied to communications in 1881 when, at the suggestion of French scientist Ernest Mercadier, Alexander Graham Bell adopted radiophone (meaning "radiated sound") as an alternate name for his photophone optical transmission system. Following Hertz's discovery of the existence of radio waves in 1886, the term Hertzian waves was initially used for this radiation. The first practical radio communication systems, developed by Marconi in 1894–1895, transmitted telegraph signals by radio waves, so radio communication was first called wireless telegraphy. Up until about 1910 the term wireless telegraphy also included a variety of other experimental systems for transmitting telegraph signals without wires, including electrostatic induction, electromagnetic induction and aquatic and earth conduction, so there was a need for a more precise term referring exclusively to electromagnetic radiation. The French physicist Édouard Branly, who in 1890 developed the radio wave detecting coherer, called it in French a radio-conducteur. 
The radio- prefix was later used to form additional descriptive compound and hyphenated words, especially in Europe. For example, in early 1898 the British publication The Practical Engineer included a reference to the radiotelegraph and radiotelegraphy. The use of radio as a standalone word dates back to at least 30 December 1904, when instructions issued by the British Post Office for transmitting telegrams specified that "The word 'Radio'... is sent in the Service Instructions." This practice was universally adopted, and the word "radio" introduced internationally, by the 1906 Berlin Radiotelegraphic Convention, which included a Service Regulation specifying that "Radiotelegrams shall show in the preamble that the service is 'Radio'". The switch to radio in place of wireless took place slowly and unevenly in the English-speaking world. Lee de Forest helped popularize the new word in the United States—in early 1907, he founded the DeForest Radio Telephone Company, and his letter in the 22 June 1907 Electrical World about the need for legal restrictions warned that "Radio chaos will certainly be the result until such stringent regulation is enforced." The United States Navy would also play a role. Although its translation of the 1906 Berlin Convention used the terms wireless telegraph and wireless telegram, by 1912 it began to promote the use of radio instead. The term started to become preferred by the general public in the 1920s with the introduction of broadcasting. == History == Electromagnetic waves were predicted by James Clerk Maxwell in his 1873 theory of electromagnetism, now called Maxwell's equations, who proposed that a coupled oscillating electric field and magnetic field could travel through space as a wave, and proposed that light consisted of electromagnetic waves of short wavelength. 
On 11 November 1886, German physicist Heinrich Hertz, attempting to confirm Maxwell's theory, first observed radio waves he generated using a primitive spark-gap transmitter. Experiments by Hertz and physicists Jagadish Chandra Bose, Oliver Lodge, Lord Rayleigh, and Augusto Righi, among others, showed that radio waves, like light, demonstrated reflection, refraction, diffraction, polarization, standing waves, and traveled at the same speed as light, confirming that both light and radio waves were electromagnetic waves, differing only in frequency. In 1895, Guglielmo Marconi developed the first radio communication system, using a spark-gap transmitter to send Morse code over long distances. By December 1901, he had transmitted across the Atlantic Ocean. Marconi and Karl Ferdinand Braun shared the 1909 Nobel Prize in Physics "for their contributions to the development of wireless telegraphy". During radio's first two decades, called the radiotelegraphy era, the primitive radio transmitters could only transmit pulses of radio waves, not the continuous waves which were needed for audio modulation, so radio was used for person-to-person commercial, diplomatic and military text messaging. Starting around 1908 industrial countries built worldwide networks of powerful transoceanic transmitters to exchange telegram traffic between continents and communicate with their colonies and naval fleets. During World War I the development of continuous wave radio transmitters, and of rectifying electrolytic and crystal radio receiver detectors, enabled amplitude modulation (AM) radiotelephony to be achieved by Reginald Fessenden and others, allowing audio to be transmitted. On 2 November 1920, the first commercial radio broadcast was transmitted by Westinghouse Electric and Manufacturing Company in Pittsburgh, under the call sign KDKA, featuring live coverage of the Harding-Cox presidential election. == Technology == Radio waves are radiated by electric charges undergoing acceleration.
They are generated artificially by time-varying electric currents, consisting of electrons flowing back and forth in a metal conductor called an antenna. As they travel farther from the transmitting antenna, radio waves spread out so their signal strength (intensity in watts per square meter) decreases (see Inverse-square law), so radio transmissions can only be received within a limited range of the transmitter, the distance depending on the transmitter power, the antenna radiation pattern, receiver sensitivity, background noise level, and presence of obstructions between transmitter and receiver. An omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. Radio waves travel at the speed of light in vacuum and at slightly lower velocity in air. The other types of electromagnetic waves besides radio waves (infrared, visible light, ultraviolet, X-rays and gamma rays) can also carry information and be used for communication. The wide use of radio waves for telecommunication is mainly due to their desirable propagation properties stemming from their longer wavelength. Radio waves have the ability to pass through the atmosphere in any weather, through foliage, and, at longer wavelengths, through most building materials. By diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength. == Radio communication == In radio communication systems, information is carried across space using radio waves. At the sending end, the information to be sent is converted by some type of transducer to a time-varying electrical signal called the modulation signal.
The modulation signal may be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal consisting of a sequence of bits representing binary data from a computer. The modulation signal is applied to a radio transmitter. In the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the carrier wave because it serves to generate the radio waves that carry the information through the air. The modulation signal is used to modulate the carrier, varying some aspect of the carrier wave, impressing the information in the modulation signal onto the carrier. Different radio systems use different modulation methods: Amplitude modulation (AM) – in an AM transmitter, the amplitude (strength) of the radio carrier wave is varied by the modulation signal; Frequency modulation (FM) – in an FM transmitter, the frequency of the radio carrier wave is varied by the modulation signal; Frequency-shift keying (FSK) – used in wireless digital devices to transmit digital signals, the frequency of the carrier wave is shifted between two frequencies which represent the binary digits 0 and 1; Orthogonal frequency-division multiplexing (OFDM) – a family of digital modulation methods widely used in high-bandwidth systems such as Wi-Fi networks, cellphones, digital television broadcasting, and digital audio broadcasting (DAB) to transmit digital data using a minimum of radio spectrum bandwidth. It has higher spectral efficiency and more resistance to fading than AM or FM. In OFDM, multiple radio carrier waves closely spaced in frequency are transmitted within the radio channel, with each carrier modulated with bits from the incoming bitstream so multiple bits are being sent simultaneously, in parallel. At the receiver, the carriers are demodulated and the bits are combined in the proper order into one bitstream. Many other types of modulation are also used.
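The first of these methods can be made concrete in a few lines of code. The sketch below is illustrative only: the 1 kHz tone, the 10 kHz carrier (far below real broadcast frequencies, chosen so the numbers stay small), the 48 kHz sample rate, and the 0.5 modulation index are all assumed values.

```python
import math

def am_modulate(message, carrier_hz, sample_rate, mod_index=0.5):
    """Amplitude modulation: the carrier's amplitude is varied by the
    modulation signal, s[n] = (1 + m * x[n]) * cos(2*pi*f_c*n/fs)."""
    return [(1 + mod_index * x) * math.cos(2 * math.pi * carrier_hz * n / sample_rate)
            for n, x in enumerate(message)]

fs = 48_000                      # samples per second (illustrative)
# 10 ms of a 1 kHz message tone
tone = [math.sin(2 * math.pi * 1_000 * n / fs) for n in range(480)]
signal = am_modulate(tone, 10_000, fs)
# The envelope of the result swings between (1 - m) and (1 + m) times
# the unmodulated carrier amplitude.
```

With a modulation index m = 0.5 the modulated signal never exceeds 1.5 times the carrier amplitude; m > 1 would cause overmodulation and distortion on demodulation.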
In some types, the carrier wave is suppressed, and only one or both modulation sidebands are transmitted. The modulated carrier is amplified in the transmitter and applied to a transmitting antenna which radiates the energy as radio waves. The radio waves carry the information to the receiver location. At the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna – a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which amplifies the weak radio signal, then demodulates it, extracting the original modulation signal from the modulated carrier wave. The modulation signal is converted by a transducer back to a human-usable form: an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. The radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter's radio waves oscillate at a different frequency, measured in hertz (Hz), kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The receiving antenna typically picks up the radio signals of many transmitters. The receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. A tuned circuit acts like a resonator, similar to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency of the receiver's tuned circuit is adjusted by the user to the frequency of the desired radio station; this is called tuning. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.
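The behavior of the tuned circuit described above follows from its inductance L and capacitance C. Below is a minimal sketch of the standard resonance formula for an ideal LC circuit; the component values are illustrative, chosen so the circuit tunes across part of the medium-wave AM band.

```python
import math

def resonant_frequency(l_henries, c_farads):
    """Natural frequency of an ideal LC tuned circuit:
    f = 1 / (2 * pi * sqrt(L * C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# Illustrative values: a 100 uH coil with a 100 pF - 300 pF variable
# capacitor. Increasing C lowers the resonant frequency, which is how
# turning a tuning knob selects a station.
f_high = resonant_frequency(100e-6, 100e-12)   # about 1592 kHz
f_low = resonant_frequency(100e-6, 300e-12)    # about 919 kHz
print(f"{f_low / 1e3:.0f} kHz to {f_high / 1e3:.0f} kHz")
```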
=== Bandwidth === A modulated radio wave, carrying an information signal, occupies a range of frequencies. The information in a radio signal is usually concentrated in narrow frequency bands called sidebands (SB) just above and below the carrier frequency. The width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth (BW). For any given signal-to-noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located; bandwidth is a measure of information-carrying capacity. The bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used: how much data it can transmit in each unit of bandwidth. Different types of information signals carried by radio have different data rates. For example, a television signal has a greater data rate than an audio signal. The radio spectrum, the total range of radio frequencies that can be used for communication in a given area, is a limited resource. Each radio transmission occupies a portion of the total bandwidth available. Radio bandwidth is regarded as an economic good which has a monetary cost and is in increasing demand. In some parts of the radio spectrum, the right to use a frequency band or even a single radio channel is bought and sold for millions of dollars. So there is an incentive to employ technology to minimize the bandwidth used by radio services. A slow transition from analog to digital radio transmission technologies began in the late 1990s. Part of the reason for this is that digital modulation can often transmit more information (a greater data rate) in a given bandwidth than analog modulation, by using data compression algorithms, which reduce redundancy in the data to be sent, and more efficient modulation.
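The statement above that a given bandwidth carries the same amount of information at any point in the spectrum, for a given signal-to-noise ratio, is quantified by the Shannon–Hartley theorem. A quick illustration follows; the 10 kHz channel width and 30 dB signal-to-noise ratio are arbitrary example values.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley theorem: the maximum error-free data rate (bit/s)
    of a channel is C = B * log2(1 + S/N). Note the formula contains the
    bandwidth B and the signal-to-noise ratio S/N, but not the channel's
    position in the spectrum."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 10 kHz channel at 30 dB SNR (S/N = 1000), whether centered at
# 1 MHz or at 1 GHz:
c = shannon_capacity(10_000, 1000)
print(f"about {c / 1000:.1f} kbit/s")
```

Doubling the bandwidth doubles the capacity, while the same gain via signal power would require squaring the signal-to-noise ratio, which is why spectrum is such a valuable resource.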
Other reasons for the transition are that digital modulation has greater noise immunity than analog, digital signal processing chips have more power and flexibility than analog circuits, and a wide variety of types of information can be transmitted using the same digital modulation. Because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to use it more effectively is driving many additional radio innovations such as trunked radio systems, spread spectrum (ultra-wideband) transmission, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio. === ITU frequency bands === The ITU arbitrarily divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten (10^n) metres, with corresponding frequency of 3 times a power of ten, and each covering a decade of frequency or wavelength. Each of these bands has a traditional name. The bandwidth, the range of frequencies, contained in each band is not equal but increases exponentially as the frequency increases; each band contains ten times the bandwidth of the preceding band. The term "tremendously low frequency" (TLF) has been used for frequencies of 1–3 Hz (wavelengths of 300,000–100,000 km), though the term has not been defined by the ITU. == Regulation == The airwaves are a resource shared by many users. Two radio transmitters in the same area that attempt to transmit on the same frequency will interfere with each other, causing garbled reception, so neither transmission may be received clearly. Interference with radio transmissions can not only have a large economic cost, but it can also be life-threatening (for example, in the case of interference with emergency communications or air traffic control).
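The power-of-ten band scheme described in the ITU frequency bands section above can be generated directly from the rule. The band names below are the standard ITU designations, supplied here because the table they belong to did not survive in this text.

```python
# Standard ITU band names, lowest to highest frequency.
NAMES = ["ELF", "SLF", "ULF", "VLF", "LF", "MF",
         "HF", "VHF", "UHF", "SHF", "EHF", "THF"]

def itu_bands():
    """ITU band n (1-12) spans 3*10^(n-1) Hz to 3*10^n Hz, so each band
    is one decade wide and holds ten times the bandwidth of the band
    below it."""
    return [(name, 3 * 10**n, 3 * 10**(n + 1)) for n, name in enumerate(NAMES)]

for name, lo, hi in itu_bands():
    print(f"{name:>4}: {lo:.0e} Hz - {hi:.0e} Hz (bandwidth {hi - lo:.1e} Hz)")
```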
To prevent interference between different users, the emission of radio waves is strictly regulated by national laws, coordinated by an international body, the International Telecommunication Union (ITU), which allocates bands in the radio spectrum for different uses. Radio transmitters must be licensed by governments, under a variety of license classes depending on use, and are restricted to certain frequencies and power levels. In some classes, such as radio and television broadcasting stations, the transmitter is given a unique identifier consisting of a string of letters and numbers called a call sign, which must be used in all transmissions. In order to adjust, maintain, or internally repair radiotelephone transmitters, individuals must hold a government license, such as the general radiotelephone operator license in the US, obtained by taking a test demonstrating adequate technical and legal knowledge of safe radio operation. Exceptions to the above rules allow the unlicensed operation by the public of low power short-range transmitters in consumer products such as cell phones, cordless phones, wireless devices, walkie-talkies, citizens band radios, wireless microphones, garage door openers, and baby monitors. In the US, these fall under Part 15 of the Federal Communications Commission (FCC) regulations. Many of these devices use the ISM bands, a series of frequency bands throughout the radio spectrum reserved for unlicensed use. Although they can be operated without a license, like all radio equipment these devices generally must be type-approved before sale. == Applications == Below are some of the most important uses of radio, organized by function. === Broadcasting === Broadcasting is the one-way transmission of information from a transmitter to receivers belonging to a public audience. Since the radio waves become weaker with distance, a broadcasting station can only be received within a limited distance of its transmitter.
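This weakening with distance is the inverse-square law mentioned earlier: radiated power spreads over a sphere of area 4πd². Here is an illustrative calculation for an ideal isotropic radiator in free space; the 100 W power and the distances are arbitrary, and real antennas have gain while real paths have obstructions and noise.

```python
import math

def power_flux_density(p_watts, d_meters):
    """Power flux density (W/m^2) of an isotropic radiator at distance d:
    the transmitted power spread over a sphere of area 4*pi*d^2."""
    return p_watts / (4 * math.pi * d_meters**2)

# Hypothetical 100 W transmitter: doubling the distance quarters the
# flux density.
s1 = power_flux_density(100, 1_000)   # at 1 km
s2 = power_flux_density(100, 2_000)   # at 2 km
print(f"{s1:.2e} W/m^2 at 1 km, {s2:.2e} W/m^2 at 2 km, ratio {s1 / s2:.1f}")
```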
Systems that broadcast from satellites can generally be received over an entire country or continent. Older terrestrial radio and television are paid for by commercial advertising or governments. In subscription systems like satellite television and satellite radio the customer pays a monthly fee. In these systems, the radio signal is encrypted and can only be decrypted by the receiver, which is controlled by the company and can be deactivated if the customer does not pay. Broadcasting uses several parts of the radio spectrum, depending on the type of signals transmitted and the desired target audience. Longwave and medium wave signals can give reliable coverage of areas several hundred kilometers across, but have a more limited information-carrying capacity and so work best with audio signals (speech and music), and the sound quality can be degraded by radio noise from natural and artificial sources. The shortwave bands have a greater potential range but are more subject to interference by distant stations and varying atmospheric conditions that affect reception. In the very high frequency band, greater than 30 megahertz, the Earth's atmosphere has less of an effect on the range of signals, and line-of-sight propagation becomes the principal mode. These higher frequencies permit the great bandwidth required for television broadcasting. Since natural and artificial noise sources are less present at these frequencies, high-quality audio transmission is possible, using frequency modulation. ==== Audio: Radio broadcasting ==== Radio broadcasting means transmission of audio (sound) to radio receivers belonging to a public audience. Analog audio is the earliest form of radio broadcast. AM broadcasting began around 1920. FM broadcasting was introduced in the late 1930s with improved fidelity. A broadcast radio receiver is called a radio. Most radios can receive both AM and FM. 
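For the line-of-sight propagation described above, the reception range of a VHF station can be estimated from antenna height using the conventional effective-Earth-radius approximation (k = 4/3, which accounts for slight atmospheric refraction). The 150 m mast height below is an arbitrary example; a raised receiving antenna extends the range further.

```python
import math

def horizon_km(antenna_height_m, k=4/3):
    """Approximate radio horizon distance. Geometrically d = sqrt(2*R*h);
    refraction is conventionally modeled by an effective Earth radius
    k*R with k = 4/3."""
    r_eff = 6371e3 * k                 # mean Earth radius times k, in m
    return math.sqrt(2 * r_eff * antenna_height_m) / 1e3

# Illustrative 150 m transmitter mast:
d = horizon_km(150)
print(f"about {d:.0f} km ({d / 1.609:.0f} miles)")
```

The result, on the order of 50 km (about 31 miles), is consistent with the 30–40 mile reception range typically quoted for FM broadcasting.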
AM (amplitude modulation) – in AM, the amplitude (strength) of the radio carrier wave is varied by the audio signal. AM broadcasting, the oldest broadcasting technology, is allowed in the AM broadcast bands, between 148 and 283 kHz in the low frequency (LF) band for longwave broadcasts and between 526 and 1706 kHz in the medium frequency (MF) band for medium-wave broadcasts. Because waves in these bands travel as ground waves following the terrain, AM radio stations can be received beyond the horizon at hundreds of miles distance, but AM has lower fidelity than FM. Radiated power (ERP) of AM stations in the US is usually limited to a maximum of 10 kW, although a few (clear-channel stations) are allowed to transmit at 50 kW. AM stations broadcast in monaural audio; AM stereo broadcast standards exist in most countries, but the radio industry has failed to upgrade to them, due to lack of demand. Shortwave broadcasting – AM broadcasting is also allowed in the shortwave bands by legacy radio stations. Since radio waves in these bands can travel intercontinental distances by reflecting off the ionosphere using skywave or "skip" propagation, shortwave is used by international stations, broadcasting to other countries. FM (frequency modulation) – in FM the frequency of the radio carrier signal is varied slightly by the audio signal. FM broadcasting is permitted in the FM broadcast bands between about 65 and 108 MHz in the very high frequency (VHF) range. Radio waves in this band travel by line-of-sight so FM reception is limited by the visual horizon to about 30–40 miles (48–64 km), and can be blocked by hills. However it is less susceptible to interference from radio noise (RFI, sferics, static), and has higher fidelity, better frequency response, and less audio distortion than AM. In the US, radiated power (ERP) of FM stations varies from 6–100 kW. Digital radio involves a variety of standards and technologies for broadcasting digital radio signals over the air. 
Some systems, such as HD Radio and DRM, operate in the same wavebands as analog broadcasts, either as a replacement for analog stations or as a complementary service. Others, such as DAB/DAB+ and ISDB-Tsb, operate in wavebands traditionally used for television or satellite services. Digital Audio Broadcasting (DAB) debuted in some countries in 1998. It transmits audio as a digital signal rather than an analog signal as AM and FM do. DAB has the potential to provide higher quality sound than FM (although many stations do not choose to transmit at such high quality), has greater immunity to radio noise and interference, makes better use of scarce radio spectrum bandwidth and provides advanced user features such as electronic program guides. Its disadvantage is that it is incompatible with previous radios so that a new DAB receiver must be purchased. Several nations have set dates to switch off analog FM networks in favor of DAB / DAB+, notably Norway in 2017 and Switzerland in 2024. A single DAB station transmits a 1,500 kHz bandwidth signal that carries from 9–12 channels of digital audio modulated by OFDM from which the listener can choose. Broadcasters can transmit a channel at a range of different bit rates, so different channels can have different audio quality. In different countries DAB stations broadcast in either Band III (174–240 MHz) in the VHF range or the L band (1.452–1.492 GHz) in the UHF range, so, like FM, reception is limited by the visual horizon to about 40 miles (64 km). HD Radio is an alternative digital radio standard widely implemented in North America. An in-band on-channel technology, HD Radio broadcasts a digital signal in a subcarrier of a station's analog FM or AM signal. Stations are able to multicast more than one audio signal in the subcarrier, supporting the transmission of multiple audio services at varying bitrates. The digital signal is transmitted using OFDM with the HDC (High-Definition Coding) proprietary audio compression format.
HDC is based on, but not compatible with, the MPEG-4 standard HE-AAC. It uses a modified discrete cosine transform (MDCT) audio data compression algorithm. Digital Radio Mondiale (DRM) is a competing digital terrestrial radio standard developed mainly by broadcasters as a higher spectral efficiency replacement for legacy AM and FM broadcasting. Mondiale means "worldwide" in French and Italian; DRM was developed in 2001, and is currently supported by 23 countries, and adopted by some European and Eastern broadcasters beginning in 2003. The DRM30 mode uses the commercial broadcast bands below 30 MHz, and is intended as a replacement for standard AM broadcast on the longwave, mediumwave, and shortwave bands. The DRM+ mode uses VHF frequencies centered around the FM broadcast band, and is intended as a replacement for FM broadcasting. It is incompatible with existing radio receivers, so it requires listeners to purchase a new DRM receiver. The modulation used is a form of OFDM called COFDM, in which up to 4 carriers are transmitted on a channel formerly occupied by a single AM or FM signal, modulated by quadrature amplitude modulation (QAM). The DRM system is designed to be as compatible as possible with existing AM and FM radio transmitters, so that much of the equipment in existing radio stations can continue in use, augmented with DRM modulation equipment. Satellite radio is a subscription radio service that broadcasts CD quality digital audio direct to subscribers' receivers using a microwave downlink signal from a direct broadcast communication satellite in geostationary orbit 22,000 miles (35,000 km) above the Earth. It is mostly intended for radios in vehicles. Satellite radio uses the 2.3 GHz S band in North America; in other parts of the world it uses the 1.4 GHz L band allocated for DAB. ==== Audio/video: Television broadcasting ==== Television broadcasting is the transmission of moving images along with a synchronized audio (sound) channel by radio.
The sequence of still images is displayed on a screen on a television receiver (a "television" or TV), which includes a loudspeaker. Television (video) signals occupy a wider bandwidth than broadcast radio (audio) signals. Analog television, the original television technology, required 6 MHz, so the television frequency bands are divided into 6 MHz channels, now called "RF channels". The current television standard, introduced beginning in 2006, is a digital format called high-definition television (HDTV), which transmits pictures at higher resolution, typically 1080 pixels high by 1920 pixels wide, at a rate of 25 or 30 frames per second. Digital television (DTV) transmission systems, which replaced older analog television in a transition beginning in 2006, use image compression and high-efficiency digital modulation such as OFDM and 8VSB to transmit HDTV video within a smaller bandwidth than the old analog channels, saving scarce radio spectrum space. Therefore, each of the 6 MHz analog RF channels now carries up to 7 DTV channels – these are called "virtual channels". Digital television receivers have different behavior in the presence of poor reception or noise than analog television, called the "digital cliff" effect. Unlike analog television, in which increasingly poor reception causes the picture quality to gradually degrade, in digital television picture quality is not affected by poor reception until, at a certain point, the receiver stops working and the screen goes black. Terrestrial television, over-the-air (OTA) television, or broadcast television – the oldest television technology, is the transmission of television signals from land-based television stations to television receivers (called televisions or TVs) in viewer's homes. 
Terrestrial television broadcasting uses the bands 41–88 MHz (VHF low band or Band I, carrying RF channels 1–6), 174–240 MHz (VHF high band or Band III, carrying RF channels 7–13), and 470–614 MHz (UHF Band IV and Band V, carrying RF channels 14 and up). The exact frequency boundaries vary in different countries. Propagation is by line-of-sight, so reception is limited by the visual horizon. In the US, the effective radiated power (ERP) of television transmitters is regulated according to height above average terrain. Viewers closer to the television transmitter can use a simple "rabbit ears" dipole antenna on top of the TV, but viewers in fringe reception areas typically require an outdoor antenna mounted on the roof to get adequate reception. Satellite television – a set-top box which receives subscription direct-broadcast satellite television, and displays it on an ordinary television. A direct broadcast satellite in geostationary orbit 22,200 miles (35,700 km) above the Earth's equator transmits many channels (up to 900) modulated on a 12.2 to 12.7 GHz Ku band microwave downlink signal to a rooftop satellite dish antenna on the subscriber's residence. The microwave signal is converted to a lower intermediate frequency at the dish and conducted into the building by a coaxial cable to a set-top box connected to the subscriber's TV, where it is demodulated and displayed. The subscriber pays a monthly fee. ==== Time and frequency ==== Government standard frequency and time signal services operate time radio stations which continuously broadcast extremely accurate time signals produced by atomic clocks, as a reference to synchronize other clocks. Examples are BPC, DCF77, JJY, MSF, RTZ, TDF, WWV, and YVTO.
One use is in radio clocks and watches, which include an automated receiver that periodically (usually weekly) receives and decodes the time signal and resets the watch's internal quartz clock to the correct time, thus allowing a small watch or desk clock to have the same accuracy as an atomic clock. Government time stations are declining in number because GPS satellites and the Internet Network Time Protocol (NTP) provide equally accurate time standards. === Voice communication === ==== Two-way voice communication ==== A two-way radio is an audio transceiver, a receiver and transmitter in the same device, used for bidirectional person-to-person voice communication with other users with similar radios. An older term for this mode of communication is radiotelephony. The radio link may be half-duplex, as in a walkie-talkie, using a single radio channel in which only one radio can transmit at a time, so different users take turns talking, pressing a "push to talk" button on their radio which switches off the receiver and switches on the transmitter. Or the radio link may be full duplex, a bidirectional link using two radio channels so both people can talk at the same time, as in a cell phone. Cell phone – a portable wireless telephone that is connected to the telephone network by radio signals exchanged with a local antenna at a cellular base station (cell tower). The service area covered by the provider is divided into small geographical areas called "cells", each served by a separate base station antenna and multichannel transceiver. All the cell phones in a cell communicate with this antenna on separate frequency channels, assigned from a common pool of frequencies. The purpose of cellular organization is to conserve radio bandwidth by frequency reuse. Low power transmitters are used so the radio waves used in a cell do not travel far beyond the cell, allowing the same frequencies to be reused in geographically separated cells. 
When a user carrying a cellphone crosses from one cell to another, their phone is automatically "handed off" seamlessly to the new antenna and assigned new frequencies. Cellphones have a highly automated full-duplex digital transceiver using OFDM modulation, with two digital radio channels, each carrying one direction of the bidirectional conversation, as well as a control channel that handles dialing calls and "handing off" the phone to another cell tower. Older 2G, 3G, and 4G networks use frequencies in the UHF and low microwave range, between 700 MHz and 3 GHz. The cell phone transmitter adjusts its power output to use the minimum power necessary to communicate with the cell tower; 0.6 W when near the tower, up to 3 W when farther away. Cell tower channel transmitter power is 50 W. Current generation phones, called smartphones, have many functions besides making telephone calls, and therefore have several other radio transmitters and receivers that connect them with other networks: usually a Wi-Fi modem, a Bluetooth modem, and a GPS receiver. 5G cellular network – next-generation cellular networks which began deployment in 2019. Their major advantage is much higher data rates than previous cellular networks, up to 10 Gbps; 100 times faster than the previous cellular technology, 4G LTE. The higher data rates are achieved partly by using higher frequency radio waves, in the higher microwave band 3–6 GHz, and millimeter wave band, around 28 and 39 GHz. Since these frequencies have a shorter range than previous cellphone bands, the cells will be smaller than the cells in previous cellular networks which could be many miles across. Millimeter-wave cells will only be a few blocks long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings.
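The "millimeter wave" label used above follows directly from the wavelength relation λ = c/f; a quick check for the cellular frequencies mentioned:

```python
def wavelength_mm(freq_hz, c=299_792_458.0):
    """Free-space wavelength in millimetres: lambda = c / f."""
    return c / freq_hz * 1e3

# The 28 and 39 GHz 5G bands have roughly centimetre- to
# millimetre-scale wavelengths, hence "millimeter wave"; the lower
# cellular bands have wavelengths of tens of centimetres.
for f_ghz in (0.7, 3.0, 28.0, 39.0):
    print(f"{f_ghz:5.1f} GHz -> {wavelength_mm(f_ghz * 1e9):8.1f} mm")
```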
Satellite phone (satphone) – a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. They are more expensive than cell phones; but their advantage is that, unlike a cell phone which is limited to areas covered by cell towers, satphones can be used over most or all of the geographical area of the Earth. In order for the phone to communicate with a satellite using a small omnidirectional antenna, first-generation systems use satellites in low Earth orbit, about 400–700 miles (640–1,100 km) above the surface. With an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4 – 15 minutes, so the call is "handed off" to another satellite when one passes beyond the local horizon. Therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on Earth. Other satphone systems use satellites in geostationary orbit in which only a few satellites are needed, but these cannot be used at high latitudes because of terrestrial interference. Cordless phone – a landline telephone in which the handset is portable and communicates with the rest of the phone by a short-range full duplex radio link, instead of being attached by a cord. Both the handset and the base station have low-power radio transceivers that handle the short-range bidirectional radio link. As of 2022, cordless phones in most nations use the DECT transmission standard. Land mobile radio system – short-range mobile or portable half-duplex radio transceivers operating in the VHF or UHF band that can be used without a license. They are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. 
Special systems with reserved frequencies are used by first responder services: police, fire, ambulance, and emergency services, and by other government services. Other systems are made for use by commercial firms such as taxi and delivery services. VHF systems use channels in the range 30–50 MHz and 150–172 MHz. UHF systems use the 450–470 MHz band and in some areas the 470–512 MHz range. In general, VHF systems have a longer range than UHF but require longer antennas. AM or FM modulation is mainly used, but digital systems such as DMR are being introduced. The radiated power is typically limited to 4 watts. These systems have a fairly limited range, usually 3 to 20 miles (4.8 to 32 km) depending on terrain. Repeaters installed on tall buildings, hills, or mountain peaks are often used to increase the range when it is desired to cover a larger area than line-of-sight. Examples of land mobile systems are CB, FRS, GMRS, and MURS. Modern digital systems, called trunked radio systems, have a digital channel management system using a control channel that automatically assigns frequency channels to user groups. Walkie-talkie – a battery-powered portable handheld half-duplex two-way radio, used in land mobile radio systems. Airband – Half-duplex radio system used by aircraft pilots to talk to other aircraft and ground-based air traffic controllers. This vital system is the main communication channel for air traffic control. For most communication in overland flights in air corridors a VHF-AM system using channels between 108 and 137 MHz in the VHF band is used. This system has a typical transmission range of 200 miles (320 km) for aircraft flying at cruising altitude. For flights in more remote areas, such as transoceanic airline flights, aircraft use the HF band or channels on the Inmarsat or Iridium satphone satellites. Military aircraft also use a dedicated UHF-AM band from 225.0 to 399.95 MHz.
Marine radio – medium-range transceivers on ships, used for ship-to-ship, ship-to-air, and ship-to-shore communication with harbormasters. They use FM channels between 156 and 174 MHz in the VHF band with up to 25 watts power, giving them a range of about 60 miles (97 km). Some channels are half-duplex and some are full-duplex, compatible with the telephone network, allowing users to make telephone calls through a marine operator.

Amateur radio – long-range half-duplex two-way radio used by hobbyists for non-commercial purposes: recreational radio contacts with other amateurs, volunteer emergency communication during disasters, contests, and experimentation. Radio amateurs must hold an amateur radio license and are given a unique callsign that must be used as an identifier in transmissions. Amateur radio is restricted to small frequency bands, the amateur radio bands, spaced throughout the radio spectrum starting at 136 kHz. Within these bands, amateurs are allowed the freedom to transmit on any frequency using a wide variety of voice modulation methods, along with other forms of communication, such as slow-scan television (SSTV) and radioteletype (RTTY). Additionally, amateurs are among the only radio operators still using Morse code radiotelegraphy.

==== One-way voice communication ====
One-way, unidirectional radio transmission is called simplex.

Baby monitor – a crib-side appliance for parents of infants that transmits the baby's sounds to a receiver carried by the parent, so they can monitor the baby while in other parts of the house. The wavebands used vary by region, but analog baby monitors generally transmit with low power in the 49.3–49.9 MHz or 900 MHz wavebands, and digital systems in the 2.4 GHz waveband. Many baby monitors have duplex channels so the parent can talk to the baby, and cameras to show video of the baby.
Wireless microphone – a battery-powered microphone with a short-range transmitter that is handheld or worn on a person's body which transmits its sound by radio to a nearby receiver unit connected to a sound system. Wireless microphones are used by public speakers, performers, and television personalities so they can move freely without trailing a microphone cord. Traditionally, analog models transmit in FM on unused portions of the television broadcast frequencies in the VHF and UHF bands. Some models transmit on two frequency channels for diversity reception to prevent nulls from interrupting transmission as the performer moves around. Some models use digital modulation to prevent unauthorized reception by scanner radio receivers; these operate in the 900 MHz, 2.4 GHz or 6 GHz ISM bands. European standards also support wireless multichannel audio systems (WMAS) that can better support the use of large numbers of wireless microphones at a single event or venue. As of 2021, U.S. regulators were considering adopting rules for WMAS.

=== Data communication ===
Wireless networking – automated radio links which transmit digital data between computers and other wireless devices using radio waves, linking the devices together transparently in a computer network. Computer networks can transmit any form of data: in addition to email and web pages, they also carry phone calls (VoIP), audio, and video content (called streaming media). Security is more of an issue for wireless networks than for wired networks since anyone nearby with a wireless modem can access the signal and attempt to log in. The radio signals of wireless networks are encrypted using WPA.
Wireless LAN (wireless local area network or Wi-Fi) – based on the IEEE 802.11 standards, these are the most widely used computer networks, used to implement local area networks without cables, linking computers, laptops, cell phones, video game consoles, smart TVs and printers in a home or office together, and to a wireless router connecting them to the Internet with a wire or cable connection. Wireless routers in public places like libraries, hotels and coffee shops create wireless access points (hotspots) to allow the public to access the Internet with portable devices like smartphones, tablets or laptops. Each device exchanges data using a wireless modem (wireless network interface controller), an automated microwave transmitter and receiver with an omnidirectional antenna that works in the background, exchanging data packets with the router. Wi-Fi uses channels in the 2.4 GHz and 5 GHz ISM bands with OFDM (orthogonal frequency-division multiplexing) modulation to transmit data at high rates. The transmitters in Wi-Fi modems are limited to a radiated power of 200 mW to 1 watt, depending on country. They have a maximum indoor range of about 150 ft (50 m) on 2.4 GHz and 50 ft (20 m) on 5 GHz.

Wireless WAN (wireless wide area network, WWAN) – a variety of technologies that provide wireless internet access over a wider area than Wi-Fi networks do – from an office building to a campus to a neighborhood, or to an entire city. The most common technologies used are: cellular modems, that exchange computer data by radio with cell towers; satellite internet access; and lower frequencies in the UHF band, which have a longer range than Wi-Fi frequencies. Since WWAN networks are much more expensive and complicated to administer than Wi-Fi networks, their use so far has generally been limited to private networks operated by large corporations.
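One reason Wi-Fi range is shorter on 5 GHz than on 2.4 GHz (in addition to greater wall attenuation) is free-space path loss, which grows with frequency. A quick sketch of the standard formula, with illustrative values:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Over the same 50 m path, 5 GHz loses about 6.4 dB more than 2.4 GHz,
# i.e. the received signal is weaker by a factor of ~4.4.
extra_loss = fspl_db(50, 5.0e9) - fspl_db(50, 2.4e9)
print(round(extra_loss, 1))
```

The frequency-dependent term is just 20·log10(f2/f1), so the extra loss is the same at any distance.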
Bluetooth – a very short-range wireless interface on a portable wireless device used as a substitute for a wire or cable connection, mainly to exchange files between portable devices and connect cellphones and music players with wireless headphones. In the most widely used mode, transmission power is limited to 1 milliwatt, giving it a very short range of up to 10 m (30 feet). The system uses frequency-hopping spread spectrum transmission, in which successive data packets are transmitted in a pseudorandom order on one of 79 1 MHz Bluetooth channels between 2.402 and 2.480 GHz in the ISM band. This allows Bluetooth networks to operate in the presence of noise, other wireless devices and other Bluetooth networks using the same frequencies, since the chance of another device attempting to transmit on the same frequency at the same time as the Bluetooth modem is low. In the case of such a "collision", the Bluetooth modem just retransmits the data packet on another frequency.

Packet radio – a long-distance peer-to-peer wireless ad-hoc network in which data packets are exchanged between computer-controlled radio modems (transmitter/receivers) called nodes, which may be separated by miles and may be mobile. Each node only communicates with neighboring nodes, so packets of data are passed from node to node until they reach their destination using the X.25 network protocol. Packet radio systems are used to a limited degree by commercial telecommunications companies and by the amateur radio community.

Text messaging (texting) – this is a service on cell phones, allowing a user to type a short alphanumeric message and send it to another phone number, and the text is displayed on the recipient's phone screen. It is based on the Short Message Service (SMS), which transmits using spare bandwidth on the control radio channel used by cell phones to handle background functions like dialing and cell handoffs.
Due to technical limitations of the channel, text messages are limited to 160 alphanumeric characters.

Microwave relay – a long-distance high-bandwidth point-to-point digital data transmission link consisting of a microwave transmitter connected to a dish antenna that transmits a beam of microwaves to another dish antenna and receiver. Since the antennas must be in line-of-sight, distances are limited by the visual horizon to 30–40 miles (48–64 km). Microwave links are used for private business data, wide area computer networks (WANs), and by telephone companies to transmit long-distance phone calls and television signals between cities.

Telemetry – automated one-way (simplex) transmission of measurements and operation data from a remote process or device to a receiver for monitoring. Telemetry is used for in-flight monitoring of missiles, drones, satellites, and weather balloon radiosondes, sending scientific data back to Earth from interplanetary spacecraft, communicating with electronic biomedical sensors implanted in the human body, and well logging. Multiple channels of data are often transmitted using frequency-division multiplexing or time-division multiplexing. Telemetry is starting to be used in consumer applications such as:

Automated meter reading – electric power meters, water meters, and gas meters that, when triggered by an interrogation signal, transmit their readings by radio to a utility reader vehicle at the curb, to eliminate the need for an employee to go on the customer's property to manually read the meter.

Electronic toll collection – on toll roads, an alternative to manual collection of tolls at a toll booth, in which a transponder in a vehicle, when triggered by a roadside transmitter, transmits a signal to a roadside receiver to register the vehicle's use of the road, enabling the owner to be billed for the toll.
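The 30–40 mile line-of-sight limit quoted for microwave relay links above can be estimated from the radio-horizon approximation d ≈ 4.12·√h (distance in km for antenna height h in metres, including standard atmospheric refraction); a sketch:

```python
import math

def radio_horizon_km(height_m: float) -> float:
    """Approximate radio horizon with standard refraction: d ≈ 4.12*sqrt(h)."""
    return 4.12 * math.sqrt(height_m)

def max_link_km(h_tx_m: float, h_rx_m: float) -> float:
    # The two dishes must see each other, so the longest path is
    # the sum of the two individual horizon distances.
    return radio_horizon_km(h_tx_m) + radio_horizon_km(h_rx_m)

# Two 50 m towers: roughly 58 km (~36 miles), within the 30-40 mile range above.
print(round(max_link_km(50, 50), 1))
```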
Radio Frequency Identification (RFID) – identification tags containing a tiny radio transponder (receiver and transmitter) which are attached to merchandise. When it receives an interrogation pulse of radio waves from a nearby reader unit, the tag transmits back an ID number, which can be used to inventory goods. Passive tags, the most common type, have a chip powered by the radio energy received from the reader, rectified by a diode, and can be as small as a grain of rice. They are incorporated in products, clothes, railroad cars, library books, airline baggage tags and are implanted under the skin in pets and livestock (microchip implant) and even people. Privacy concerns have been addressed with tags that use encrypted signals and authenticate the reader before responding. Passive tags use the 125–134 kHz, 13.56 MHz, 900 MHz, and 2.4 and 5 GHz ISM bands and have a short range. Active tags, powered by a battery, are larger but can transmit a stronger signal, giving them a range of hundreds of meters.

Submarine communication – When submerged, submarines are cut off from all ordinary radio communication with their military command authorities by the conductive seawater. However, radio waves of low enough frequencies, in the VLF (3 to 30 kHz) and ELF (below 3 kHz) bands, are able to penetrate seawater. Navies operate large shore transmitting stations with power output in the megawatt range to transmit encrypted messages to their submarines in the world's oceans. Due to the small bandwidth, these systems cannot transmit voice, only text messages at a slow data rate. The communication channel is one-way, since the long antennas needed to transmit VLF or ELF waves cannot fit on a submarine. VLF transmitters use miles-long wire antennas like umbrella antennas. A few nations use ELF transmitters operating around 80 Hz, which can communicate with submarines at lower depths.
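Why ELF reaches deeper than VLF follows from the skin depth of seawater, δ = 1/√(π f μ σ): attenuation falls as frequency falls. A quick check, assuming a typical seawater conductivity of 4 S/m:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
SIGMA_SEAWATER = 4.0      # typical seawater conductivity, S/m (assumed value)

def skin_depth_m(freq_hz: float) -> float:
    """Depth at which the field amplitude falls to 1/e: delta = 1/sqrt(pi*f*mu*sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * SIGMA_SEAWATER)

# ELF penetrates far deeper than VLF: ~28 m per 1/e attenuation at 80 Hz,
# versus less than 2 m at a 20 kHz VLF frequency.
print(round(skin_depth_m(80), 1), round(skin_depth_m(20e3), 2))
```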
These ELF transmitters use even larger antennas called ground dipoles, consisting of two ground (Earth) connections 23–60 km (14–37 miles) apart, linked by overhead transmission lines to a power plant transmitter.

=== Space communication ===
This is radio communication between a spacecraft and an Earth-based ground station, or another spacecraft. Communication with spacecraft involves the longest transmission distances of any radio links, up to billions of kilometers for interplanetary spacecraft. In order to receive the weak signals from distant spacecraft, satellite ground stations use large parabolic "dish" antennas up to 25 metres (82 ft) in diameter and extremely sensitive receivers. High frequencies in the microwave band are used, since microwaves pass through the ionosphere without refraction, and at microwave frequencies the high-gain antennas needed to focus the radio energy into a narrow beam pointed at the receiver are small and take up a minimum of space in a satellite. Portions of the UHF, L, C, S, Ku, and Ka bands are allocated for space communication. A radio link that transmits data from the Earth's surface to a spacecraft is called an uplink, while a link that transmits data from the spacecraft to the ground is called a downlink.

Communication satellite – an artificial satellite used as a telecommunications relay to transmit data between widely separated points on Earth. These are used because the microwaves used for telecommunications travel by line of sight and so cannot propagate around the curve of the Earth. As of 1 January 2021, there were 2,224 communications satellites in Earth orbit. Most are in geostationary orbit 22,200 miles (35,700 km) above the equator, so that the satellite appears stationary at the same point in the sky, so the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track it.
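Both the roughly 100-minute period of the low-Earth-orbit satphone satellites mentioned earlier and the geostationary altitude quoted here follow from Kepler's third law. A quick check, assuming standard values for Earth's gravitational parameter and equatorial radius:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6          # Earth's equatorial radius, m
SIDEREAL_DAY_S = 86164.1   # one rotation of the Earth, s

def period_minutes(altitude_m: float) -> float:
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

def geostationary_altitude_km() -> float:
    """Invert Kepler's law for the orbit whose period matches Earth's rotation."""
    a = (MU_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
    return (a - R_EARTH) / 1000

print(round(period_minutes(700e3), 1))     # ~98.8 min at 700 km altitude
print(round(geostationary_altitude_km()))  # ~35786 km, the figure quoted above
```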
In a satellite ground station a microwave transmitter and large satellite dish antenna transmit a microwave uplink beam to the satellite. The uplink signal carries many channels of telecommunications traffic, such as long-distance telephone calls, television programs, and internet signals, using a technique called frequency-division multiplexing (FDM). On the satellite, a transponder receives the signal, translates it to a different downlink frequency to avoid interfering with the uplink signal, and retransmits it down to another ground station, which may be widely separated from the first. There the downlink signal is demodulated and the telecommunications traffic it carries is sent to its local destinations through landlines. Communication satellites typically have several dozen transponders on different frequencies, which are leased by different users.

Direct broadcast satellite – a geostationary communication satellite that transmits retail programming directly to receivers in subscribers' homes and vehicles on Earth, in satellite radio and TV systems. It uses a higher transmitter power than other communication satellites, to allow the signal to be received by consumers with a small unobtrusive antenna. For example, satellite television uses downlink frequencies from 12.2 to 12.7 GHz in the Ku band transmitted at 100 to 250 watts, which can be received by relatively small 43–80 cm (17–31 in) satellite dishes mounted on the outside of buildings.

=== Other applications ===

==== Radar ====
Radar is a radiolocation method used to locate and track aircraft, spacecraft, missiles, ships, vehicles, and also to map weather patterns and terrain. A radar set consists of a transmitter and receiver. The transmitter emits a narrow beam of radio waves which is swept around the surrounding space. When the beam strikes a target object, radio waves are reflected back to the receiver. The direction of the beam reveals the object's location.
Since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received "echo", the range to the target can be calculated. The targets are often displayed graphically on a map display called a radar screen. Doppler radar can measure a moving object's velocity, by measuring the change in frequency of the return radio waves due to the Doppler effect. Radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. Parabolic (dish) antennas are widely used. In most radars the transmitting antenna also serves as the receiving antenna; this is called a monostatic radar. A radar which uses separate transmitting and receiving antennas is called a bistatic radar.

Airport surveillance radar – In aviation, radar is the main tool of air traffic control. A rotating dish antenna sweeps a vertical fan-shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as "blips" of light on a display called a radar screen. Airport radar operates at 2.7–2.9 GHz in the microwave S band. In large airports the radar image is displayed on multiple screens in an operations room called the TRACON (Terminal Radar Approach Control), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation.

Secondary surveillance radar – Aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. This causes the aircraft to show up more strongly on the radar screen. The radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar.
Since radar cannot measure an aircraft's altitude with any accuracy, the transponder also transmits back the aircraft's altitude measured by its altimeter, and an ID number identifying the aircraft, which is displayed on the radar screen.

Electronic countermeasures (ECM) – Military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. It often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens.

Marine radar – an S or X band radar on ships used to detect nearby ships and obstructions like bridges. A rotating antenna sweeps a vertical fan-shaped beam of microwaves around the water surface surrounding the craft out to the horizon.

Weather radar – A Doppler radar which maps precipitation intensity from the strength of the echoes returned by raindrops, and wind speed from their radial velocity measured by Doppler shift.

Phased-array radar – a radar set that uses a phased array, a computer-controlled antenna that can steer the radar beam quickly to point in different directions without moving the antenna. Phased-array radars were developed by the military to track fast-moving missiles and aircraft. They are widely used in military equipment and are now spreading to civilian applications.

Synthetic aperture radar (SAR) – a specialized airborne radar set that produces a high-resolution map of ground terrain. The radar is mounted on an aircraft or spacecraft and the radar antenna radiates a beam of radio waves sideways at right angles to the direction of motion, toward the ground. In processing the return radar signal, the motion of the vehicle is used to simulate a large antenna, giving the radar a higher resolution.
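The pulse-delay ranging and Doppler velocity measurements described at the start of this radar section reduce to two one-line formulas; a sketch with illustrative numbers:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(echo_delay_s: float) -> float:
    """The round-trip delay covers twice the range, so R = c*t/2."""
    return C * echo_delay_s / 2

def doppler_speed_ms(shift_hz: float, carrier_hz: float) -> float:
    """Radial speed from the two-way Doppler shift: v = shift * c / (2*f0)."""
    return shift_hz * C / (2 * carrier_hz)

# A 667 microsecond echo puts the target about 100 km away; a 1 kHz Doppler
# shift on a 2.8 GHz S-band airport radar is ~53.5 m/s of radial speed.
print(round(radar_range_m(667e-6) / 1000), round(doppler_speed_ms(1e3, 2.8e9), 1))
```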
Ground-penetrating radar – a specialized radar instrument that is rolled along the ground surface in a cart and transmits a beam of radio waves into the ground, producing an image of subsurface objects. Frequencies from 100 MHz to a few GHz are used. Since radio waves cannot penetrate very far into earth, the depth of GPR is limited to about 50 feet.

Collision avoidance system – a short-range radar or LIDAR system on an automobile or vehicle that detects if the vehicle is about to collide with an object and applies the brakes to prevent the collision.

Radar fuze – a detonator for an aerial bomb which uses a radar altimeter to measure the height of the bomb above the ground as it falls and detonates it at a certain altitude.

==== Radiolocation ====
Radiolocation is a generic term covering a variety of techniques that use radio waves to find the location of objects, or for navigation.

Global Navigation Satellite System (GNSS) or satnav system – A system of satellites which allows geographical location on Earth (latitude, longitude, and altitude/elevation) to be determined to high precision (within a few metres) by small portable navigation instruments, by timing the arrival of radio signals from the satellites. These are the most widely used navigation systems today. The main satellite navigation systems are the US Global Positioning System (GPS), Russia's GLONASS, China's BeiDou Navigation Satellite System (BDS) and the European Union's Galileo.

Global Positioning System (GPS) – The most widely used satellite navigation system, maintained by the US Air Force, which uses a constellation of 31 satellites in medium Earth orbit. The orbits of the satellites are distributed so at any time at least four satellites are above the horizon over each point on Earth. Each satellite has an onboard atomic clock and transmits a continuous radio signal containing a precise time signal as well as its current position. Two frequencies are used, 1.2276 and 1.57542 GHz.
Since the velocity of radio waves is virtually constant, the delay of the radio signal from a satellite is proportional to the distance of the receiver from the satellite. By receiving the signals from at least four satellites a GPS receiver can calculate its position on Earth by comparing the arrival time of the radio signals. Since each satellite's position is known precisely at any given time, from the delay the position of the receiver can be calculated by a microprocessor in the receiver. The position can be displayed as latitude and longitude, or as a marker on an electronic map. GPS receivers are incorporated in almost all cellphones and in vehicles such as automobiles, aircraft, and ships, and are used to guide drones, missiles, cruise missiles, and even artillery shells to their target. Handheld GPS receivers are also produced for hikers and the military.

Radio beacon – a fixed-location terrestrial radio transmitter which transmits a continuous radio signal used by aircraft and ships for navigation. The locations of beacons are plotted on navigational maps used by aircraft and ships.

VHF omnidirectional range (VOR) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108.00 and 117.95 MHz in the very high frequency (VHF) band. An automated navigational instrument on the aircraft displays a bearing to a nearby VOR transmitter. A VOR beacon transmits two signals simultaneously on different frequencies. A directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. When the directional beam is facing north, an omnidirectional antenna transmits a pulse. By measuring the difference in phase of these two signals, an aircraft can determine its bearing (or "radial") from the station accurately. By taking a bearing on two VOR beacons an aircraft can determine its position (called a "fix") to an accuracy of about 90 metres (300 ft).
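The GPS time-of-arrival fix described earlier in this section can be illustrated with a toy two-dimensional version. This sketch assumes a receiver clock already synchronized to the satellites; real receivers also solve for their own clock bias, which is why a fourth satellite is needed:

```python
C = 299_792_458.0  # speed of light, m/s

def trilaterate_2d(sats, delays_s):
    """Toy 2D position fix from three transmitter positions and signal delays.
    Subtracting the range-circle equations pairwise leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = (C * t for t in delays_s)   # delays converted to distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (3000, 4000) m; delays computed from the true geometry.
sats = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]
truth = (3000.0, 4000.0)
delays = [((truth[0] - x)**2 + (truth[1] - y)**2) ** 0.5 / C for x, y in sats]
x, y = trilaterate_2d(sats, delays)
print(round(x), round(y))  # recovers (3000, 4000)
```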
Most VOR beacons also have a distance measuring capability, called distance measuring equipment (DME); these are called VOR/DMEs. The aircraft transmits a radio signal to the VOR/DME beacon and a transponder transmits a return signal. From the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. This allows an aircraft to determine its location "fix" from only one VOR beacon. Since line-of-sight VHF frequencies are used, VOR beacons have a range of about 200 miles for aircraft at cruising altitude. TACAN is a similar military radio beacon system which transmits in 962–1213 MHz, and a combined VOR and TACAN beacon is called a VORTAC. The number of VOR beacons is declining as aviation switches to the RNAV system that relies on Global Positioning System satellite navigation.

Instrument Landing System (ILS) – A short-range radio navigation aid at airports which guides aircraft landing in low visibility conditions. It consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway: the localizer (108 to 111.95 MHz), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope (329.15 to 335 MHz) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. Each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether the aircraft is on the correct horizontal and vertical approach path. The ILS beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ILS systems at airports are being replaced by systems that use satellite navigation.

Non-directional beacon (NDB) – Legacy fixed radio beacons used before the VOR system that transmit a simple signal in all directions for aircraft or ships to use for radio direction finding.
Aircraft use automatic direction finder (ADF) receivers which use a directional antenna to determine the bearing to the beacon. By taking bearings on two beacons they can determine their position. NDBs use frequencies between 190 and 1750 kHz in the LF and MF bands which propagate beyond the horizon as ground waves or skywaves much farther than VOR beacons. They transmit a callsign consisting of one to three Morse code letters as an identifier.

Emergency locator beacon – a portable battery-powered radio transmitter used in emergencies to locate airplanes, vessels, and persons in distress and in need of immediate rescue. Various types of emergency locator beacons are carried by aircraft, ships, vehicles, hikers and cross-country skiers. In the event of an emergency, such as the aircraft crashing, the ship sinking, or a hiker becoming lost, the transmitter is deployed and begins to transmit a continuous radio signal, which is used by search and rescue teams to quickly find the emergency and render aid. The latest generation of Emergency Position-Indicating Radio Beacons (EPIRBs) contain a GPS receiver, and broadcast to rescue teams their exact location within 20 meters.

Cospas-Sarsat – an international humanitarian consortium of governmental and private agencies which acts as a dispatcher for search and rescue operations. It operates a network of about 47 satellites carrying radio receivers, which detect distress signals from emergency locator beacons anywhere on Earth transmitting on the international Cospas distress frequency of 406 MHz. The satellites calculate the geographic location of the beacon within 2 km by measuring the Doppler frequency shift of the radio waves due to the relative motion of the transmitter and the satellite, and quickly transmit the information to the appropriate local first responder organizations, which perform the search and rescue.
Radio direction finding (RDF) – this is a general technique, used since the early 1900s, of using specialized radio receivers with directional antennas (RDF receivers) to determine the exact bearing of a radio signal, to determine the location of the transmitter. The location of a terrestrial transmitter can be determined by simple triangulation from bearings taken by two RDF stations separated geographically, as the point where the two bearing lines cross; this is called a "fix". Military forces use RDF to locate enemy forces by their tactical radio transmissions, counterintelligence services use it to locate clandestine transmitters used by espionage agents, and governments use it to locate unlicensed transmitters or interference sources. Older RDF receivers used rotatable loop antennas; the antenna is rotated until the radio signal strength is weakest, indicating the transmitter is in one of the antenna's two nulls. The nulls are used since they are sharper than the antenna's lobes (maxima). More modern receivers use phased array antennas which have a much greater angular resolution.

Animal migration tracking – a widely used technique in wildlife biology, conservation biology, and wildlife management in which small battery-powered radio transmitters are attached to wild animals so their movements can be tracked with a directional RDF receiver. Sometimes the transmitter is implanted in the animal. The VHF band is typically used since antennas in this band are fairly compact. The receiver has a directional antenna (typically a small Yagi) which is rotated until the received signal is strongest; at this point the antenna is pointing in the direction of the animal. Sophisticated systems used in recent years use satellites to track the animal, or geolocation tags with GPS receivers which record and transmit a log of the animal's location.
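The RDF triangulation "fix" described above is just the intersection of two bearing lines; a minimal sketch in local x-east/y-north coordinates:

```python
import math

def fix_from_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines (compass bearings, degrees clockwise from
    north) drawn from RDF stations p1 and p2; returns the transmitter's fix."""
    # Convert each compass bearing to a unit direction vector.
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule on the 2x2 system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t1 = ((p2[0] - p1[0]) * (-d2[1]) + d2[0] * (p2[1] - p1[1])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Stations 10 km apart on an east-west line; bearings of 45 and 315 degrees
# cross 5 km north of the midpoint, at (5, 5).
x, y = fix_from_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
print(round(x, 2), round(y, 2))
```

Parallel (or identical) bearings make the determinant zero, which matches the physical fact that two collinear bearings give no fix.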
==== Remote control ====
Radio remote control is the use of electronic control signals sent by radio waves from a transmitter to control the actions of a device at a remote location. Remote control systems may also include telemetry channels in the other direction, used to transmit real-time information on the state of the device back to the control station. Uncrewed spacecraft are an example of remote-controlled machines, controlled by commands transmitted by satellite ground stations. Most handheld remote controls used to control consumer electronics products like televisions or DVD players actually operate by infrared light rather than radio waves, so are not examples of radio remote control. A security concern with remote control systems is spoofing, in which an unauthorized person transmits an imitation of the control signal to take control of the device. Examples of radio remote control:

Unmanned aerial vehicle (UAV, drone) – A drone is an aircraft without an onboard pilot, flown by remote control by a pilot in another location, usually in a piloting station on the ground. They are used by the military for reconnaissance and ground attack, and more recently by the civilian world for news reporting and aerial photography. The pilot uses aircraft controls like a joystick or steering wheel, which create control signals which are transmitted to the drone by radio to control the flight surfaces and engine. A telemetry system transmits back a video image from a camera in the drone to allow the pilot to see where the aircraft is going, and data from a GPS receiver giving the real-time position of the aircraft. UAVs have sophisticated onboard automatic pilot systems that maintain stable flight and only require manual control to change directions.

Keyless entry system – a short-range handheld battery-powered key fob transmitter, included with most modern cars, which can lock and unlock the doors of a vehicle from outside, eliminating the need to use a key.
When a button is pressed, the transmitter sends a coded radio signal to a receiver in the vehicle, operating the locks. The fob must be close to the vehicle, typically within 5 to 20 meters. North America and Japan use a frequency of 315 MHz, while Europe uses 433.92 and 868 MHz. Some models can also remotely start the engine, to warm up the car. A security concern with all keyless entry systems is a replay attack, in which a thief uses a special receiver ("code grabber") to record the radio signal during opening, which can later be replayed to open the door. To prevent this, keyless systems use a rolling code system in which a pseudorandom number generator in the remote control generates a different random key each time it is used. To prevent thieves from simulating the pseudorandom generator to calculate the next key, the radio signal is also encrypted.

Garage door opener – a short-range handheld transmitter which can open or close a building's electrically operated garage door from outside, so the owner can open the door upon arrival, and close it after departure. When a button is pressed, the control transmits a coded FSK radio signal to a receiver in the opener, raising or lowering the door. Modern openers use 310, 315 or 390 MHz. To prevent a thief from using a replay attack, modern openers use a rolling code system.

Radio-controlled models – a popular hobby is playing with radio-controlled model boats, cars, airplanes, and helicopters (quadcopters) which are controlled by radio signals from a handheld console with a joystick. Most recent transmitters use the 2.4 GHz ISM band with multiple control channels modulated with PWM, PCM or FSK.

Wireless doorbell – A residential doorbell that uses wireless technology to eliminate the need to run wires through the building walls. It consists of a doorbell button beside the door containing a small battery-powered transmitter.
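The rolling-code defence against replay attacks, used by the keyless entry systems and garage door openers described above, can be sketched as follows. This is an illustrative scheme only (an HMAC over a shared counter), not any manufacturer's actual algorithm; the key and window size are hypothetical:

```python
import hashlib
import hmac

SECRET = b"shared-fob-secret"  # hypothetical key shared by fob and vehicle

def code_for(counter: int) -> str:
    """Derive the code for one button press from the shared secret and counter."""
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:8]

class Car:
    def __init__(self):
        self.counter = 0  # last counter value accepted

    def try_unlock(self, code: str, window: int = 16) -> bool:
        # Accept counters slightly ahead, to tolerate presses out of radio range.
        for c in range(self.counter + 1, self.counter + 1 + window):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c  # advance, so this code can never be replayed
                return True
        return False

car = Car()
captured = code_for(1)               # a thief records this transmission
assert car.try_unlock(captured)      # the legitimate first use works
assert not car.try_unlock(captured)  # replaying the recorded code fails
```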
When the doorbell is pressed, it sends a signal to a receiver inside the house with a speaker that sounds chimes to indicate someone is at the door. They usually use the 2.4 GHz ISM band. The frequency channel used can usually be changed by the owner in case another nearby doorbell is using the same channel. ==== Scientific research ==== Radio astronomy is the scientific study of radio waves emitted by astronomical objects. Radio astronomers use radio telescopes, large radio antennas and receivers, to receive and study the radio waves from astronomical radio sources. Since astronomical radio sources are so far away, the radio waves from them are extremely weak, requiring extremely sensitive receivers, and radio telescopes are the most sensitive radio receivers in existence. They use large parabolic (dish) antennas up to 500 meters (1,600 ft) in diameter to collect enough radio wave energy to study. The RF front end electronics of the receiver is often cooled by liquid nitrogen to reduce thermal noise. Multiple antennas are often linked together in arrays which function as a single antenna, to increase collecting power. In Very Long Baseline Interferometry (VLBI), radio telescopes on different continents are linked, which can achieve the resolution of an antenna thousands of miles in diameter. Remote sensing – in radio, remote sensing is the reception of electromagnetic waves radiated by natural objects or the atmosphere for scientific research. All warm objects emit microwaves, and the spectrum emitted can be used to determine temperature. Microwave radiometers are used in meteorology and earth sciences to determine the temperature of the atmosphere and earth surface, as well as chemical reactions in the atmosphere. ==== Jamming ==== Radio jamming is the deliberate radiation of radio signals designed to interfere with the reception of other radio signals. Jamming devices are called "signal suppressors" or "interference generators" or just jammers. 
During wartime, militaries use jamming to interfere with enemies' tactical radio communication. Since radio waves can pass beyond national borders, some totalitarian countries which practice censorship use jamming to prevent their citizens from listening to broadcasts from radio stations in other countries. Jamming is usually accomplished by a powerful transmitter which generates noise on the same frequency as the target transmitter. US Federal law prohibits the nonmilitary operation or sale of any type of jamming devices, including ones that interfere with GPS, cellular, Wi-Fi and police radars. == See also == Electromagnetic radiation and health Internet radio List of radios – List of specific models of radios Outline of radio Radio quiet zone == References == == General references == Basic Radio Principles and Technology – Elsevier Science The Electronics of Radio – Cambridge University Press Radio Systems Engineering – Cambridge University Press Radio-Electronic Transmission Fundamentals – SciTech Publishing Analog Electronics, Analog Circuitry Explained – Elsevier Science == External links == "Radio". Merriam-Webster.com Dictionary. Merriam-Webster.
Wikipedia:Rational difference equation#0
A rational difference equation is a nonlinear difference equation of the form x n + 1 = α + ∑ i = 0 k β i x n − i A + ∑ i = 0 k B i x n − i , {\displaystyle x_{n+1}={\frac {\alpha +\sum _{i=0}^{k}\beta _{i}x_{n-i}}{A+\sum _{i=0}^{k}B_{i}x_{n-i}}}~,} where the initial conditions x 0 , x − 1 , … , x − k {\displaystyle x_{0},x_{-1},\dots ,x_{-k}} are such that the denominator never vanishes for any n. == First-order rational difference equation == A first-order rational difference equation is a nonlinear difference equation of the form w t + 1 = a w t + b c w t + d . {\displaystyle w_{t+1}={\frac {aw_{t}+b}{cw_{t}+d}}.} When a , b , c , d {\displaystyle a,b,c,d} and the initial condition w 0 {\displaystyle w_{0}} are real numbers, this difference equation is called a Riccati difference equation. Such an equation can be solved by writing w t {\displaystyle w_{t}} as a nonlinear transformation of another variable x t {\displaystyle x_{t}} which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in x t {\displaystyle x_{t}} . Equations of this form arise from the infinite resistor ladder problem. == Solving a first-order equation == === First approach === One approach to developing the transformed variable x t {\displaystyle x_{t}} , when a d − b c ≠ 0 {\displaystyle ad-bc\neq 0} , is to write y t + 1 = α − β y t {\displaystyle y_{t+1}=\alpha -{\frac {\beta }{y_{t}}}} where α = ( a + d ) / c {\displaystyle \alpha =(a+d)/c} and β = ( a d − b c ) / c 2 {\displaystyle \beta =(ad-bc)/c^{2}} and where w t = y t − d / c {\displaystyle w_{t}=y_{t}-d/c} . Further writing y t = x t + 1 / x t {\displaystyle y_{t}=x_{t+1}/x_{t}} can be shown to yield x t + 2 − α x t + 1 + β x t = 0. 
{\displaystyle x_{t+2}-\alpha x_{t+1}+\beta x_{t}=0.} === Second approach === This approach gives a first-order difference equation for x t {\displaystyle x_{t}} instead of a second-order one, for the case in which ( d − a ) 2 + 4 b c {\displaystyle (d-a)^{2}+4bc} is non-negative. Write x t = 1 / ( η + w t ) {\displaystyle x_{t}=1/(\eta +w_{t})} implying w t = ( 1 − η x t ) / x t {\displaystyle w_{t}=(1-\eta x_{t})/x_{t}} , where η {\displaystyle \eta } is given by η = ( d − a + r ) / 2 c {\displaystyle \eta =(d-a+r)/2c} and where r = ( d − a ) 2 + 4 b c {\displaystyle r={\sqrt {(d-a)^{2}+4bc}}} . Then it can be shown that x t {\displaystyle x_{t}} evolves according to x t + 1 = ( d − η c η c + a ) x t + c η c + a . {\displaystyle x_{t+1}=\left({\frac {d-\eta c}{\eta c+a}}\right)\!x_{t}+{\frac {c}{\eta c+a}}.} === Third approach === The equation w t + 1 = a w t + b c w t + d {\displaystyle w_{t+1}={\frac {aw_{t}+b}{cw_{t}+d}}} can also be solved by treating it as a special case of the more general matrix equation X t + 1 = − ( E + B X t ) ( C + A X t ) − 1 , {\displaystyle X_{t+1}=-(E+BX_{t})(C+AX_{t})^{-1},} where all of A, B, C, E, and X are n × n matrices (in this case n = 1); the solution of this is X t = N t D t − 1 {\displaystyle X_{t}=N_{t}D_{t}^{-1}} where ( N t D t ) = ( − B − E A C ) t ( X 0 I ) . {\displaystyle {\begin{pmatrix}N_{t}\\D_{t}\end{pmatrix}}={\begin{pmatrix}-B&-E\\A&C\end{pmatrix}}^{t}{\begin{pmatrix}X_{0}\\I\end{pmatrix}}.} == Application == It has been shown that a dynamic matrix Riccati equation of the form H t − 1 = K + A ′ H t A − A ′ H t C ( C ′ H t C ) − 1 C ′ H t A , {\displaystyle H_{t-1}=K+A'H_{t}A-A'H_{t}C(C'H_{t}C)^{-1}C'H_{t}A,} which can arise in some discrete-time optimal control problems, can be solved using the second approach above if the matrix C has only one more row than column. == References == == Further reading == Simons, Stuart, "A non-linear difference equation," Mathematical Gazette 93, November 2009, 500–504.
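The first approach can be checked numerically. The sketch below uses arbitrary illustrative values for a, b, c, d and the initial condition w0 (not taken from any source), and verifies that the linear recurrence reproduces direct iteration of the Riccati equation:

```python
# Numerical sketch of the first approach (coefficients a, b, c, d and the
# initial condition w0 are arbitrary illustrative values, with ad - bc != 0).
a, b, c, d = 2.0, 1.0, 1.0, 3.0
w0, steps = 0.5, 10

def iterate(w, n):
    # Direct iteration of w_{t+1} = (a*w_t + b) / (c*w_t + d).
    for _ in range(n):
        w = (a * w + b) / (c * w + d)
    return w

# Linearization: y_t = w_t + d/c satisfies y_{t+1} = alpha - beta/y_t, and
# y_t = x_{t+1}/x_t turns this into x_{t+2} - alpha*x_{t+1} + beta*x_t = 0.
alpha = (a + d) / c
beta = (a * d - b * c) / c**2
x = [1.0, w0 + d / c]          # x_0 = 1, x_1 = y_0 * x_0
for _ in range(steps):
    x.append(alpha * x[-1] - beta * x[-2])
w_linear = x[steps + 1] / x[steps] - d / c   # recover w_t = y_t - d/c

assert abs(iterate(w0, steps) - w_linear) < 1e-9
```

Both computations agree to floating-point accuracy, since the nonlinear iteration and the linear recurrence are algebraically equivalent.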
Wikipedia:Rational representation#0
In mathematics, in the representation theory of algebraic groups, a linear representation of an algebraic group is said to be rational if, viewed as a map from the group to the general linear group, it is a rational map of algebraic varieties. Finite direct sums and products of rational representations are rational. A rational G {\displaystyle G} module is a module that can be expressed as a sum (not necessarily direct) of rational representations. == References == Bialynicki-Birula, A.; Hochschild, G.; Mostow, G. D. (1963). "Extensions of Representations of Algebraic Linear Groups". American Journal of Mathematics. 85 (1). Johns Hopkins University Press: 131–44. doi:10.2307/2373191. ISSN 1080-6377. JSTOR 2373191. Springer Online Reference Works: Rational Representation
Wikipedia:Rational series#0
In mathematics and computer science, a rational series is a generalisation of the concept of formal power series over a ring to the case when the basic algebraic structure is no longer a ring but a semiring, and the indeterminates adjoined are not assumed to commute. They can be regarded as algebraic expressions of a formal language over a finite alphabet. == Definition == Let R be a semiring and A a finite alphabet. A non-commutative polynomial over A is a finite formal sum of words over A. They form a semiring R ⟨ A ⟩ {\displaystyle R\langle A\rangle } . A formal series is an R-valued function c on the free monoid A*, which may be written as ∑ w ∈ A ∗ c ( w ) w . {\displaystyle \sum _{w\in A^{*}}c(w)w.} The set of formal series is denoted R ⟨ ⟨ A ⟩ ⟩ {\displaystyle R\langle \langle A\rangle \rangle } and becomes a semiring under the operations c + d : w ↦ c ( w ) + d ( w ) {\displaystyle c+d:w\mapsto c(w)+d(w)} c ⋅ d : w ↦ ∑ u v = w c ( u ) ⋅ d ( v ) {\displaystyle c\cdot d:w\mapsto \sum _{uv=w}c(u)\cdot d(v)} A non-commutative polynomial thus corresponds to a function c on A* of finite support. When R is a ring, this is the Magnus ring over R. If L is a language over A, regarded as a subset of A*, we can form the characteristic series of L as the formal series ∑ w ∈ L w {\displaystyle \sum _{w\in L}w} corresponding to the characteristic function of L. In R ⟨ ⟨ A ⟩ ⟩ {\displaystyle R\langle \langle A\rangle \rangle } one can define an operation of iteration expressed as S ∗ = ∑ n ≥ 0 S n {\displaystyle S^{*}=\sum _{n\geq 0}S^{n}} and formalised as c ∗ ( w ) = ∑ u 1 u 2 ⋯ u n = w c ( u 1 ) c ( u 2 ) ⋯ c ( u n ) . {\displaystyle c^{*}(w)=\sum _{u_{1}u_{2}\cdots u_{n}=w}c(u_{1})c(u_{2})\cdots c(u_{n}).} The rational operations are the addition and multiplication of formal series, together with iteration. A rational series is a formal series obtained by rational operations from R ⟨ A ⟩ . 
{\displaystyle R\langle A\rangle .} == See also == Formal power series Rational language Rational set Hahn series (Malcev–Neumann series) Weighted automaton == References == Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007. == Further reading == Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. Part IV (where they are called K {\displaystyle \mathbb {K} } -rational series). ISBN 978-0-521-84425-3. Zbl 1188.68177. Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1 Sakarovitch, J. Rational and Recognisable Power Series. Handbook of Weighted Automata, 105–174 (2009). doi:10.1007/978-3-642-01492-5_4 W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997
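The semiring operations on non-commutative polynomials can be made concrete. In the toy sketch below (all names are illustrative, not from a library), a polynomial over the semiring of non-negative integers is a dict mapping words (strings over the alphabet) to coefficients, and the Cauchy product sums over all factorizations w = uv:

```python
from collections import defaultdict

# A non-commutative polynomial: dict from word (string) to coefficient,
# over the semiring of non-negative integers.  The empty word is "".

def poly_add(c, d):
    # Pointwise sum: (c + d)(w) = c(w) + d(w).
    out = defaultdict(int)
    for w, v in c.items():
        out[w] += v
    for w, v in d.items():
        out[w] += v
    return dict(out)

def poly_mul(c, d):
    # Cauchy product: (c . d)(w) = sum over factorizations w = u v of c(u)*d(v).
    out = defaultdict(int)
    for u, cu in c.items():
        for v, dv in d.items():
            out[u + v] += cu * dv   # word multiplication is concatenation
    return dict(out)

# Multiplication is noncommutative because concatenation is: ab != ba.
assert poly_mul({"a": 1}, {"b": 1}) == {"ab": 1}
assert poly_mul({"b": 1}, {"a": 1}) == {"ba": 1}

# Characteristic series of the finite language {a, ab}: coefficient 1 on each word.
L = {"a": 1, "ab": 1}
assert poly_add(L, {"ab": 1}) == {"a": 1, "ab": 2}
```

Iteration S* would require summing over all n, which is only well defined for proper series; this sketch sticks to the finitely supported (polynomial) case.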
Wikipedia:Rationalisation (mathematics)#0
In elementary algebra, root rationalisation (or rationalization) is a process by which radicals in the denominator of an algebraic fraction are eliminated. If the denominator is a monomial in some radical, say a x n k , {\displaystyle a{\sqrt[{n}]{x}}^{k},} with k < n, rationalisation consists of multiplying the numerator and the denominator by x n n − k {\displaystyle {\sqrt[{n}]{x}}^{n-k}} , and replacing x n n {\displaystyle {\sqrt[{n}]{x}}^{n}} by x (this is allowed, as, by definition, an nth root of x is a number that has x as its nth power). If k ≥ n, one writes k = qn + r with 0 ≤ r < n (Euclidean division), and x n k = x q x n r ; {\displaystyle {\sqrt[{n}]{x}}^{k}=x^{q}{\sqrt[{n}]{x}}^{r};} then one proceeds as above by multiplying by x n n − r . {\displaystyle {\sqrt[{n}]{x}}^{n-r}.} If the denominator is linear in some square root, say a + b x , {\displaystyle a+b{\sqrt {x}},} rationalisation consists of multiplying the numerator and the denominator by the conjugate a − b x , {\displaystyle a-b{\sqrt {x}},} and expanding the product in the denominator. This technique may be extended to any algebraic denominator, by multiplying the numerator and the denominator by all algebraic conjugates of the denominator, and expanding the new denominator into the norm of the old denominator. However, except in special cases, the resulting fractions may have huge numerators and denominators, and, therefore, the technique is generally used only in the above elementary cases. == Rationalisation of a monomial square root and cube root == For the fundamental technique, the numerator and denominator must be multiplied by the same factor. 
Example 1: 10 5 {\displaystyle {\frac {10}{\sqrt {5}}}} To rationalise this kind of expression, bring in the factor 5 {\displaystyle {\sqrt {5}}} : 10 5 = 10 5 ⋅ 5 5 = 10 5 ( 5 ) 2 {\displaystyle {\frac {10}{\sqrt {5}}}={\frac {10}{\sqrt {5}}}\cdot {\frac {\sqrt {5}}{\sqrt {5}}}={\frac {10{\sqrt {5}}}{\left({\sqrt {5}}\right)^{2}}}} The square root disappears from the denominator, because ( 5 ) 2 = 5 {\displaystyle \left({\sqrt {5}}\right)^{2}=5} by definition of a square root: 10 5 ( 5 ) 2 = 10 5 5 = 2 5 , {\displaystyle {\frac {10{\sqrt {5}}}{\left({\sqrt {5}}\right)^{2}}}={\frac {10{\sqrt {5}}}{5}}=2{\sqrt {5}},} which is the result of the rationalisation. Example 2: 10 a 3 {\displaystyle {\frac {10}{\sqrt[{3}]{a}}}} To rationalise this radical, bring in the factor a 3 2 {\displaystyle {\sqrt[{3}]{a}}^{2}} : 10 a 3 = 10 a 3 ⋅ a 3 2 a 3 2 = 10 a 3 2 a 3 3 {\displaystyle {\frac {10}{\sqrt[{3}]{a}}}={\frac {10}{\sqrt[{3}]{a}}}\cdot {\frac {{\sqrt[{3}]{a}}^{2}}{{\sqrt[{3}]{a}}^{2}}}={\frac {10{\sqrt[{3}]{a}}^{2}}{{\sqrt[{3}]{a}}^{3}}}} The cube root disappears from the denominator, because it is cubed; so 10 a 3 2 a 3 3 = 10 a 3 2 a , {\displaystyle {\frac {10{\sqrt[{3}]{a}}^{2}}{{\sqrt[{3}]{a}}^{3}}}={\frac {10{\sqrt[{3}]{a}}^{2}}{a}},} which is the result of the rationalisation. == Dealing with more square roots == For a denominator that is: 2 ± 3 {\displaystyle {\sqrt {2}}\pm {\sqrt {3}}\,} Rationalisation can be achieved by multiplying by the conjugate: 2 ∓ 3 {\displaystyle {\sqrt {2}}\mp {\sqrt {3}}\,} and applying the difference of two squares identity, which here will yield −1. To get this result, the entire fraction should be multiplied by 2 − 3 2 − 3 = 1. {\displaystyle {\frac {{\sqrt {2}}-{\sqrt {3}}}{{\sqrt {2}}-{\sqrt {3}}}}=1.} This technique works much more generally. It can easily be adapted to remove one square root at a time, i.e. 
to rationalise x ± y {\displaystyle x\pm {\sqrt {y}}\,} by multiplication by x ∓ y {\displaystyle x\mp {\sqrt {y}}} Example: 3 3 ± 5 {\displaystyle {\frac {3}{{\sqrt {3}}\pm {\sqrt {5}}}}} The fraction must be multiplied by a quotient containing 3 ∓ 5 {\displaystyle {{\sqrt {3}}\mp {\sqrt {5}}}} . 3 3 + 5 ⋅ 3 − 5 3 − 5 = 3 ( 3 − 5 ) 3 2 − 5 2 {\displaystyle {\frac {3}{{\sqrt {3}}+{\sqrt {5}}}}\cdot {\frac {{\sqrt {3}}-{\sqrt {5}}}{{\sqrt {3}}-{\sqrt {5}}}}={\frac {3({\sqrt {3}}-{\sqrt {5}})}{{\sqrt {3}}^{2}-{\sqrt {5}}^{2}}}} Now, we can proceed to remove the square roots in the denominator: 3 ( 3 − 5 ) 3 2 − 5 2 = 3 ( 3 − 5 ) 3 − 5 = 3 ( 3 − 5 ) − 2 {\displaystyle {\frac {3({\sqrt {3}}-{\sqrt {5}})}{{\sqrt {3}}^{2}-{\sqrt {5}}^{2}}}={\frac {3({\sqrt {3}}-{\sqrt {5}})}{3-5}}={\frac {3({\sqrt {3}}-{\sqrt {5}})}{-2}}} Example 2: This process also works with complex numbers with i = − 1 {\displaystyle i={\sqrt {-1}}} 7 1 ± − 5 {\displaystyle {\frac {7}{1\pm {\sqrt {-5}}}}} The fraction must be multiplied by a quotient containing 1 ∓ − 5 {\displaystyle {1\mp {\sqrt {-5}}}} . 7 1 + − 5 ⋅ 1 − − 5 1 − − 5 = 7 ( 1 − − 5 ) 1 2 − − 5 2 = 7 ( 1 − − 5 ) 1 − ( − 5 ) = 7 − 7 5 i 6 {\displaystyle {\frac {7}{1+{\sqrt {-5}}}}\cdot {\frac {1-{\sqrt {-5}}}{1-{\sqrt {-5}}}}={\frac {7(1-{\sqrt {-5}})}{1^{2}-{\sqrt {-5}}^{2}}}={\frac {7(1-{\sqrt {-5}})}{1-(-5)}}={\frac {7-7{\sqrt {5}}i}{6}}} == Generalizations == Rationalisation can be extended to all algebraic numbers and algebraic functions (as an application of norm forms). For example, to rationalise a cube root, two linear factors involving cube roots of unity should be used, or equivalently a quadratic factor. == References == This material is carried in classic algebra texts. For example: George Chrystal, Introduction to Algebra: For the Use of Secondary Schools and Technical Colleges is a nineteenth-century text, first edition 1889, in print (ISBN 1402159072); a trinomial example with square roots is on p. 
256, while a general theory of rationalising factors for surds is on pp. 189–199.
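The worked identities above can be verified numerically. The following sketch (illustrative values only; the cube-root base a = 7 is an arbitrary choice) checks each example in floating point:

```python
from math import isclose, sqrt

# Example 1: 10/sqrt(5) = 2*sqrt(5).
assert isclose(10 / sqrt(5), 2 * sqrt(5))

# Monomial cube root: 10 / a^(1/3) = 10 * a^(2/3) / a, here with a = 7.
a = 7.0
assert isclose(10 / a**(1 / 3), 10 * a**(2 / 3) / a)

# Conjugate trick: 3/(sqrt(3)+sqrt(5)) = 3*(sqrt(3)-sqrt(5))/(3-5).
lhs = 3 / (sqrt(3) + sqrt(5))
rhs = 3 * (sqrt(3) - sqrt(5)) / (3 - 5)
assert isclose(lhs, rhs)

# Complex case: 7/(1+sqrt(-5)) = (7 - 7*sqrt(5)*i)/6.
lhs_c = 7 / (1 + 5**0.5 * 1j)
rhs_c = (7 - 7 * 5**0.5 * 1j) / 6
assert abs(lhs_c - rhs_c) < 1e-12
```

Each assertion passes because rationalisation multiplies the fraction by a quotient equal to 1, so the value is unchanged.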
Wikipedia:Rauzy fractal#0
In mathematics, the Rauzy fractal is a fractal set associated with the Tribonacci substitution s ( 1 ) = 12 , s ( 2 ) = 13 , s ( 3 ) = 1 . {\displaystyle s(1)=12,\ s(2)=13,\ s(3)=1\,.} It was studied in 1981 by Gérard Rauzy, with the idea of generalizing the dynamic properties of the Fibonacci morphism. That fractal set can be generalized to other maps over a 3-letter alphabet, generating other fractal sets with interesting properties, such as periodic tiling of the plane and self-similarity in three homothetic parts. == Definitions == === Tribonacci word === The infinite tribonacci word is a word constructed by iteratively applying the Tribonacci or Rauzy map : s ( 1 ) = 12 {\displaystyle s(1)=12} , s ( 2 ) = 13 {\displaystyle s(2)=13} , s ( 3 ) = 1 {\displaystyle s(3)=1} . It is an example of a morphic word. Starting from 1, the Tribonacci words are: t 0 = 1 {\displaystyle t_{0}=1} t 1 = 12 {\displaystyle t_{1}=12} t 2 = 1213 {\displaystyle t_{2}=1213} t 3 = 1213121 {\displaystyle t_{3}=1213121} t 4 = 1213121121312 {\displaystyle t_{4}=1213121121312} We can show that, for n > 2 {\displaystyle n>2} , t n = t n − 1 t n − 2 t n − 3 {\displaystyle t_{n}=t_{n-1}t_{n-2}t_{n-3}} ; hence the name "Tribonacci". === Fractal construction === Consider, now, the space R 3 {\displaystyle R^{3}} with cartesian coordinates (x,y,z). The Rauzy fractal is constructed this way: 1) Interpret the sequence of letters of the infinite Tribonacci word as a sequence of unitary vectors of the space, with the following rules (1 = direction x, 2 = direction y, 3 = direction z). 2) Then, build a "stair" by tracing the points reached by this sequence of vectors (see figure). 
For example, the first points are: 1 ⇒ ( 1 , 0 , 0 ) {\displaystyle 1\Rightarrow (1,0,0)} 2 ⇒ ( 1 , 1 , 0 ) {\displaystyle 2\Rightarrow (1,1,0)} 1 ⇒ ( 2 , 1 , 0 ) {\displaystyle 1\Rightarrow (2,1,0)} 3 ⇒ ( 2 , 1 , 1 ) {\displaystyle 3\Rightarrow (2,1,1)} 1 ⇒ ( 3 , 1 , 1 ) {\displaystyle 1\Rightarrow (3,1,1)} etc. Every point can be colored according to the corresponding letter, to stress the self-similarity property. 3) Then, project those points onto the contracting plane (the plane orthogonal to the main direction of propagation of the points; none of the projected points escape to infinity). == Properties == Can be tiled by three copies of itself, with area reduced by factors k {\displaystyle k} , k 2 {\displaystyle k^{2}} and k 3 {\displaystyle k^{3}} with k {\displaystyle k} the solution of k 3 + k 2 + k − 1 = 0 {\displaystyle k^{3}+k^{2}+k-1=0} : k = 1 3 ( − 1 − 2 17 + 3 33 3 + 17 + 3 33 3 ) = 0.54368901269207636 {\displaystyle \scriptstyle {k={\frac {1}{3}}(-1-{\frac {2}{\sqrt[{3}]{17+3{\sqrt {33}}}}}+{\sqrt[{3}]{17+3{\sqrt {33}}}})=0.54368901269207636}} . Stable under exchanging pieces. We can obtain the same set by exchanging the place of the pieces. Connected and simply connected. Has no hole. Tiles the plane periodically, by translation. The matrix of the Tribonacci map has x 3 − x 2 − x − 1 {\displaystyle x^{3}-x^{2}-x-1} as its characteristic polynomial. Its eigenvalues are a real number β = 1.8392 {\displaystyle \beta =1.8392} , called the Tribonacci constant, a Pisot number, and two complex conjugates α {\displaystyle \alpha } and α ¯ {\displaystyle {\bar {\alpha }}} with α α ¯ = 1 / β {\displaystyle \alpha {\bar {\alpha }}=1/\beta } . Its boundary is fractal, and the Hausdorff dimension of this boundary equals 1.0933, the solution of 2 | α | 3 s + | α | 4 s = 1 {\displaystyle 2|\alpha |^{3s}+|\alpha |^{4s}=1} . 
== Variants and generalization == For any unimodular substitution of Pisot type, which verifies a coincidence condition (apparently always verified), one can construct a similar set called "Rauzy fractal of the map". They all display self-similarity and generate, for the examples below, a periodic tiling of the plane. == See also == List of fractals == References == Arnoux, Pierre; Harriss, Edmund (August 2014). "WHAT IS... a Rauzy Fractal?". Notices of the American Mathematical Society. 61 (7): 768–770. doi:10.1090/noti1144. Berthé, Valérie; Siegel, Anne; Thuswaldner, Jörg (2010). "Substitutions, Rauzy fractals and tilings". In Berthé, Valérie; Rigo, Michel (eds.). Combinatorics, automata, and number theory. Encyclopedia of Mathematics and its Applications. Vol. 135. Cambridge: Cambridge University Press. pp. 248–323. ISBN 978-0-521-51597-9. Zbl 1247.37015. Lothaire, M. (2005). Applied combinatorics on words. Encyclopedia of Mathematics and its Applications. Vol. 105. Cambridge University Press. ISBN 978-0-521-84802-2. MR 2165687. Zbl 1133.68067. Pytheas Fogg, N. (2002). Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, Anne (eds.). Substitutions in dynamics, arithmetics and combinatorics. Lecture Notes in Mathematics. Vol. 1794. Berlin: Springer-Verlag. ISBN 3-540-44141-7. Zbl 1014.11015. == External links == Topological properties of Rauzy fractals Substitutions, Rauzy fractals and tilings, Anne Siegel, 2009 Rauzy fractals for free group automorphisms, 2006 Pisot Substitutions and Rauzy fractals Rauzy fractals Numberphile video about Rauzy fractals and Tribonacci numbers
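The construction of the Tribonacci word and the "stair" of step 2 can be sketched in a few lines (variable names are illustrative):

```python
# Build the first Tribonacci words with the substitution
# s(1) = 12, s(2) = 13, s(3) = 1, then check t_n = t_{n-1} t_{n-2} t_{n-3}.
S = {"1": "12", "2": "13", "3": "1"}

def apply_sub(word):
    # Apply the Tribonacci/Rauzy map letter by letter.
    return "".join(S[ch] for ch in word)

t = ["1"]                       # t_0 = 1
for _ in range(6):
    t.append(apply_sub(t[-1]))

assert t[2] == "1213"
assert t[4] == "1213121121312"
for n in range(3, 7):
    assert t[n] == t[n - 1] + t[n - 2] + t[n - 3]

# The "stair" of step 2: letter k moves one unit in direction k
# (1 = x, 2 = y, 3 = z).
steps = {"1": (1, 0, 0), "2": (0, 1, 0), "3": (0, 0, 1)}
pos = (0, 0, 0)
stair = []
for ch in t[4]:
    pos = tuple(p + d for p, d in zip(pos, steps[ch]))
    stair.append(pos)

# First points match the article: (1,0,0), (1,1,0), (2,1,0), (2,1,1), ...
assert stair[:4] == [(1, 0, 0), (1, 1, 0), (2, 1, 0), (2, 1, 1)]
```

Projecting these integer points onto the contracting plane of the substitution matrix (step 3, not shown here) yields the fractal itself.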
Wikipedia:Ravi Vakil#0
Ravi D. Vakil (born February 22, 1970) is a Canadian-American mathematician working in algebraic geometry. He is the current president of the American Mathematical Society. == Education and career == Vakil attended high school at Martingrove Collegiate Institute in Etobicoke, Ontario, where he won several mathematical contests and olympiads. After earning a BSc and MSc from the University of Toronto in 1992, he completed a PhD in mathematics at Harvard University in 1997 under Joe Harris. He has since been an instructor at both Princeton University and MIT. Since the fall of 2001, he has taught at Stanford University, becoming a full professor in 2007. == Contributions == Vakil is an algebraic geometer and his research work spans enumerative geometry, topology, Gromov–Witten theory, and classical algebraic geometry. He has solved several old problems in Schubert calculus. Among other results, he proved that all Schubert problems are enumerative over the real numbers, a result that resolves an issue mathematicians have worked on for at least two decades. == Awards and honors == Vakil has received many awards, including an NSF CAREER Fellowship, a Sloan Research Fellowship, an American Mathematical Society Centennial Fellowship, a G. de B. Robinson prize for the best paper published (2000) in the Canadian Journal of Mathematics and the Canadian Mathematical Bulletin, the André-Aisenstadt Prize from the Centre de Recherches Mathématiques at the Université de Montréal (2005), and the Chauvenet Prize (2014). In 2013 he became a fellow of the American Mathematical Society. Vakil was elected as its president in 2024 and began his two-year term on 1 February 2025. == Mathematics contests == He was a member of the Canadian team in three International Mathematical Olympiads, winning silver, gold (perfect score), and gold in 1986, 1987, and 1988 respectively. He was also the fourth person to be a four-time Putnam Fellow in the history of the contest. 
Also, he has been the coordinator of weekly Putnam preparation seminars at Stanford. == References == == External links == Ravi Vakil's Home Page Ravi Vakil at the Mathematics Genealogy Project Ravi Vakil's results at International Mathematical Olympiad The Rising Sea | Ravi's notes on algebraic geometry
Wikipedia:Rayleigh dissipation function#0
In physics, the Rayleigh dissipation function, named after Lord Rayleigh, is a function used to handle the effects of velocity-proportional frictional forces in Lagrangian mechanics. It was first introduced by him in 1873. If the frictional force on a particle with velocity v → {\displaystyle {\vec {v}}} can be written as F → f = − k v → {\displaystyle {\vec {F}}_{f}=-k{\vec {v}}} , where k {\displaystyle k} is a diagonal matrix, then the Rayleigh dissipation function can be defined for a system of N {\displaystyle N} particles as R ( v ) = 1 2 ∑ i = 1 N ( k x v i , x 2 + k y v i , y 2 + k z v i , z 2 ) . {\displaystyle R(v)={\frac {1}{2}}\sum _{i=1}^{N}(k_{x}v_{i,x}^{2}+k_{y}v_{i,y}^{2}+k_{z}v_{i,z}^{2}).} This function represents half of the rate of energy dissipation of the system through friction. The frictional force is the negative of the velocity gradient of the dissipation function, F → f = − ∇ v R ( v ) {\displaystyle {\vec {F}}_{f}=-\nabla _{v}R(v)} , analogous to a force being equal to the negative position gradient of a potential. This relationship is represented in terms of the set of generalized coordinates q i = { q 1 , q 2 , … q n } {\displaystyle q_{i}=\left\{q_{1},q_{2},\ldots q_{n}\right\}} as F f , i = − ∂ R ∂ q ˙ i {\displaystyle F_{f,i}=-{\frac {\partial R}{\partial {\dot {q}}_{i}}}} . As friction is not conservative, it is included in the Q i {\displaystyle Q_{i}} term of Lagrange's equations, d d t ∂ L ∂ q i ˙ − ∂ L ∂ q i = Q i {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q_{i}}}}}-{\frac {\partial L}{\partial q_{i}}}=Q_{i}} . Substituting this expression for the frictional force in generalized coordinates into the Euler–Lagrange equations gives d d t ( ∂ L ∂ q i ˙ ) − ∂ L ∂ q i = − ∂ R ∂ q ˙ i {\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial {\dot {q_{i}}}}}\right)-{\frac {\partial L}{\partial q_{i}}}=-{\frac {\partial R}{\partial {\dot {q}}_{i}}}} . 
Rayleigh writes the Lagrangian L {\displaystyle L} as kinetic energy T {\displaystyle T} minus potential energy V {\displaystyle V} , which yields Rayleigh's equation from 1873. d d t ( ∂ T ∂ q i ˙ ) − ∂ T ∂ q i + ∂ R ∂ q ˙ i + ∂ V ∂ q i = 0 {\displaystyle {\frac {d}{dt}}\left({\frac {\partial T}{\partial {\dot {q_{i}}}}}\right)-{\frac {\partial T}{\partial q_{i}}}+{\frac {\partial R}{\partial {\dot {q}}_{i}}}+{\frac {\partial V}{\partial q_{i}}}=0} . Since the 1970s, the name Rayleigh dissipation potential for R {\displaystyle R} has become more common. Moreover, the original theory is generalized from quadratic functions q ↦ R ( q ˙ ) = 1 2 q ˙ ⋅ V q ˙ {\displaystyle q\mapsto R({\dot {q}})={\frac {1}{2}}{\dot {q}}\cdot \mathbb {V} {\dot {q}}} to dissipation potentials that depend on q {\displaystyle q} (then called state dependence) and are non-quadratic, which leads to nonlinear friction laws such as those in Coulomb friction or in plasticity. The main assumption is then that the mapping q ˙ ↦ R ( q , q ˙ ) {\displaystyle {\dot {q}}\mapsto R(q,{\dot {q}})} is convex and satisfies 0 = R ( q , 0 ) ≤ R ( q , q ˙ ) {\displaystyle 0=R(q,0)\leq R(q,{\dot {q}})} . == References ==
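For a concrete check, consider a one-dimensional damped oscillator with illustrative parameters m, k, c (values chosen arbitrarily, not from any source). The formalism above gives the equation of motion m·a = −kx − c·v, and integrating it shows the mechanical energy decreasing at the rate 2R = c·v²:

```python
# Damped oscillator sketch: L = m*v**2/2 - k*x**2/2, dissipation R = c*v**2/2.
# The Euler-Lagrange equation with -dR/dv on the right gives m*a = -k*x - c*v.
m, k, c = 1.0, 4.0, 0.3        # illustrative parameters
dt, n = 1e-4, 100_000          # time step and number of steps (10 s total)
x, v = 1.0, 0.0

E0 = 0.5 * m * v**2 + 0.5 * k * x**2   # initial mechanical energy
dissipated = 0.0
for _ in range(n):
    a = (-k * x - c * v) / m           # equation of motion from the formalism
    dissipated += c * v**2 * dt        # accumulate 2R dt (rate of energy loss)
    v += a * dt                        # semi-implicit Euler step
    x += v * dt
E1 = 0.5 * m * v**2 + 0.5 * k * x**2

# Energy balance: the energy lost equals the time integral of 2R,
# up to the integration error of the scheme.
assert E1 < E0
assert abs((E0 - E1) - dissipated) < 1e-2
```

This confirms numerically that R is half the rate of energy dissipation, as stated above.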
Wikipedia:Rayleigh quotient#0
In mathematics, the Rayleigh quotient for a given complex Hermitian matrix M {\displaystyle M} and nonzero vector x {\displaystyle x} is defined as: R ( M , x ) = x ∗ M x x ∗ x . {\displaystyle R(M,x)={x^{*}Mx \over x^{*}x}.} For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose x ∗ {\displaystyle x^{*}} to the usual transpose x ′ {\displaystyle x'} . Note that R ( M , c x ) = R ( M , x ) {\displaystyle R(M,cx)=R(M,x)} for any non-zero scalar c {\displaystyle c} . Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value λ min {\displaystyle \lambda _{\min }} (the smallest eigenvalue of M {\displaystyle M} ) when x {\displaystyle x} is v min {\displaystyle v_{\min }} (the corresponding eigenvector). Similarly, R ( M , x ) ≤ λ max {\displaystyle R(M,x)\leq \lambda _{\max }} and R ( M , v max ) = λ max {\displaystyle R(M,v_{\max })=\lambda _{\max }} . The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation. The range of the Rayleigh quotient (for any matrix, not necessarily Hermitian) is called a numerical range and contains its spectrum. When the matrix is Hermitian, the numerical radius is equal to the spectral norm. In functional analysis, λ max {\displaystyle \lambda _{\max }} is known as the spectral radius. In the context of C ⋆ {\displaystyle C^{\star }} -algebras or algebraic quantum mechanics, the function that associates to M {\displaystyle M} the Rayleigh–Ritz quotient R ( M , x ) {\displaystyle R(M,x)} , for a fixed x {\displaystyle x} and M {\displaystyle M} varying through the algebra, is referred to as a vector state of the algebra. 
In quantum mechanics, the Rayleigh quotient gives the expectation value of the observable corresponding to the operator M {\displaystyle M} for a system whose state is given by x {\displaystyle x} . If we fix the complex matrix M {\displaystyle M} , then the resulting Rayleigh quotient map (considered as a function of x {\displaystyle x} ) completely determines M {\displaystyle M} via the polarization identity; indeed, this remains true even if we allow M {\displaystyle M} to be non-Hermitian. However, if we restrict the field of scalars to the real numbers, then the Rayleigh quotient only determines the symmetric part of M {\displaystyle M} . == Bounds for Hermitian M == As stated in the introduction, for any vector x, one has R ( M , x ) ∈ [ λ min , λ max ] {\displaystyle R(M,x)\in \left[\lambda _{\min },\lambda _{\max }\right]} , where λ min , λ max {\displaystyle \lambda _{\min },\lambda _{\max }} are respectively the smallest and largest eigenvalues of M {\displaystyle M} . This is immediate after observing that the Rayleigh quotient is a weighted average of eigenvalues of M: R ( M , x ) = x ∗ M x x ∗ x = ∑ i = 1 n λ i y i 2 ∑ i = 1 n y i 2 {\displaystyle R(M,x)={x^{*}Mx \over x^{*}x}={\frac {\sum _{i=1}^{n}\lambda _{i}y_{i}^{2}}{\sum _{i=1}^{n}y_{i}^{2}}}} where ( λ i , v i ) {\displaystyle (\lambda _{i},v_{i})} is the i {\displaystyle i} -th eigenpair after orthonormalization and y i = v i ∗ x {\displaystyle y_{i}=v_{i}^{*}x} is the i {\displaystyle i} th coordinate of x in the eigenbasis. It is then easy to verify that the bounds are attained at the corresponding eigenvectors v min , v max {\displaystyle v_{\min },v_{\max }} . The fact that the quotient is a weighted average of the eigenvalues can be used to identify the second, the third, ... largest eigenvalues. Let λ max = λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n = λ min {\displaystyle \lambda _{\max }=\lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}=\lambda _{\min }} be the eigenvalues in decreasing order. 
If n = 2 {\displaystyle n=2} and x {\displaystyle x} is constrained to be orthogonal to v 1 {\displaystyle v_{1}} , in which case y 1 = v 1 ∗ x = 0 {\displaystyle y_{1}=v_{1}^{*}x=0} , then R ( M , x ) {\displaystyle R(M,x)} has maximum value λ 2 {\displaystyle \lambda _{2}} , which is achieved when x = v 2 {\displaystyle x=v_{2}} . == Special case of covariance matrices == An empirical covariance matrix M {\displaystyle M} can be represented as the product A ′ A {\displaystyle A'A} of the data matrix A {\displaystyle A} pre-multiplied by its transpose A ′ {\displaystyle A'} . Being a positive semi-definite matrix, M {\displaystyle M} has non-negative eigenvalues, and orthogonal (or orthogonalisable) eigenvectors, which can be demonstrated as follows. Firstly, that the eigenvalues λ i {\displaystyle \lambda _{i}} are non-negative: M v i = A ′ A v i = λ i v i ⇒ v i ′ A ′ A v i = v i ′ λ i v i ⇒ ‖ A v i ‖ 2 = λ i ‖ v i ‖ 2 ⇒ λ i = ‖ A v i ‖ 2 ‖ v i ‖ 2 ≥ 0. {\displaystyle {\begin{aligned}&Mv_{i}=A'Av_{i}=\lambda _{i}v_{i}\\\Rightarrow {}&v_{i}'A'Av_{i}=v_{i}'\lambda _{i}v_{i}\\\Rightarrow {}&\left\|Av_{i}\right\|^{2}=\lambda _{i}\left\|v_{i}\right\|^{2}\\\Rightarrow {}&\lambda _{i}={\frac {\left\|Av_{i}\right\|^{2}}{\left\|v_{i}\right\|^{2}}}\geq 0.\end{aligned}}} Secondly, that the eigenvectors v i {\displaystyle v_{i}} are orthogonal to one another: M v i = λ i v i ⇒ v j ′ M v i = v j ′ λ i v i ⇒ ( M v j ) ′ v i = λ j v j ′ v i ⇒ λ j v j ′ v i = λ i v j ′ v i ⇒ ( λ j − λ i ) v j ′ v i = 0 ⇒ v j ′ v i = 0 {\displaystyle {\begin{aligned}&Mv_{i}=\lambda _{i}v_{i}\\\Rightarrow {}&v_{j}'Mv_{i}=v_{j}'\lambda _{i}v_{i}\\\Rightarrow {}&\left(Mv_{j}\right)'v_{i}=\lambda _{j}v_{j}'v_{i}\\\Rightarrow {}&\lambda _{j}v_{j}'v_{i}=\lambda _{i}v_{j}'v_{i}\\\Rightarrow {}&\left(\lambda _{j}-\lambda _{i}\right)v_{j}'v_{i}=0\\\Rightarrow {}&v_{j}'v_{i}=0\end{aligned}}} if the eigenvalues are different – in the case of multiplicity, the basis can be orthogonalized. 
To now establish that the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue, consider decomposing an arbitrary vector x {\displaystyle x} on the basis of the eigenvectors v i {\displaystyle v_{i}} : x = ∑ i = 1 n α i v i , {\displaystyle x=\sum _{i=1}^{n}\alpha _{i}v_{i},} where α i = x ′ v i v i ′ v i = ⟨ x , v i ⟩ ‖ v i ‖ 2 {\displaystyle \alpha _{i}={\frac {x'v_{i}}{v_{i}'v_{i}}}={\frac {\langle x,v_{i}\rangle }{\left\|v_{i}\right\|^{2}}}} is the coordinate of x {\displaystyle x} orthogonally projected onto v i {\displaystyle v_{i}} . Therefore, we have: R ( M , x ) = x ′ A ′ A x x ′ x = ( ∑ j = 1 n α j v j ) ′ ( A ′ A ) ( ∑ i = 1 n α i v i ) ( ∑ j = 1 n α j v j ) ′ ( ∑ i = 1 n α i v i ) = ( ∑ j = 1 n α j v j ) ′ ( ∑ i = 1 n α i ( A ′ A ) v i ) ( ∑ i = 1 n α i 2 v i ′ v i ) = ( ∑ j = 1 n α j v j ) ′ ( ∑ i = 1 n α i λ i v i ) ( ∑ i = 1 n α i 2 ‖ v i ‖ 2 ) {\displaystyle {\begin{aligned}R(M,x)&={\frac {x'A'Ax}{x'x}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'\left(A'A\right){\Bigl (}\sum _{i=1}^{n}\alpha _{i}v_{i}{\Bigr )}}{{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}v_{i}{\Bigr )}}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}(A'A)v_{i}{\Bigr )}}{{\Bigl (}\sum _{i=1}^{n}\alpha _{i}^{2}{v_{i}}'{v_{i}}{\Bigr )}}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}\lambda _{i}v_{i}{\Bigr )}}{{\Bigl (}\sum _{i=1}^{n}\alpha _{i}^{2}\|{v_{i}}\|^{2}{\Bigr )}}}\end{aligned}}} which, by orthonormality of the eigenvectors, becomes: R ( M , x ) = ∑ i = 1 n α i 2 λ i ∑ i = 1 n α i 2 = ∑ i = 1 n λ i ( x ′ v i ) 2 ( x ′ x ) ( v i ′ v i ) 2 = ∑ i = 1 n λ i ( x ′ v i ) 2 ( x ′ x ) {\displaystyle {\begin{aligned}R(M,x)&={\frac {\sum _{i=1}^{n}\alpha _{i}^{2}\lambda _{i}}{\sum _{i=1}^{n}\alpha _{i}^{2}}}\\&=\sum _{i=1}^{n}\lambda _{i}{\frac {(x'v_{i})^{2}}{(x'x)(v_{i}'v_{i})^{2}}}\\&=\sum 
_{i=1}^{n}\lambda _{i}{\frac {(x'v_{i})^{2}}{(x'x)}}\end{aligned}}} The last representation establishes that the Rayleigh quotient is the sum of the squared cosines of the angles formed by the vector x {\displaystyle x} and each eigenvector v i {\displaystyle v_{i}} , weighted by corresponding eigenvalues. If a vector x {\displaystyle x} maximizes R ( M , x ) {\displaystyle R(M,x)} , then any non-zero scalar multiple k x {\displaystyle kx} also maximizes R {\displaystyle R} , so the problem can be reduced to the Lagrange problem of maximizing ∑ i = 1 n α i 2 λ i {\textstyle \sum _{i=1}^{n}\alpha _{i}^{2}\lambda _{i}} under the constraint that ∑ i = 1 n α i 2 = 1 {\textstyle \sum _{i=1}^{n}\alpha _{i}^{2}=1} . Define: β i = α i 2 {\displaystyle \beta _{i}=\alpha _{i}^{2}} . This then becomes a linear program, which always attains its maximum at one of the corners of the domain. A maximum point will have α 1 = ± 1 {\displaystyle \alpha _{1}=\pm 1} and α i = 0 {\displaystyle \alpha _{i}=0} for all i > 1 {\displaystyle i>1} (when the eigenvalues are ordered by decreasing magnitude). Thus, the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue. === Formulation using Lagrange multipliers === Alternatively, this result can be arrived at by the method of Lagrange multipliers. The first part is to show that the quotient is constant under scaling x → c x {\displaystyle x\to cx} , where c {\displaystyle c} is a scalar R ( M , c x ) = ( c x ) ∗ M c x ( c x ) ∗ c x = c ∗ c c ∗ c x ∗ M x x ∗ x = R ( M , x ) . {\displaystyle R(M,cx)={\frac {(cx)^{*}Mcx}{(cx)^{*}cx}}={\frac {c^{*}c}{c^{*}c}}{\frac {x^{*}Mx}{x^{*}x}}=R(M,x).} Because of this invariance, it is sufficient to study the special case ‖ x ‖ 2 = x T x = 1 {\displaystyle \|x\|^{2}=x^{T}x=1} . The problem is then to find the critical points of the function R ( M , x ) = x T M x , {\displaystyle R(M,x)=x^{\mathsf {T}}Mx,} subject to the constraint ‖ x ‖ 2 = x T x = 1. 
{\displaystyle \|x\|^{2}=x^{T}x=1.} In other words, it is to find the critical points of L ( x ) = x T M x − λ ( x T x − 1 ) , {\displaystyle {\mathcal {L}}(x)=x^{\mathsf {T}}Mx-\lambda \left(x^{\mathsf {T}}x-1\right),} where λ {\displaystyle \lambda } is a Lagrange multiplier. The stationary points of L ( x ) {\displaystyle {\mathcal {L}}(x)} occur at d L ( x ) d x = 0 ⇒ 2 x T M − 2 λ x T = 0 ⇒ 2 M x − 2 λ x = 0 (taking the transpose of both sides and noting that M is Hermitian) ⇒ M x = λ x {\displaystyle {\begin{aligned}&{\frac {d{\mathcal {L}}(x)}{dx}}=0\\\Rightarrow {}&2x^{\mathsf {T}}M-2\lambda x^{\mathsf {T}}=0\\\Rightarrow {}&2Mx-2\lambda x=0{\text{ (taking the transpose of both sides and noting that }}M{\text{ is Hermitian)}}\\\Rightarrow {}&Mx=\lambda x\end{aligned}}} and ∴ R ( M , x ) = x T M x x T x = λ x T x x T x = λ . {\displaystyle \therefore R(M,x)={\frac {x^{\mathsf {T}}Mx}{x^{\mathsf {T}}x}}=\lambda {\frac {x^{\mathsf {T}}x}{x^{\mathsf {T}}x}}=\lambda .} Therefore, the eigenvectors x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} of M {\displaystyle M} are the critical points of the Rayleigh quotient and their corresponding eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} are the stationary values of L {\displaystyle {\mathcal {L}}} . This property is the basis for principal components analysis and canonical correlation. == Use in Sturm–Liouville theory == Sturm–Liouville theory concerns the action of the linear operator L ( y ) = 1 w ( x ) ( − d d x [ p ( x ) d y d x ] + q ( x ) y ) {\displaystyle L(y)={\frac {1}{w(x)}}\left(-{\frac {d}{dx}}\left[p(x){\frac {dy}{dx}}\right]+q(x)y\right)} on the inner product space defined by ⟨ y 1 , y 2 ⟩ = ∫ a b w ( x ) y 1 ( x ) y 2 ( x ) d x {\displaystyle \langle {y_{1},y_{2}}\rangle =\int _{a}^{b}w(x)y_{1}(x)y_{2}(x)\,dx} of functions satisfying some specified boundary conditions at a and b. 
In this case the Rayleigh quotient is ⟨ y , L y ⟩ ⟨ y , y ⟩ = ∫ a b y ( x ) ( − d d x [ p ( x ) d y d x ] + q ( x ) y ( x ) ) d x ∫ a b w ( x ) y ( x ) 2 d x . {\displaystyle {\frac {\langle {y,Ly}\rangle }{\langle {y,y}\rangle }}={\frac {\int _{a}^{b}y(x)\left(-{\frac {d}{dx}}\left[p(x){\frac {dy}{dx}}\right]+q(x)y(x)\right)dx}{\int _{a}^{b}{w(x)y(x)^{2}}dx}}.} This is sometimes presented in an equivalent form, obtained by separating the integral in the numerator and using integration by parts: ⟨ y , L y ⟩ ⟨ y , y ⟩ = { ∫ a b y ( x ) ( − d d x [ p ( x ) y ′ ( x ) ] ) d x } + { ∫ a b q ( x ) y ( x ) 2 d x } ∫ a b w ( x ) y ( x ) 2 d x = { − y ( x ) [ p ( x ) y ′ ( x ) ] | a b } + { ∫ a b y ′ ( x ) [ p ( x ) y ′ ( x ) ] d x } + { ∫ a b q ( x ) y ( x ) 2 d x } ∫ a b w ( x ) y ( x ) 2 d x = { − p ( x ) y ( x ) y ′ ( x ) | a b } + { ∫ a b [ p ( x ) y ′ ( x ) 2 + q ( x ) y ( x ) 2 ] d x } ∫ a b w ( x ) y ( x ) 2 d x . {\displaystyle {\begin{aligned}{\frac {\langle {y,Ly}\rangle }{\langle {y,y}\rangle }}&={\frac {\left\{\int _{a}^{b}y(x)\left(-{\frac {d}{dx}}\left[p(x)y'(x)\right]\right)dx\right\}+\left\{\int _{a}^{b}{q(x)y(x)^{2}}\,dx\right\}}{\int _{a}^{b}{w(x)y(x)^{2}}\,dx}}\\&={\frac {\left\{\left.-y(x)\left[p(x)y'(x)\right]\right|_{a}^{b}\right\}+\left\{\int _{a}^{b}y'(x)\left[p(x)y'(x)\right]\,dx\right\}+\left\{\int _{a}^{b}{q(x)y(x)^{2}}\,dx\right\}}{\int _{a}^{b}w(x)y(x)^{2}\,dx}}\\&={\frac {\left\{\left.-p(x)y(x)y'(x)\right|_{a}^{b}\right\}+\left\{\int _{a}^{b}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx\right\}}{\int _{a}^{b}{w(x)y(x)^{2}}\,dx}}.\end{aligned}}} == Generalizations == For a given pair (A, B) of matrices, and a given non-zero vector x, the generalized Rayleigh quotient is defined as: R ( A , B ; x ) := x ∗ A x x ∗ B x . 
{\displaystyle R(A,B;x):={\frac {x^{*}Ax}{x^{*}Bx}}.} The generalized Rayleigh quotient can be reduced to the Rayleigh quotient R ( D , C ∗ x ) {\displaystyle R(D,C^{*}x)} through the transformation D = C − 1 A C ∗ − 1 {\displaystyle D=C^{-1}A{C^{*}}^{-1}} where C C ∗ {\displaystyle CC^{*}} is the Cholesky decomposition of the Hermitian positive-definite matrix B. For a given pair (x, y) of non-zero vectors, and a given Hermitian matrix H, the generalized Rayleigh quotient or sometimes two-sided Rayleigh quotient can be defined as: R ( H ; x , y ) := y ∗ H x y ∗ y ⋅ x ∗ x {\displaystyle R(H;x,y):={\frac {y^{*}Hx}{\sqrt {y^{*}y\cdot x^{*}x}}}} which coincides with R(H,x) when x = y. In quantum mechanics, this quantity is called a "matrix element" or sometimes a "transition amplitude". == See also == Numerical range Courant minimax principle Min-max theorem Rayleigh's quotient in vibrations analysis Dirichlet eigenvalue == References == == Further reading == Shi Yu, Léon-Charles Tranchevent, Bart Moor, Yves Moreau, Kernel-based Data Fusion for Machine Learning: Methods and Applications in Bioinformatics and Text Mining, Ch. 2, Springer, 2011.
Wikipedia:Rayleigh theorem for eigenvalues#0
In mathematics, the Rayleigh theorem for eigenvalues pertains to the behavior of the solutions of an eigenvalue equation as the number of basis functions employed in its resolution increases. Rayleigh, Lord Rayleigh, and 3rd Baron Rayleigh are the titles of John William Strutt, who inherited the barony upon the death of his father, the 2nd Baron Rayleigh. Lord Rayleigh made contributions to both theoretical and experimental physics, and also to applied mathematics. The Rayleigh theorem for eigenvalues, as discussed below, enables the energy minimization that is required in many self-consistent calculations of electronic and related properties of materials, from atoms, molecules, and nanostructures to semiconductors, insulators, and metals. Except for metals, most of these other materials have an energy or a band gap, i.e., the difference between the lowest, unoccupied energy and the highest, occupied energy. For crystals, the energy spectrum is in bands, and any gap between the bands is referred to as a band gap rather than an energy gap. Given the diverse contributions of Lord Rayleigh, his name is associated with other theorems, including Parseval's theorem. For this reason, keeping the full name of "Rayleigh Theorem for Eigenvalues" avoids confusion. == Statement of the theorem == The theorem, as indicated above, applies to the resolution of equations called eigenvalue equations, i.e., those of the form HΨ = λΨ, where H is an operator, Ψ is a function and λ is a number called the eigenvalue. To solve problems of this type, we expand the unknown function Ψ in terms of known functions. The number of these known functions is the size of the basis set. The expansion coefficients are also numbers. The number of known functions included in the expansion, the same as that of the coefficients, is the dimension of the Hamiltonian matrix that will be generated. The statement of the theorem follows. Let an eigenvalue equation be solved by linearly expanding the unknown function in terms of N known functions.
Let the resulting eigenvalues be ordered from the smallest (lowest), λ1, to the largest (highest), λN. Let the same eigenvalue equation be solved using a basis set of dimension N + 1 that comprises the previous N functions plus an additional one. Let the resulting eigenvalues be ordered from the smallest, λ′1, to the largest, λ′N+1. Then, the Rayleigh theorem for eigenvalues states that λ′i ≤ λi for i = 1 to N. A subtle point about the above statement is that the smaller of the two sets of functions must be a subset of the larger one. The above inequality does not hold otherwise. == Self-consistent calculations == In quantum mechanics, where the operator H is the Hamiltonian, the lowest eigenvalues are occupied (by electrons) up to the applicable number of electrons; the remaining eigenvalues, not occupied by electrons, are empty energy levels. The energy content of the Hamiltonian is the sum of the occupied eigenvalues. The Rayleigh theorem for eigenvalues is extensively utilized in calculations of electronic and related properties of materials. The electronic energies of materials are obtained through calculations said to be self-consistent, as explained below. In density functional theory (DFT) calculations of electronic energies of materials, the eigenvalue equation, HΨ = λΨ, has a companion equation that gives the electronic charge density of the material in terms of the wave functions of the occupied energies. To be reliable, these calculations have to be self-consistent. The process of obtaining the electronic energies of a material begins with the selection of an initial set of known functions (and related coefficients) in terms of which one expands the unknown function Ψ. Using the known functions for the occupied states, one constructs an initial charge density for the material. For density functional theory calculations, once the charge density is known, the potential, the Hamiltonian, and the eigenvalue equation are generated.
Solving this equation leads to eigenvalues (occupied or unoccupied) and their corresponding wave functions (in terms of the known functions and new coefficients of expansion). Using only the new wave functions of the occupied energies, one repeats the cycle of constructing the charge density and of generating the potential and the Hamiltonian. Then, using all the new wave functions (for occupied and empty states), one regenerates the eigenvalue equation and solves it. Each one of these cycles is called an iteration. The calculations are complete when the difference between the potentials generated in Iteration n + 1 and the one immediately preceding it (i.e., n) is 10−5 or less. The iterations are then said to have converged, and the outcomes of the last iteration are the self-consistent results that are reliable. == The basis set conundrum of self-consistent calculations == The characteristics and number of the known functions utilized in the expansion of Ψ naturally have a bearing on the quality of the final, self-consistent results. The selection of atomic orbitals that include exponential or Gaussian functions, in addition to the polynomial and angular features that apply, practically ensures the high quality of self-consistent results, except for the effects of the size and of attendant characteristics (features) of the basis set. These characteristics include the polynomial and angular functions that are inherent to the description of s, p, d, and f states for an atom. While the s functions are spherically symmetric, the others are not; they are often called polarization orbitals or functions. The conundrum is the following. Density functional theory is for the description of the ground state of materials, i.e., the state of lowest energy.
The second theorem of DFT states that the energy functional for the Hamiltonian [i.e., the energy content of the Hamiltonian] reaches its minimum value (i.e., the ground state) if the charge density employed in the calculation is that of the ground state. We described above the selection of an initial basis set in order to perform self-consistent calculations. A priori, there is no known mechanism for selecting a single basis set so that, after self-consistency, the charge density it generates is that of the ground state. Self-consistency with a given basis set leads to the reliable energy content of the Hamiltonian for that basis set. As per the Rayleigh theorem for eigenvalues, upon augmenting that initial basis set, the ensuing self-consistent calculations lead to an energy content of the Hamiltonian that is lower than or equal to that obtained with the initial basis set. We recall that the reliable, self-consistent energy content of the Hamiltonian obtained with a basis set, after self-consistency, is relative to that basis set. A larger basis set that contains the first one generally leads to self-consistent eigenvalues that are lower than or equal to their corresponding values from the previous calculation. One may paraphrase the issue as follows. Several basis sets of different sizes, upon the attainment of self-consistency, lead to stationary (converged) solutions. There exists an infinite number of such stationary solutions. The conundrum stems from the fact that, a priori, one has no means to determine which basis set, if any, leads, after self-consistency, to the ground state charge density of the material and, according to the second DFT theorem, to the ground state energy of the material under study.
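The eigenvalue lowering guaranteed by the Rayleigh theorem can be illustrated with a small matrix model. Assuming an orthonormal basis, solving the eigenvalue equation with the first N basis functions amounts to diagonalizing the N × N leading principal submatrix of the full Hamiltonian matrix, so augmenting the basis by one function enlarges that submatrix by one row and column (the matrix below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
H = (B + B.T) / 2                 # an illustrative symmetric Hamiltonian matrix

# With an orthonormal basis, a basis of size N corresponds to the
# N x N leading principal submatrix of H.
for N in range(1, 6):
    lam_N = np.linalg.eigvalsh(H[:N, :N])            # N basis functions, ascending
    lam_N1 = np.linalg.eigvalsh(H[:N + 1, :N + 1])   # same basis plus one function
    # Rayleigh theorem: each of the first N eigenvalues can only decrease.
    assert np.all(lam_N1[:N] <= lam_N + 1e-12)
```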
== Resolution of the basis set conundrum with the Rayleigh theorem for eigenvalues == Let us first recall that a self-consistent density functional theory calculation, with a single basis set, produces a stationary solution which cannot be claimed to be that of the ground state. To find the DFT ground state of a material, one has to vary the basis set (in size and attendant features) in order to minimize the energy content of the Hamiltonian, while keeping the number of particles constant. Hohenberg and Kohn specifically stated that the energy content of the Hamiltonian "has a minimum at the 'correct' ground state Ψ, relative to arbitrary variations of Ψ′ in which the total number of particles is kept constant." Hence, the trial basis set is to be varied in order to minimize the energy. The Rayleigh theorem for eigenvalues shows how to perform such a minimization with successive augmentation of the basis set. The first trial basis set has to be a small one that accounts for all the electrons in the system. After performing a self-consistent calculation (following many iterations) with this initial basis set, one augments it with one atomic orbital. Depending on the s, p, d, or f character of this orbital, the size of the new basis set (and the dimension of the Hamiltonian matrix) will be larger than that of the initial one by 2, 6, 10, or 14, respectively, taking the spin into account. Given that the initial, trial basis set was deliberately selected to be small, the resulting self-consistent results cannot be assumed to describe the ground state of the material. Upon performing self-consistent calculations with the augmented basis set, one compares the occupied energies from Calculations I and II, after setting the Fermi level to zero. Invariably, the occupied energies from Calculation II are lower than or equal to their corresponding values from Calculation I.
Naturally, one cannot affirm that the results from Calculation II describe the ground state of the material, given the absence of any proof that the occupied energies cannot be lowered further. Hence, one continues the process of augmenting the basis set with one orbital and of performing the next self-consistent calculation. The process is complete when three consecutive calculations yield the same occupied energies. One can affirm that the occupied energies from these three calculations represent the ground state of the material. Indeed, while two consecutive calculations can produce the same occupied energies, these energies may correspond to a local minimum of the energy content of the Hamiltonian as opposed to the absolute minimum. To have three consecutive calculations produce the same occupied energies is the robust criterion for the attainment of the ground state of a material (i.e., the state where the occupied energies have their absolute minimal values). This paragraph described how successive augmentation of the basis set solves one aspect of the conundrum, i.e., a generalized minimization of the energy content of the Hamiltonian to reach the ground state of the system under study. Even though the paragraph above shows how the Rayleigh theorem enables the generalized minimization of the energy content of the Hamiltonian to reach the ground state, we are still left with the fact that three different calculations produced this ground state. Let the respective numbers of these calculations be N, (N+1), and (N+2). While the occupied energies from these calculations are the same (i.e., the ground state), the unoccupied energies are not identical. Indeed, the general trend is that the unoccupied energies from the calculations are in the reverse order of the sizes of the basis sets for these calculations.
In other words, for a given unoccupied eigenvalue (say the lowest one of the unoccupied energies), the result from Calculation (N+2) is smaller than or equal to that from Calculation (N+1). The latter, in turn, is smaller than or equal to the result from Calculation N. In the case of semiconductors, the lowest-lying unoccupied energies from the three calculations are generally the same, up to 6 to 10 eV or above, depending on the material, if the sizes of the basis sets of the three calculations are not vastly different. Still, for higher, unoccupied energies, the Rayleigh theorem for eigenvalues applies. This paragraph poses the question as to which one of the three, consecutive, self-consistent calculations leading to the ground state energy provides the true DFT description of the material – given the differences between some of their unoccupied energies. There are two distinct ways of determining the calculation providing the DFT description of the material. The first one starts by recalling that self-consistency requires the performance of iterations to obtain the reliable energy; the number of iterations may vary with the size of the basis set. With the generalized minimization made possible by the Rayleigh theorem, with successively augmented size and attendant features (i.e., polynomial and angular ones) of the basis set, the Hamiltonian changes from one calculation to the next, up to Calculation N. Calculations N + 1 and N + 2 reproduce the result from Calculation N for the occupied energies. The charge density changes from one calculation to the next, up to Calculation N. Afterwards, it does not change in Calculations N + 1 and N + 2 or higher, nor does the Hamiltonian change from its value in Calculation N. When the Hamiltonian does not change, a change in an unoccupied eigenvalue cannot be due to a physical interaction. Therefore, any change of an unoccupied eigenvalue, from its value in Calculation N, is an artifact of the Rayleigh theorem for eigenvalues.
Calculation N is therefore the only one that provides the DFT description of the material. The second way of determining the calculation that provides the DFT description of the material follows. The first DFT theorem states that the external potential is a unique functional of the charge density, except for an additive constant. The first corollary of this theorem is that the energy content of the Hamiltonian is also a unique functional of the charge density. The second corollary to the first DFT theorem is that the spectrum of the Hamiltonian is a unique functional of the charge density. Consequently, given that the charge density and the Hamiltonian do not change from their respective values in Calculation N, following an augmentation of the basis set, any unoccupied eigenvalue, obtained in Calculations N + 1, N + 2, or higher, that is different from (lower than) its corresponding value in Calculation N no longer belongs to the physically meaningful spectrum of the Hamiltonian, a unique functional of the charge density, given by the output of Calculation N. Hence, Calculation N is the one whose outputs possess the full, physical content of DFT; this Calculation N provides the DFT solution. The value of the above determination of the physically meaningful calculation is that it avoids the consideration of basis sets that are larger than that of Calculation N and are therefore over-complete for the description of the ground state of the material. In the current literature, the only calculations that have reproduced or predicted the correct, electronic properties of semiconductors have been the ones that (1) searched for and reached the true ground state of materials and (2) avoided the utilization of over-complete basis sets as described above.
These accurate DFT calculations did not invoke the self-interaction correction (SIC) or the derivative discontinuity employed extensively in the literature to explain the woeful underestimation of the band gaps of semiconductors and insulators. In light of the two points above, an alternative, plausible explanation of the energy and band gap underestimation in the literature is the use of over-complete basis sets that lead to an unphysical lowering of some unoccupied energies, including some of the lowest-lying ones. == References ==
Wikipedia:Rayleigh's quotient in vibrations analysis#0
The Rayleigh quotient represents a quick method to estimate the natural frequency of both discrete and continuous oscillating systems. ω n 2 ≈ V T ~ {\displaystyle \omega _{n}^{2}\approx {\frac {V}{\tilde {T}}}} where ω n {\displaystyle \omega _{n}} is the natural frequency of the nth mode, V {\displaystyle V} is the potential energy of the system and T ~ {\displaystyle {\tilde {T}}} is a property equivalent to the kinetic energy but with velocity replaced by position. == Discrete Systems == For a multi-degree-of-freedom vibration system, in which the mass and the stiffness matrices are known, the Rayleigh quotient can be derived starting from the equation of motion. The eigenvalue problem for a general system of the form M q ¨ ( t ) + C q ˙ ( t ) + K q ( t ) = Q ( t ) {\displaystyle M\,{\ddot {\textbf {q}}}(t)+C\,{\dot {\textbf {q}}}(t)+K\,{\textbf {q}}(t)={\textbf {Q}}(t)} in the absence of damping and external forces reduces to: M q ¨ ( t ) + K q ( t ) = 0 {\displaystyle M\,{\ddot {\textbf {q}}}(t)+K\,{\textbf {q}}(t)=0} The previous equation can also be written as follows: K u = ω 2 M u {\displaystyle K\,{\textbf {u}}=\omega ^{2}\,M\,{\textbf {u}}} where ω {\displaystyle \omega } represents the natural frequency and M and K are the real positive symmetric mass and stiffness matrices, respectively. For an N-degree-of-freedom system the equation has N solutions ω n 2 {\displaystyle \omega _{n}^{2}} , u n {\displaystyle {\textbf {u}}_{n}} for n = 1, 2, 3, ..., N.
By multiplying both sides of the equation by u n T {\displaystyle {\textbf {u}}_{n}^{T}} and dividing by the scalar u n T M u n {\displaystyle {\textbf {u}}_{n}^{T}\,M\,{\textbf {u}}_{n}} , it is possible to express the eigenvalue problem as follows: ω n 2 = u n T K u n u n T M u n {\displaystyle \omega _{n}^{2}={\frac {{\textbf {u}}_{n}^{T}\,K\,{\textbf {u}}_{n}}{{\textbf {u}}_{n}^{T}\,M\,{\textbf {u}}_{n}}}} In the previous equation it is also possible to observe that the numerator is proportional to the potential energy while the denominator depicts a measure of the kinetic energy. Moreover, the equation allows us to calculate the natural frequency only if the eigenvector (as well as any other displacement vector) u n {\displaystyle {\textbf {u}}_{n}} is known. For academic interest, if the modal vectors are not known, we can repeat the foregoing process but with ω 2 {\displaystyle \omega ^{2}} and u {\displaystyle {\textbf {u}}} taking the place of ω n 2 {\displaystyle \omega _{n}^{2}} and u n {\displaystyle {\textbf {u}}_{n}} , respectively. By doing so we obtain the scalar R ( u ) {\displaystyle R({\textbf {u}})} , also known as Rayleigh's quotient: R ( u ) = ω 2 = u T K u u T M u {\displaystyle R({\textbf {u}})=\omega ^{2}={\frac {{\textbf {u}}^{T}\,K\,{\textbf {u}}}{{\textbf {u}}^{T}\,M\,{\textbf {u}}}}} Therefore, the Rayleigh quotient is a scalar whose value depends on the vector u {\displaystyle {\textbf {u}}} , and it provides a good approximation to a natural frequency for any arbitrary vector u {\displaystyle {\textbf {u}}} that does not lie too far from one of the modal vectors u i {\displaystyle {\textbf {u}}_{i}} , i = 1,2,3,...,N. Indeed, if the vector u {\displaystyle {\textbf {u}}} differs from the modal vector u n {\displaystyle {\textbf {u}}_{n}} by a small quantity of first order, the Rayleigh quotient differs from the exact eigenvalue only by a quantity of second order; this insensitivity to errors in the trial vector is what makes the method very useful.
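The insensitivity of the quotient to small errors in the trial vector can be illustrated numerically: a first-order perturbation of a modal vector changes the quotient only at second order, so the error drops roughly a hundredfold when the perturbation drops tenfold (the 2-degree-of-freedom system below is an arbitrary illustrative choice):

```python
import numpy as np

# Illustrative mass and stiffness matrices.
M = np.diag([2.0, 1.0])
K = np.array([[ 4.0, -1.0],
              [-1.0,  2.0]])

# Exact lowest eigenpair of K u = w^2 M u (M is diagonal, so a similarity
# transform turns it into a standard symmetric eigenproblem).
Linv = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, V = np.linalg.eigh(Linv @ K @ Linv)
u1 = Linv @ V[:, 0]                      # exact lowest modal vector

def R(u):
    """Rayleigh quotient u'Ku / u'Mu."""
    return (u @ K @ u) / (u @ M @ u)

# A first-order perturbation of size eps in the trial vector changes
# the quotient only at second order.
d = np.array([1.0, -1.0])                # arbitrary perturbation direction
err_coarse = abs(R(u1 + 1e-2 * d) - w2[0])
err_fine = abs(R(u1 + 1e-3 * d) - w2[0])
assert err_fine < err_coarse / 10        # roughly quadratic decrease
```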
A good way to estimate the lowest modal vector ( u 1 ) {\displaystyle (u_{1})} , which generally works well for most structures (even though it is not guaranteed), is to take ( u 1 ) {\displaystyle (u_{1})} equal to the static displacement produced by an applied force that has the same relative distribution as the diagonal mass matrix terms. The latter can be elucidated by the following 3-DOF example. === Example === As an example, we can consider a 3-degree-of-freedom system in which the mass and the stiffness matrices are known as follows: M = [ 1 0 0 0 1 0 0 0 3 ] , K = [ 3 − 1 0 − 1 3 − 2 0 − 2 2 ] {\displaystyle M={\begin{bmatrix}1&0&0\\0&1&0\\0&0&3\end{bmatrix}}\;,\quad K={\begin{bmatrix}3&-1&0\\-1&3&-2\\0&-2&2\end{bmatrix}}} To get an estimation of the lowest natural frequency we choose a trial vector of static displacement obtained by loading the system with a force proportional to the masses: F = k [ m 1 m 2 m 3 ] = 1 [ 1 1 3 ] {\displaystyle {\textbf {F}}=k{\begin{bmatrix}m_{1}\\m_{2}\\m_{3}\end{bmatrix}}=1{\begin{bmatrix}1\\1\\3\end{bmatrix}}} Thus, the trial vector will become u = K − 1 F = [ 2.5 6.5 8 ] {\displaystyle {\textbf {u}}=K^{-1}{\textbf {F}}={\begin{bmatrix}2.5\\6.5\\8\end{bmatrix}}} which allows us to calculate the Rayleigh quotient: R = u T K u u T M u = ⋯ = 0.137214 {\displaystyle R={\frac {{\textbf {u}}^{T}\,K\,{\textbf {u}}}{{\textbf {u}}^{T}\,M\,{\textbf {u}}}}=\cdots =0.137214} Thus, the lowest natural frequency, calculated by means of the Rayleigh quotient, is: ω Ray = 0.370424 {\displaystyle \omega _{\text{Ray}}=0.370424} Using a computational tool, it is quick to verify how much this differs from the "real" one. In this case, using MATLAB, it has been calculated that the lowest natural frequency is: ω real = 0.369308 {\displaystyle \omega _{\text{real}}=0.369308} which corresponds to an error of 0.302315 % {\displaystyle 0.302315\%} for the Rayleigh approximation, a remarkable result.
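The worked example above can be reproduced with a short script; NumPy is used here for illustration, with the exact frequency obtained from the generalized eigenproblem rather than MATLAB:

```python
import numpy as np

M = np.diag([1.0, 1.0, 3.0])
K = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Trial vector: static displacement under a load proportional to the masses.
F = np.diag(M)                           # F = [1, 1, 3]
u = np.linalg.solve(K, F)                # u = [2.5, 6.5, 8.0]

# Rayleigh quotient and the estimated lowest natural frequency.
R = (u @ K @ u) / (u @ M @ u)            # 0.137214...
w_ray = np.sqrt(R)                       # 0.370424...

# Exact lowest frequency from K u = w^2 M u; since M is diagonal,
# the problem reduces to a standard symmetric eigenproblem.
Linv = np.diag(1.0 / np.sqrt(np.diag(M)))
w_exact = np.sqrt(np.linalg.eigvalsh(Linv @ K @ Linv)[0])  # 0.369308...

rel_error = (w_ray - w_exact) / w_exact  # about 0.30 %
```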
The example shows how the Rayleigh quotient is capable of giving an accurate estimation of the lowest natural frequency. The practice of using the static displacement vector as a trial vector is valid because the static displacement vector tends to resemble the lowest vibration mode. == Continuous systems == For continuous systems, the concept of mass and stiffness matrices does not apply, but it can be seen that the Rayleigh quotient is still the ratio of the potential energy to the "kinetic energy without the time derivatives": R = ω 2 = V T ~ {\displaystyle R=\omega ^{2}={\frac {V}{\tilde {T}}}} This follows from the fact that, for conservative systems, the maximum kinetic energy equals the maximum potential energy. For the case of a string of mass per unit length m under tension P: T ~ = 1 2 m ∫ y 2 d x {\displaystyle {\tilde {T}}={\frac {1}{2}}m\int y^{2}\,dx\quad } and V = 1 2 P ∫ ( ∂ y ∂ x ) 2 d x {\displaystyle \quad V={\frac {1}{2}}P\int \left({\frac {\partial y}{\partial x}}\right)^{2}\,dx} == References ==
Wikipedia:Raymond C. Archibald#0
Raymond Clare Archibald (7 October 1875 – 26 July 1955) was a prominent Canadian-American mathematician. He is known for his work as a historian of mathematics, his editorships of mathematical journals and his contributions to the teaching of mathematics. == Biography == Raymond Clare Archibald was born in South Branch, Stewiacke, Nova Scotia on 7 October 1875. He was the son of Abram Newcomb Archibald (1849–1883) and Mary Mellish Archibald (1849–1901). He was the fourth cousin twice removed of the famous Canadian-American astronomer and mathematician Simon Newcomb. Archibald graduated in 1894 from Mount Allison College with a B.A. degree in mathematics and a teacher's certificate in violin. After teaching mathematics and violin for a year at the Mount Allison Ladies' College, he went to Harvard, where he received a B.A. in 1896 and an M.A. in 1897. He then traveled to Europe, where he attended the Humboldt University of Berlin during 1898 and received a Ph.D. cum laude from the University of Strasbourg in 1900. His advisor was Karl Theodor Reye, and the title of his dissertation was The Cardioide and Some of its Related Curves. He returned to Canada in 1900 and taught mathematics and violin at the Mount Allison Ladies' College until 1907. After a one-year appointment at Acadia University, he accepted an invitation to join the mathematics department at Brown University. He stayed at Brown for the rest of his career, becoming a Professor Emeritus in 1943. While at Brown he created one of the finest mathematical libraries in the western hemisphere. Archibald returned to Mount Allison in 1954 to curate the Mary Mellish Archibald Memorial Library, the library he had founded in 1905 to honor his mother. At his death the library contained 23,000 volumes, 2,700 records, and 70,000 songs in American and English poetry and drama. Raymond Clare Archibald was a world-renowned historian of mathematics with a lifelong concern for the teaching of mathematics in secondary schools.
At the presentation of his portrait to Brown University the head of the mathematics department, Professor Clarence Raymond Adams said of him: "The instincts of the bibliophile were also his from early years. Possessing a passion for accurate detail, systematic by nature and blessed with a memory that was the marvel of his friends, he gradually acquired a knowledge of mathematical books and their values which has scarcely been equalled. This knowledge and an untiring energy he dedicated to the upbuilding of the mathematical library at Brown University. From modest beginnings he has developed this essential equipment of the mathematical investigator to a point where it has no superior, in completeness and in convenience for the user." == Honors == Archibald received honorary degrees from the University of Padua (LL.D., 1922), Mount Allison University (LL.D., 1923) and from Brown University (M.A. ad eundem, 1943). Fellow, American Association for the Advancement of Science (1906) Member, Deutsche Mathematiker-Vereinigung (1908) Member, Edinburgh Mathematical Society (1909) Member, Mathematical Association (England) (1910) Member, Société Mathématique de France (1911) Member, London Mathematical Society (1912) Charter Member, Mathematical Association of America (1916); elected president for 1922 Fellow, American Academy of Arts and Sciences (1917) Librarian, American Mathematical Society (1921-1941) Member, Circolo Matematico di Palermo (1922) Soci Fondatori, Unione Matematica Italiana (1924) Founding Member, History of Science Society (1924) Honorary Member, Society of Sciences, Cluj, Roumania (1929) Honorary Foreign Fellow, Masarykova Akademie Prace, Prague, Czecho-Slovakia (1930) Membre Effective, Académie Internationale d'Historie des Sciences (1931) Honorary Foreign Member, Polish Mathematical Society (1934) Honorary Member, New Brunswick Museum (1946) Honorary Member, Mathematical Association (England) (1949) == Editorships == Associate editor, Bulletin of the 
American Mathematical Society (1913–20) Editor-in-chief, American Mathematical Monthly (1919–21); associate editor (1918–19) Associate editor, Revue Semestrielle des Publications Mathématiques (1923–34) Associate editor, Isis (1924–48) Associate editor, Scripta Mathematica (1932–49) Founder and editor, Mathematical Tables and Other Aids to Computation (1943–49) Co-founder and editor, Eudemus == Bibliography == Archibald's bibliography contains over 1,000 entries. He contributed to over 20 different mathematical, scientific, educational and literary journals. The following are the books of which he is an author: Margaret Gordon, Lady Bannerman, Carlyle's First Love, John Lane, 1910, ISBN 9780659913456 Euclid's Book on Divisions of Figures: (Peri diairéseon biblion): with a restoration based on Woepcke's text and on the Practica geometriae of Leonardo Pisano, Cambridge University Press, 1916, ISBN 9780659914057 The Training of Teachers of Mathematics for the Secondary Schools of the Countries Represented in the International Commission on the Teaching of Mathematics, U.S. Government Printing Office, 1917 Benjamin Peirce, 1809–1880. Biographical Sketch and Bibliography, Mathematical Association of America, 1925 Bibliography of Egyptian and Babylonian Mathematics, Plandome Press, 1929 History of Mathematics, Mathematical Association of America, 1931 Outline of the History of Mathematics, The Lancaster Press, 1932 Bibliography of the Life and Works of Simon Newcomb, J. 
Hope & Sons, 1932 A Semicentennial History of the American Mathematical Society, American Mathematical Society, 1938, ISBN 9780821801185 Mary Mellish Archibald Memorial Library Guide for Students and Scholars, Mount Allison University, 1935–46 Mathematical Table Makers, Scripta Mathematica, 1948 Geometrical Constructions with a Ruler, Scripta Mathematica, 1950 Historical Notes on the Education of Women at Mount Allison, 1854–1954, Mount Allison University, 1954 Famous Problems of Elementary Geometry, Dover, 1955 Archibald, R. C. (1914). "Time as a Fourth Dimension" (PDF). Bulletin of the American Mathematical Society: 409–412. == Biographies == Biographisch-Literarisches Handwörterbuch zur Geschichte der Exacten Wissenschaften Enthaltend Nachweisungen über Lebensverhältnisse und Leistungen von Mathematikern, Astronomen, Physikern, Chemikern, Mineralogen, Geologen usw. aller Völker und Zeiten ("Poggendorff"), 1904/22 and 1923/31 American Men of Science, 1905 through 1955 The Canadian Men and Women of the Time, 1912 Who's Who in Science, International, 1913 Who's Who in America, 1914/15 through 1954/55 Who's Who, 1922 through 1955 Encyclopædia Britannica, 1929 Who's Who in American Education, 1935/36, with portrait The Compendium of American Genealogy, First Families of America, 1937 The Canadian Who's Who, 1937/38 through 1952/54 Who's Who in New England, 1916, 1938, 1948 The National Cyclopaedia of American Biography, 1938 Who's Who Among North American Authors, 1927/28 through 1936/40 Leaders in Education: A Biographical Directory, 1941 Directory of American Scholars. A Biographical Directory, 1942 Who's Who in the East, 1948 through 1953 World Biography, 1948 and 1954 The Author's & Writer's Who's Who, 1949 Who knows, and what, among authorities, experts, and the specially informed, 1949 The International Who is Who in Music, 1951 The New Century Cyclopedia of Names, 1954 Who Was Who. 1951–1960, 1964 Who Was Who in America. 1951–1960, 1964. 
Internationale Personalbibliographie, 1800–1943 Enciclopedia Universal Ilustrada Europeo-Americana, Madrid, 1905–1930 Internationale Bibliographie der Zeitschriftenliteratur aus allen Gebieten des Wissens A Bio-Bibliographical Finding List of Canadian Musicians Isis Cumulative Bibliography MacTutor Harvard College Class of 1896. Fiftieth Anniversary Report, 1946 == Further reading == Jim Tattersall and Shawnee McMurran, Raymond Clare Archibald: A Euterpean Historian of Mathematics, New England Math J., v. 36, n. 2, May 2004, pp. 31–47. Cheryl White Ennals, Raymond Clare Archibald – Collector: The Legacy of a Scholar's Labor of Love, in The Book Disease: Atlantic Provinces Book Collectors, ed. Eric L. Swanick, London: The Vine Press, 1996, pp. 99–117. == References == == External links == Works by Raymond Clare Archibald at Project Gutenberg Works by or about Raymond C. Archibald at the Internet Archive
Wikipedia:Raymond McLenaghan#0
Raymond George McLenaghan (born 14 April 1939) is a Canadian theoretical physicist and mathematician. With Carminati, he is known for Carminati–McLenaghan invariants. == Notes == == External links == Raymond McLenaghan at the Mathematics Genealogy Project
Wikipedia:Raymond Paley#0
Raymond Edward Alan Christopher Paley (7 January 1907 – 7 April 1933) was an English mathematician who made significant contributions to mathematical analysis before dying young in a skiing accident. == Life == Paley was born in Bournemouth, England, the son of an artillery officer who died of tuberculosis before Paley was born. He was educated at Eton College as a King's Scholar and at Trinity College, Cambridge. He became a wrangler in 1928, and with J. A. Todd, he was one of two winners of the 1930 Smith's Prize examination. He was elected a Research Fellow of Trinity College in 1930, edging out Todd for the position, and continued at Cambridge as a postgraduate student, advised by John Edensor Littlewood. After the 1931 return of G. H. Hardy to Cambridge, he participated in weekly joint seminars with the other students of Hardy and Littlewood. He traveled to the US in 1932 to work with Norbert Wiener at the Massachusetts Institute of Technology and with George Pólya at Princeton University, and as part of the same trip also planned to work with Lipót Fejér at a seminar in Chicago organized as part of the Century of Progress exposition. He was killed on 7 April 1933, on a skiing trip in the Canadian Rockies, by an avalanche on Deception Pass. Paley, born in 1907, was one of the greatest stars in pure mathematics in Britain, whose young genius frightened even Hardy. Had he lived, he might well have turned into another Littlewood: his 26 papers, written mostly in collaboration with Littlewood, Zygmund, Wiener and Ursell, opened new areas in analysis. == Contributions == Paley's contributions include the following. His mathematical research with Littlewood began in 1929, with his work towards a fellowship at Trinity, and Hardy writes that "Littlewood's influence dominates nearly all his earliest work". 
Their work became the foundation for Littlewood–Paley theory, an application of real-variable techniques in complex analysis.[a] The Walsh–Paley numeration, a standard method for indexing the Walsh functions, came from a 1932 suggestion of Paley.[b] Paley collaborated with Antoni Zygmund on Fourier series, continuing the work on this topic that he had already done with Littlewood. His work in this area also led to the Paley–Zygmund inequality in probability theory.[c] In a 1933 paper, he published the Paley construction for Hadamard matrices.[d] In the same paper, he first formulated the Hadamard conjecture on the sizes of matrices for which Hadamard matrices exist. The Paley graphs and Paley tournaments in graph theory are closely related, although they do not appear explicitly in this work. In the context of compressed sensing, frames (partial bases of Hilbert spaces) derived from this construction have been called "Paley equiangular tight frames". His collaboration with Norbert Wiener included the Paley–Wiener theorem in harmonic analysis. Paley was originally selected as the 1934 American Mathematical Society Colloquium Lecturer; after his death, Wiener replaced him as speaker, and spoke on their joint work, which was published as a book.[e] == Selected publications == For the short span of his research career, Paley was very productive; Hardy lists 26 of Paley's publications, and more were published posthumously. These publications include: == References ==
Wikipedia:Raúl Chávez Sarmiento#0
Raúl Arturo Chávez Sarmiento (born 24 October 1997) is a Peruvian child prodigy in mathematics. At the age of 11 years, 271 days, he won a bronze medal at the 2009 International Mathematical Olympiad, making him the second youngest medalist in IMO history, behind Terence Tao, who won a bronze medal in 1986 at the age of 10. He then won a silver medal at the 2010 IMO, a gold medal (6th ranked overall) at the 2011 IMO, and a silver medal again at the 2012 IMO. Chávez Sarmiento received his Ph.D. in 2024 from Harvard University with the thesis The Hilbert-Chow algebra of a proper surface and Grojnowski calculus. == See also == List of child prodigies List of International Mathematical Olympiad participants == References == == External links == Raúl Chávez Sarmiento's results at International Mathematical Olympiad
Wikipedia:Real Analysis Exchange#0
The Real Analysis Exchange (RAEX) is a biannual mathematics journal, publishing survey articles, research papers, and conference reports in real analysis and related topics. Its editor-in-chief is Paul D. Humke.
Wikipedia:Real coordinate space#0
In mathematics, the real coordinate space or real coordinate n-space, of dimension n, denoted Rn, is the set of all ordered n-tuples of real numbers, that is, the set of all sequences of n real numbers, also known as coordinate vectors. Special cases are called the real line R1, the real coordinate plane R2, and the real coordinate three-dimensional space R3. With component-wise addition and scalar multiplication, it is a real vector space. The coordinates over any basis of the elements of a real vector space form a real coordinate space of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension n, En (Euclidean line, E; Euclidean plane, E2; Euclidean three-dimensional space, E3) form a real coordinate space of dimension n. These one-to-one correspondences between vectors, points and coordinate vectors explain the names of coordinate space and coordinate vector. They allow the use of geometric terms and methods for studying real coordinate spaces, and, conversely, the use of methods of calculus in geometry. This approach to geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces, and computing with them. == Definition and structures == For any natural number n, the set Rn consists of all n-tuples of real numbers (R). It is called the "n-dimensional real space" or the "real n-space". An element of Rn is thus an n-tuple, written (x1, x2, …, xn), where each xi is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector-valued function are subsets of Rn for some n. The real n-space has several further properties, notably: With componentwise addition and scalar multiplication, it is a real vector space. Every n-dimensional real vector space is isomorphic to it. 
With the dot product (the sum of the term-by-term products of the components), it is an inner product space. Every n-dimensional real inner product space is isomorphic to it. Like every inner product space, it is a topological space, and a topological vector space. It is a Euclidean space and a real affine space, and every Euclidean or affine space is isomorphic to it. It is an analytic manifold, and can be considered as the prototype of all manifolds, as, by definition, a manifold is, near each point, isomorphic to an open subset of Rn. It is an algebraic variety, and every real algebraic variety is a subset of Rn. These properties and structures of Rn make it fundamental in almost all areas of mathematics and their application domains, such as statistics, probability theory, and many parts of physics. == The domain of a function of several variables == Any function f(x1, x2, ..., xn) of n real variables can be considered as a function on Rn (that is, with Rn as its domain). The use of the real n-space, instead of several variables considered separately, can simplify notation and suggest reasonable definitions. Consider, for n = 2, a function composition of the following form: F(t) = f(g1(t), g2(t)), where the functions g1 and g2 are continuous. If, for every x1 ∈ R, the function f(x1, ·) is continuous (in x2), and, for every x2 ∈ R, the function f(·, x2) is continuous (in x1), then F is not necessarily continuous. Continuity is a stronger condition: the continuity of f in the natural R2 topology (discussed below), also called multivariable continuity, is sufficient for continuity of the composition F. == Vector space == The coordinate space Rn forms an n-dimensional vector space over the field of real numbers with the addition of the structure of linearity, and is often still denoted Rn. 
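A standard counterexample, not spelled out in the text above but often used to illustrate this point, is f(x1, x2) = x1x2/(x1² + x2²) with f(0, 0) = 0: it is continuous in each variable separately, yet the composition F(t) = f(t, t) jumps at t = 0. A minimal sketch:

```python
def f(x1, x2):
    # Continuous in x1 for each fixed x2, and in x2 for each fixed x1,
    # but not jointly continuous at the origin.
    if x1 == 0 and x2 == 0:
        return 0.0
    return x1 * x2 / (x1 ** 2 + x2 ** 2)

# Along the continuous path g1(t) = g2(t) = t, the composition
# F(t) = f(t, t) equals 1/2 for every t != 0, while F(0) = 0.
F = lambda t: f(t, t)
```

Along either coordinate axis f is identically zero, so the separate continuity at the origin is easy to see, yet F is discontinuous there.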
The operations on Rn as a vector space are typically defined by x + y = (x1 + y1, x2 + y2, …, xn + yn) and αx = (αx1, αx2, …, αxn). The zero vector is given by 0 = (0, 0, …, 0), and the additive inverse of the vector x is given by −x = (−x1, −x2, …, −xn). This structure is important because any n-dimensional real vector space is isomorphic to the vector space Rn. === Matrix notation === In standard matrix notation, each element of Rn is typically written as a column vector with entries x1, x2, …, xn stacked vertically, and sometimes as a row vector (x1 x2 ⋯ xn). The coordinate space Rn may then be interpreted as the space of all n × 1 column vectors, or all 1 × n row vectors, with the ordinary matrix operations of addition and scalar multiplication. Linear transformations from Rn to Rm may then be written as m × n matrices which act on the elements of Rn via left multiplication (when the elements of Rn are column vectors) and on elements of Rm via right multiplication (when they are row vectors). The formula for left multiplication, a special case of matrix multiplication, is: (Ax)k = Σl=1..n Akl xl. Any linear transformation is a continuous function (see below). Also, a matrix defines an open map from Rn to Rm if and only if the rank of the matrix equals m. 
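The componentwise operations and the left-multiplication formula (Ax)k = Σl Akl xl transcribe directly into code. The following illustrative plain-Python sketch (names are ours, not from the article) represents vectors as lists and matrices as lists of rows:

```python
def vadd(x, y):
    # componentwise addition: x + y = (x1 + y1, ..., xn + yn)
    return [xi + yi for xi, yi in zip(x, y)]

def smul(a, x):
    # scalar multiplication: a*x = (a*x1, ..., a*xn)
    return [a * xi for xi in x]

def matvec(A, x):
    # left multiplication of a column vector: (A x)_k = sum_l A[k][l] * x[l]
    return [sum(a_kl * x_l for a_kl, x_l in zip(row, x)) for row in A]
```

For example, an m × n matrix A applied by matvec sends an element of Rn to an element of Rm, matching the description of linear transformations above.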
=== Standard basis === The coordinate space Rn comes with a standard basis: e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1). To see that this is a basis, note that an arbitrary vector in Rn can be written uniquely in the form x = Σi=1..n xi ei. == Geometric properties and uses == === Orientation === The fact that the real numbers, unlike many other fields, constitute an ordered field yields an orientation structure on Rn. Any full-rank linear map of Rn to itself either preserves or reverses orientation of the space, depending on the sign of the determinant of its matrix. If one permutes coordinates (or, in other words, elements of the basis), the resulting orientation will depend on the parity of the permutation. Diffeomorphisms of Rn or domains in it, by virtue of having nonzero Jacobian determinant, are likewise classified as orientation-preserving or orientation-reversing. This has important consequences for the theory of differential forms, whose applications include electrodynamics. Another manifestation of this structure is that the point reflection in Rn has different properties depending on the evenness of n. For even n it preserves orientation, while for odd n it reverses it (see also improper rotation). === Affine space === Rn understood as an affine space is the same space, where Rn as a vector space acts by translations. Conversely, a vector has to be understood as a "difference between two points", usually illustrated by a directed line segment connecting two points. The distinction says that there is no canonical choice of where the origin should go in an affine n-space, because it can be translated anywhere. 
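The sign test for orientation described above can be made concrete: a small determinant routine (cofactor expansion, adequate for small n) classifies a full-rank map as orientation-preserving or orientation-reversing. This illustrative sketch is ours, not part of the article:

```python
def det(A):
    # Determinant by cofactor expansion along the first row.
    # sign(det) > 0: orientation-preserving; sign(det) < 0: reversing.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

swap = [[0, 1], [1, 0]]                           # odd coordinate permutation
reflect2 = [[-1, 0], [0, -1]]                     # point reflection in R^2 (n even)
reflect3 = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]   # point reflection in R^3 (n odd)
```

The three sample matrices reproduce the claims in the text: an odd permutation reverses orientation, and the point reflection preserves orientation for even n but reverses it for odd n.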
=== Convexity === In a real vector space, such as Rn, one can define a convex cone, which contains all non-negative linear combinations of its vectors. The corresponding concept in an affine space is a convex set, which allows only convex combinations (non-negative linear combinations that sum to 1). In the language of universal algebra, a vector space is an algebra over the universal vector space R∞ of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in this space (of finite sequences summing to 1), a cone is an algebra over the universal orthant (of finite sequences of nonnegative numbers), and a convex set is an algebra over the universal simplex (of finite sequences of nonnegative numbers summing to 1). This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates". Another concept from convex analysis is a convex function from Rn to the real numbers, which is defined through an inequality between its value on a convex combination of points and the sum of its values at those points with the same coefficients. === Euclidean space === The dot product x · y = Σi=1..n xi yi = x1y1 + x2y2 + ⋯ + xnyn defines the norm |x| = √(x · x) on the vector space Rn. If every vector has its Euclidean norm, then for any pair of points the distance d(x, y) = ‖x − y‖ = √(Σi=1..n (xi − yi)²) is defined, providing a metric space structure on Rn in addition to its affine structure. As for the vector space structure, the dot product and Euclidean distance usually are assumed to exist in Rn without special explanations. However, the real n-space and a Euclidean n-space are distinct objects, strictly speaking. 
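The dot product and the distance it induces are one-liners in code; the sketch below (illustrative, standard library only) follows the two formulas above directly:

```python
import math

def dot(x, y):
    # x . y = sum of x_i * y_i
    return sum(xi * yi for xi, yi in zip(x, y))

def euclid_dist(x, y):
    # d(x, y) = sqrt(sum (x_i - y_i)^2), the Euclidean norm of x - y
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
```

For instance, the distance between (0, 0) and (3, 4) in R2 is 5, the familiar 3-4-5 right triangle.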
Any Euclidean n-space has a coordinate system where the dot product and Euclidean distance have the form shown above, called Cartesian. But there are many Cartesian coordinate systems on a Euclidean space. Conversely, the above formula for the Euclidean metric defines the standard Euclidean structure on Rn, but it is not the only possible one. Actually, any positive-definite quadratic form q defines its own "distance" √q(x − y), but it is not very different from the Euclidean one in the sense that there exist constants C1 > 0 and C2 > 0 such that, for all x, y ∈ Rn, C1 d(x, y) ≤ √q(x − y) ≤ C2 d(x, y). Such a change of the metric preserves some of its properties, for example the property of being a complete metric space. This also implies that any full-rank linear transformation of Rn, or its affine transformation, does not magnify distances by more than the fixed factor C2, and does not shrink distances by more than the fixed factor 1/C1. The aforementioned equivalence of metric functions remains valid if √q(x − y) is replaced with M(x − y), where M is any convex positive homogeneous function of degree 1, i.e. a vector norm (see Minkowski distance for useful examples). Because any "natural" metric on Rn is not especially different from the Euclidean metric, Rn is not always distinguished from a Euclidean n-space even in professional mathematical works. === In algebraic and differential geometry === Although the definition of a manifold does not require that its model space should be Rn, this choice is the most common, and almost exclusive, one in differential geometry. On the other hand, Whitney embedding theorems state that any real differentiable m-dimensional manifold can be embedded into R2m. 
=== Other appearances === Other structures considered on Rn include that of a pseudo-Euclidean space, symplectic structure (even n), and contact structure (odd n). All these structures, although they can be defined in a coordinate-free manner, admit standard (and reasonably simple) forms in coordinates. Rn is also a real vector subspace of Cn which is invariant to complex conjugation; see also complexification. === Polytopes in Rn === There are three families of polytopes which have simple representations in Rn spaces, for any n, and can be used to visualize any affine coordinate system in a real n-space. Vertices of a hypercube have coordinates (x1, x2, ..., xn) where each xk takes on one of only two values, typically 0 or 1. However, any two numbers can be chosen instead of 0 and 1, for example −1 and 1. An n-hypercube can be thought of as the Cartesian product of n identical intervals (such as the unit interval [0,1]) on the real line. As an n-dimensional subset it can be described with a system of 2n inequalities: 0 ≤ xk ≤ 1 for k = 1, …, n (for [0,1]), or |xk| ≤ 1 for k = 1, …, n (for [−1,1]). Each vertex of the cross-polytope has, for some k, the xk coordinate equal to ±1 and all other coordinates equal to 0 (such that it is the kth standard basis vector up to sign). This is the dual polytope of the hypercube. As an n-dimensional subset it can be described with a single inequality which uses the absolute value operation: Σk=1..n |xk| ≤ 1, but this can be expressed with a system of 2n linear inequalities as well. The third polytope with simply enumerable coordinates is the standard simplex, whose vertices are the n standard basis vectors and the origin (0, 0, ..., 0). 
As an n-dimensional subset it is described with a system of n + 1 linear inequalities: 0 ≤ xk for k = 1, …, n, together with Σk=1..n xk ≤ 1. Replacement of all "≤" with "<" gives the interiors of these polytopes. == Topological properties == The topological structure of Rn (called standard topology, Euclidean topology, or usual topology) can be obtained not only from the Cartesian product. It is also identical to the natural topology induced by the Euclidean metric discussed above: a set is open in the Euclidean topology if and only if it contains an open ball around each of its points. Also, Rn is a linear topological space (see continuity of linear maps above), and there is only one possible (non-trivial) topology compatible with its linear structure. As there are many open linear maps from Rn to itself which are not isometries, there can be many Euclidean structures on Rn which correspond to the same topology. Actually, it does not depend much even on the linear structure: there are many non-linear diffeomorphisms (and other homeomorphisms) of Rn onto itself, or its parts such as a Euclidean open ball or the interior of a hypercube. Rn has topological dimension n. An important result on the topology of Rn, that is far from superficial, is Brouwer's invariance of domain: any subset of Rn (with its subspace topology) that is homeomorphic to an open subset of Rn is itself open. An immediate consequence of this is that Rm is not homeomorphic to Rn if m ≠ n – an intuitively "obvious" result which is nonetheless difficult to prove. Despite the difference in topological dimension, and contrary to a naïve perception, it is possible to map a lesser-dimensional real space continuously and surjectively onto Rn. A continuous (although not smooth) space-filling curve (an image of R1) is possible. 
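The three systems of inequalities for the polytopes above translate directly into membership tests. A minimal sketch (illustrative names) checks whether a point of Rn lies in the unit hypercube, the cross-polytope, or the standard simplex:

```python
def in_hypercube(x):
    # unit hypercube: 0 <= x_k <= 1 for every coordinate
    return all(0 <= xk <= 1 for xk in x)

def in_cross_polytope(x):
    # cross-polytope: sum_k |x_k| <= 1
    return sum(abs(xk) for xk in x) <= 1

def in_simplex(x):
    # standard simplex: x_k >= 0 for all k, and sum_k x_k <= 1
    return all(xk >= 0 for xk in x) and sum(x) <= 1
```

Each test works for any n, since the defining inequalities are uniform in the dimension.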
== Examples == === n ≤ 1 === Cases of 0 ≤ n ≤ 1 do not offer anything new: R1 is the real line, whereas R0 (the space containing the empty column vector) is a singleton, understood as a zero vector space. However, it is useful to include these as trivial cases of theories that describe different n. === n = 2 === The case of (x, y) where x and y are real numbers has been developed as the Cartesian plane P. Further structure has been attached with Euclidean vectors representing directed line segments in P. The plane has also been developed as the field extension C by appending roots of X2 + 1 = 0 to the real field R. The root i acts on P as a quarter turn with counterclockwise orientation. This root generates the group {i, −1, −i, +1} ≅ Z/4Z. When (x, y) is written x + yi it is a complex number. Another group action by Z/2Z, where the actor has been expressed as j, uses the line y = x for the involution of flipping the plane (x, y) ↦ (y, x), an exchange of coordinates. In this case points of P are written x + yj and called split-complex numbers. These numbers, with coordinate-wise addition and multiplication according to jj = +1, form a ring that is not a field. Another ring structure on P uses a nilpotent e to write x + ye for (x, y). The action of e on P reduces the plane to a line: it can be decomposed into the projection onto the x-coordinate, followed by a quarter turn of the result to the y-axis: e(x + ye) = xe since e2 = 0. A number x + ye is a dual number. The dual numbers form a ring, but, since e has no multiplicative inverse, it does not generate a group, so the action is not a group action. Excluding (0, 0) from P makes [x : y] projective coordinates which describe the real projective line, a one-dimensional space. 
Since the origin is excluded, at least one of the ratios x/y and y/x exists. Then [x : y] = [x/y : 1] or [x : y] = [1 : y/x]. The projective line P1(R) is a topological manifold covered by two coordinate charts, [z : 1] → z or [1 : z] → z, which form an atlas. For points covered by both charts the transition function is multiplicative inversion on an open neighborhood of the point, which provides a homeomorphism as required in a manifold. One application of the real projective line is found in Cayley–Klein metric geometry. === n = 3 === === n = 4 === R4 can be imagined using the fact that 16 points (x1, x2, x3, x4), where each xk is either 0 or 1, are vertices of a tesseract, the 4-hypercube (see above). The first major use of R4 is a spacetime model: three spatial coordinates plus one temporal. This is usually associated with the theory of relativity, although four dimensions have been used for such models since the time of Galileo. The choice of theory leads to a different structure, though: in Galilean relativity the t coordinate is privileged, but in Einsteinian relativity it is not. Special relativity is set in Minkowski space. General relativity uses curved spaces, which may be thought of as R4 with a curved metric for most practical purposes. None of these structures provides a (positive-definite) metric on R4. Euclidean R4 also attracts the attention of mathematicians, for example due to its relation to the quaternions, themselves a 4-dimensional real algebra. See rotations in 4-dimensional Euclidean space for some information. In differential geometry, n = 4 is the only case where Rn admits a non-standard differential structure: see exotic R4. == Norms on Rn == One could define many norms on the vector space Rn. 
Some common examples are: the p-norm, defined by ‖x‖p := (Σi=1..n |xi|p)1/p for all x ∈ Rn, where p is a positive integer. The case p = 2 is very important, because it is exactly the Euclidean norm. The ∞-norm or maximum norm, defined by ‖x‖∞ := max{|x1|, …, |xn|} for all x ∈ Rn. This is the limit of the p-norms as p → ∞: ‖x‖∞ = limp→∞ (Σi=1..n |xi|p)1/p. A surprising and helpful result is that all norms defined on Rn are equivalent. This means that for two arbitrary norms ‖·‖ and ‖·‖′ on Rn one can always find positive real numbers α, β > 0 such that α·‖x‖ ≤ ‖x‖′ ≤ β·‖x‖ for all x ∈ Rn. This defines an equivalence relation on the set of all norms on Rn. With this result one can check that a sequence of vectors in Rn converges with respect to ‖·‖ if and only if it converges with respect to ‖·‖′. Here is a sketch of what a proof of this result may look like: Because of the equivalence relation it is enough to show that every norm on Rn is equivalent to the Euclidean norm ‖·‖2. Let ‖·‖ be an arbitrary norm on Rn. 
The proof is divided into two steps: First, we show that there exists a β > 0 such that ‖x‖ ≤ β·‖x‖2 for all x ∈ Rn. In this step one uses the fact that every x = (x1, …, xn) ∈ Rn can be represented as a linear combination of the standard basis: x = Σi=1..n xi·ei. Then, with the triangle inequality and the Cauchy–Schwarz inequality, ‖x‖ = ‖Σi=1..n xi·ei‖ ≤ Σi=1..n ‖ei‖·|xi| ≤ √(Σi=1..n ‖ei‖²) · √(Σi=1..n |xi|²) = β·‖x‖2, where β := √(Σi=1..n ‖ei‖²). Second, we have to find an α > 0 such that α·‖x‖2 ≤ ‖x‖ for all x ∈ Rn. Assume there is no such α. Then for every k ∈ N there exists an xk ∈ Rn such that ‖xk‖2 > k·‖xk‖. Define a second sequence (x̃k)k∈N by x̃k := xk/‖xk‖2. This sequence is bounded because ‖x̃k‖2 = 1. 
So because of the Bolzano–Weierstrass theorem there exists a convergent subsequence ( x ~ k j ) j ∈ N {\displaystyle ({\tilde {\mathbf {x} }}_{k_{j}})_{j\in \mathbf {N} }} with limit a ∈ {\displaystyle \mathbf {a} \in } Rn. Now we show that ‖ a ‖ 2 = 1 {\displaystyle \|\mathbf {a} \|_{2}=1} but a = 0 {\displaystyle \mathbf {a} =\mathbf {0} } , which is a contradiction. We have ‖ a ‖ ≤ ‖ a − x ~ k j ‖ + ‖ x ~ k j ‖ ≤ β ⋅ ‖ a − x ~ k j ‖ 2 + ‖ x k j ‖ ‖ x k j ‖ 2 ⟶ j → ∞ 0 , {\displaystyle \|\mathbf {a} \|\leq \left\|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\right\|+\left\|{\tilde {\mathbf {x} }}_{k_{j}}\right\|\leq \beta \cdot \left\|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}+{\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}\ {\overset {j\to \infty }{\longrightarrow }}\ 0,} because ‖ a − x ~ k j ‖ 2 → 0 {\displaystyle \|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\|_{2}\to 0} and 0 ≤ ‖ x k j ‖ ‖ x k j ‖ 2 < 1 k j {\displaystyle 0\leq {\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}<{\frac {1}{k_{j}}}} , so ‖ x k j ‖ ‖ x k j ‖ 2 → 0 {\displaystyle {\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}\to 0} . This implies ‖ a ‖ = 0 {\displaystyle \|\mathbf {a} \|=0} , so a = 0 {\displaystyle \mathbf {a} =\mathbf {0} } . On the other hand ‖ a ‖ 2 = 1 {\displaystyle \|\mathbf {a} \|_{2}=1} , because ‖ a ‖ 2 = ‖ lim j → ∞ x ~ k j ‖ 2 = lim j → ∞ ‖ x ~ k j ‖ 2 = 1 {\displaystyle \|\mathbf {a} \|_{2}=\left\|\lim _{j\to \infty }{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}=\lim _{j\to \infty }\left\|{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}=1} . Both cannot be true, so the assumption was false, and such an α > 0 {\displaystyle \alpha >0} exists. == See also == Exponential object, for theoretical explanation of the superscript notation Geometric space Real projective space == Sources == Kelley, John L. (1975). General Topology. Springer-Verlag. ISBN 0-387-90125-6. Munkres, James (1999). Topology. Prentice-Hall.
ISBN 0-13-181629-2.
Wikipedia:Real-root isolation#0
In mathematics, and more specifically in numerical analysis and computer algebra, real-root isolation of a polynomial consists of producing disjoint intervals of the real line, each containing exactly one real root of the polynomial and, together, containing all of its real roots. Real-root isolation is useful because the usual root-finding algorithms for computing the real roots of a polynomial may produce some real roots, but cannot generally certify having found all of them. In particular, if such an algorithm does not find any root, one does not know whether this is because there is no real root. Some algorithms compute all complex roots, but, as there are generally far fewer real roots than complex roots, most of their computation time is generally spent computing non-real roots (on average, a polynomial of degree n has n complex roots, but only about log n real roots; see Geometrical properties of polynomial roots § Real roots). Moreover, it may be difficult to distinguish the real roots from the non-real roots with small imaginary part (see the example of Wilkinson's polynomial in the next section). The first complete real-root isolation algorithm results from Sturm's theorem (1829). However, when real-root-isolation algorithms began to be implemented on computers, it appeared that algorithms derived from Sturm's theorem are less efficient than those derived from Descartes' rule of signs (1637). Since the beginning of the 20th century there has been much research activity devoted to improving the algorithms derived from Descartes' rule of signs, obtaining very efficient implementations, and determining their computational complexities. The best implementations can routinely isolate real roots of polynomials of degree more than 1,000.
== Specifications and general strategy == For finding the real roots of a polynomial, the common strategy is to divide the real line (or an interval of it where roots are searched) into disjoint intervals until each contains at most one root. Such a procedure is called root isolation, and a resulting interval that contains exactly one root is an isolating interval for this root. Wilkinson's polynomial shows that a very small modification of one coefficient of a polynomial may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10 20 ; {\displaystyle 10^{20};} this implies that an error of 10 − 10 {\displaystyle 10^{-10}} on the value of the root may produce a value of the polynomial at the approximate root that is of the order of 10 10 . {\displaystyle 10^{10}.} It follows that, except maybe for very low degrees, a root-isolation procedure cannot give reliable results without using exact arithmetic. Therefore, if one wants to isolate the roots of a polynomial with floating-point coefficients, it is often better to convert them to rational numbers, and then take the primitive part of the resulting polynomial, in order to obtain a polynomial with integer coefficients. For this reason, although the methods described below work theoretically with real numbers, they are generally used in practice with polynomials with integer coefficients, and intervals ending with rational numbers. Also, the polynomials are always supposed to be square-free. There are two reasons for that.
Firstly, Yun's algorithm for computing the square-free factorization is less costly than twice the cost of the computation of the greatest common divisor of the polynomial and its derivative. As this may produce factors of lower degrees, it is generally advantageous to apply root-isolation algorithms only to polynomials without multiple roots, even when this is not required by the algorithm. The second reason for considering only square-free polynomials is that the fastest root-isolation algorithms do not work in the case of multiple roots. For root isolation, one requires a procedure for counting the real roots of a polynomial in an interval without having to compute them, or, at least, a procedure for deciding whether an interval contains zero, one or more roots. With such a decision procedure, one may work with a working list of intervals that may contain real roots. At the beginning, the list contains a single interval containing all roots of interest, generally the whole real line or its positive part. Then each interval of the list is divided into two smaller intervals. If one of the new intervals does not contain any root, it is removed from the list. If it contains one root, it is put in an output list of isolating intervals. Otherwise, it is kept in the working list for further divisions, and the process continues until all roots are eventually isolated. == Sturm's theorem == The first complete root-isolation procedure results from Sturm's theorem (1829), which expresses the number of real roots in an interval in terms of the number of sign variations of the values of a sequence of polynomials, called Sturm's sequence, at the ends of the interval. Sturm's sequence is the sequence of remainders that occur in a variant of the Euclidean algorithm applied to the polynomial and its derivative. When implemented on computers, it appeared that root isolation with Sturm's theorem is less efficient than the other methods that are described below.
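As an illustration, Sturm's sequence and the resulting root count can be sketched in Python with exact rational arithmetic (the function names are ours, not from any library; polynomials are coefficient lists, highest degree first):

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the division of polynomial a by b."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i, bc in enumerate(b):
            a[i] -= f * bc
        a = a[1:]          # the leading coefficient is now exactly zero
    return a or [Fraction(0)]

def strip_zeros(a):
    i = 0
    while i < len(a) - 1 and a[i] == 0:
        i += 1
    return a[i:]

def sturm_chain(p):
    """Sturm's sequence: p, p', then the negated Euclidean remainders."""
    p = [Fraction(c) for c in p]
    d = len(p) - 1
    dp = [c * (d - i) for i, c in enumerate(p[:-1])]   # derivative of p
    chain = [p, dp]
    while True:
        r = strip_zeros(poly_rem(chain[-2], chain[-1]))
        if all(c == 0 for c in r):
            return chain
        chain.append([-c for c in r])

def eval_poly(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + c
    return v

def variations(chain, x):
    """Sign variations of the chain's values at x (zeros skipped)."""
    signs = []
    for q in chain:
        v = eval_poly(q, x)
        if v != 0:
            signs.append(1 if v > 0 else -1)
    return sum(1 for i in range(len(signs) - 1) if signs[i] != signs[i + 1])

def count_real_roots(p, a, b):
    """Number of distinct real roots of the square-free p in (a, b]."""
    chain = sturm_chain(p)
    return variations(chain, Fraction(a)) - variations(chain, Fraction(b))

# x^3 - 3x + 1 has three real roots, one of them in (0, 1):
print(count_real_roots([1, 0, -3, 1], -2, 2))   # 3
print(count_real_roots([1, 0, -3, 1], 0, 1))    # 1
```

The use of Fraction keeps the computation exact, in line with the remark above that root-isolation procedures need exact arithmetic to be reliable.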
Consequently, Sturm's theorem is rarely used for effective computations, although it remains useful for theoretical purposes. == Descartes' rule of signs and its generalizations == Descartes' rule of signs asserts that the difference between the number of sign variations in the sequence of the coefficients of a polynomial and the number of its positive real roots is a nonnegative even integer. It follows that if this number of sign variations is zero, then the polynomial does not have any positive real roots, and, if this number is one, then the polynomial has a unique positive real root, which is a simple root. Unfortunately the converse is not true, that is, a polynomial which has either no positive real root or a single simple positive real root may have a number of sign variations greater than 1. This has been generalized by Budan's theorem (1807) into a similar result for the real roots in a half-open interval (a, b]: if f(x) is a polynomial, and v is the difference between the numbers of sign variations of the sequences of the coefficients of f(x + a) and f(x + b), then v minus the number of real roots in the interval, counted with their multiplicities, is a nonnegative even integer. This is a generalization of Descartes' rule of signs, because, for b sufficiently large, there is no sign variation in the coefficients of f(x + b), and all real roots are smaller than b. Budan's theorem may provide a real-root-isolation algorithm for a square-free polynomial (a polynomial without multiple roots): from the coefficients of the polynomial, one may compute an upper bound M of the absolute values of the roots and a lower bound m on the absolute values of the differences of two roots (see Properties of polynomial roots). Then, if one divides the interval [–M, M] into intervals of length less than m, then every real root is contained in some interval, and no interval contains two roots.
The isolating intervals are thus the intervals for which Budan's theorem asserts an odd number of roots. However, this algorithm is very inefficient, as one cannot use a coarser partition of the interval [–M, M]: if Budan's theorem gives a result larger than 1 for an interval of larger size, there is no way of ensuring that it does not contain several roots. == Vincent's and related theorems == Vincent's theorem (1834) provides a method for real-root isolation, which underlies the most efficient real-root-isolation algorithms. It concerns the positive real roots of a square-free polynomial (that is, a polynomial without multiple roots). If a 1 , a 2 , … , {\displaystyle a_{1},a_{2},\ldots ,} is a sequence of positive real numbers, let c k = a 1 + 1 a 2 + 1 a 3 + 1 ⋱ + 1 a k {\displaystyle c_{k}=a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots +{\cfrac {1}{a_{k}}}}}}}}}} be the kth convergent of the continued fraction a 1 + 1 a 2 + 1 a 3 + 1 ⋱ . {\displaystyle a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots }}}}}}.} For proving his theorem, Vincent proved a result that is useful on its own: For working with real numbers, one may always choose c = d = 1, but, as effective computations are done with rational numbers, it is generally convenient to suppose that a, b, c, d are integers. The "small enough" condition has been quantified independently by Nikola Obreshkov and Alexander Ostrowski: For polynomials with integer coefficients, the minimum distance sep(p) may be lower bounded in terms of the degree of the polynomial and the maximal absolute value of its coefficients; see Properties of polynomial roots § Root separation. This allows the analysis of the worst-case complexity of algorithms based on Vincent's theorem.
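The sign-variation counts on which Descartes' rule of signs and Budan's theorem rely are straightforward to compute. A Python sketch (illustrative names; the translated polynomial f(x + a) is obtained by repeated synthetic division, a standard Taylor-shift technique):

```python
from fractions import Fraction

def var(coeffs):
    """Number of sign variations in a coefficient sequence (zeros ignored)."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for i in range(len(signs) - 1) if signs[i] != signs[i + 1])

def taylor_shift(coeffs, a):
    """Coefficients of f(x + a), highest degree first."""
    c = [Fraction(x) for x in coeffs]
    a = Fraction(a)
    n = len(c)
    for i in range(n - 1):          # repeated synthetic division by (x - a)
        for j in range(1, n - i):
            c[j] += a * c[j - 1]
    return c

def budan_bound(f, a, b):
    """Budan's theorem: the number of real roots of f in (a, b], counted
    with multiplicity, is this value minus a nonnegative even integer."""
    return var(taylor_shift(f, a)) - var(taylor_shift(f, b))

f = [1, 0, -7, 7]              # x^3 - 7x + 7
print(var(f))                  # 2: at most two positive real roots (here exactly two)
print(budan_bound(f, 0, 1))    # 0: no root in (0, 1]
print(budan_bound(f, 1, 2))    # 2: both positive roots lie in (1, 2]
```

For this example, f(x + 1) = x^3 + 3x^2 - 4x + 1 has two sign variations and f(x + 2) = x^3 + 6x^2 + 5x + 1 has none, which is why the bound 2 is attained on (1, 2].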
However, the Obreschkoff–Ostrowski theorem shows that the number of iterations of these algorithms depends on the distances between roots in the neighborhood of the working interval; therefore, the number of iterations may vary dramatically for different roots of the same polynomial. James V. Uspensky gave a bound on the length of the continued fraction (the integer k in Vincent's theorem) needed for getting zero or one sign variation: == Continued fraction method == The use of continued fractions for real-root isolation was introduced by Vincent, although he credited Joseph-Louis Lagrange for this idea, without providing a reference. To make an algorithm out of Vincent's theorem, one must provide a criterion for choosing the a i {\displaystyle a_{i}} that occur in his theorem. Vincent himself provided some choice (see below). Some other choices are possible, and the efficiency of the algorithm may depend dramatically on these choices. Below is presented an algorithm, in which these choices result from an auxiliary function that will be discussed later. To run this algorithm, one must work with a list of intervals represented by a specific data structure. The algorithm works by choosing an interval, removing it from the list, adding zero, one or two smaller intervals to the list, and possibly outputting an isolating interval. For isolating the real roots of a polynomial p(x) of degree n, each interval is represented by a pair ( A ( x ) , M ( x ) ) , {\displaystyle (A(x),M(x)),} where A(x) is a polynomial of degree n and M ( x ) = p x + r q x + s {\displaystyle M(x)={\frac {px+r}{qx+s}}} is a Möbius transformation with integer coefficients. One has A ( x ) = p ( M ( x ) ) , {\displaystyle A(x)=p(M(x)),} and the interval represented by this data structure is the interval that has M ( ∞ ) = p q {\displaystyle M(\infty )={\frac {p}{q}}} and M ( 0 ) = r s {\displaystyle M(0)={\frac {r}{s}}} as end points.
The Möbius transformation maps the roots of p in this interval to the roots of A in (0, +∞). The algorithm works with a list of intervals that, at the beginning, contains the two intervals ( A ( x ) = p ( x ) , M ( x ) = x ) {\displaystyle (A(x)=p(x),M(x)=x)} and ( A ( x ) = p ( − x ) , M ( x ) = − x ) , {\displaystyle (A(x)=p(-x),M(x)=-x),} corresponding to the partition of the reals into the positive and the negative ones (one may suppose that zero is not a root, as, if it were, it would suffice to apply the algorithm to p(x)/x). Then, for each interval (A(x), M(x)) in the list, the algorithm removes it from the list; if the number of sign variations of the coefficients of A is zero, there is no root in the interval, and one passes to the next interval. If the number of sign variations is one, the interval defined by M ( 0 ) {\displaystyle M(0)} and M ( ∞ ) {\displaystyle M(\infty )} is an isolating interval. Otherwise, one chooses a positive real number b for dividing the interval (0, +∞) into (0, b) and (b, +∞), and, for each subinterval, one composes M with a Möbius transformation that maps the interval onto (0, +∞), to get two new intervals to be added to the list. In pseudocode, this gives the following, where var(A) denotes the number of sign variations of the coefficients of the polynomial A.
function continued fraction is
    input: P(x), a square-free polynomial,
    output: a list of pairs of rational numbers defining isolating intervals
    /* Initialization */
    L := [(P(x), x), (P(–x), –x)] /* two starting intervals */
    Isol := [ ]
    /* Computation */
    while L ≠ [ ] do
        Choose (A(x), M(x)) in L, and remove it from L
        v := var(A)
        if v = 0 then exit /* no root in the interval */
        if v = 1 then /* isolating interval found */
            add (M(0), M(∞)) to Isol
            exit
        b := some positive integer
        B(x) := A(x + b)
        w := v – var(B)
        if B(0) = 0 then /* rational root found */
            add (M(b), M(b)) to Isol
            B(x) := B(x)/x
        add (B(x), M(b + x)) to L /* roots in (M(b), M(+∞)) */
        if w = 0 then exit /* Budan's theorem */
        if w = 1 then /* Budan's theorem again */
            add (M(0), M(b)) to Isol
        if w > 1 then
            add (A(b/(1 + x)), M(b/(1 + x))) to L /* roots in (M(0), M(b)) */

The different variants of the algorithm depend essentially on the choice of b. In Vincent's papers, and in Uspensky's book, one always has b = 1, with the difference that Uspensky did not use Budan's theorem for avoiding further bisections of the interval associated to (0, b). The drawback of always choosing b = 1 is that one has to do many successive changes of variable of the form x → 1 + x. These may be replaced by a single change of variable x → n + x, but, nevertheless, one has to do the intermediate changes of variables in order to apply Budan's theorem. A way of improving the efficiency of the algorithm is to take for b a lower bound of the positive real roots, computed from the coefficients of the polynomial (see Properties of polynomial roots for such bounds). == Bisection method == The bisection method consists roughly of starting from an interval containing all real roots of a polynomial, and dividing it recursively into two parts until eventually getting intervals that contain either zero or one root.
The starting interval may be of the form (-B, B), where B is an upper bound on the absolute values of the roots, such as those given in Properties of polynomial roots § Bounds on (complex) polynomial roots. For technical reasons (simpler changes of variable, simpler complexity analysis, possibility of taking advantage of the binary representation used by computers), the algorithms are generally presented as starting with the interval [0, 1]. There is no loss of generality, as the changes of variables x = By and x = –By move the positive and the negative roots, respectively, into the interval [0, 1]. (The single change of variable x = 2By – B may also be used.) The method requires an algorithm for testing whether an interval has zero, one, or possibly several roots, and, for guaranteeing termination, this testing algorithm must exclude the possibility of getting infinitely many times the output "possibility of several roots". Sturm's theorem and Vincent's auxiliary theorem provide such convenient tests. As the use of Descartes' rule of signs and Vincent's auxiliary theorem is much more computationally efficient than the use of Sturm's theorem, only the former is described in this section. The bisection method based on Descartes' rule of signs and Vincent's auxiliary theorem was introduced in 1976 by Akritas and Collins under the name of modified Uspensky algorithm, and has been referred to as the Uspensky algorithm, the Vincent–Akritas–Collins algorithm, or the Descartes method, although Descartes, Vincent and Uspensky never described it. The method works as follows. For searching the roots in some interval, one first changes the variable to map the interval onto [0, 1], giving a new polynomial q(x). For searching the roots of q in [0, 1], one maps the interval [0, 1] onto [0, +∞) by the change of variable x → 1 x + 1 , {\displaystyle x\to {\frac {1}{x+1}},} giving a polynomial r(x).
Descartes' rule of signs applied to the polynomial r gives indications on the number of real roots of q in the interval [0, 1], and thus on the number of roots of the initial polynomial in the interval that has been mapped onto [0, 1]. If there is no sign variation in the sequence of the coefficients of r, then there is no real root in the considered interval. If there is one sign variation, then one has an isolating interval. Otherwise, one splits the interval [0, 1] into [0, 1/2] and [1/2, 1], and maps them onto [0, 1] by the changes of variable x = y/2 and x = (y + 1)/2. Vincent's auxiliary theorem ensures the termination of this procedure. Except for the initialization, all these changes of variable consist of the composition of at most two very simple changes of variable: the scaling by two x → x/2, the translation x → x + 1, and the inversion x → 1/x, the latter consisting simply of reversing the order of the coefficients of the polynomial. As most of the computing time is devoted to changes of variable, the method of mapping every interval onto [0, 1] is fundamental for ensuring good efficiency. === Pseudocode === The following notation is used in the pseudocode that follows.
p(x) is the polynomial for which the real roots in the interval [0, 1] have to be isolated.
var(q(x)) denotes the number of sign variations in the sequence of the coefficients of the polynomial q.
The elements of the working list have the form (c, k, q(x)), where c and k are two nonnegative integers such that c < 2^k, which represent the interval [ c 2 k , c + 1 2 k ] , {\displaystyle \left[{\frac {c}{2^{k}}},{\frac {c+1}{2^{k}}}\right],} and q ( x ) = 2 k n p ( x + c 2 k ) , {\displaystyle q(x)=2^{kn}p\left({\frac {x+c}{2^{k}}}\right),} where n is the degree of p (the polynomial q may be computed directly from p, c and k, but it is less costly to compute it incrementally, as will be done in the algorithm; if p has integer coefficients, the same is true for q).

function bisection is
    input: p(x), a square-free polynomial, such that p(0) p(1) ≠ 0, for which the roots in the interval [0, 1] are searched
    output: a list of triples (c, k, h), representing isolating intervals of the form [c/2^k, (c + h)/2^k]
    /* Initialization */
    L := [(0, 0, p(x))] /* a single element in the working list L */
    Isol := [ ]
    n := degree(p)
    /* Computation */
    while L ≠ [ ] do
        Choose (c, k, q(x)) in L, and remove it from L
        if q(0) = 0 then /* a rational root found */
            q(x) := q(x)/x
            n := n – 1
            add (c, k, 0) to Isol
        v := var((x + 1)^n q(1/(x + 1)))
        if v = 1 then /* an isolating interval found */
            add (c, k, 1) to Isol
        if v > 1 then /* bisecting */
            add (2c, k + 1, 2^n q(x/2)) to L
            add (2c + 1, k + 1, 2^n q((x + 1)/2)) to L
    end

This procedure is essentially the one described by Collins and Akritas. The running time depends mainly on the number of intervals that have to be considered, and on the changes of variables.
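A direct, unoptimized Python transcription of this pseudocode might look as follows (integer coefficients, highest degree first; the substitution x → 1/(x + 1) is realized by reversing the coefficient list and then shifting by 1; the function names are ours):

```python
def sign_var(c):
    """Sign variations in a coefficient sequence, zeros ignored."""
    signs = [1 if x > 0 else -1 for x in c if x != 0]
    return sum(1 for i in range(len(signs) - 1) if signs[i] != signs[i + 1])

def shift1(c):
    """Coefficients of q(x + 1), by repeated integer synthetic division."""
    c = list(c)
    n = len(c)
    for i in range(n - 1):
        for j in range(1, n - i):
            c[j] += c[j - 1]
    return c

def isolate_01(p):
    """Triples (c, k, h) as in the pseudocode, isolating the roots of the
    square-free integer polynomial p in (0, 1); requires p(0) p(1) != 0."""
    work = [(0, 0, list(p))]
    isol = []
    while work:
        c, k, q = work.pop()
        if q[-1] == 0:                      # q(0) = 0: exact dyadic root c/2^k
            q = q[:-1]
            isol.append((c, k, 0))
        v = sign_var(shift1(q[::-1]))       # var of (x+1)^deg(q) * q(1/(x+1))
        if v == 1:                          # one root in (c/2^k, (c+1)/2^k)
            isol.append((c, k, 1))
        elif v > 1:                         # bisect
            half = [a * 2 ** i for i, a in enumerate(q)]    # 2^n q(x/2)
            work.append((2 * c, k + 1, half))
            work.append((2 * c + 1, k + 1, shift1(half)))   # 2^n q((x+1)/2)
    return isol

# (3x - 1)(3x - 2) = 9x^2 - 9x + 2 has the roots 1/3 and 2/3:
print(sorted(isolate_01([9, -9, 2])))   # [(0, 1, 1), (1, 1, 1)], i.e. (0, 1/2) and (1/2, 1)
```

All intermediate polynomials keep integer coefficients, which is the point of the scaled transformations 2^n q(x/2) and 2^n q((x + 1)/2) in the pseudocode.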
There are ways of improving the efficiency, which have been an active subject of research since the publication of the algorithm, and mainly since the beginning of the 21st century. === Recent improvements === Various ways of improving the Akritas–Collins bisection algorithm have been proposed. They include a method for avoiding the storage of a long list of polynomials without losing the simplicity of the changes of variables, the use of approximate arithmetic (floating point and interval arithmetic) when it allows getting the right value for the number of sign variations, the use of Newton's method when possible, the use of fast polynomial arithmetic, shortcuts for long chains of bisections in the case of clusters of close roots, and bisections in unequal parts for limiting instability problems in polynomial evaluation. All these improvements lead to an algorithm for isolating all real roots of a polynomial with integer coefficients, which has the complexity (using soft O notation, Õ, for omitting logarithmic factors) O ~ ( n 2 ( k + t ) ) , {\displaystyle {\tilde {O}}(n^{2}(k+t)),} where n is the degree of the polynomial, k is the number of nonzero terms, and t is the maximum number of digits of the coefficients. The implementation of this algorithm appears to be more efficient than any other implemented method for computing the real roots of a polynomial, even in the case of polynomials having very close roots (previously the most difficult case for the bisection method). == References == == Sources ==
Wikipedia:Real-valued function#0
In mathematics, a real-valued function is a function whose values are real numbers. In other words, it is a function that assigns a real number to each member of its domain. Real-valued functions of a real variable (commonly called real functions) and real-valued functions of several real variables are the main object of study of calculus and, more generally, real analysis. In particular, many function spaces consist of real-valued functions. == Algebraic structure == Let F ( X , R ) {\displaystyle {\mathcal {F}}(X,{\mathbb {R} })} be the set of all functions from a set X to real numbers R {\displaystyle \mathbb {R} } . Because R {\displaystyle \mathbb {R} } is a field, F ( X , R ) {\displaystyle {\mathcal {F}}(X,{\mathbb {R} })} may be turned into a vector space and a commutative algebra over the reals with the following operations: f + g : x ↦ f ( x ) + g ( x ) {\displaystyle f+g:x\mapsto f(x)+g(x)} – vector addition 0 : x ↦ 0 {\displaystyle \mathbf {0} :x\mapsto 0} – additive identity c f : x ↦ c f ( x ) , c ∈ R {\displaystyle cf:x\mapsto cf(x),\quad c\in \mathbb {R} } – scalar multiplication f g : x ↦ f ( x ) g ( x ) {\displaystyle fg:x\mapsto f(x)g(x)} – pointwise multiplication These operations extend to partial functions from X to R , {\displaystyle \mathbb {R} ,} with the restriction that the partial functions f + g and f g are defined only if the domains of f and g have a nonempty intersection; in this case, their domain is the intersection of the domains of f and g. Also, since R {\displaystyle \mathbb {R} } is an ordered set, there is a partial order f ≤ g ⟺ ∀ x : f ( x ) ≤ g ( x ) , {\displaystyle \ f\leq g\quad \iff \quad \forall x:f(x)\leq g(x),} on F ( X , R ) , {\displaystyle {\mathcal {F}}(X,{\mathbb {R} }),} which makes F ( X , R ) {\displaystyle {\mathcal {F}}(X,{\mathbb {R} })} a partially ordered ring. == Measurable == The σ-algebra of Borel sets is an important structure on real numbers. 
If X has its σ-algebra and a function f is such that the preimage f −1(B) of any Borel set B belongs to that σ-algebra, then f is said to be measurable. Measurable functions also form a vector space and an algebra as explained above in § Algebraic structure. Moreover, a set (family) of real-valued functions on X can actually define a σ-algebra on X, namely the one generated by all preimages of all Borel sets (or of intervals only; the choice does not matter). This is how σ-algebras arise in (Kolmogorov's) probability theory, where real-valued functions on the sample space Ω are real-valued random variables. == Continuous == Real numbers form a topological space and a complete metric space. Continuous real-valued functions (which implies that X is a topological space) are important in the theories of topological spaces and of metric spaces. The extreme value theorem states that for any real continuous function on a compact space its global maximum and minimum exist. The concept of metric space itself is defined with a real-valued function of two variables, the metric, which is continuous. The space of continuous functions on a compact Hausdorff space has a particular importance. Convergent sequences can also be considered as real-valued continuous functions on a special topological space. Continuous functions also form a vector space and an algebra as explained above in § Algebraic structure, and are a subclass of measurable functions because any topological space has the σ-algebra generated by open (or closed) sets. == Smooth == Real numbers are used as the codomain to define smooth functions. A domain of a real smooth function can be the real coordinate space (which yields a real multivariable function), a topological vector space, an open subset of them, or a smooth manifold. Spaces of smooth functions also are vector spaces and algebras as explained above in § Algebraic structure and are subspaces of the space of continuous functions.
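The pointwise operations described above in § Algebraic structure, and the partial order, can be sketched in Python, where real-valued functions are first-class values (an illustrative sketch, not library code):

```python
def add(f, g):
    """Pointwise sum f + g."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Scalar multiple c*f."""
    return lambda x: c * f(x)

def mul(f, g):
    """Pointwise product f*g."""
    return lambda x: f(x) * g(x)

zero = lambda x: 0.0   # the additive identity

def leq_on(f, g, sample):
    """Check the partial order f <= g on a finite sample of the domain."""
    return all(f(x) <= g(x) for x in sample)

f = lambda x: x * x
g = lambda x: x + 2.0
h = add(mul(f, g), scale(3.0, f))      # the function x^2 (x + 2) + 3 x^2
print(h(1.0))                          # 3.0 + 3.0 = 6.0
print(leq_on(f, g, [0.0, 1.0, 2.0]))   # x^2 <= x + 2 at these points: True
```

Of course, the partial order can only be verified on finitely many sample points here; the mathematical definition quantifies over the whole domain.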
== Appearances in measure theory == A measure on a set is a non-negative real-valued functional on a σ-algebra of subsets. Lp spaces on sets with a measure are defined from the aforementioned real-valued measurable functions, although they are actually quotient spaces. More precisely, whereas a function satisfying an appropriate summability condition defines an element of an Lp space, in the opposite direction, for any f ∈ Lp(X) and x ∈ X which is not an atom, the value f(x) is undefined. Nevertheless, real-valued Lp spaces still have some of the structure described above in § Algebraic structure. Each Lp space is a vector space and has a partial order, and there exists a pointwise multiplication of "functions" which changes p, namely ⋅ : L 1 / α × L 1 / β → L 1 / ( α + β ) , 0 ≤ α , β ≤ 1 , α + β ≤ 1. {\displaystyle \cdot :L^{1/\alpha }\times L^{1/\beta }\to L^{1/(\alpha +\beta )},\quad 0\leq \alpha ,\beta \leq 1,\quad \alpha +\beta \leq 1.} For example, the pointwise product of two L2 functions belongs to L1. == Other appearances == Other contexts where real-valued functions and their special properties are used include monotonic functions (on ordered sets), convex functions (on vector and affine spaces), harmonic and subharmonic functions (on Riemannian manifolds), analytic functions (usually of one or more real variables), algebraic functions (on real algebraic varieties), and polynomials (of one or more real variables). == See also == Real analysis Partial differential equations, a major user of real-valued functions Norm (mathematics) Scalar (mathematics) == Footnotes == == References == Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 978-0-201-00288-1. Folland, Gerald (1999). Real Analysis: Modern Techniques and Their Applications (2nd ed.). John Wiley & Sons. ISBN 0-471-31716-0. Rudin, Walter (1976). Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054235-8.
== External links == Weisstein, Eric W. "Real Function". MathWorld.
Wikipedia:Reality structure#0
In mathematics, a real structure on a complex vector space is a way to decompose the complex vector space as the direct sum of two real vector spaces. The prototype of such a structure is the field of complex numbers itself, considered as a complex vector space over itself and with the conjugation map σ : C → C {\displaystyle \sigma :{\mathbb {C} }\to {\mathbb {C} }\,} , with σ ( z ) = z ¯ {\displaystyle \sigma (z)={\bar {z}}} , giving the "canonical" real structure on C {\displaystyle {\mathbb {C} }\,} , that is C = R ⊕ i R {\displaystyle {\mathbb {C} }={\mathbb {R} }\oplus i{\mathbb {R} }\,} . The conjugation map is antilinear: σ ( λ z ) = λ ¯ σ ( z ) {\displaystyle \sigma (\lambda z)={\bar {\lambda }}\sigma (z)\,} and σ ( z 1 + z 2 ) = σ ( z 1 ) + σ ( z 2 ) {\displaystyle \sigma (z_{1}+z_{2})=\sigma (z_{1})+\sigma (z_{2})\,} . == Vector space == A real structure on a complex vector space V is an antilinear involution σ : V → V {\displaystyle \sigma :V\to V} . A real structure defines a real subspace V R ⊂ V {\displaystyle V_{\mathbb {R} }\subset V} , its fixed locus, and the natural map V R ⊗ R C → V {\displaystyle V_{\mathbb {R} }\otimes _{\mathbb {R} }{\mathbb {C} }\to V} is an isomorphism. Conversely, any vector space that is the complexification of a real vector space has a natural real structure. One first notes that every complex space V has a realification obtained by taking the same vectors as in the original set and restricting the scalars to be real. If t ∈ V {\displaystyle t\in V\,} and t ≠ 0 {\displaystyle t\neq 0} then the vectors t {\displaystyle t\,} and i t {\displaystyle it\,} are linearly independent in the realification of V. Hence: dim R ⁡ V = 2 dim C ⁡ V {\displaystyle \dim _{\mathbb {R} }V=2\dim _{\mathbb {C} }V} Naturally, one would wish to represent V as the direct sum of two real vector spaces, the "real and imaginary parts of V". There is no canonical way of doing this: such a splitting is an additional real structure in V.
It may be introduced as follows. Let σ : V → V {\displaystyle \sigma :V\to V\,} be an antilinear map such that σ ∘ σ = i d V {\displaystyle \sigma \circ \sigma =id_{V}\,} , that is, an antilinear involution of the complex space V. Any vector v ∈ V {\displaystyle v\in V\,} can be written v = v + + v − {\displaystyle {v=v^{+}+v^{-}}\,} , where v + = 1 2 ( v + σ v ) {\displaystyle v^{+}={1 \over {2}}(v+\sigma v)} and v − = 1 2 ( v − σ v ) {\displaystyle v^{-}={1 \over {2}}(v-\sigma v)\,} . Therefore, one gets a direct sum of vector spaces V = V + ⊕ V − {\displaystyle V=V^{+}\oplus V^{-}\,} where: V + = { v ∈ V | σ v = v } {\displaystyle V^{+}=\{v\in V|\sigma v=v\}} and V − = { v ∈ V | σ v = − v } {\displaystyle V^{-}=\{v\in V|\sigma v=-v\}\,} . Both sets V + {\displaystyle V^{+}\,} and V − {\displaystyle V^{-}\,} are real vector spaces. The linear map K : V + → V − {\displaystyle K:V^{+}\to V^{-}\,} , where K ( t ) = i t {\displaystyle K(t)=it\,} , is an isomorphism of real vector spaces, whence: dim R ⁡ V + = dim R ⁡ V − = dim C ⁡ V {\displaystyle \dim _{\mathbb {R} }V^{+}=\dim _{\mathbb {R} }V^{-}=\dim _{\mathbb {C} }V\,} . The first factor V + {\displaystyle V^{+}\,} is also denoted by V R {\displaystyle V_{\mathbb {R} }\,} and is left invariant by σ {\displaystyle \sigma \,} , that is σ ( V R ) ⊂ V R {\displaystyle \sigma (V_{\mathbb {R} })\subset V_{\mathbb {R} }\,} . The second factor V − {\displaystyle V^{-}\,} is usually denoted by i V R {\displaystyle iV_{\mathbb {R} }\,} . The direct sum V = V + ⊕ V − {\displaystyle V=V^{+}\oplus V^{-}\,} now reads as: V = V R ⊕ i V R {\displaystyle V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }\,} , i.e. as the direct sum of the "real" V R {\displaystyle V_{\mathbb {R} }\,} and "imaginary" i V R {\displaystyle iV_{\mathbb {R} }\,} parts of V. This construction strongly depends on the choice of an antilinear involution of the complex vector space V.
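Concretely, the splitting v = v+ + v− determined by an antilinear involution can be checked numerically. A small Python sketch, using the non-standard (purely illustrative) involution σ(z1, z2) = (conj(z2), conj(z1)) on C2:

```python
def sigma(v):
    """An antilinear involution on C^2: sigma(z1, z2) = (conj(z2), conj(z1))."""
    z1, z2 = v
    return (z2.conjugate(), z1.conjugate())

def plus(v):
    """v+ = (v + sigma(v)) / 2, fixed by sigma."""
    s = sigma(v)
    return tuple((a + b) / 2 for a, b in zip(v, s))

def minus(v):
    """v- = (v - sigma(v)) / 2, anti-fixed by sigma."""
    s = sigma(v)
    return tuple((a - b) / 2 for a, b in zip(v, s))

v = (1 + 2j, 3 - 1j)
vp, vm = plus(v), minus(v)

# decomposition and (anti-)invariance:
assert all(a + b == c for a, b, c in zip(vp, vm, v))
assert sigma(vp) == vp                        # v+ lies in the real subspace V_R
assert sigma(vm) == tuple(-a for a in vm)     # v- lies in i*V_R

# K(t) = i*t maps V+ into V-: sigma(i*t) = -i*sigma(t) = -i*t for t in V+
t = vp
assert sigma(tuple(1j * a for a in t)) == tuple(-1j * a for a in t)
```

The asserts hold exactly here because all the arithmetic involves halves of small integers, which floating-point complex numbers represent without rounding.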
The complexification of the real vector space V R {\displaystyle V_{\mathbb {R} }\,} , i.e., V C = V R ⊗ R C {\displaystyle V^{\mathbb {C} }=V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} \,} admits a natural real structure and hence is canonically isomorphic to the direct sum of two copies of V R {\displaystyle V_{\mathbb {R} }\,} : V R ⊗ R C = V R ⊕ i V R {\displaystyle V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} =V_{\mathbb {R} }\oplus iV_{\mathbb {R} }\,} . It follows a natural linear isomorphism V R ⊗ R C → V {\displaystyle V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} \to V\,} between complex vector spaces with a given real structure. A real structure on a complex vector space V, that is an antilinear involution σ : V → V {\displaystyle \sigma :V\to V\,} , may be equivalently described in terms of the linear map σ ^ : V → V ¯ {\displaystyle {\hat {\sigma }}:V\to {\bar {V}}\,} from the vector space V {\displaystyle V\,} to the complex conjugate vector space V ¯ {\displaystyle {\bar {V}}\,} defined by v ↦ σ ^ ( v ) := σ ( v ) ¯ {\displaystyle v\mapsto {\hat {\sigma }}(v):={\overline {\sigma (v)}}\,} . == Algebraic variety == For an algebraic variety defined over a subfield of the real numbers, the real structure is the complex conjugation acting on the points of the variety in complex projective or affine space. Its fixed locus is the space of real points of the variety (which may be empty). == Scheme == For a scheme defined over a subfield of the real numbers, complex conjugation is in a natural way a member of the Galois group of the algebraic closure of the base field. The real structure is the Galois action of this conjugation on the extension of the scheme over the algebraic closure of the base field. The real points are the points whose residue field is fixed (which may be empty). 
== Reality structure == In mathematics, a reality structure on a complex vector space V is a decomposition of V into two real subspaces, called the real and imaginary parts of V: V = V R ⊕ i V R . {\displaystyle V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }.} Here VR is a real subspace of V, i.e. a subspace of V considered as a vector space over the real numbers. If V has complex dimension n (real dimension 2n), then VR must have real dimension n. The standard reality structure on the vector space C n {\displaystyle \mathbb {C} ^{n}} is the decomposition C n = R n ⊕ i R n . {\displaystyle \mathbb {C} ^{n}=\mathbb {R} ^{n}\oplus i\,\mathbb {R} ^{n}.} In the presence of a reality structure, every vector in V has a real part and an imaginary part, each of which is a vector in VR: v = Re ⁡ { v } + i Im ⁡ { v } {\displaystyle v=\operatorname {Re} \{v\}+i\,\operatorname {Im} \{v\}} In this case, the complex conjugate of a vector v is defined as follows: v ¯ = Re ⁡ { v } − i Im ⁡ { v } {\displaystyle {\overline {v}}=\operatorname {Re} \{v\}-i\,\operatorname {Im} \{v\}} This map v ↦ v ¯ {\displaystyle v\mapsto {\overline {v}}} is an antilinear involution, i.e. v ¯ ¯ = v , v + w ¯ = v ¯ + w ¯ , and α v ¯ = α ¯ v ¯ . {\displaystyle {\overline {\overline {v}}}=v,\quad {\overline {v+w}}={\overline {v}}+{\overline {w}},\quad {\text{and}}\quad {\overline {\alpha v}}={\overline {\alpha }}\,{\overline {v}}.} Conversely, given an antilinear involution v ↦ c ( v ) {\displaystyle v\mapsto c(v)} on a complex vector space V, it is possible to define a reality structure on V as follows. Let Re ⁡ { v } = 1 2 ( v + c ( v ) ) , {\displaystyle \operatorname {Re} \{v\}={\frac {1}{2}}\left(v+c(v)\right),} and define V R = { Re ⁡ { v } ∣ v ∈ V } . {\displaystyle V_{\mathbb {R} }=\left\{\operatorname {Re} \{v\}\mid v\in V\right\}.} Then V = V R ⊕ i V R . 
{\displaystyle V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }.} This is actually the decomposition of V as the eigenspaces of the real linear operator c. The eigenvalues of c are +1 and −1, with eigenspaces VR and i {\displaystyle i} VR, respectively. Typically, the operator c itself, rather than the eigenspace decomposition it entails, is referred to as the reality structure on V. == See also == Antilinear map Canonical complex conjugation map Complex conjugate Complex conjugate vector space Complexification Linear complex structure Linear map Sesquilinear form Spinor calculus == Notes == == References == Horn and Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2. (antilinear maps are discussed in section 4.6). Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. ISBN 0-387-19078-3. (antilinear maps are discussed in section 3.3). Penrose, Roger; Rindler, Wolfgang (1986), Spinors and space-time. Vol. 2, Cambridge Monographs on Mathematical Physics, Cambridge University Press, ISBN 978-0-521-25267-6, MR 0838301
Wikipedia:Rebeca Guber#0
Rebeca Cherep de Guber (2 July 1926 – 25 August 2020) was an Argentine mathematician, university professor, textbook author, and 1960s pioneer in the development of computer science in Argentina. Guber died in 2020 from COVID-19. == Biography == Rebeca Cherep was born in Avellaneda, a suburb of Buenos Aires, Argentina. She completed her undergraduate studies at the National University of La Plata, earned her PhD in mathematics, and taught at the Faculties of Exact and Natural Sciences and Engineering at the University of Buenos Aires. She married José Guber, an engineer, and they had at least one child, Rosana Guber. In 1960, she was part of the group of scientists and teachers who created the Argentine Calculation Society, under the direction of Manuel Sadosky, with whom, years before, she had written the textbook, Elements of Differential and Integral Calculus. In the years since its first publication, the text has been widely disseminated among advanced students of science and engineering, and republished many times. === Calculation Institute === The Calculation Institute (IC) of the Faculty of Exact and Natural Sciences was created around 1959. Rebeca Guber took over as Technical Secretary on June 6, 1960. A few months later, the computer named Clementina (which was installed in 18 metal cabinets stretching 18 metres (59 ft) long) became known as the first computer installed for scientific research in Argentina and began its operations at the IC. About her work there, Guber has recalled: "After 1955, Manuel [Sadosky] became a professor of the Analysis I course and I was his head of practical work. When the Calculation Institute was created, Manuel called me to be his chief of operations. It was a very busy and rewarding time. Manuel outlined the policies and I made sure that everything went as planned. He had to handle a group of seventy people." Guber's work proved to be fundamental in the entire process of installation and development of the famous Clementina. 
Rebeca Guber and her colleague and friend Cecilia Berdichevsky are two of the female mathematicians who were fundamental to the success of the early development of information science in Argentina. === University closure === In 1966, after Argentina's coup d'état removed the president from power and culminated in the Night of the Long Batons, scientists and researchers resigned en masse from institutes and universities. The Calculation Institute of the Faculty of Exact and Natural Sciences was "practically dismantled." After Rebeca Guber, Juan Ángel Chamero, and David Jacovkis resigned their positions there, they founded, under the leadership of Manuel Sadosky, a consultancy firm called Scientific Technical Advisors (ACT), in part to prevent the institute's lines of research and work from being totally abandoned. === Secretariat of Science and Technology === After the return of Argentine democracy and the election of president Raúl Alfonsín at the end of 1983, Guber continued to work with Sadosky when he was appointed to head the Nation's Secretariat of Science and Technology. === Legacy === In tribute to her, the Calculation Institute has a room that bears her name: the Rebeca Cherep de Guber Classroom. == Selected publications == Rebeca Guber; François Le Lionnais; Néstor Míguez; Luis Antonio Santaló, Las grandes corrientes del pensamiento matemático. Buenos Aires: Editora Universitaria de Buenos Aires, 1962. (in Spanish) Sadosky, Manuel; Guber, Rebeca Ch de, Elementos de cálculo diferencial e integral, Buenos Aires: Alsina, 1982. (in Spanish) == References ==
Wikipedia:Rebecca Hoyle#0
Rebecca Bryony Hoyle is a professor of applied mathematics at the University of Southampton, and associate dean for research at Southampton. She was the London Mathematical Society Mary Cartwright Lecturer for 2017. == Research == Hoyle describes herself as an interdisciplinary mathematician working on dynamical processes in biology and social science. Her 2017 LMS Mary Cartwright Lecture, entitled Transgenerational plasticity and environmental change, illustrates her work in evolutionary biology but her research is broader than that, touching on diverse topics in applied mathematics including dynamic network analysis and industrial ecology. She is the author of the book Pattern Formation: An Introduction to Methods (Cambridge University Press, 2006). == Education and career == Hoyle read mathematics at the University of Cambridge, where she earned a bachelor's degree in 1989, took the Mathematical Tripos in 1990, and completed her Ph.D. in 1994. Her dissertation, Instabilities of Three Dimensional Patterns, was supervised by Michael Proctor. After postdoctoral study at Northwestern University she returned to Cambridge as a research and teaching fellow, but after a brief stint at McKinsey & Company she moved to the University of Surrey in 2000. She moved again to Southampton in 2016. == Awards == Hoyle won the inaugural Hedy Lamarr Prize in 2021, awarded by the Institute of Mathematics and its Applications for knowledge exchange in mathematics and its applications. Hoyle is a founding member of the Virtual Forum for Knowledge Exchange in Mathematical Sciences (V-KEMS) and was awarded the prize primarily for her role in setting up V-KEMS and for promoting effective knowledge exchange during the COVID-19 pandemic. == References == == External links == Home page Rebecca Hoyle publications indexed by Google Scholar
Wikipedia:Rebecca Walo Omana#0
Rebecca Walo Omana (born 15 July 1951) is a Congolese mathematician, professor, and reverend sister. Omana became the first female mathematics professor in the Democratic Republic of the Congo in 1982. She is the director of the mathematics and informatics doctoral program at the University of Kinshasa and is a vice-president of the African Women in Mathematics Association. Her mathematical interests lie in differential equations, nonlinear analysis, and modeling. == Biography == Omana was born in the Democratic Republic of the Congo, on 15 July 1951. She was passionate about mathematics during high school. She made her religious profession to the Catholic Soeurs de St Francois d'Assise at the age of 18, and made her sacred vows in 1978. Omana earned a bachelor’s of science in mathematics from the Université du Québec à Montréal in 1979. She earned her master’s of science in 1982 from the Université Laval. In both institutions, she was the only African woman in the department. Of this period Omana says: I had to double effort to be better and remove negative prejudices in the heads of my colleagues and my professors to be accepted. But in view of results, I was not only accepted but invited by groups of colleagues for research works. In 1982, Omana began working as a lecturer and became the first female mathematics professor in the Democratic Republic of the Congo. Omana earned her Diplôme d'études approfondies in 1985 and her Ph.D. in 1990 from the Université catholique de Louvain where she worked with advisor Jean Mawhin. She was the first Congolese woman to earn a doctorate there. At the founding of the quarterly multidisciplinary journal la revue Notre Dame de la Sagesse (RENODAS), Omana was listed as the director. She has supervised numerous doctoral students. She hopes that some of her doctoral students will join her among the small number of female professors in the Democratic Republic of the Congo. 
Omana heads the mathematics doctoral program at the University of Kinshasa. Since 2010, she has served as the rector for the Université Notre-Dame de Tshumbe (UNITSHU), a Catholic public university which was founded in 2010 in Tshumbe, Democratic Republic of the Congo. == Mathematical works == Omana has published two books. Her work on ordinary differential equations has had applications in fields such as epidemiology and law. == Personal life == Omana's parents are not academics, but some siblings hold master's degrees. Her teachers and father influenced her decision to become a mathematician. She has said "mathematics is fantastic; as its name is female, it is a domain that should belong to us women". == See also == == Notes == == References ==
Wikipedia:Reciprocal difference#0
In mathematics, the reciprocal difference of a finite sequence of numbers ( x 0 , x 1 , . . . , x n ) {\displaystyle (x_{0},x_{1},...,x_{n})} on a function f ( x ) {\displaystyle f(x)} is defined inductively by the following formulas: ρ 1 ( x 1 , x 2 ) = x 1 − x 2 f ( x 1 ) − f ( x 2 ) {\displaystyle \rho _{1}(x_{1},x_{2})={\frac {x_{1}-x_{2}}{f(x_{1})-f(x_{2})}}} ρ 2 ( x 1 , x 2 , x 3 ) = x 1 − x 3 ρ 1 ( x 1 , x 2 ) − ρ 1 ( x 2 , x 3 ) + f ( x 2 ) {\displaystyle \rho _{2}(x_{1},x_{2},x_{3})={\frac {x_{1}-x_{3}}{\rho _{1}(x_{1},x_{2})-\rho _{1}(x_{2},x_{3})}}+f(x_{2})} ρ n ( x 1 , x 2 , … , x n + 1 ) = x 1 − x n + 1 ρ n − 1 ( x 1 , x 2 , … , x n ) − ρ n − 1 ( x 2 , x 3 , … , x n + 1 ) + ρ n − 2 ( x 2 , … , x n ) {\displaystyle \rho _{n}(x_{1},x_{2},\ldots ,x_{n+1})={\frac {x_{1}-x_{n+1}}{\rho _{n-1}(x_{1},x_{2},\ldots ,x_{n})-\rho _{n-1}(x_{2},x_{3},\ldots ,x_{n+1})}}+\rho _{n-2}(x_{2},\ldots ,x_{n})} == See also == Divided differences == References == Weisstein, Eric W. "Reciprocal Difference". MathWorld. Abramowitz, Milton; Irene A. Stegun (1972) [1964]. Handbook of Mathematical Functions (ninth Dover printing, tenth GPO printing ed.). Dover. p. 878. ISBN 0-486-61272-4.
Wikipedia:Reciprocal rule#0
In calculus, the reciprocal rule gives the derivative of the reciprocal of a function f in terms of the derivative of f. The reciprocal rule can be used to show that the power rule holds for negative exponents if it has already been established for positive exponents. Also, one can readily deduce the quotient rule from the reciprocal rule and the product rule. The reciprocal rule states that if f is differentiable at a point x and f(x) ≠ 0 then g(x) = 1/f(x) is also differentiable at x and g ′ ( x ) = − f ′ ( x ) f ( x ) 2 . {\displaystyle g'(x)=-{\frac {f'(x)}{f(x)^{2}}}.} == Proof == This proof relies on the premise that f {\displaystyle f} is differentiable at x , {\displaystyle x,} and on the theorem that f {\displaystyle f} is then also necessarily continuous there. Applying the definition of the derivative of g {\displaystyle g} at x {\displaystyle x} with f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} gives g ′ ( x ) = d d x ( 1 f ( x ) ) = lim h → 0 ( 1 f ( x + h ) − 1 f ( x ) h ) = lim h → 0 ( f ( x ) − f ( x + h ) h ⋅ f ( x ) ⋅ f ( x + h ) ) = lim h → 0 ( − ( f ( x + h ) − f ( x ) h ) ⋅ ( 1 f ( x ) ⋅ f ( x + h ) ) ) {\displaystyle {\begin{aligned}g'(x)={\frac {d}{dx}}\left({\frac {1}{f(x)}}\right)&=\lim _{h\to 0}\left({\frac {{\frac {1}{f(x+h)}}-{\frac {1}{f(x)}}}{h}}\right)\\&=\lim _{h\to 0}\left({\frac {f(x)-f(x+h)}{h\cdot f(x)\cdot f(x+h)}}\right)\\&=\lim _{h\to 0}\left(-\left({\frac {f(x+h)-f(x)}{h}}\right)\cdot \left({\frac {1}{f(x)\cdot f(x+h)}}\right)\right)\end{aligned}}} The limit of this product exists and is equal to the product of the existing limits of its factors: ( − lim h → 0 f ( x + h ) − f ( x ) h ) ⋅ ( lim h → 0 1 f ( x ) ⋅ f ( x + h ) ) {\displaystyle \left(-\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\right)\cdot \left(\lim _{h\to 0}{\frac {1}{f(x)\cdot f(x+h)}}\right)} Because of the differentiability of f {\displaystyle f} at x {\displaystyle x} the first limit equals − f ′ ( x ) , {\displaystyle -f'(x),} and because of f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} and 
the continuity of f {\displaystyle f} at x {\displaystyle x} the second limit equals 1 / f ( x ) 2 , {\displaystyle 1/f(x)^{2},} thus yielding g ′ ( x ) = − f ′ ( x ) ⋅ 1 f ( x ) 2 = − f ′ ( x ) f ( x ) 2 {\displaystyle g'(x)=-f'(x)\cdot {\frac {1}{f(x)^{2}}}=-{\frac {f'(x)}{f(x)^{2}}}} == A weak reciprocal rule that follows algebraically from the product rule == It may be argued that since f ( x ) ⋅ 1 f ( x ) = 1 , {\displaystyle f(x)\cdot {\frac {1}{f(x)}}=1,} an application of the product rule says that f ′ ( x ) ( 1 f ) ( x ) + f ( x ) ( 1 f ) ′ ( x ) = 0 , {\displaystyle f'(x)\left({\frac {1}{f}}\right)(x)+f(x)\left({\frac {1}{f}}\right)'(x)=0,} and this may be algebraically rearranged to say ( 1 f ) ′ ( x ) = − f ′ ( x ) f ( x ) 2 . {\displaystyle \left({\frac {1}{f}}\right)'(x)={\frac {-f'(x)}{f(x)^{2}}}.} However, this fails to prove that 1/f is differentiable at x; it is valid only when differentiability of 1/f at x is already established. In that way, it is a weaker result than the reciprocal rule proved above. However, in the context of differential algebra, in which there is nothing that is not differentiable and in which derivatives are not defined by limits, it is in this way that the reciprocal rule and the more general quotient rule are established. == Application to generalization of the power rule == Often the power rule, stating that d d x ( x n ) = n x n − 1 {\displaystyle {\tfrac {d}{dx}}(x^{n})=nx^{n-1}} , is proved by methods that are valid only when n is a nonnegative integer. This can be extended to negative integers n by letting n = − m {\displaystyle n=-m} , where m is a positive integer. d d x x n = d d x ( 1 x m ) = − d d x x m ( x m ) 2 , by the reciprocal rule = − m x m − 1 x 2 m , by the power rule applied to the positive integer m , = − m x − m − 1 = n x n − 1 , by substituting back n = − m . 
{\displaystyle {\begin{aligned}{\frac {d}{dx}}x^{n}&={\frac {d}{dx}}\,\left({\frac {1}{x^{m}}}\right)\\&=-{\frac {{\frac {d}{dx}}x^{m}}{(x^{m})^{2}}},{\text{ by the reciprocal rule}}\\&=-{\frac {mx^{m-1}}{x^{2m}}},{\text{ by the power rule applied to the positive integer }}m,\\&=-mx^{-m-1}=nx^{n-1},{\text{ by substituting back }}n=-m.\end{aligned}}} == Application to a proof of the quotient rule == The reciprocal rule is a special case of the quotient rule, which states that if f and g are differentiable at x and g(x) ≠ 0 then d d x [ f ( x ) g ( x ) ] = g ( x ) f ′ ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . {\displaystyle {\frac {d}{dx}}\,\left[{\frac {f(x)}{g(x)}}\right]={\frac {g(x)f\,'(x)-f(x)g'(x)}{[g(x)]^{2}}}.} The quotient rule can be proved by writing f ( x ) g ( x ) = f ( x ) ⋅ 1 g ( x ) {\displaystyle {\frac {f(x)}{g(x)}}=f(x)\cdot {\frac {1}{g(x)}}} and then first applying the product rule, and then applying the reciprocal rule to the second factor. d d x [ f ( x ) g ( x ) ] = d d x [ f ( x ) ⋅ 1 g ( x ) ] = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ d d x [ 1 g ( x ) ] = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ [ − g ′ ( x ) g ( x ) 2 ] = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\left[{\frac {f(x)}{g(x)}}\right]&={\frac {d}{dx}}\left[f(x)\cdot {\frac {1}{g(x)}}\right]\\&=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot {\frac {d}{dx}}\left[{\frac {1}{g(x)}}\right]\\&=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot \left[{\frac {-g'(x)}{g(x)^{2}}}\right]\\&={\frac {f'(x)}{g(x)}}-{\frac {f(x)g'(x)}{[g(x)]^{2}}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{[g(x)]^{2}}}.\end{aligned}}} == Application to differentiation of trigonometric functions == By using the reciprocal rule one can find the derivative of the secant and cosecant functions. 
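Before deriving these closed forms, the reciprocal rule itself can be sanity-checked numerically. A minimal sketch (the choice f = cos, so that 1/f = sec, and the step size are illustrative assumptions) compares −f′(x)/f(x)² with a central difference quotient:

```python
import math

def central_diff(g, x, h=1e-6):
    # two-sided finite-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

f, fprime = math.cos, lambda x: -math.sin(x)
x = 0.7

g = lambda t: 1 / f(t)              # g = 1/cos = sec
rule = -fprime(x) / f(x) ** 2       # reciprocal rule: -f'/f^2

assert abs(central_diff(g, x) - rule) < 1e-6
# agrees with the closed form sec(x) * tan(x)
assert abs(rule - math.tan(x) / math.cos(x)) < 1e-12
```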
For the secant function: d d x sec ⁡ x = d d x ( 1 cos ⁡ x ) = − d d x cos ⁡ x cos 2 ⁡ x = sin ⁡ x cos 2 ⁡ x = 1 cos ⁡ x ⋅ sin ⁡ x cos ⁡ x = sec ⁡ x tan ⁡ x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\sec x&={\frac {d}{dx}}\,\left({\frac {1}{\cos x}}\right)={\frac {-{\frac {d}{dx}}\cos x}{\cos ^{2}x}}={\frac {\sin x}{\cos ^{2}x}}={\frac {1}{\cos x}}\cdot {\frac {\sin x}{\cos x}}=\sec x\tan x.\end{aligned}}} The cosecant is treated similarly: d d x csc ⁡ x = d d x ( 1 sin ⁡ x ) = − d d x sin ⁡ x sin 2 ⁡ x = − cos ⁡ x sin 2 ⁡ x = − 1 sin ⁡ x ⋅ cos ⁡ x sin ⁡ x = − csc ⁡ x cot ⁡ x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\csc x&={\frac {d}{dx}}\,\left({\frac {1}{\sin x}}\right)={\frac {-{\frac {d}{dx}}\sin x}{\sin ^{2}x}}=-{\frac {\cos x}{\sin ^{2}x}}=-{\frac {1}{\sin x}}\cdot {\frac {\cos x}{\sin x}}=-\csc x\cot x.\end{aligned}}} == See also == Chain rule – For derivatives of composed functions Difference quotient – Expression in calculus Differentiation of integrals – Problem in mathematics Differentiation rules – Rules for computing derivatives of functions General Leibniz rule – Generalization of the product rule in calculus Integration by parts – Mathematical method in calculus Inverse functions and differentiation – Formula for the derivative of an inverse functionPages displaying short descriptions of redirect targets Linearity of differentiation – Calculus property Product rule – Formula for the derivative of a product Quotient rule – Formula for the derivative of a ratio of functions Table of derivatives – Rules for computing derivatives of functionsPages displaying short descriptions of redirect targets Vector calculus identities – Mathematical identities == References ==
Wikipedia:Recurrence relation#0
In mathematics, a recurrence relation is an equation according to which the n {\displaystyle n} th term of a sequence of numbers is equal to some combination of the previous terms. Often, only k {\displaystyle k} previous terms of the sequence appear in the equation, for a parameter k {\displaystyle k} that is independent of n {\displaystyle n} ; this number k {\displaystyle k} is called the order of the relation. If the values of the first k {\displaystyle k} numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation. In linear recurrences, the nth term is equated to a linear function of the k {\displaystyle k} previous terms. A famous example is the recurrence for the Fibonacci numbers, F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} where the order k {\displaystyle k} is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on n . {\displaystyle n.} For these recurrences, one can express the general term of the sequence as a closed-form expression of n {\displaystyle n} . As well, linear recurrences with polynomial coefficients depending on n {\displaystyle n} are also important, because many common elementary functions and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function). Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n {\displaystyle n} . The concept of a recurrence relation can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers. == Definition == A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. 
More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form u n = φ ( n , u n − 1 ) for n > 0 , {\displaystyle u_{n}=\varphi (n,u_{n-1})\quad {\text{for}}\quad n>0,} where φ : N × X → X {\displaystyle \varphi :\mathbb {N} \times X\to X} is a function, where X is a set to which the elements of a sequence must belong. For any u 0 ∈ X {\displaystyle u_{0}\in X} , this defines a unique sequence with u 0 {\displaystyle u_{0}} as its first element, called the initial value. The definition is easily modified to obtain sequences starting from the term of index 1 or higher. This defines a recurrence relation of first order. A recurrence relation of order k has the form u n = φ ( n , u n − 1 , u n − 2 , … , u n − k ) for n ≥ k , {\displaystyle u_{n}=\varphi (n,u_{n-1},u_{n-2},\ldots ,u_{n-k})\quad {\text{for}}\quad n\geq k,} where φ : N × X k → X {\displaystyle \varphi :\mathbb {N} \times X^{k}\to X} is a function that involves k consecutive elements of the sequence. In this case, k initial values are needed for defining a sequence. == Examples == === Factorial === The factorial is defined by the recurrence relation n ! = n ⋅ ( n − 1 ) ! for n > 0 , {\displaystyle n!=n\cdot (n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. {\displaystyle 0!=1.} This is an example of a linear recurrence with polynomial coefficients of order 1, with the simple polynomial (in n) n {\displaystyle n} as its only coefficient. === Logistic map === An example of a recurrence relation is the logistic map defined by x n + 1 = r x n ( 1 − x n ) , {\displaystyle x_{n+1}=rx_{n}(1-x_{n}),} for a given constant r . {\displaystyle r.} The behavior of the sequence depends dramatically on r , {\displaystyle r,} but is stable when the initial condition x 0 {\displaystyle x_{0}} varies. 
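Both examples can be evaluated by repeatedly applying the recurrence to the initial value; a minimal Python sketch (the parameter values r = 2.5 and x₀ = 0.2 are illustrative choices):

```python
def factorial(n):
    # n! = n * (n-1)!, with the initial condition 0! = 1
    result = 1
    for k in range(1, n + 1):
        result *= k
    return result

def logistic(r, x0, steps):
    # iterate x_{n+1} = r * x_n * (1 - x_n)
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

assert factorial(5) == 120
orbit = logistic(2.5, 0.2, 50)
# for r = 2.5 the iterates settle at the fixed point 1 - 1/r = 0.6
assert abs(orbit[-1] - 0.6) < 1e-9
```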
=== Fibonacci numbers === The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} with initial conditions F 0 = 0 {\displaystyle F_{0}=0} F 1 = 1. {\displaystyle F_{1}=1.} Explicitly, the recurrence yields the equations F 2 = F 1 + F 0 {\displaystyle F_{2}=F_{1}+F_{0}} F 3 = F 2 + F 1 {\displaystyle F_{3}=F_{2}+F_{1}} F 4 = F 3 + F 2 {\displaystyle F_{4}=F_{3}+F_{2}} etc. We obtain the sequence of Fibonacci numbers, which begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... The recurrence can be solved by methods described below yielding Binet's formula, which involves powers of the two roots of the characteristic polynomial t 2 = t + 1 {\displaystyle t^{2}=t+1} ; the generating function of the sequence is the rational function t 1 − t − t 2 . {\displaystyle {\frac {t}{1-t-t^{2}}}.} === Binomial coefficients === A simple example of a multidimensional recurrence relation is given by the binomial coefficients ( n k ) {\displaystyle {\tbinom {n}{k}}} , which count the ways of selecting k {\displaystyle k} elements out of a set of n {\displaystyle n} elements. They can be computed by the recurrence relation ( n k ) = ( n − 1 k − 1 ) + ( n − 1 k ) , {\displaystyle {\binom {n}{k}}={\binom {n-1}{k-1}}+{\binom {n-1}{k}},} with the base cases ( n 0 ) = ( n n ) = 1 {\displaystyle {\tbinom {n}{0}}={\tbinom {n}{n}}=1} . Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence, but uses factorials, multiplication and division, not just additions: ( n k ) = n ! k ! ( n − k ) ! . 
{\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.} The binomial coefficients can also be computed with a uni-dimensional recurrence: ( n k ) = ( n k − 1 ) ( n − k + 1 ) / k , {\displaystyle {\binom {n}{k}}={\binom {n}{k-1}}(n-k+1)/k,} with the initial value ( n 0 ) = 1 {\textstyle {\binom {n}{0}}=1} (the division is not displayed as a fraction, to emphasize that it must be computed after the multiplication, so as not to introduce fractional numbers). This recurrence is widely used in computers because it does not require building a table, as the bi-dimensional recurrence does, and does not involve very large integers, as the formula with factorials does (if one uses ( n k ) = ( n n − k ) , {\textstyle {\binom {n}{k}}={\binom {n}{n-k}},} all involved integers are smaller than the final result). == Difference operator and difference equations == The difference operator is an operator that maps sequences to sequences, and, more generally, functions to functions. It is commonly denoted Δ , {\displaystyle \Delta ,} and is defined, in functional notation, as ( Δ f ) ( x ) = f ( x + 1 ) − f ( x ) . {\displaystyle (\Delta f)(x)=f(x+1)-f(x).} It is thus a special case of finite difference. When using the index notation for sequences, the definition becomes ( Δ a ) n = a n + 1 − a n . {\displaystyle (\Delta a)_{n}=a_{n+1}-a_{n}.} The parentheses around Δ f {\displaystyle \Delta f} and Δ a {\displaystyle \Delta a} are generally omitted, and Δ a n {\displaystyle \Delta a_{n}} must be understood as the term of index n in the sequence Δ a , {\displaystyle \Delta a,} and not Δ {\displaystyle \Delta } applied to the element a n . {\displaystyle a_{n}.} Given a sequence a = ( a n ) n ∈ N , {\displaystyle a=(a_{n})_{n\in \mathbb {N} },} the first difference of a is Δ a . {\displaystyle \Delta a.} The second difference is Δ 2 a = ( Δ ∘ Δ ) a = Δ ( Δ a ) . 
{\displaystyle \Delta ^{2}a=(\Delta \circ \Delta )a=\Delta (\Delta a).} A simple computation shows that Δ 2 a n = a n + 2 − 2 a n + 1 + a n . {\displaystyle \Delta ^{2}a_{n}=a_{n+2}-2a_{n+1}+a_{n}.} More generally: the kth difference is defined recursively as Δ k = Δ ∘ Δ k − 1 , {\displaystyle \Delta ^{k}=\Delta \circ \Delta ^{k-1},} and one has Δ k a n = ∑ t = 0 k ( − 1 ) t ( k t ) a n + k − t . {\displaystyle \Delta ^{k}a_{n}=\sum _{t=0}^{k}(-1)^{t}{\binom {k}{t}}a_{n+k-t}.} This relation can be inverted, giving a n + k = a n + ( k 1 ) Δ a n + ⋯ + ( k k ) Δ k ( a n ) . {\displaystyle a_{n+k}=a_{n}+{k \choose 1}\Delta a_{n}+\cdots +{k \choose k}\Delta ^{k}(a_{n}).} A difference equation of order k is an equation that involves the k first differences of a sequence or a function, in the same way as a differential equation of order k relates the k first derivatives of a function. The two above relations allow transforming a recurrence relation of order k into a difference equation of order k, and, conversely, a difference equation of order k into a recurrence relation of order k. Each transformation is the inverse of the other, and the sequences that are solutions of the difference equation are exactly those that satisfy the recurrence relation. For example, the difference equation 3 Δ 2 a n + 2 Δ a n + 7 a n = 0 {\displaystyle 3\Delta ^{2}a_{n}+2\Delta a_{n}+7a_{n}=0} is equivalent to the recurrence relation 3 a n + 2 = 4 a n + 1 − 8 a n , {\displaystyle 3a_{n+2}=4a_{n+1}-8a_{n},} in the sense that the two equations are satisfied by the same sequences. As it is equivalent for a sequence to satisfy a recurrence relation or to be the solution of a difference equation, the two terms "recurrence relation" and "difference equation" are sometimes used interchangeably. 
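This equivalence can be checked numerically: generate a sequence from the recurrence form and verify that every term satisfies the difference-equation form. A minimal sketch (the initial values are arbitrary; exact rationals avoid rounding noise):

```python
from fractions import Fraction

def step(a_prev, a_curr):
    # recurrence form: 3 a_{n+2} = 4 a_{n+1} - 8 a_n
    return (4 * a_curr - 8 * a_prev) / 3

a = [Fraction(1), Fraction(2)]          # arbitrary initial values
for _ in range(10):
    a.append(step(a[-2], a[-1]))

# difference-equation form: 3*Delta^2 a_n + 2*Delta a_n + 7 a_n = 0
for n in range(len(a) - 2):
    d1 = a[n + 1] - a[n]                 # Delta a_n
    d2 = a[n + 2] - 2 * a[n + 1] + a[n]  # Delta^2 a_n
    assert 3 * d2 + 2 * d1 + 7 * a[n] == 0
```

Expanding 3Δ²aₙ + 2Δaₙ + 7aₙ gives 3aₙ₊₂ − 4aₙ₊₁ + 8aₙ, which is exactly why the loop's assertion holds for every sequence produced by the recurrence.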
See Rational difference equation and Matrix difference equation for examples of the use of "difference equation" instead of "recurrence relation". Difference equations resemble differential equations, and this resemblance is often used to adapt methods for solving differential equations to the solution of difference equations, and therefore recurrence relations. Summation equations relate to difference equations as integral equations relate to differential equations. See time scale calculus for a unification of the theory of difference equations with that of differential equations. === From sequences to grids === Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about n {\displaystyle n} -dimensional grids. Functions defined on n {\displaystyle n} -grids can also be studied with partial difference equations. == Solving == === Solving linear recurrence relations with constant coefficients === === Solving first-order non-homogeneous recurrence relations with variable coefficients === Moreover, for the general first-order non-homogeneous linear recurrence relation with variable coefficients: a n + 1 = f n a n + g n , f n ≠ 0 , {\displaystyle a_{n+1}=f_{n}a_{n}+g_{n},\qquad f_{n}\neq 0,} there is also a nice method to solve it: a n + 1 − f n a n = g n {\displaystyle a_{n+1}-f_{n}a_{n}=g_{n}} a n + 1 ∏ k = 0 n f k − f n a n ∏ k = 0 n f k = g n ∏ k = 0 n f k {\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {f_{n}a_{n}}{\prod _{k=0}^{n}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}} a n + 1 ∏ k = 0 n f k − a n ∏ k = 0 n − 1 f k = g n ∏ k = 0 n f k {\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}} Let A n = a n ∏ k = 0 n − 1 f k , {\displaystyle A_{n}={\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}},} Then A n + 1 − A n = g n ∏ k = 0 n f k 
{\displaystyle A_{n+1}-A_{n}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}} ∑ m = 0 n − 1 ( A m + 1 − A m ) = A n − A 0 = ∑ m = 0 n − 1 g m ∏ k = 0 m f k {\displaystyle \sum _{m=0}^{n-1}(A_{m+1}-A_{m})=A_{n}-A_{0}=\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}} a n ∏ k = 0 n − 1 f k = A 0 + ∑ m = 0 n − 1 g m ∏ k = 0 m f k {\displaystyle {\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}=A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}} a n = ( ∏ k = 0 n − 1 f k ) ( A 0 + ∑ m = 0 n − 1 g m ∏ k = 0 m f k ) {\displaystyle a_{n}=\left(\prod _{k=0}^{n-1}f_{k}\right)\left(A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}\right)} If we apply the formula to a n + 1 = ( 1 + h f n h ) a n + h g n h {\displaystyle a_{n+1}=(1+hf_{nh})a_{n}+hg_{nh}} and take the limit h → 0 {\displaystyle h\to 0} , we get the formula for first order linear differential equations with variable coefficients; the sum becomes an integral, and the product becomes the exponential function of an integral. === Solving general homogeneous linear recurrence relations === Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to J n + 1 = 2 n z J n − J n − 1 {\displaystyle J_{n+1}={\frac {2n}{z}}J_{n}-J_{n-1}} is given by J n = J n ( z ) , {\displaystyle J_{n}=J_{n}(z),} the Bessel function, while ( b − n ) M n − 1 + ( 2 n − b + z ) M n − n M n + 1 = 0 {\displaystyle (b-n)M_{n-1}+(2n-b+z)M_{n}-nM_{n+1}=0} is solved by M n = M ( n , b ; z ) {\displaystyle M_{n}=M(n,b;z)} the confluent hypergeometric series. Sequences which are the solutions of linear difference equations with polynomial coefficients are called P-recursive. For these specific recurrence equations algorithms are known which find polynomial, rational or hypergeometric solutions. 
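As a sketch (not part of the article), the closed form derived above for a_{n+1} = f_n a_n + g_n can be checked against direct iteration; here A_0 = a_0, since the empty product equals 1, and exact rational arithmetic is used with arbitrary illustrative coefficients:

```python
# Compare direct iteration of a_{n+1} = f_n a_n + g_n with the closed form
# a_n = (prod_{k<n} f_k) * (a_0 + sum_{m<n} g_m / prod_{k<=m} f_k).
from fractions import Fraction
from math import prod

f = [Fraction(k + 2) for k in range(10)]     # arbitrary nonzero coefficients
g = [Fraction(3 * k - 1) for k in range(10)]
a0 = Fraction(5)

# direct iteration
a = [a0]
for n in range(10):
    a.append(f[n] * a[n] + g[n])

# closed form, checked for every n
for n in range(11):
    closed = prod(f[:n], start=Fraction(1)) * (
        a0 + sum(g[m] / prod(f[: m + 1], start=Fraction(1)) for m in range(n))
    )
    assert closed == a[n]
```

For n = 1 the closed form reduces to f_0 (a_0 + g_0/f_0) = f_0 a_0 + g_0, matching one step of the recurrence.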
=== Solving general non-homogeneous linear recurrence relations with constant coefficients === Furthermore, the general non-homogeneous linear recurrence relation with constant coefficients can be solved by variation of parameters. === Solving first-order rational difference equations === A first-order rational difference equation has the form w t + 1 = a w t + b c w t + d {\displaystyle w_{t+1}={\tfrac {aw_{t}+b}{cw_{t}+d}}} . Such an equation can be solved by writing w t {\displaystyle w_{t}} as a nonlinear transformation of another variable x t {\displaystyle x_{t}} which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in x t {\displaystyle x_{t}} . == Stability == === Stability of linear higher-order recurrences === The linear recurrence of order d {\displaystyle d} , a n = c 1 a n − 1 + c 2 a n − 2 + ⋯ + c d a n − d , {\displaystyle a_{n}=c_{1}a_{n-1}+c_{2}a_{n-2}+\cdots +c_{d}a_{n-d},} has the characteristic equation λ d − c 1 λ d − 1 − c 2 λ d − 2 − ⋯ − c d λ 0 = 0. {\displaystyle \lambda ^{d}-c_{1}\lambda ^{d-1}-c_{2}\lambda ^{d-2}-\cdots -c_{d}\lambda ^{0}=0.} The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value. === Stability of linear first-order matrix recurrences === In the first-order matrix difference equation [ x t − x ∗ ] = A [ x t − 1 − x ∗ ] {\displaystyle [x_{t}-x^{*}]=A[x_{t-1}-x^{*}]} with state vector x {\displaystyle x} and transition matrix A {\displaystyle A} , x {\displaystyle x} converges asymptotically to the steady state vector x ∗ {\displaystyle x^{*}} if and only if all eigenvalues of the transition matrix A {\displaystyle A} (whether real or complex) have an absolute value which is less than 1.
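As a sketch (not from the article), the root criterion for linear higher-order recurrences can be illustrated on an order-2 example, with characteristic roots obtained from the quadratic formula; the chosen coefficients are illustrative assumptions:

```python
# Stability of a_n = c1*a_{n-1} + c2*a_{n-2} is governed by the roots of
# lambda^2 - c1*lambda - c2 = 0: stable iff both roots have |lambda| < 1.
import cmath

def char_roots(c1, c2):
    disc = cmath.sqrt(c1 * c1 + 4 * c2)
    return (c1 + disc) / 2, (c1 - disc) / 2

def iterate(c1, c2, a0, a1, n):
    a, b = a0, a1
    for _ in range(n):
        a, b = b, c1 * b + c2 * a
    return b

# stable case: roots 1/2 and 1/3, so c1 = 5/6, c2 = -1/6
r = char_roots(5 / 6, -1 / 6)
assert max(abs(x) for x in r) < 1
assert abs(iterate(5 / 6, -1 / 6, 1.0, 2.0, 200)) < 1e-12  # iterates decay to 0

# unstable case: roots 2 and 1/3, so c1 = 7/3, c2 = -2/3
r = char_roots(7 / 3, -2 / 3)
assert max(abs(x) for x in r) > 1
assert abs(iterate(7 / 3, -2 / 3, 1.0, 2.0, 50)) > 1e6     # iterates blow up
```

With roots p and q, the recurrence coefficients are c1 = p + q and c2 = −pq, which is how the two test cases were constructed.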
=== Stability of nonlinear first-order recurrences === Consider the nonlinear first-order recurrence x n = f ( x n − 1 ) . {\displaystyle x_{n}=f(x_{n-1}).} This recurrence is locally stable, meaning that it converges to a fixed point x ∗ {\displaystyle x^{*}} from points sufficiently close to x ∗ {\displaystyle x^{*}} , if the slope of f {\displaystyle f} in the neighborhood of x ∗ {\displaystyle x^{*}} is smaller than unity in absolute value: that is, | f ′ ( x ∗ ) | < 1. {\displaystyle |f'(x^{*})|<1.} A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable. A nonlinear recurrence relation could also have a cycle of period k {\displaystyle k} for k > 1 {\displaystyle k>1} . Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function g ( x ) := f ∘ f ∘ ⋯ ∘ f ( x ) {\displaystyle g(x):=f\circ f\circ \cdots \circ f(x)} with f {\displaystyle f} appearing k {\displaystyle k} times is locally stable according to the same criterion: | g ′ ( x ∗ ) | < 1 , {\displaystyle |g'(x^{*})|<1,} where x ∗ {\displaystyle x^{*}} is any point on the cycle. In a chaotic recurrence relation, the variable x {\displaystyle x} stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map. == Relationship to differential equations == When solving an ordinary differential equation numerically, one typically encounters a recurrence relation. 
For example, when solving the initial value problem y ′ ( t ) = f ( t , y ( t ) ) , y ( t 0 ) = y 0 , {\displaystyle y'(t)=f(t,y(t)),\ \ y(t_{0})=y_{0},} with Euler's method and a step size h {\displaystyle h} , one calculates the values y 0 = y ( t 0 ) , y 1 = y ( t 0 + h ) , y 2 = y ( t 0 + 2 h ) , … {\displaystyle y_{0}=y(t_{0}),\ \ y_{1}=y(t_{0}+h),\ \ y_{2}=y(t_{0}+2h),\ \dots } by the recurrence y n + 1 = y n + h f ( t n , y n ) , t n = t 0 + n h {\displaystyle \,y_{n+1}=y_{n}+hf(t_{n},y_{n}),t_{n}=t_{0}+nh} Systems of linear first order differential equations can be discretized exactly analytically using the methods shown in the discretization article. == Applications == === Mathematical biology === Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population. The logistic map is used either directly to model population growth, or as a starting point for more detailed models of population dynamics. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson–Bailey model for a host-parasite interaction is given by N t + 1 = λ N t e − a P t {\displaystyle N_{t+1}=\lambda N_{t}e^{-aP_{t}}} P t + 1 = N t ( 1 − e − a P t ) , {\displaystyle P_{t+1}=N_{t}(1-e^{-aP_{t}}),} with N t {\displaystyle N_{t}} representing the hosts, and P t {\displaystyle P_{t}} the parasites, at time t {\displaystyle t} . Integrodifference equations are a form of recurrence relation important to spatial ecology. These and other difference equations are particularly suited to modeling univoltine populations. === Computer science === Recurrence relations are also of fundamental importance in analysis of algorithms. 
If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation. A simple example is the time an algorithm takes to find an element in an ordered vector with n {\displaystyle n} elements, in the worst case. A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is n {\displaystyle n} . A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater than or less than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by c 1 = 1 {\displaystyle c_{1}=1} c n = 1 + c n / 2 {\displaystyle c_{n}=1+c_{n/2}} the time complexity of which will be O ( log 2 ( n ) ) {\displaystyle O(\log _{2}(n))} . === Digital signal processing === In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs at future times. They thus arise in infinite impulse response (IIR) digital filters. For example, the equation for a feedback IIR comb filter of delay T {\displaystyle T} is: y t = ( 1 − α ) x t + α y t − T , {\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha y_{t-T},} where x t {\displaystyle x_{t}} is the input at time t {\displaystyle t} , y t {\displaystyle y_{t}} is the output at time t {\displaystyle t} , and α {\displaystyle \alpha } controls how much of the delayed signal is fed back into the output.
From this we can see that y t = ( 1 − α ) x t + α ( ( 1 − α ) x t − T + α y t − 2 T ) {\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha ((1-\alpha )x_{t-T}+\alpha y_{t-2T})} y t = ( 1 − α ) x t + ( α − α 2 ) x t − T + α 2 y t − 2 T {\displaystyle y_{t}=(1-\alpha )x_{t}+(\alpha -\alpha ^{2})x_{t-T}+\alpha ^{2}y_{t-2T}} etc. === Economics === Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics. In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of past and current values of other variables. == See also == == References == === Footnotes === === Bibliography === Batchelder, Paul M. (1967). An introduction to linear difference equations. Dover Publications. Miller, Kenneth S. (1968). Linear difference equations. W. A. Benjamin. Fillmore, Jay P.; Marx, Morris L. (1968). "Linear recursive sequences". SIAM Rev. Vol. 10, no. 3. pp. 324–353. JSTOR 2027658. Brousseau, Alfred (1971). Linear Recursion and Fibonacci Sequences. Fibonacci Association. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 1990. ISBN 0-262-03293-7. Chapter 4: Recurrences, pp. 62–90. Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2 ed.). Addison-Wesley. ISBN 0-201-55802-5. Enders, Walter (2010). Applied Econometric Times Series (3 ed.). Archived from the original on 2014-11-10. Cull, Paul; Flahive, Mary; Robson, Robbie (2005). Difference Equations: From Rabbits to Chaos. Springer. ISBN 0-387-23234-6. chapter 7. Jacques, Ian (2006). Mathematics for Economics and Business (Fifth ed.). 
Prentice Hall. pp. 551–568. ISBN 0-273-70195-9. Chapter 9.1: Difference Equations. Minh, Tang; Van To, Tan (2006). "Using generating functions to solve linear inhomogeneous recurrence equations" (PDF). Proc. Int. Conf. Simulation, Modelling and Optimization, SMO'06. pp. 399–404. Archived from the original (PDF) on 2016-03-04. Retrieved 2014-08-07. Polyanin, Andrei D. "Difference and Functional Equations: Exact Solutions". at EqWorld - The World of Mathematical Equations. Polyanin, Andrei D. "Difference and Functional Equations: Methods". at EqWorld - The World of Mathematical Equations. Wang, Xiang-Sheng; Wong, Roderick (2012). "Asymptotics of orthogonal polynomials via recurrence relations". Anal. Appl. 10 (2): 215–235. arXiv:1101.4371. doi:10.1142/S0219530512500108. S2CID 28828175. == External links == "Recurrence relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Recurrence Equation". MathWorld. "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients)
Wikipedia:Recurrent word#0
In mathematics, a recurrent word or sequence is an infinite word over a finite alphabet in which every factor occurs infinitely many times. An infinite word is recurrent if and only if it is a sesquipower. A uniformly recurrent word is a recurrent word in which for any given factor X in the sequence, there is some length nX (often much longer than the length of X) such that X appears in every block of length nX. The terms minimal sequence and almost periodic sequence (Muchnik, Semenov, Ushakov 2003) are also used. == Examples == The easiest way to make a recurrent sequence is to form a periodic sequence, one where the sequence repeats entirely after a given number m of steps. Such a sequence is then uniformly recurrent and nX can be set to any multiple of m that is larger than twice the length of X. A recurrent sequence that is ultimately periodic is purely periodic. The Thue–Morse sequence is uniformly recurrent without being periodic, nor even eventually periodic (meaning periodic after some nonperiodic initial segment). All Sturmian words are uniformly recurrent. == Notes == == References == Allouche, Jean-Paul; Shallit, Jeffrey (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. ISBN 978-0-521-82332-6. Zbl 1086.11015. Berstel, Jean; Lauve, Aaron; Reutenauer, Christophe; Saliola, Franco V. (2009). Combinatorics on words. Christoffel words and repetitions in words. CRM Monograph Series. Vol. 27. Providence, RI: American Mathematical Society. ISBN 978-0-8218-4480-9. Zbl 1161.68043. Berthé, Valérie; Rigo, Michel, eds. (2010). Combinatorics, automata, and number theory. Encyclopedia of Mathematics and its Applications. Vol. 135. Cambridge: Cambridge University Press. ISBN 978-0-521-51597-9. Zbl 1197.68006. Lothaire, M. (2011). Algebraic combinatorics on words. Encyclopedia of Mathematics and Its Applications. Vol. 90. With preface by Jean Berstel and Dominique Perrin (Reprint of the 2002 hardback ed.). 
Cambridge University Press. ISBN 978-0-521-18071-9. Zbl 1221.68183. Pytheas Fogg, N. (2002). Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, Anne (eds.). Substitutions in dynamics, arithmetics and combinatorics. Lecture Notes in Mathematics. Vol. 1794. Berlin: Springer-Verlag. ISBN 3-540-44141-7. Zbl 1014.11015. An. Muchnik, A. Semenov, M. Ushakov, Almost periodic sequences, Theoret. Comput. Sci. vol.304 no.1-3 (2003), 1-33.
Wikipedia:Red auxiliary number#0
In the study of ancient Egyptian mathematics, red auxiliary numbers are numbers written in red ink in the Rhind Mathematical Papyrus, apparently used as aids for arithmetic computations involving fractions. The technique is considered to be among the first examples of a method that uses least common multiples. == References == Gillings, Richard J. (1982). Mathematics in the Time of the Pharaohs. Dover Publications. ISBN 9780486243153. OCLC 301431218. Clagett, Marshall (1989). Ancient Egyptian Science: Ancient Egyptian mathematics. American Philosophical Society. ISBN 9780871692320. OCLC 313400062. Bunt, Lucas N. H.; Jones, Phillip S.; Bedient, Jack D. (2012). The Historical Roots of Elementary Mathematics. Dover Publications. ISBN 9780486139685. OCLC 868272907.
Wikipedia:Reduced derivative#0
In mathematics, the reduced derivative is a generalization of the notion of derivative that is well-suited to the study of functions of bounded variation. Although functions of bounded variation have derivatives in the sense of Radon measures, it is desirable to have a derivative that takes values in the same space as the functions themselves. Although the precise definition of the reduced derivative is quite involved, its key properties are quite easy to remember: it is a multiple of the usual derivative wherever it exists; at jump points, it is a multiple of the jump vector. The notion of reduced derivative appears to have been introduced by Alexander Mielke and Florian Theil in 2004. == Definition == Let X be a separable, reflexive Banach space with norm || || and fix T > 0. Let BV−([0, T]; X) denote the space of all left-continuous functions z : [0, T] → X with bounded variation on [0, T]. For any function of time f, use subscripts +/− to denote the right/left continuous versions of f, i.e. f + ( t ) = lim s ↓ t f ( s ) ; {\displaystyle f_{+}(t)=\lim _{s\downarrow t}f(s);} f − ( t ) = lim s ↑ t f ( s ) . {\displaystyle f_{-}(t)=\lim _{s\uparrow t}f(s).} For any sub-interval [a, b] of [0, T], let Var(z, [a, b]) denote the variation of z over [a, b], i.e., the supremum V a r ( z , [ a , b ] ) = sup { ∑ i = 1 k ‖ z ( t i ) − z ( t i − 1 ) ‖ | a = t 0 < t 1 < ⋯ < t k = b , k ∈ N } . {\displaystyle \mathrm {Var} (z,[a,b])=\sup \left\{\left.\sum _{i=1}^{k}\|z(t_{i})-z(t_{i-1})\|\right|a=t_{0}<t_{1}<\cdots <t_{k}=b,k\in \mathbb {N} \right\}.} The first step in the construction of the reduced derivative is the "stretch" time so that z can be linearly interpolated at its jump points. To this end, define τ ^ : [ 0 , T ] → [ 0 , + ∞ ) ; {\displaystyle {\hat {\tau }}\colon [0,T]\to [0,+\infty );} τ ^ ( t ) = t + ∫ [ 0 , t ] ‖ d z ‖ = t + V a r ( z , [ 0 , t ] ) . 
{\displaystyle {\hat {\tau }}(t)=t+\int _{[0,t]}\|\mathrm {d} z\|=t+\mathrm {Var} (z,[0,t]).} The "stretched time" function τ̂ is left-continuous (i.e. τ̂ = τ̂−); moreover, τ̂− and τ̂+ are strictly increasing and agree except at the (at most countable) jump points of z. Setting T̂ = τ̂(T), this "stretch" can be inverted by t ^ : [ 0 , T ^ ] → [ 0 , T ] ; {\displaystyle {\hat {t}}\colon [0,{\hat {T}}]\to [0,T];} t ^ ( τ ) = max { t ∈ [ 0 , T ] | τ ^ ( t ) ≤ τ } . {\displaystyle {\hat {t}}(\tau )=\max\{t\in [0,T]|{\hat {\tau }}(t)\leq \tau \}.} Using this, the stretched version of z is defined by z ^ ∈ C 0 ( [ 0 , T ^ ] ; X ) ; {\displaystyle {\hat {z}}\in C^{0}([0,{\hat {T}}];X);} z ^ ( τ ) = ( 1 − θ ) z − ( t ) + θ z + ( t ) {\displaystyle {\hat {z}}(\tau )=(1-\theta )z_{-}(t)+\theta z_{+}(t)} where θ ∈ [0, 1] and τ = ( 1 − θ ) τ ^ − ( t ) + θ τ ^ + ( t ) . {\displaystyle \tau =(1-\theta ){\hat {\tau }}_{-}(t)+\theta {\hat {\tau }}_{+}(t).} The effect of this definition is to create a new function ẑ which "stretches out" the jumps of z by linear interpolation. A quick calculation shows that ẑ is not just continuous, but also lies in a Sobolev space: z ^ ∈ W 1 , ∞ ( [ 0 , T ^ ] ; X ) ; {\displaystyle {\hat {z}}\in W^{1,\infty }([0,{\hat {T}}];X);} ‖ d z ^ d τ ‖ L ∞ ( [ 0 , T ^ ] ; X ) ≤ 1. {\displaystyle \left\|{\frac {\mathrm {d} {\hat {z}}}{\mathrm {d} \tau }}\right\|_{L^{\infty }([0,{\hat {T}}];X)}\leq 1.} The derivative of ẑ(τ) with respect to τ is defined almost everywhere with respect to Lebesgue measure. The reduced derivative of z is the pull-back of this derivative by the stretching function τ̂ : [0, T] → [0, T̂]. In other words, r d ( z ) : [ 0 , T ] → { x ∈ X | ‖ x ‖ ≤ 1 } ; {\displaystyle \mathrm {rd} (z)\colon [0,T]\to \{x\in X|\|x\|\leq 1\};} r d ( z ) ( t ) = d z ^ d τ ( τ ^ − ( t ) + τ ^ + ( t ) 2 ) . 
{\displaystyle \mathrm {rd} (z)(t)={\frac {\mathrm {d} {\hat {z}}}{\mathrm {d} \tau }}\left({\frac {{\hat {\tau }}_{-}(t)+{\hat {\tau }}_{+}(t)}{2}}\right).} Associated with this pull-back of the derivative is the pull-back of Lebesgue measure on [0, T̂], which defines the differential measure μz: μ z ( [ t 1 , t 2 ) ) = λ ( [ τ ^ ( t 1 ) , τ ^ ( t 2 ) ) = τ ^ ( t 2 ) − τ ^ ( t 1 ) = t 2 − t 1 + ∫ [ t 1 , t 2 ] ‖ d z ‖ . {\displaystyle \mu _{z}([t_{1},t_{2}))=\lambda ([{\hat {\tau }}(t_{1}),{\hat {\tau }}(t_{2}))={\hat {\tau }}(t_{2})-{\hat {\tau }}(t_{1})=t_{2}-t_{1}+\int _{[t_{1},t_{2}]}\|\mathrm {d} z\|.} == Properties == The reduced derivative rd(z) is defined only μz-almost everywhere on [0, T]. If t is a jump point of z, then μ z ( { t } ) = ‖ z + ( t ) − z − ( t ) ‖ and r d ( z ) ( t ) = z + ( t ) − z − ( t ) ‖ z + ( t ) − z − ( t ) ‖ . {\displaystyle \mu _{z}(\{t\})=\|z_{+}(t)-z_{-}(t)\|{\mbox{ and }}\mathrm {rd} (z)(t)={\frac {z_{+}(t)-z_{-}(t)}{\|z_{+}(t)-z_{-}(t)\|}}.} If z is differentiable on (t1, t2), then μ z ( ( t 1 , t 2 ) ) = ∫ t 1 t 2 1 + ‖ z ˙ ( t ) ‖ d t {\displaystyle \mu _{z}((t_{1},t_{2}))=\int _{t_{1}}^{t_{2}}1+\|{\dot {z}}(t)\|\,\mathrm {d} t} and, for t ∈ (t1, t2), r d ( z ) ( t ) = z ˙ ( t ) 1 + ‖ z ˙ ( t ) ‖ {\displaystyle \mathrm {rd} (z)(t)={\frac {{\dot {z}}(t)}{1+\|{\dot {z}}(t)\|}}} , For 0 ≤ s < t ≤ T, ∫ [ s , t ) r d ( z ) ( r ) d μ z ( r ) = ∫ [ s , t ) d z = z ( t ) − z ( s ) . {\displaystyle \int _{[s,t)}\mathrm {rd} (z)(r)\,\mathrm {d} \mu _{z}(r)=\int _{[s,t)}\mathrm {d} z=z(t)-z(s).} == References == Mielke, Alexander; Theil, Florian (2004). "On rate-independent hysteresis models". NoDEA Nonlinear Differential Equations Appl. 11 (2): 151–189. doi:10.1007/s00030-003-1052-7. ISSN 1021-9722. S2CID 54705046. MR2210284
Wikipedia:Reducing subspace#0
In linear algebra, a reducing subspace W {\displaystyle W} of a linear map T : V → V {\displaystyle T:V\to V} from a Hilbert space V {\displaystyle V} to itself is an invariant subspace of T {\displaystyle T} whose orthogonal complement W ⊥ {\displaystyle W^{\perp }} is also an invariant subspace of T . {\displaystyle T.} That is, T ( W ) ⊆ W {\displaystyle T(W)\subseteq W} and T ( W ⊥ ) ⊆ W ⊥ . {\displaystyle T(W^{\perp })\subseteq W^{\perp }.} One says that the subspace W {\displaystyle W} reduces the map T . {\displaystyle T.} One says that a linear map is reducible if it has a nontrivial reducing subspace. Otherwise one says it is irreducible. If V {\displaystyle V} is of finite dimension r {\displaystyle r} and W {\displaystyle W} is a reducing subspace of the map T : V → V {\displaystyle T:V\to V} represented under basis B {\displaystyle B} by matrix M ∈ R r × r {\displaystyle M\in \mathbb {R} ^{r\times r}} then M {\displaystyle M} can be expressed as the sum M = P W M P W + P W ⊥ M P W ⊥ {\displaystyle M=P_{W}MP_{W}+P_{W^{\perp }}MP_{W^{\perp }}} where P W ∈ R r × r {\displaystyle P_{W}\in \mathbb {R} ^{r\times r}} is the matrix of the orthogonal projection from V {\displaystyle V} to W {\displaystyle W} and P W ⊥ = I − P W {\displaystyle P_{W^{\perp }}=I-P_{W}} is the matrix of the projection onto W ⊥ . {\displaystyle W^{\perp }.} (Here I ∈ R r × r {\displaystyle I\in \mathbb {R} ^{r\times r}} is the identity matrix.) Furthermore, V {\displaystyle V} has an orthonormal basis B ′ {\displaystyle B'} with a subset that is an orthonormal basis of W {\displaystyle W} . 
If Q ∈ R r × r {\displaystyle Q\in \mathbb {R} ^{r\times r}} is the transition matrix from B {\displaystyle B} to B ′ {\displaystyle B'} then with respect to B ′ {\displaystyle B'} the matrix Q − 1 M Q {\displaystyle Q^{-1}MQ} representing T {\displaystyle T} is a block-diagonal matrix Q − 1 M Q = [ A 0 0 B ] {\displaystyle Q^{-1}MQ=\left[{\begin{array}{cc}A&0\\0&B\end{array}}\right]} with A ∈ R d × d , {\displaystyle A\in \mathbb {R} ^{d\times d},} where d = dim ⁡ W {\displaystyle d=\dim W} , and B ∈ R ( r − d ) × ( r − d ) . {\displaystyle B\in \mathbb {R} ^{(r-d)\times (r-d)}.} == References ==
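A numerical sketch of the decomposition above (not part of the article; assumes NumPy is available): starting from a block-diagonal matrix D and an orthogonal change of basis Q, the subspace W spanned by the first columns of Q reduces the map M = QDQᵀ, and M splits as P_W M P_W + P_{W⊥} M P_{W⊥}:

```python
# Build a map reduced by a chosen subspace W by conjugating a block-diagonal
# matrix with an orthogonal change of basis, then verify the decomposition.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # block acting on W
B = rng.standard_normal((3, 3))  # block acting on the orthogonal complement
D = np.block([[A, np.zeros((2, 3))], [np.zeros((3, 2)), B]])

Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthonormal basis B'
M = Q @ D @ Q.T                                   # same map in the standard basis

W = Q[:, :2]              # orthonormal basis of the reducing subspace W
P = W @ W.T               # orthogonal projection onto W
Pc = np.eye(5) - P        # projection onto the orthogonal complement

assert np.allclose(M @ W, P @ (M @ W))        # W is invariant under M
assert np.allclose(M, P @ M @ P + Pc @ M @ Pc)  # the sum decomposition
assert np.allclose(Q.T @ M @ Q, D)            # block-diagonal form under B'
```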
Wikipedia:Reduction (mathematics)#0
In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. == Algebra == In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination. == Calculus == In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. == Static (Guyan) reduction == In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: [ K 11 K 12 K 21 K 22 ] [ x 1 x 2 ] = [ F 1 F 2 ] {\displaystyle {\begin{bmatrix}K_{11}&K_{12}\\K_{21}&K_{22}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}F_{1}\\F_{2}\end{bmatrix}}} where K and F are known and K, x and F are divided into submatrices as shown above. 
If F2 contains only zeros, and only x1 is desired, K can be reduced to yield the following system of equations [ K 11 , reduced ] [ x 1 ] = [ F 1 ] {\displaystyle {\begin{bmatrix}K_{11,{\text{reduced}}}\end{bmatrix}}{\begin{bmatrix}x_{1}\end{bmatrix}}={\begin{bmatrix}F_{1}\end{bmatrix}}} K 11 , reduced {\displaystyle K_{11,{\text{reduced}}}} is obtained by writing out the set of equations as follows: K 11 x 1 + K 12 x 2 = F 1 ( 1 ) {\displaystyle K_{11}x_{1}+K_{12}x_{2}=F_{1}\qquad (1)} K 21 x 1 + K 22 x 2 = 0 ( 2 ) {\displaystyle K_{21}x_{1}+K_{22}x_{2}=0\qquad (2)} Equation (2) can be solved for x 2 {\displaystyle x_{2}} (assuming invertibility of K 22 {\displaystyle K_{22}} ): − K 22 − 1 K 21 x 1 = x 2 . {\displaystyle -K_{22}^{-1}K_{21}x_{1}=x_{2}.} And substituting into (1) gives K 11 x 1 − K 12 K 22 − 1 K 21 x 1 = F 1 . {\displaystyle K_{11}x_{1}-K_{12}K_{22}^{-1}K_{21}x_{1}=F_{1}.} Thus K 11 , reduced = K 11 − K 12 K 22 − 1 K 21 . {\displaystyle K_{11,{\text{reduced}}}=K_{11}-K_{12}K_{22}^{-1}K_{21}.} In a similar fashion, any row or column i of F with a zero value may be eliminated if the corresponding value of xi is not desired. A reduced K may be reduced again. As a note, since each reduction requires an inversion, and each inversion is an operation with computational cost O(n³), most large matrices are pre-processed to reduce calculation time. == History == In the 9th century, Persian mathematician Al-Khwarizmi's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr. The name "algebra" comes from the "al-jabr" in the title of his book. == References ==
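The static (Guyan) reduction described above can be checked numerically; this is a sketch assuming NumPy is available, with an arbitrary 6×6 symmetric "stiffness" matrix and an arbitrary 4/2 partition:

```python
# Check that the Guyan-reduced system K11_red x1 = F1 reproduces x1 from the
# full solve when F2 = 0, where K11_red = K11 - K12 K22^{-1} K21.
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 6))
K = K + K.T + 10 * np.eye(6)          # symmetric, well-conditioned example
K11, K12 = K[:4, :4], K[:4, 4:]
K21, K22 = K[4:, :4], K[4:, 4:]

F1 = rng.standard_normal(4)
F = np.concatenate([F1, np.zeros(2)])  # F2 = 0

x_full = np.linalg.solve(K, F)         # full system

K11_red = K11 - K12 @ np.linalg.solve(K22, K21)  # Schur complement of K22
x1 = np.linalg.solve(K11_red, F1)      # reduced system

assert np.allclose(x1, x_full[:4])
# the eliminated unknowns follow from x2 = -K22^{-1} K21 x1
assert np.allclose(x_full[4:], -np.linalg.solve(K22, K21 @ x1))
```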
Wikipedia:Reflection (mathematics)#0
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as the set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. The set of fixed points (the "mirror") of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion (Coxeter 1969, §7.2), and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane. Some mathematicians use "flip" as a synonym for "reflection". == Construction == In a plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure. 
To reflect point P through the line AB using compass and straightedge, proceed as follows (see figure): Step 1 (red): construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB, which will be equidistant from P. Step 2 (green): construct circles centered at A′ and B′ having radius r. P and Q will be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB. == Properties == The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of reflections in hyperplanes through the origin, and every improper rotation is the result of reflecting in an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups. == Reflection across a line in the plane == Reflection across an arbitrary line through the origin in two dimensions can be described by the following formula Ref l ⁡ ( v ) = 2 v ⋅ l l ⋅ l l − v , {\displaystyle \operatorname {Ref} _{l}(v)=2{\frac {v\cdot l}{l\cdot l}}l-v,} where v {\displaystyle v} denotes the vector being reflected, l {\displaystyle l} denotes any vector in the line across which the reflection is performed, and v ⋅ l {\displaystyle v\cdot l} denotes the dot product of v {\displaystyle v} with l {\displaystyle l} . 
Note the formula above can also be written as Ref l ⁡ ( v ) = 2 Proj l ⁡ ( v ) − v , {\displaystyle \operatorname {Ref} _{l}(v)=2\operatorname {Proj} _{l}(v)-v,} saying that a reflection of v {\displaystyle v} across l {\displaystyle l} is equal to 2 times the projection of v {\displaystyle v} on l {\displaystyle l} , minus the vector v {\displaystyle v} . Reflections in a line have the eigenvalues of 1, and −1. == Reflection through a hyperplane in n dimensions == Given a vector v {\displaystyle v} in Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , the formula for the reflection in the hyperplane through the origin, orthogonal to a {\displaystyle a} , is given by Ref a ⁡ ( v ) = v − 2 v ⋅ a a ⋅ a a , {\displaystyle \operatorname {Ref} _{a}(v)=v-2{\frac {v\cdot a}{a\cdot a}}a,} where v ⋅ a {\displaystyle v\cdot a} denotes the dot product of v {\displaystyle v} with a {\displaystyle a} . Note that the second term in the above equation is just twice the vector projection of v {\displaystyle v} onto a {\displaystyle a} . One can easily check that Refa(v) = −v, if v {\displaystyle v} is parallel to a {\displaystyle a} , and Refa(v) = v, if v {\displaystyle v} is perpendicular to a. Using the geometric product, the formula is Ref a ⁡ ( v ) = − a v a a 2 . {\displaystyle \operatorname {Ref} _{a}(v)=-{\frac {ava}{a^{2}}}.} Since these reflections are isometries of Euclidean space fixing the origin they may be represented by orthogonal matrices. The orthogonal matrix corresponding to the above reflection is the matrix R = I − 2 a a T a T a , {\displaystyle R=I-2{\frac {aa^{T}}{a^{T}a}},} where I {\displaystyle I} denotes the n × n {\displaystyle n\times n} identity matrix and a T {\displaystyle a^{T}} is the transpose of a. Its entries are R i j = δ i j − 2 a i a j ‖ a ‖ 2 , {\displaystyle R_{ij}=\delta _{ij}-2{\frac {a_{i}a_{j}}{\left\|a\right\|^{2}}},} where δij is the Kronecker delta. 
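A brief numerical sketch (not part of the article; assumes NumPy, with an arbitrary example vector a) verifying the properties claimed above for the reflection matrix R = I − 2aaᵀ/(aᵀa):

```python
# The matrix R = I - 2 a a^T / (a^T a) reflects across the hyperplane through
# the origin orthogonal to a; check the properties stated in the article.
import numpy as np

a = np.array([1.0, 2.0, 2.0])
R = np.eye(3) - 2 * np.outer(a, a) / (a @ a)

assert np.allclose(R @ a, -a)             # a itself is negated
v = np.array([2.0, 0.0, -1.0])            # v . a = 0, so v lies in the mirror
assert np.allclose(R @ v, v)              # ... and is fixed

assert np.allclose(R @ R, np.eye(3))      # involution: reflecting twice is id
assert np.allclose(R.T @ R, np.eye(3))    # orthogonal matrix
assert np.isclose(np.linalg.det(R), -1.0) # determinant -1
eig = np.sort(np.linalg.eigvalsh(R))      # R is symmetric
assert np.allclose(eig, [-1.0, 1.0, 1.0]) # eigenvalues -1, 1, ..., 1
```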
The formula for the reflection in the affine hyperplane v ⋅ a = c {\displaystyle v\cdot a=c} not through the origin is Ref a , c ⁡ ( v ) = v − 2 v ⋅ a − c a ⋅ a a . {\displaystyle \operatorname {Ref} _{a,c}(v)=v-2{\frac {v\cdot a-c}{a\cdot a}}a.} == See also == Additive inverse Coordinate rotations and reflections Householder transformation Inversive geometry Plane of rotation Reflection mapping Reflection group Reflection symmetry == Notes == == References == Coxeter, Harold Scott MacDonald (1969), Introduction to Geometry (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50458-0, MR 0123930 Popov, V.L. (2001) [1994], "Reflection", Encyclopedia of Mathematics, EMS Press Weisstein, Eric W. "Reflection". MathWorld. == External links == Reflection in Line at cut-the-knot Understanding 2D Reflection and Understanding 3D Reflection by Roger Germundsson, The Wolfram Demonstrations Project.
Wikipedia:Regina S. Burachik#0
Regina Sandra Burachik is an Argentine mathematician who works on optimization and analysis (particularly convex analysis, functional analysis, and non-smooth analysis). She is currently a professor at the University of South Australia. She earned her Ph.D. from IMPA in 1995 under the supervision of Alfredo Noel Iusem (Generalized Proximal Point Method for the Variational Inequality Problem). In her thesis, she "introduced and analyzed solution methods for variational inequalities, the latter being a generalization of the convex constrained optimization problem." == Selected publications == === Articles === with A. N. Iusem and B. F. Svaiter. "Enlargement of monotone operators with applications to variational inequalities", Set-Valued Analysis with A. N. Iusem. "A generalized proximal point algorithm for the variational inequality problem in a Hilbert space", SIAM Journal on Optimization with A. N. Iusem. "Set-valued mappings & enlargements of monotone operators", Optimization and its Applications with B. F. Svaiter. "Maximal monotone operators, convex functions and a special family of enlargements", Set-Valued Analysis === Books === With Iusem: Set-Valued Mappings and Enlargements of Monotone Operators (2007) Variational Analysis and Generalized Differentiation in Optimization and Control (2010, as editor) == References == == External links == Regina S. Burachik publications indexed by Google Scholar Page at the University of South Australia
Wikipedia:Regius Professor of Mathematics#0
A Regius Professor is a university professor who has, or originally had, royal patronage or appointment. They are a unique feature of academia in the United Kingdom and Ireland. The first Regius Professorship was in the field of medicine, and founded by the Scottish King James IV at the University of Aberdeen in 1497. Regius chairs have since been instituted in various universities, in disciplines judged to be fundamental and for which there is a continuing and significant need. Each was established by an English, Scottish, or British monarch, and following proper advertisement and interview through the offices of the university and the national government, the current monarch still appoints the professor (except for those at Trinity College Dublin in Ireland, which left the United Kingdom in 1922). This royal imprimatur, and the relative rarity of these professorships, means a Regius chair is prestigious and highly sought-after. Regius Professors are traditionally addressed as "Regius" and not "Professor". The University of Glasgow currently has the highest number of extant Regius chairs, at fourteen. Traditionally, Regius Chairs only existed in the seven ancient universities of the UK and Ireland. In October 2012 it was announced that Queen Elizabeth II would create up to six new Regius Professorships, to be announced in early 2013, to mark her Diamond Jubilee. In January 2013 the full list was announced, comprising twelve new chairs, probably the largest number ever created in one year, and more than created in most centuries. In July 2015 it was announced that further Regius Professorships would be created to mark the Queen's 90th birthday. 
== University of Aberdeen == Regius Professor of Anatomy (1863) Regius Professor of Botany Regius Professor of English Literature (1894) Regius Professor of Greek Regius Professor of Humanity, formerly Regius Professor of Classics Regius Professor of Logic Regius Professor of Mathematics (1703) Regius Professor of Medicine, formerly Regius Professor of Materia Medica (1858) Regius Professor of Moral Philosophy Regius Professor of Natural History Regius Professor of Obstetrics and Gynaecology, formerly Regius Professor of Midwifery (1858) Regius Professor of Pathology Regius Professor of Physiology (1858) Regius Professor of Surgery (1839) == Aston University == Regius Professor of Pharmacy == Cardiff University == Regius Professor of Chemistry (2016) == University of Cambridge == Regius Professor of Botany (1724/2009) Regius Professor of Civil Law (1540) Regius Professor of Divinity (1540) Regius Professor of Engineering (2011) Regius Professor of Greek (1540) Regius Professor of Hebrew (1540) Regius Professor of History (1724) Regius Professor of Physic (1540) == Trinity College Dublin == Regius Professor of Divinity (1600/1761) Regius Professor of Physic (1637?) 
Regius Professor of Hebrew (1637/1762/1855) Regius Professor of Laws (1668) Regius Professor of Feudal and English Law (1761–1934) Regius Professor of Greek (1761) Regius Professor of Surgery (1852/1868) == University of Dundee == Regius Professor of Life Sciences (2013) == University of Edinburgh == Regius Professor of Public Law and the Law of Nature and Nations (1707) Regius Professor of Rhetoric and English Literature (1762) Regius Professor of Astronomy (1785) Regius Professor of Clinical Surgery (1803) Regius Professor of Medical Science Regius Professor of Forensic Medicine (1807) Regius Professor of Sanskrit (1862) (now the Regius Professor of South Asian Language, Culture and Society) Regius Professor of Engineering (1868) Regius Professor of Geology (1871) Regius Chair of Plant Science The first Professor of Greek (1708) at Edinburgh, William Scott Primus, was referred to with the honorific 'Regius Professor' but was, ultimately, unable to secure a grant from the Crown. However, both of the inaugural professors of Greek and Humanity at Edinburgh were appointed by Royal Warrant. 
== University of Essex == Regius Professor of Political Science (2013) == University of Glasgow == Regius Professor of Medicine and Therapeutics (1637/1713) Regius Professor of Materia Medica (1831–1989) (merged with the Regius chair in Medicine and Therapeutics) Regius Professor of Law (1713) Regius Professor of Anatomy (1718) Regius Professor of Astronomy (1760) Regius Professor of Zoology (1807) Regius Professor of Obstetrics and Gynaecology (1815) Regius Professor of Surgery (1815) Regius Professor of Chemistry (1817) Regius Professor of Botany (1818) Regius Professor of Forensic Medicine (1839) Regius Professor of Physiology (1839) Regius Professor of Civil Engineering and Mechanics (1840) Regius Professor of English Language and Literature (1861) Regius Professor of Ecclesiastical History (1716–1935) (ceased being a Regius chair in 1935) Regius Professor of Precision Medicine (2016) Regius Professor of Law (2012) == Imperial College of Science, Technology and Medicine == Regius Professor of Engineering (2013) Regius Professor of Infectious Disease (2016) == University of Liverpool == Regius Professor of Chemistry (2016) == University of London == Regius Professor of Cancer Research (2016) (Based at The Institute of Cancer Research) Regius Professor of Psychiatry (2013) (Based at King's College London) Regius Professor of Economics (2013) (Based at LSE) Regius Professor of Music (2013) (Based at Royal Holloway) == University of Manchester == Regius Professor of Physics (2013) Regius Professor of Materials (2016) == Newcastle University == Regius Professor of Ageing (2016) == Open University == Regius Professor of Open Education (2013) == University of Oxford == Regius Professor of Civil Law (c.1540) Regius Professor of Divinity (1535) Regius Professor of Moral and Pastoral Theology (1842) Regius Professor of Ecclesiastical History (1842) Regius Professor of Hebrew (1546) Regius Professor of Medicine (1546) Regius Professor of Greek (c.1541) Regius Professor of 
History (1724) Regius Professor of Mathematics (2016) == Queen's University Belfast == Regius Professor of Electronics & Computer Engineering (2016) == University of Reading == Regius Professor of Meteorology and Climate Science (2013) == University of St Andrews == Regius Professor of Mathematics (1668) == University of Southampton == Regius Professor of Computer Science (2013) Regius Professorship in Ocean Sciences (2016) == University of Surrey == Regius Professor of Electronic Engineering (2013) == University of Warwick == Regius Professor of Mathematics (2013) Regius Professor of Manufacturing (2016) == References ==
Wikipedia:Regular chain#0
In mathematics, and more specifically in computer algebra and elimination theory, a regular chain is a particular kind of triangular set of multivariate polynomials over a field, where a triangular set is a finite sequence of polynomials such that each one contains at least one more indeterminate than the preceding one. The condition that a triangular set must satisfy to be a regular chain is that, for every k, every common zero (in an algebraically closed field) of the k first polynomials may be extended to a common zero of the (k + 1)th polynomial. In other words, regular chains allow solving systems of polynomial equations by solving successive univariate equations without considering different cases. Regular chains enhance the notion of Wu's characteristic sets in the sense that they provide a better result with a similar method of computation. == Introduction == Given a linear system, one can convert it to a triangular system via Gaussian elimination. For the non-linear case, given a polynomial system F over a field, one can convert (decompose or triangularize) it to a finite set of triangular sets, in the sense that the algebraic variety V(F) is described by these triangular sets. A triangular set may merely describe the empty set. To fix this degenerate case, the notion of regular chain was introduced, independently by Kalkbrener (1993), Yang and Zhang (1994). Regular chains also appear in Chou and Gao (1992). Regular chains are special triangular sets which are used in different algorithms for computing unmixed-dimensional decompositions of algebraic varieties. Without using factorization, these decompositions have better properties than the ones produced by Wu's algorithm. Kalkbrener's original definition was based on the following observation: every irreducible variety is uniquely determined by one of its generic points and varieties can be represented by describing the generic points of their irreducible components. 
These generic points are given by regular chains. == Examples == Denote Q the rational number field. In Q[x1, x2, x3] with variable ordering x1 < x2 < x3, T = { x 2 2 − x 1 2 , x 2 ( x 3 − x 1 ) } {\displaystyle T=\{x_{2}^{2}-x_{1}^{2},x_{2}(x_{3}-x_{1})\}} is a triangular set and also a regular chain. Two generic points given by T are (a, a, a) and (a, −a, a) where a is transcendental over Q. Thus there are two irreducible components, given by { x2 − x1, x3 − x1 } and { x2 + x1, x3 − x1 }, respectively. Note that: (1) the content of the second polynomial is x2, which does not contribute to the generic points represented and thus can be removed; (2) the dimension of each component is 1, the number of free variables in the regular chain. == Formal definitions == The variables in the polynomial ring R = k [ x 1 , … , x n ] {\displaystyle R=k[x_{1},\ldots ,x_{n}]} are always sorted as x1 < ⋯ < xn. A non-constant polynomial f in R {\displaystyle R} can be seen as a univariate polynomial in its greatest variable. The greatest variable in f is called its main variable, denoted by mvar(f). Let u be the main variable of f and write it as f = a e u e + ⋯ + a 0 , {\displaystyle f=a_{e}u^{e}+\cdots +a_{0},} where e is the degree of f with respect to u and a e {\displaystyle a_{e}} is the leading coefficient of f with respect to u. Then the initial of f is a e {\displaystyle a_{e}} and e is its main degree. Triangular set A non-empty subset T of R {\displaystyle R} is a triangular set, if the polynomials in T are non-constant and have distinct main variables. Hence, a triangular set is finite, and has cardinality at most n. Regular chain Let T = {t1, ..., ts} be a triangular set such that mvar(t1) < ⋯ < mvar(ts), h i {\displaystyle h_{i}} be the initial of ti and h be the product of hi's. 
Then T is a regular chain if r e s u l t a n t ( h , T ) = r e s u l t a n t ( ⋯ ( r e s u l t a n t ( h , t s ) , … , t i ) ⋯ ) ≠ 0 , {\displaystyle \mathrm {resultant} (h,T)=\mathrm {resultant} (\cdots (\mathrm {resultant} (h,t_{s}),\ldots ,t_{i})\cdots )\neq 0,} where each resultant is computed with respect to the main variable of ti, respectively. This definition is from Yang and Zhang, which is of much algorithmic flavor. Quasi-component and saturated ideal of a regular chain The quasi-component W(T) described by the regular chain T is W ( T ) = V ( T ) ∖ V ( h ) {\displaystyle W(T)=V(T)\setminus V(h)} , that is, the set difference of the varieties V(T) and V(h). The attached algebraic object of a regular chain is its saturated ideal s a t ( T ) = ( T ) : h ∞ . {\displaystyle \mathrm {sat} (T)=(T):h^{\infty }.} A classic result is that the Zariski closure of W(T) equals the variety defined by sat(T), that is, W ( T ) ¯ = V ( s a t ( T ) ) , {\displaystyle {\overline {W(T)}}=V(\mathrm {sat} (T)),} and its dimension is n − |T|, the difference of the number of variables and the number of polynomials in T. Triangular decompositions In general, there are two ways to decompose a polynomial system F. The first one is to decompose lazily, that is, only to represent its generic points in the (Kalkbrener) sense, ( F ) = ⋂ i = 1 e s a t ( T i ) . {\displaystyle {\sqrt {(F)}}=\bigcap _{i=1}^{e}{\sqrt {\mathrm {sat} (T_{i})}}.} The second is to describe all zeroes in the Lazard sense, V ( F ) = ⋃ i = 1 e W ( T i ) . {\displaystyle V(F)=\bigcup _{i=1}^{e}W(T_{i}).} There are various algorithms available for triangular decompositions in either sense. == Properties == Let T be a regular chain in the polynomial ring R. The saturated ideal sat(T) is an unmixed ideal with dimension n − |T|. A regular chain holds a strong elimination property in the sense that: s a t ( T ∩ k [ x 1 , … , x i ] ) = s a t ( T ) ∩ k [ x 1 , … , x i ] . 
{\displaystyle \mathrm {sat} (T\cap k[x_{1},\ldots ,x_{i}])=\mathrm {sat} (T)\cap k[x_{1},\ldots ,x_{i}].} A polynomial p is in sat(T) if and only if p is pseudo-reduced to zero by T, that is, p ∈ s a t ( T ) ⟺ p r e m ( p , T ) = 0. {\displaystyle p\in \mathrm {sat} (T)\iff \mathrm {prem} (p,T)=0.} Hence the membership test for sat(T) is algorithmic. A polynomial p is a zero-divisor modulo sat(T) if and only if p r e m ( p , T ) ≠ 0 {\displaystyle \mathrm {prem} (p,T)\neq 0} and r e s u l t a n t ( p , T ) = 0 {\displaystyle \mathrm {resultant} (p,T)=0} . Hence the regularity test for sat(T) is algorithmic. Given a prime ideal P, there exists a regular chain C such that P = sat(C). If the first element of a regular chain C is an irreducible polynomial and the others are linear in their main variable, then sat(C) is a prime ideal. Conversely, if P is a prime ideal, then, after almost all linear changes of variables, there exists a regular chain C of the preceding shape such that P = sat(C). A triangular set is a regular chain if and only if it is a Ritt characteristic set of its saturated ideal. == See also == Wu's method of characteristic set Gröbner basis Regular semi-algebraic system Triangular decomposition == Further references == P. Aubry, D. Lazard, M. Moreno Maza. On the theories of triangular sets. Journal of Symbolic Computation, 28(1–2):105–124, 1999. F. Boulier and F. Lemaire and M. Moreno Maza. Well known theorems on triangular systems and the D5 principle. Transgressive Computing 2006, Granada, Spain. E. Hubert. Notes on triangular sets and triangulation-decomposition algorithms I: Polynomial systems. LNCS, volume 2630, Springer-Verlag Heidelberg. F. Lemaire and M. Moreno Maza and Y. Xie. The RegularChains library. Maple Conference 2005. M. Kalkbrener: Algorithmic Properties of Polynomial Rings. J. Symb. Comput. 26(5): 525–581 (1998). M. Kalkbrener: A Generalized Euclidean Algorithm for Computing Triangular Representations of Algebraic Varieties. J. 
Symb. Comput. 15(2): 143–167 (1993). D. Wang. Computing Triangular Systems and Regular Systems. Journal of Symbolic Computation 30(2) (2000) 221–236. Yang, L., Zhang, J. (1994). Searching dependency between algebraic equations: an algorithm applied to automated reasoning. Artificial Intelligence in Mathematics, pp. 14715, Oxford University Press.
Wikipedia:Regular number#0
Regular numbers are numbers that evenly divide powers of 60 (or, equivalently, powers of 30). Equivalently, they are the numbers whose only prime divisors are 2, 3, and 5. As an example, 60² = 3600 = 48 × 75, so as divisors of a power of 60 both 48 and 75 are regular. These numbers arise in several areas of mathematics and its applications, and have different names coming from their different areas of study. In number theory, these numbers are called 5-smooth, because they can be characterized as having only 2, 3, or 5 as their prime factors. This is a specific case of the more general k-smooth numbers, the numbers that have no prime factor greater than k. In the study of Babylonian mathematics, the divisors of powers of 60 are called regular numbers or regular sexagesimal numbers, and are of great importance in this area because of the sexagesimal (base 60) number system that the Babylonians used for writing their numbers, and that was central to Babylonian mathematics. In music theory, regular numbers occur in the ratios of tones in five-limit just intonation. In connection with music theory and related theories of architecture, these numbers have been called the harmonic whole numbers. In computer science, regular numbers are often called Hamming numbers, after Richard Hamming, who proposed the problem of finding computer algorithms for generating these numbers in ascending order. This problem has been used as a test case for functional programming. == Number theory == Formally, a regular number is an integer of the form 2 i ⋅ 3 j ⋅ 5 k {\displaystyle 2^{i}\cdot 3^{j}\cdot 5^{k}} , for nonnegative integers i {\displaystyle i} , j {\displaystyle j} , and k {\displaystyle k} . Such a number is a divisor of 60 max ( ⌈ i / 2 ⌉ , j , k ) {\displaystyle 60^{\max(\lceil i\,/2\rceil ,j,k)}} . The regular numbers are also called 5-smooth, indicating that their greatest prime factor is at most 5. 
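Regularity is easy to test by dividing out the primes 2, 3, and 5; a minimal Python sketch:

```python
def is_regular(n):
    """Return True iff n is regular (5-smooth), i.e. n = 2^i * 3^j * 5^k."""
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1  # anything left over is a prime factor larger than 5

# Both divisors of 3600 from the example above are regular; 7 is not:
print(is_regular(48), is_regular(75), is_regular(7))  # → True True False
```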
More generally, a k-smooth number is a number whose greatest prime factor is at most k. The first few regular numbers are 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, ... Several other sequences at the On-Line Encyclopedia of Integer Sequences have definitions involving 5-smooth numbers. Although the regular numbers appear dense within the range from 1 to 60, they are quite sparse among the larger integers. A regular number n = 2 i ⋅ 3 j ⋅ 5 k {\displaystyle n=2^{i}\cdot 3^{j}\cdot 5^{k}} is less than or equal to some threshold N {\displaystyle N} if and only if the point ( i , j , k ) {\displaystyle (i,j,k)} belongs to the tetrahedron bounded by the coordinate planes and the plane i ln 2 + j ln 3 + k ln 5 ≤ ln N , {\displaystyle i\ln 2+j\ln 3+k\ln 5\leq \ln N,} as can be seen by taking logarithms of both sides of the inequality 2 i ⋅ 3 j ⋅ 5 k ≤ N {\displaystyle 2^{i}\cdot 3^{j}\cdot 5^{k}\leq N} . Therefore, the number of regular numbers that are at most N {\displaystyle N} can be estimated as the volume of this tetrahedron, which is log 2 N log 3 N log 5 N 6 . {\displaystyle {\frac {\log _{2}N\,\log _{3}N\,\log _{5}N}{6}}.} Even more precisely, using big O notation, the number of regular numbers up to N {\displaystyle N} is ( ln ( N 30 ) ) 3 6 ln 2 ln 3 ln 5 + O ( log N ) , {\displaystyle {\frac {\left(\ln(N{\sqrt {30}})\right)^{3}}{6\ln 2\ln 3\ln 5}}+O(\log N),} and it has been conjectured that the error term of this approximation is actually O ( log log N ) {\displaystyle O(\log \log N)} . A similar formula for the number of 3-smooth numbers up to N {\displaystyle N} is given by Srinivasa Ramanujan in his first letter to G. H. Hardy. == Babylonian mathematics == In the Babylonian sexagesimal notation, the reciprocal of a regular number has a finite representation. If n {\displaystyle n} divides 60 k {\displaystyle 60^{k}} , then the sexagesimal representation of 1 / n {\displaystyle 1/n} is just that for 60 k / n {\displaystyle 60^{k}/n} , shifted by some number of places. 
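In modern terms, the digit-by-digit sexagesimal expansion of 1/n can be sketched in Python (an illustration, not a Babylonian procedure): repeatedly multiply the remainder by 60 and read off a digit, stopping when the remainder reaches zero.

```python
def sexagesimal_reciprocal(n, max_places=20):
    """Sexagesimal digits of 1/n; the expansion terminates (remainder 0)
    exactly when n is a regular number."""
    digits, r = [], 1
    for _ in range(max_places):
        r *= 60
        digits.append(r // n)
        r %= n
        if r == 0:
            break
    return digits

print(sexagesimal_reciprocal(54))  # → [1, 6, 40], i.e. 1:6:40
print(sexagesimal_reciprocal(40))  # → [1, 30]
```

For a non-regular n such as 7, the loop never reaches a zero remainder and the expansion repeats forever.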
This allows for easy division by these numbers: to divide by n {\displaystyle n} , multiply by 1 / n {\displaystyle 1/n} , then shift. For instance, consider division by the regular number 54 = 2¹ × 3³. 54 is a divisor of 60³, and 60³/54 = 4000, so dividing by 54 in sexagesimal can be accomplished by multiplying by 4000 and shifting three places. In sexagesimal 4000 = 1×3600 + 6×60 + 40×1, or (as listed by Joyce) 1:6:40. Thus, 1/54, in sexagesimal, is 1/60 + 6/60² + 40/60³, also denoted 1:6:40 as Babylonian notational conventions did not specify the power of the starting digit. Conversely 1/4000 = 54/60³, so division by 1:6:40 = 4000 can be accomplished by instead multiplying by 54 and shifting three sexagesimal places. The Babylonians used tables of reciprocals of regular numbers, some of which still survive. These tables existed relatively unchanged throughout Babylonian times. One tablet from Seleucid times, by someone named Inaqibıt-Anu, contains the reciprocals of 136 of the 231 six-place regular numbers whose first place is 1 or 2, listed in order. It also includes reciprocals of some numbers of more than six places, such as 3²³ (2 1 4 8 3 0 27 in sexagesimal), whose reciprocal has 17 sexagesimal digits. Noting the difficulty of both calculating these numbers and sorting them, Donald Knuth in 1972 hailed Inaqibıt-Anu as "the first man in history to solve a computational problem that takes longer than one second of time on a modern electronic computer!" (Two tables are also known giving approximations of reciprocals of non-regular numbers, one of which gives reciprocals for all the numbers from 56 to 80.) Although the primary reason for preferring regular numbers to other numbers involves the finiteness of their reciprocals, some Babylonian calculations other than reciprocals also involved regular numbers. 
For instance, tables of regular squares have been found and the broken tablet Plimpton 322 has been interpreted by Neugebauer as listing Pythagorean triples ( p 2 − q 2 , 2 p q , p 2 + q 2 ) {\displaystyle (p^{2}-q^{2},\,2pq,\,p^{2}+q^{2})} generated by p {\displaystyle p} and q {\displaystyle q} both regular and less than 60. Fowler and Robson discuss the calculation of square roots, such as how the Babylonians found an approximation to the square root of 2, perhaps using regular number approximations of fractions such as 17/12. == Music theory == In music theory, the just intonation of the diatonic scale involves regular numbers: the pitches in a single octave of this scale have frequencies proportional to the numbers in the sequence 24, 27, 30, 32, 36, 40, 45, 48 of nearly consecutive regular numbers. Thus, for an instrument with this tuning, all pitches are regular-number harmonics of a single fundamental frequency. This scale is called a 5-limit tuning, meaning that the interval between any two pitches can be described as a product 2i3j5k of powers of the prime numbers up to 5, or equivalently as a ratio of regular numbers. 5-limit musical scales other than the familiar diatonic scale of Western music have also been used, both in traditional musics of other cultures and in modern experimental music: Honingh & Bod (2005) list 31 different 5-limit scales, drawn from a larger database of musical scales. Each of these 31 scales shares with diatonic just intonation the property that all intervals are ratios of regular numbers. Euler's tonnetz provides a convenient graphical representation of the pitches in any 5-limit tuning, by factoring out the octave relationships (powers of two) so that the remaining values form a planar grid. Some music theorists have stated more generally that regular numbers are fundamental to tonal music itself, and that pitch ratios based on primes larger than 5 cannot be consonant. 
However the equal temperament of modern pianos is not a 5-limit tuning, and some modern composers have experimented with tunings based on primes larger than five. In connection with the application of regular numbers to music theory, it is of interest to find pairs of regular numbers that differ by one. There are exactly ten such pairs ( x , x + 1 ) {\displaystyle (x,x+1)} and each such pair defines a superparticular ratio x + 1 x {\displaystyle {\tfrac {x+1}{x}}} that is meaningful as a musical interval. These intervals are 2/1 (the octave), 3/2 (the perfect fifth), 4/3 (the perfect fourth), 5/4 (the just major third), 6/5 (the just minor third), 9/8 (the just major tone), 10/9 (the just minor tone), 16/15 (the just diatonic semitone), 25/24 (the just chromatic semitone), and 81/80 (the syntonic comma). In the Renaissance theory of universal harmony, musical ratios were used in other applications, including the architecture of buildings. In connection with the analysis of these shared musical and architectural ratios, for instance in the architecture of Palladio, the regular numbers have also been called the harmonic whole numbers. == Algorithms == Algorithms for calculating the regular numbers in ascending order were popularized by Edsger Dijkstra. Dijkstra (1976, 1981) attributes to Hamming the problem of building the infinite ascending sequence of all 5-smooth numbers; this problem is now known as Hamming's problem, and the numbers so generated are also called the Hamming numbers. Dijkstra's ideas to compute these numbers are the following: The sequence of Hamming numbers begins with the number 1. The remaining values in the sequence are of the form 2 h {\displaystyle 2h} , 3 h {\displaystyle 3h} , and 5 h {\displaystyle 5h} , where h {\displaystyle h} is any Hamming number. 
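These two observations translate directly into code; a minimal Python sketch (names are illustrative) that keeps one index into the output list per multiplier:

```python
def hamming(n):
    """Return the first n Hamming (regular) numbers in ascending order."""
    h = [1]
    i2 = i3 = i5 = 0          # next positions in h to multiply by 2, 3, 5
    while len(h) < n:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # advance every index that attained the minimum, so duplicates
        # such as 6 = 2*3 = 3*2 are emitted only once
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h

print(hamming(10))  # → [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```

Each output value costs only a constant number of comparisons and multiplications.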
Therefore, the sequence H {\displaystyle H} may be generated by outputting the value 1, and then merging the sequences 2 H {\displaystyle 2H} , 3 H {\displaystyle 3H} , and 5 H {\displaystyle 5H} . This algorithm is often used to demonstrate the power of a lazy functional programming language, because (implicitly) concurrent efficient implementations, using a constant number of arithmetic operations per generated value, are easily constructed as described above. Similarly efficient strict functional or imperative sequential implementations are also possible whereas explicitly concurrent generative solutions might be non-trivial. In the Python programming language, lazy functional code for generating regular numbers is used as one of the built-in tests for correctness of the language's implementation. A related problem, discussed by Knuth (1972), is to list all k {\displaystyle k} -digit sexagesimal numbers in ascending order (see #Babylonian mathematics above). In algorithmic terms, this is equivalent to generating (in order) the subsequence of the infinite sequence of regular numbers, ranging from 60 k {\displaystyle 60^{k}} to 60 k + 1 {\displaystyle 60^{k+1}} . See Gingerich (1965) for an early description of computer code that generates these numbers out of order and then sorts them; Knuth describes an ad hoc algorithm, which he attributes to Bruins (1970), for generating the six-digit numbers more quickly but that does not generalize in a straightforward way to larger values of k {\displaystyle k} . Eppstein (2007) describes an algorithm for computing tables of this type in linear time for arbitrary values of k {\displaystyle k} . == Other applications == Heninger, Rains & Sloane (2006) show that, when n {\displaystyle n} is a regular number and is divisible by 8, the generating function of an n {\displaystyle n} -dimensional extremal even unimodular lattice is an n {\displaystyle n} th power of a polynomial. 
As with other classes of smooth numbers, regular numbers are important as problem sizes in computer programs for performing the fast Fourier transform, a technique for analyzing the dominant frequencies of signals in time-varying data. For instance, the method of Temperton (1992) requires that the transform length be a regular number. Book VIII of Plato's Republic involves an allegory of marriage centered on the highly regular number 604 = 12,960,000 and its divisors (see Plato's number). Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage. Certain species of bamboo release large numbers of seeds in synchrony (a process called masting) at intervals that have been estimated as regular numbers of years, with different intervals for different species, including examples with intervals of 10, 15, 16, 30, 32, 48, 60, and 120 years. It has been hypothesized that the biological mechanism for timing and synchronizing this process lends itself to smooth numbers, and in particular in this case to 5-smooth numbers. Although the estimated masting intervals for some other species of bamboo are not regular numbers of years, this may be explainable as measurement error. == Notes == == References == == External links == Table of reciprocals of regular numbers up to 3600 from the web site of Professor David E. Joyce, Clark University. RosettaCode Generation of Hamming_numbers in ~ 50 programming languages
Wikipedia:Regular semi-algebraic system#0
In computer algebra, a regular semi-algebraic system is a particular kind of triangular system of multivariate polynomials over a real closed field. == Introduction == Regular chains and triangular decompositions are fundamental and well-developed tools for describing the complex solutions of polynomial systems. The notion of a regular semi-algebraic system is an adaptation of the concept of a regular chain focusing on solutions of the real analogue: semi-algebraic systems. Any semi-algebraic system S {\displaystyle S} can be decomposed into finitely many regular semi-algebraic systems S 1 , … , S e {\displaystyle S_{1},\ldots ,S_{e}} such that a point (with real coordinates) is a solution of S {\displaystyle S} if and only if it is a solution of one of the systems S 1 , … , S e {\displaystyle S_{1},\ldots ,S_{e}} . == Formal definition == Let T {\displaystyle T} be a regular chain of k [ x 1 , … , x n ] {\displaystyle \mathbf {k} [x_{1},\ldots ,x_{n}]} for some ordering of the variables x = x 1 , … , x n {\displaystyle \mathbf {x} =x_{1},\ldots ,x_{n}} and a real closed field k {\displaystyle \mathbf {k} } . Let u = u 1 , … , u d {\displaystyle \mathbf {u} =u_{1},\ldots ,u_{d}} and y = y 1 , … , y n − d {\displaystyle \mathbf {y} =y_{1},\ldots ,y_{n-d}} designate respectively the variables of x {\displaystyle \mathbf {x} } that are free and algebraic with respect to T {\displaystyle T} . Let P ⊂ k [ x ] {\displaystyle P\subset \mathbf {k} [\mathbf {x} ]} be finite such that each polynomial in P {\displaystyle P} is regular with respect to the saturated ideal of T {\displaystyle T} . Define P > := { p > 0 ∣ p ∈ P } {\displaystyle P_{>}:=\{p>0\mid p\in P\}} . Let Q {\displaystyle {\mathcal {Q}}} be a quantifier-free formula of k [ x ] {\displaystyle \mathbf {k} [\mathbf {x} ]} involving only the variables of u {\displaystyle \mathbf {u} } . 
We say that R := [ Q , T , P > ] {\displaystyle R:=[{\mathcal {Q}},T,P_{>}]} is a regular semi-algebraic system if the following three conditions hold. Q {\displaystyle {\mathcal {Q}}} defines a non-empty open semi-algebraic set S {\displaystyle S} of k d {\displaystyle \mathbf {k} ^{d}} , the regular system [ T , P ] {\displaystyle [T,P]} specializes well at every point u {\displaystyle u} of S {\displaystyle S} , at each point u {\displaystyle u} of S {\displaystyle S} , the specialized system [ T ( u ) , P ( u ) > ] {\displaystyle [T(u),P(u)_{>}]} has at least one real zero. The zero set of R {\displaystyle R} , denoted by Z k ( R ) {\displaystyle Z_{\mathbf {k} }(R)} , is defined as the set of points ( u , y ) ∈ k d × k n − d {\displaystyle (u,y)\in \mathbf {k} ^{d}\times \mathbf {k} ^{n-d}} such that Q ( u ) {\displaystyle {\mathcal {Q}}(u)} is true and t ( u , y ) = 0 , p ( u , y ) > 0 {\displaystyle t(u,y)=0,p(u,y)>0} , for all t ∈ T {\displaystyle t\in T} and all p ∈ P {\displaystyle p\in P} . Observe that Z k ( R ) {\displaystyle Z_{\mathbf {k} }(R)} has dimension d {\displaystyle d} in the affine space k n {\displaystyle \mathbf {k} ^{n}} . == See also == Real algebraic geometry == References ==
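The zero-set definition above can be illustrated on a small hypothetical instance (the system below is an assumption chosen for illustration, not one from the literature): take the regular chain T = {y² − u} with free variable u and algebraic variable y, the inequality set P = {y}, and the formula Q: u > 0. A membership test for the zero set is then a direct translation of the definition:

```python
from math import isclose

# Hypothetical regular semi-algebraic system R = [Q, T, P>] in variables u, y:
#   Q : u > 0          (quantifier-free formula in the free variable u)
#   T : { y**2 - u }   (regular chain; y is algebraic over u)
#   P>: { y > 0 }      (positivity constraints)
def in_zero_set(u, y, tol=1e-9):
    Q = u > 0                               # open semi-algebraic set in u
    T_vanishes = isclose(y * y, u, abs_tol=tol)  # chain polynomial is zero
    P_positive = y > 0                      # inequality constraints hold
    return Q and T_vanishes and P_positive

# Above each u > 0 the specialized system keeps only the real zero y = +sqrt(u).
assert in_zero_set(4.0, 2.0)        # (4, 2) lies in Z(R)
assert not in_zero_set(4.0, -2.0)   # excluded by the inequality y > 0
assert not in_zero_set(-1.0, 1.0)   # excluded by Q
print("zero-set membership checks passed")
```

Here the zero set has dimension d = 1 (one free variable u), matching the observation above.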
Wikipedia:Regularization (mathematics)#0
In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the solution of a problem into a simpler one. It is often used in solving ill-posed problems or to prevent overfitting. Although regularization procedures can be divided in many ways, the following delineation is particularly helpful: Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique. Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers. Implicit regularization is essentially ubiquitous in modern machine learning approaches, including stochastic gradient descent for training deep neural networks, and ensemble methods (such as random forests and gradient boosted trees). In explicit regularization, independent of the problem or model, there is always a data term that corresponds to a likelihood of the measurement, and a regularization term that corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned to the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition.
In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score with the trained model on the evaluation set (testing data) and not the training data. One of the earliest uses of regularization is Tikhonov regularization (ridge regression), related to the method of least squares. == Regularization in machine learning == In machine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressing overfitting—where a model memorizes training data details but cannot generalize to new data. The goal of regularization is to encourage models to learn the broader patterns within the data rather than memorizing it. Techniques like early stopping, L1 and L2 regularization, and dropout are designed to prevent overfitting and underfitting, thereby enhancing the model's ability to adapt to and perform well with new data, thus improving model generalization. === Early Stopping === Stops training when validation performance deteriorates, preventing overfitting by halting before the model memorizes training data. === L1 and L2 Regularization === Adds penalty terms to the cost function to discourage complex models: L1 regularization (also called LASSO) leads to sparse models by adding a penalty based on the absolute value of coefficients. L2 regularization (also called ridge regression) encourages smaller, more evenly distributed weights by adding a penalty based on the square of the coefficients. === Dropout === In the context of neural networks, the Dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization. 
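The dropout idea described above can be sketched in a few lines of NumPy ("inverted" dropout, where the kept activations are rescaled so their expectation is unchanged; the rate p and the seeded generator are illustrative choices, not prescribed by the text):

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale the survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training:
        return activations                    # dropout is disabled at inference time
    rng = rng or np.random.default_rng(0)     # seeded here only for reproducibility
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(10_000)
dropped = dropout(a, p=0.5)
# Roughly half the units are zeroed, but the mean stays near 1.
print(dropped.mean())
```

Because each forward pass samples a fresh mask, training effectively averages over many thinned sub-networks, which is the regularizing effect described above.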
== Classification == Empirical learning of classifiers (from a finite data set) is always an underdetermined problem, because it attempts to infer a function of any x {\displaystyle x} given only examples x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} . A regularization term (or regularizer) R ( f ) {\displaystyle R(f)} is added to a loss function: min f ∑ i = 1 n V ( f ( x i ) , y i ) + λ R ( f ) {\displaystyle \min _{f}\sum _{i=1}^{n}V(f(x_{i}),y_{i})+\lambda R(f)} where V {\displaystyle V} is an underlying loss function that describes the cost of predicting f ( x ) {\displaystyle f(x)} when the label is y {\displaystyle y} , such as the square loss or hinge loss; and λ {\displaystyle \lambda } is a parameter which controls the importance of the regularization term. R ( f ) {\displaystyle R(f)} is typically chosen to impose a penalty on the complexity of f {\displaystyle f} . Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm. A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution (as depicted in the figure above, where the green function, the simpler one, may be preferred). From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure into the learning problem. The same idea arose in many fields of science. A simple form of regularization applied to integral equations (Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular. === Generalization === Regularization can be motivated as a technique to improve the generalizability of a learned model. 
The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a function f n {\displaystyle f_{n}} is: I [ f n ] = ∫ X × Y V ( f n ( x ) , y ) ρ ( x , y ) d x d y {\displaystyle I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy} where X {\displaystyle X} and Y {\displaystyle Y} are the domains of input data x {\displaystyle x} and their labels y {\displaystyle y} respectively. Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over the n {\displaystyle n} available samples: I S [ f n ] = 1 n ∑ i = 1 n V ( f n ( x ^ i ) , y ^ i ) {\displaystyle I_{S}[f_{n}]={\frac {1}{n}}\sum _{i=1}^{n}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})} Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of x i {\displaystyle x_{i}} ) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.
Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as: min w ∑ i = 1 n V ( x ^ i ⋅ w , y ^ i ) + λ ‖ w ‖ 2 2 , {\displaystyle \min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \left\|w\right\|_{2}^{2},} where ( x ^ i , y ^ i ) , 1 ≤ i ≤ n , {\displaystyle ({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,} would represent samples used for training. In the case of a general function, the norm of the function in its reproducing kernel Hilbert space is: min f ∑ i = 1 n V ( f ( x ^ i ) , y ^ i ) + λ ‖ f ‖ H 2 {\displaystyle \min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \left\|f\right\|_{\mathcal {H}}^{2}} As the L 2 {\displaystyle L_{2}} norm is differentiable, learning can be advanced by gradient descent. === Tikhonov-regularized least squares === The learning problem with the least squares loss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimal w {\displaystyle w} is the one for which the gradient of the loss function with respect to w {\displaystyle w} is 0. min w 1 n ( X ^ w − Y ) T ( X ^ w − Y ) + λ ‖ w ‖ 2 2 {\displaystyle \min _{w}{\frac {1}{n}}\left({\hat {X}}w-Y\right)^{\mathsf {T}}\left({\hat {X}}w-Y\right)+\lambda \left\|w\right\|_{2}^{2}} ∇ w = 2 n X ^ T ( X ^ w − Y ) + 2 λ w {\displaystyle \nabla _{w}={\frac {2}{n}}{\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+2\lambda w} 0 = X ^ T ( X ^ w − Y ) + n λ w {\displaystyle 0={\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+n\lambda w} w = ( X ^ T X ^ + λ n I ) − 1 ( X ^ T Y ) {\displaystyle w=\left({\hat {X}}^{\mathsf {T}}{\hat {X}}+\lambda nI\right)^{-1}\left({\hat {X}}^{\mathsf {T}}Y\right)} where the third statement is a first-order condition. By construction of the optimization problem, other values of w {\displaystyle w} give larger values for the loss function. 
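The closed-form solution derived above can be checked numerically. The sketch below (with arbitrary synthetic data) solves w = (XᵀX + λnI)⁻¹XᵀY and verifies the first-order condition that the gradient of the regularized loss vanishes:

```python
import numpy as np

# Tikhonov-regularized least squares in closed form (illustrative sketch).
rng = np.random.default_rng(0)
n, d, lam = 50, 3, 0.1
X = rng.standard_normal((n, d))
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(n)

# w = (X^T X + λ n I)^(-1) X^T Y, matching the derivation above.
w = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ Y)

# First-order condition: the gradient of (1/n)||Xw - Y||^2 + λ||w||^2 is zero at w.
grad = (2 / n) * X.T @ (X @ w - Y) + 2 * lam * w
print(np.linalg.norm(grad))  # ≈ 0
```

Using `np.linalg.solve` rather than forming the matrix inverse explicitly is the standard numerically preferable way to evaluate this expression.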
This can be verified by examining the second derivative ∇ w w {\displaystyle \nabla _{ww}} . During training, this algorithm takes O ( d 3 + n d 2 ) {\displaystyle O(d^{3}+nd^{2})} time. The terms correspond to the matrix inversion and calculating X T X {\displaystyle X^{\mathsf {T}}X} , respectively. Testing takes O ( n d ) {\displaystyle O(nd)} time. == Early stopping == Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization. Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set. === Theoretical motivation in least squares === Consider the finite approximation of Neumann series for an invertible matrix A where ‖ I − A ‖ < 1 {\displaystyle \left\|I-A\right\|<1} : ∑ i = 0 T − 1 ( I − A ) i ≈ A − 1 {\displaystyle \sum _{i=0}^{T-1}\left(I-A\right)^{i}\approx A^{-1}} This can be used to approximate the analytical solution of unregularized least squares, if γ is introduced to ensure the norm is less than one. w T = γ n ∑ i = 0 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ {\displaystyle w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}} The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limiting T, the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization. 
The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk I s [ w ] = 1 2 n ‖ X ^ w − Y ^ ‖ R n 2 {\displaystyle I_{s}[w]={\frac {1}{2n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|_{\mathbb {R} ^{n}}^{2}} with the gradient descent update: w 0 = 0 w t + 1 = ( I − γ n X ^ T X ^ ) w t + γ n X ^ T Y ^ {\displaystyle {\begin{aligned}w_{0}&=0\\[1ex]w_{t+1}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} The base case is trivial. The inductive case is proved as follows: w T = ( I − γ n X ^ T X ^ ) γ n ∑ i = 0 T − 2 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ + γ n X ^ T Y ^ = γ n ∑ i = 1 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ + γ n X ^ T Y ^ = γ n ∑ i = 0 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ {\displaystyle {\begin{aligned}w_{T}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right){\frac {\gamma }{n}}\sum _{i=0}^{T-2}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} == Regularizers for sparsity == Assume that a dictionary ϕ j {\displaystyle \phi _{j}} with dimension p {\displaystyle p} is given such that a function in the function space can be expressed as: f ( x ) = ∑ j = 1 p ϕ j ( x ) w j {\displaystyle f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}} Enforcing a sparsity constraint on w {\displaystyle w} can lead to simpler and more interpretable models. This is useful in many real-life applications such as computational biology. 
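The equivalence proved above, between T steps of gradient descent started at zero and the truncated Neumann series, can be verified numerically on arbitrary synthetic data:

```python
import numpy as np

# Check that T gradient-descent steps equal w_T = (γ/n) Σ_{i<T} (I - (γ/n)XᵀX)^i XᵀY.
rng = np.random.default_rng(1)
n, d, T = 40, 3, 25
X = rng.standard_normal((n, d))
Y = rng.standard_normal(n)
gamma = n / np.linalg.norm(X.T @ X, 2)   # keeps ||I - (γ/n)XᵀX|| ≤ 1

# Gradient descent on the empirical risk, starting from w_0 = 0.
w = np.zeros(d)
for _ in range(T):
    w = w - (gamma / n) * X.T @ (X @ w - Y)

# Truncated Neumann series from the section above.
M = np.eye(d) - (gamma / n) * X.T @ X
w_series = sum(np.linalg.matrix_power(M, i) for i in range(T)) @ ((gamma / n) * X.T @ Y)

print(np.allclose(w, w_series))  # → True
```

The identity holds exactly for any step size; the choice of γ here only ensures the iteration is also convergent, matching the role of γ in the text.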
An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power. A sensible sparsity constraint is the L 0 {\displaystyle L_{0}} norm ‖ w ‖ 0 {\displaystyle \|w\|_{0}} , defined as the number of non-zero elements in w {\displaystyle w} . Solving a L 0 {\displaystyle L_{0}} regularized learning problem, however, has been demonstrated to be NP-hard. The L 1 {\displaystyle L_{1}} norm (see also Norms) can be used to approximate the optimal L 0 {\displaystyle L_{0}} norm via convex relaxation. It can be shown that the L 1 {\displaystyle L_{1}} norm induces sparsity. In the case of least squares, this problem is known as LASSO in statistics and basis pursuit in signal processing. min w ∈ R p 1 n ‖ X ^ w − Y ^ ‖ 2 + λ ‖ w ‖ 1 {\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left\|w\right\|_{1}} L 1 {\displaystyle L_{1}} regularization can occasionally produce non-unique solutions. A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. This can be problematic for certain applications, and is overcome by combining L 1 {\displaystyle L_{1}} with L 2 {\displaystyle L_{2}} regularization in elastic net regularization, which takes the following form: min w ∈ R p 1 n ‖ X ^ w − Y ^ ‖ 2 + λ ( α ‖ w ‖ 1 + ( 1 − α ) ‖ w ‖ 2 2 ) , α ∈ [ 0 , 1 ] {\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left(\alpha \left\|w\right\|_{1}+(1-\alpha )\left\|w\right\|_{2}^{2}\right),\alpha \in [0,1]} Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights. Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries. 
=== Proximal methods === Unlike the L 0 {\displaystyle L_{0}} norm, the L 1 {\displaystyle L_{1}} norm does not result in an NP-hard problem; the L 1 {\displaystyle L_{1}} norm is convex, but it is not differentiable at the kink at x = 0. Subgradient methods which rely on the subderivative can be used to solve L 1 {\displaystyle L_{1}} regularized learning problems. However, faster convergence can be achieved through proximal methods. For a problem min w ∈ H F ( w ) + R ( w ) {\displaystyle \min _{w\in H}F(w)+R(w)} such that F {\displaystyle F} is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), and R {\displaystyle R} is convex, continuous, and proper, the proximal method to solve the problem is as follows. First define the proximal operator prox R ⁡ ( v ) = argmin w ∈ R D ⁡ { R ( w ) + 1 2 ‖ w − v ‖ 2 } , {\displaystyle \operatorname {prox} _{R}(v)=\mathop {\operatorname {argmin} } _{w\in \mathbb {R} ^{D}}\left\{R(w)+{\frac {1}{2}}\left\|w-v\right\|^{2}\right\},} and then iterate w k + 1 = prox γ , R ⁡ ( w k − γ ∇ F ( w k ) ) {\displaystyle w_{k+1}=\mathop {\operatorname {prox} } _{\gamma ,R}\left(w_{k}-\gamma \nabla F(w_{k})\right)} The proximal method iteratively performs gradient descent and then projects the result back into the space permitted by R {\displaystyle R} . When R {\displaystyle R} is the L1 regularizer, the proximal operator is equivalent to the soft-thresholding operator, applied componentwise: S λ ( v ) i = { v i − λ , if v i > λ 0 , if v i ∈ [ − λ , λ ] v i + λ , if v i < − λ {\displaystyle S_{\lambda }(v)_{i}={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}} This allows for efficient computation. === Group sparsity without overlaps === Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge in an optimization problem.
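A minimal sketch of the proximal iteration above with the soft-thresholding operator (this is the classical ISTA scheme for the LASSO problem; the step size, λ, and synthetic data are illustrative choices):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of λ||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(X, Y, lam, step, iters=500):
    """Proximal gradient method for (1/n)||Xw - Y||^2 + λ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = (2 / n) * X.T @ (X @ w - Y)               # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # proximal (shrinkage) step
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Y = X[:, 0] * 3.0                  # only the first feature is informative
w = ista(X, Y, lam=0.5, step=0.1)
print(w.round(2))                  # sparse: the mass concentrates on w[0]
```

The thresholding step is what sets small coefficients exactly to zero, which is how the L1 penalty induces sparsity in practice.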
In the case of a linear model with non-overlapping known groups, a regularizer can be defined: R ( w ) = ∑ g = 1 G ‖ w g ‖ 2 , {\displaystyle R(w)=\sum _{g=1}^{G}\left\|w_{g}\right\|_{2},} where ‖ w g ‖ 2 = ∑ j = 1 | G g | ( w g j ) 2 {\displaystyle \|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}\left(w_{g}^{j}\right)^{2}}}} This can be viewed as inducing a regularizer over the L 2 {\displaystyle L_{2}} norm over members of each group followed by an L 1 {\displaystyle L_{1}} norm over groups. This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function: prox λ , R , g ⁡ ( w g ) = { ( 1 − λ ‖ w g ‖ 2 ) w g , if ‖ w g ‖ 2 > λ 0 , if ‖ w g ‖ 2 ≤ λ {\displaystyle \operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}\left(1-{\dfrac {\lambda }{\left\|w_{g}\right\|_{2}}}\right)w_{g},&{\text{if }}\left\|w_{g}\right\|_{2}>\lambda \\[1ex]0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}} === Group sparsity with overlaps === The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements. If it is desired to preserve the group structure, a new regularizer can be defined: R ( w ) = inf { ∑ g = 1 G ‖ w g ‖ 2 : w = ∑ g = 1 G w ¯ g } {\displaystyle R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}} For each w g {\displaystyle w_{g}} , w ¯ g {\displaystyle {\bar {w}}_{g}} is defined as the vector such that the restriction of w ¯ g {\displaystyle {\bar {w}}_{g}} to the group g {\displaystyle g} equals w g {\displaystyle w_{g}} and all other entries of w ¯ g {\displaystyle {\bar {w}}_{g}} are zero. The regularizer finds the optimal disintegration of w {\displaystyle w} into parts. It can be viewed as duplicating all elements that exist in multiple groups. 
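The block-wise soft-thresholding proximal operator defined above can be sketched directly (λ and the sample vectors are arbitrary illustrative values):

```python
import numpy as np

def block_soft_threshold(w_g, lam):
    """Proximal operator for one group of the group-lasso regularizer:
    shrink the whole group toward zero, or zero it out entirely."""
    norm = np.linalg.norm(w_g)
    if norm <= lam:
        return np.zeros_like(w_g)          # the entire group is discarded
    return (1.0 - lam / norm) * w_g        # the group is uniformly shrunk

# A group with small norm is zeroed as a unit; a larger one is only shrunk.
print(block_soft_threshold(np.array([0.1, 0.1]), lam=1.0))  # → [0. 0.]
print(block_soft_threshold(np.array([3.0, 4.0]), lam=1.0))  # norm 5 → scaled by 0.8
```

This is the group analogue of the scalar soft-thresholding operator: the decision to keep or discard is made per group rather than per coefficient.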
Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration. == Regularizers for semi-supervised learning == When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrix W {\displaystyle W} is given, a regularizer can be defined: R ( f ) = ∑ i , j w i j ( f ( x i ) − f ( x j ) ) 2 {\displaystyle R(f)=\sum _{i,j}w_{ij}\left(f(x_{i})-f(x_{j})\right)^{2}} If W i j {\displaystyle W_{ij}} encodes the result of some distance metric for points x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} , it is desirable that f ( x i ) ≈ f ( x j ) {\displaystyle f(x_{i})\approx f(x_{j})} . This regularizer captures this intuition, and is equivalent to: R ( f ) = f ¯ T L f ¯ {\displaystyle R(f)={\bar {f}}^{\mathsf {T}}L{\bar {f}}} where L = D − W {\displaystyle L=D-W} is the Laplacian matrix of the graph induced by W {\displaystyle W} . The optimization problem min f ∈ R m R ( f ) , m = u + l {\displaystyle \min _{f\in \mathbb {R} ^{m}}R(f),m=u+l} can be solved analytically if the constraint f ( x i ) = y i {\displaystyle f(x_{i})=y_{i}} is applied for all supervised samples. The labeled part of the vector f {\displaystyle f} is therefore fixed directly by these constraints.
The unlabeled part of f {\displaystyle f} is solved for by: min f u ∈ R u f T L f = min f u ∈ R u { f u T L u u f u + f l T L l u f u + f u T L u l f l } {\displaystyle \min _{f_{u}\in \mathbb {R} ^{u}}f^{\mathsf {T}}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\left\{f_{u}^{\mathsf {T}}L_{uu}f_{u}+f_{l}^{\mathsf {T}}L_{lu}f_{u}+f_{u}^{\mathsf {T}}L_{ul}f_{l}\right\}} ∇ f u = 2 L u u f u + 2 L u l Y {\displaystyle \nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y} f u = − L u u † ( L u l Y ) {\displaystyle f_{u}=-L_{uu}^{\dagger }\left(L_{ul}Y\right)} The pseudo-inverse can be taken because L u l {\displaystyle L_{ul}} has the same range as L u u {\displaystyle L_{uu}} . == Regularizers for multitask learning == In the case of multitask learning, T {\displaystyle T} problems are considered simultaneously, each related in some way. The goal is to learn T {\displaystyle T} functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrix W : T × D {\displaystyle W:T\times D} . === Sparse regularizer on columns === R ( w ) = ‖ W ‖ 2 , 1 = ∑ i = 1 D ‖ w i ‖ 2 {\displaystyle R(w)=\left\|W\right\|_{2,1}=\sum _{i=1}^{D}\left\|w_{i}\right\|_{2}} where w i {\displaystyle w_{i}} denotes the i {\displaystyle i} -th column of W {\displaystyle W} . This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods. === Nuclear norm regularization === R ( w ) = ‖ σ ( W ) ‖ 1 {\displaystyle R(w)=\left\|\sigma (W)\right\|_{1}} where σ ( W ) {\displaystyle \sigma (W)} is the vector of singular values of W {\displaystyle W} . === Mean-constrained regularization === R ( f 1 ⋯ f T ) = ∑ t = 1 T ‖ f t − 1 T ∑ s = 1 T f s ‖ H k 2 {\displaystyle R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\left\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\right\|_{H_{k}}^{2}} This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task.
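Setting the gradient above to zero and solving for f_u gives f_u = −L_uu†(L_ul Y). A toy sketch on a four-node path graph (an illustrative example, not from the source): the propagated labels interpolate linearly between the two labeled endpoints, as expected for a graph-Laplacian smoothness penalty.

```python
import numpy as np

# Path graph 0-1-2-3 with labels f(0)=0, f(3)=1; nodes 1 and 2 are unlabeled.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian L = D - W
lab, unlab = [0, 3], [1, 2]
Y = np.array([0., 1.])                   # labels of the labeled nodes

L_uu = L[np.ix_(unlab, unlab)]
L_ul = L[np.ix_(unlab, lab)]
f_u = -np.linalg.pinv(L_uu) @ (L_ul @ Y)  # solve the first-order condition
print(f_u)  # → approximately [0.333, 0.667]
```

The unlabeled values land at 1/3 and 2/3, i.e. the harmonic interpolation of the endpoint labels along the path.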
An example is predicting blood iron levels measured at different times of the day, where each task represents an individual. === Clustered mean-constrained regularization === R ( f 1 ⋯ f T ) = ∑ r = 1 C ∑ t ∈ I ( r ) ‖ f t − 1 | I ( r ) | ∑ s ∈ I ( r ) f s ‖ H k 2 {\displaystyle R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\left\|f_{t}-{\frac {1}{|I(r)|}}\sum _{s\in I(r)}f_{s}\right\|_{H_{k}}^{2}} where I ( r ) {\displaystyle I(r)} is a cluster of tasks. This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predict Netflix recommendations. A cluster would correspond to a group of people who share similar preferences. === Graph-based similarity === More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks. R ( f 1 ⋯ f T ) = ∑ t , s = 1 , t ≠ s T ‖ f t − f s ‖ 2 M t s {\displaystyle R(f_{1}\cdots f_{T})=\sum _{t,s=1,t\neq s}^{T}\left\|f_{t}-f_{s}\right\|^{2}M_{ts}} for a given symmetric similarity matrix M {\displaystyle M} .
Examples of applications of different methods of regularization to the linear model are: == See also == Bayesian interpretation of regularization Bias–variance tradeoff Matrix regularization Regularization by spectral filtering Regularized least squares Lagrange multiplier Variance reduction == Notes == == References == Neumaier, A. (1998). "Solving ill-conditioned and singular linear systems: A tutorial on regularization" (PDF). SIAM Review. 40 (3): 636–666. Bibcode:1998SIAMR..40..636N. doi:10.1137/S0036144597321909. Archived from the original (PDF) on 2007-06-30. Kukačka, Jan; Golkov, Vladimir; Cremers, Daniel (2017). "Regularization for Deep Learning: A Taxonomy". arXiv:1710.10686 [cs.LG].
Wikipedia:Regularization by spectral filtering#0
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart. Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems, focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator by a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter, a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise. The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues". Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well-studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD).
As for choosing the regularization parameter, examples of candidate methods to compute this parameter include the discrepancy principle, generalized cross validation, and the L-curve criterion. It is of note that the notion of spectral filtering studied in the context of machine learning is closely connected to the literature on function approximation (in signal processing). == Notation == The training set is defined as S = { ( x 1 , y 1 ) , … , ( x n , y n ) } {\displaystyle S=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}} , where X {\displaystyle X} is the n × d {\displaystyle n\times d} input matrix and Y = ( y 1 , … , y n ) {\displaystyle Y=(y_{1},\dots ,y_{n})} is the output vector. Where applicable, the kernel function is denoted by k {\displaystyle k} , and the n × n {\displaystyle n\times n} kernel matrix is denoted by K {\displaystyle K} which has entries K i j = k ( x i , x j ) {\displaystyle K_{ij}=k(x_{i},x_{j})} and H {\displaystyle {\mathcal {H}}} denotes the Reproducing Kernel Hilbert Space (RKHS) with kernel k {\displaystyle k} . The regularization parameter is denoted by λ {\displaystyle \lambda } . (Note: For g ∈ G {\displaystyle g\in G} and f ∈ F {\displaystyle f\in F} , with G {\displaystyle G} and F {\displaystyle F} being Hilbert spaces, given a linear, continuous operator L {\displaystyle L} , assume that g = L f {\displaystyle g=Lf} holds. In this setting, the direct problem would be to solve for g {\displaystyle g} given f {\displaystyle f} and the inverse problem would be to solve for f {\displaystyle f} given g {\displaystyle g} . If the solution exists, is unique and stable, the inverse problem (i.e. the problem of solving for f {\displaystyle f} ) is well-posed; otherwise, it is ill-posed.) 
== Relation to the theory of ill-posed inverse problems == The connection between the regularized least squares (RLS) estimation problem (Tikhonov regularization setting) and the theory of ill-posed inverse problems is an example of how spectral regularization algorithms are related to the theory of ill-posed inverse problems. The RLS estimator solves min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 {\displaystyle \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}+\lambda \left\|f\right\|_{\mathcal {H}}^{2}} and the RKHS allows for expressing this RLS estimator as f S λ ( X ) = ∑ i = 1 n c i k ( x , x i ) {\displaystyle f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})} where ( K + n λ I ) c = Y {\displaystyle (K+n\lambda I)c=Y} with c = ( c 1 , … , c n ) {\displaystyle c=(c_{1},\dots ,c_{n})} . The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\displaystyle \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} can be written as f S λ ( X ) = ∑ i = 1 n c i k ( x , x i ) {\displaystyle f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})} such that K c = Y {\displaystyle Kc=Y} , adding the penalty function amounts to the following change in the system that needs to be solved: { min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 → min f ∈ H 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 + λ ‖ f ‖ H 2 } ≡ { K c = Y → ( K + n λ I ) c = Y } . 
{\displaystyle \left\{\min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2}\rightarrow \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2}+\lambda \left\|f\right\|_{\mathcal {H}}^{2}\right\}\equiv {\biggl \{}Kc=Y\rightarrow \left(K+n\lambda I\right)c=Y{\biggr \}}.} In this learning setting, the kernel matrix can be decomposed as K = Q Σ Q T {\displaystyle K=Q\Sigma Q^{T}} , with Σ = diag ⁡ ( σ 1 , … , σ n ) , σ 1 ≥ σ 2 ≥ ⋯ ≥ σ n ≥ 0 {\displaystyle \Sigma =\operatorname {diag} (\sigma _{1},\dots ,\sigma _{n}),~\sigma _{1}\geq \sigma _{2}\geq \cdots \geq \sigma _{n}\geq 0} and q 1 , … , q n {\displaystyle q_{1},\dots ,q_{n}} are the corresponding eigenvectors. Therefore, in the initial learning setting, the following holds: c = K − 1 Y = Q Σ − 1 Q T Y = ∑ i = 1 n 1 σ i ⟨ q i , Y ⟩ q i . {\displaystyle c=K^{-1}Y=Q\Sigma ^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}}}\langle q_{i},Y\rangle q_{i}.} Thus, for small eigenvalues, even small perturbations in the data can lead to considerable changes in the solution. Hence, the problem is ill-conditioned, and solving this RLS problem amounts to stabilizing a possibly ill-conditioned matrix inversion problem, which is studied in the theory of ill-posed inverse problems; in both problems, a main concern is to deal with the issue of numerical stability. == Implementation of algorithms == Each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function, denoted here by G λ ( ⋅ ) {\displaystyle G_{\lambda }(\cdot )} . If the kernel matrix is denoted by K {\displaystyle K} , then λ {\displaystyle \lambda } should control the magnitude of the smaller eigenvalues of G λ ( K ) {\displaystyle G_{\lambda }(K)} .
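The amplification by the 1/σ_i factors described above can be seen numerically. The sketch below builds a synthetic "kernel" matrix with one tiny eigenvalue (all values are illustrative assumptions) and perturbs Y along the corresponding eigenvector:

```python
import numpy as np

# A symmetric matrix with eigenvalues 1, 0.5, 0.1 and a tiny sigma_4 = 1e-8.
rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # random orthonormal eigenvectors
K = Q @ np.diag([1.0, 0.5, 0.1, 1e-8]) @ Q.T

Y = Q @ np.array([1.0, 1.0, 1.0, 0.0])   # data with no component along q_4
c = np.linalg.solve(K, Y)

# Perturb Y slightly along the eigenvector of the tiny eigenvalue:
c_noisy = np.linalg.solve(K, Y + 1e-6 * Q[:, 3])
# The 1e-6 perturbation is amplified by 1/sigma_4 = 1e8, so ||Δc|| ≈ 100.
print(np.linalg.norm(c_noisy - c))
```

This is exactly the instability that the filter functions of the following section are designed to suppress.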
In a filtering setup, the goal is to find estimators f S λ ( X ) := ∑ i = 1 n c i k ( x , x i ) {\displaystyle f_{S}^{\lambda }(X):=\sum _{i=1}^{n}c_{i}k(x,x_{i})} where c = G λ ( K ) Y {\displaystyle c=G_{\lambda }(K)Y} . To do so, a scalar filter function G λ ( σ ) {\displaystyle G_{\lambda }(\sigma )} is defined using the eigen-decomposition of the kernel matrix: G λ ( K ) = Q G λ ( Σ ) Q T , {\displaystyle G_{\lambda }(K)=QG_{\lambda }(\Sigma )Q^{T},} which yields G λ ( K ) Y = ∑ i = 1 n G λ ( σ i ) ⟨ q i , Y ⟩ q i . {\displaystyle G_{\lambda }(K)Y~=~\sum _{i=1}^{n}G_{\lambda }(\sigma _{i})\langle q_{i},Y\rangle q_{i}.} Typically, an appropriate filter function should have the following properties: As λ {\displaystyle \lambda } goes to zero, G λ ( σ ) → 1 / σ {\displaystyle G_{\lambda }(\sigma )~\rightarrow ~1/\sigma } . The magnitude of the (smaller) eigenvalues of G λ {\displaystyle G_{\lambda }} is controlled by λ {\displaystyle \lambda } . While the above items give a rough characterization of the general properties of filter functions for all spectral regularization algorithms, the derivation of the filter function (and hence its exact form) varies depending on the specific regularization method that spectral filtering is applied to. === Filter function for Tikhonov regularization === In the Tikhonov regularization setting, the filter function for RLS is described below. In this setting, c = ( K + n λ I ) − 1 Y {\displaystyle c=\left(K+n\lambda I\right)^{-1}Y} . Thus, c = ( K + n λ I ) − 1 Y = Q ( Σ + n λ I ) − 1 Q T Y = ∑ i = 1 n 1 σ i + n λ ⟨ q i , Y ⟩ q i . {\displaystyle c=(K+n\lambda I)^{-1}Y=Q(\Sigma +n\lambda I)^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}+n\lambda }}\langle q_{i},Y\rangle q_{i}.} The undesired components are filtered out using regularization: If σ i ≫ λ n {\displaystyle \sigma _{i}\gg \lambda n} , then 1 σ i + n λ ∼ 1 σ i {\displaystyle {\frac {1}{\sigma _{i}+n\lambda }}\sim {\frac {1}{\sigma _{i}}}} . 
If σ i ≪ λ n {\displaystyle \sigma _{i}\ll \lambda n} , then 1 σ i + n λ ∼ 1 λ n {\displaystyle {\frac {1}{\sigma _{i}+n\lambda }}\sim {\frac {1}{\lambda n}}} . The filter function for Tikhonov regularization is therefore defined as: G λ ( σ ) = 1 σ + n λ . {\displaystyle G_{\lambda }(\sigma )={\frac {1}{\sigma +n\lambda }}.} === Filter function for Landweber iteration === The idea behind the Landweber iteration is gradient descent:

c0 := 0
for i = 1, ..., t − 1
    ci := ci−1 + η(Y − Kci−1)
end

In this setting, if n {\displaystyle n} is larger than K {\displaystyle K} 's largest eigenvalue, the above iteration converges by choosing η = 2 / n {\displaystyle \eta =2/n} as the step size. The above iteration is equivalent to minimizing 1 n ‖ Y − K c ‖ 2 2 {\displaystyle {\frac {1}{n}}\left\|Y-Kc\right\|_{2}^{2}} (i.e. the empirical risk) via gradient descent; using induction, it can be proved that at the t {\displaystyle t} -th iteration, the solution is given by c = η ∑ i = 0 t − 1 ( I − η K ) i Y . {\displaystyle c=\eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}Y.} Thus, the appropriate filter function is defined by: G λ ( σ ) = η ∑ i = 0 t − 1 ( I − η σ ) i . {\displaystyle G_{\lambda }(\sigma )=\eta \sum _{i=0}^{t-1}\left(I-\eta \sigma \right)^{i}.} It can be shown that this filter function corresponds to a truncated power expansion of K − 1 {\displaystyle K^{-1}} ; to see this, note that the relation ∑ i ≥ 0 x i = 1 / ( 1 − x ) {\displaystyle \sum _{i\geq 0}x^{i}=1/(1-x)} would still hold if x {\displaystyle x} is replaced by a matrix; thus, if K {\displaystyle K} (the kernel matrix), or rather I − η K {\displaystyle I-\eta K} , is considered, the following holds: K − 1 = η ∑ i = 0 ∞ ( I − η K ) i ∼ η ∑ i = 0 t − 1 ( I − η K ) i . 
{\displaystyle K^{-1}=\eta \sum _{i=0}^{\infty }\left(I-\eta K\right)^{i}\sim \eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}.} In this setting, the number of iterations gives the regularization parameter; roughly speaking, t ∼ 1 / λ {\displaystyle t\sim 1/\lambda } . If t {\displaystyle t} is large, overfitting may be a concern. If t {\displaystyle t} is small, oversmoothing may be a concern. Thus, choosing an appropriate time for early stopping of the iterations provides a regularization effect. === Filter function for TSVD === In the TSVD setting, given the eigen-decomposition K = Q Σ Q T {\displaystyle K=Q\Sigma Q^{T}} and using a prescribed threshold λ n {\displaystyle \lambda n} , a regularized inverse can be formed for the kernel matrix by discarding all the eigenvalues that are smaller than this threshold. Thus, the filter function for TSVD can be defined as G λ ( σ ) = { 1 / σ , if σ ≥ λ n 0 , otherwise {\displaystyle G_{\lambda }(\sigma )={\begin{cases}1/\sigma ,&{\text{if }}\sigma \geq \lambda n\\[1ex]0,&{\text{otherwise}}\end{cases}}} It can be shown that TSVD is equivalent to the (unsupervised) projection of the data using (kernel) Principal Component Analysis (PCA), and that it is also equivalent to minimizing the empirical risk on the projected data (without regularization). Note that the number of components kept for the projection is the only free parameter here. == References ==
Wikipedia:Regularization perspectives on support vector machines#0
Within mathematical analysis, regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the training-set data in a way that minimizes the average of the hinge-loss function and the L2 norm of the learned weights. This strategy avoids overfitting via Tikhonov regularization in the L2-norm sense, and also corresponds to minimizing the bias and variance of the estimator of the weights. Estimators with lower mean squared error predict better, i.e. generalize better, when given unseen data. Specifically, Tikhonov regularization algorithms produce a decision boundary that minimizes the average training-set error while constraining the decision boundary not to be excessively complicated or to overfit the training data, via an L2-norm term on the weights. The training and test-set errors can be measured without bias and in a fair way using accuracy, precision, AUC-ROC, precision-recall, and other metrics. Regularization perspectives on support-vector machines interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with the hinge loss as the loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goal: to generalize without overfitting. SVM was first proposed in 1995 by Corinna Cortes and Vladimir Vapnik, and framed geometrically as a method for finding hyperplanes that can separate multidimensional data into two categories. This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other machine-learning techniques for avoiding overfitting, like regularization, early stopping, sparsity and Bayesian inference. 
However, once it was discovered that SVM is also a special case of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms. This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss. == Theoretical background == In the statistical learning theory framework, an algorithm is a strategy for choosing a function f : X → Y {\displaystyle f\colon \mathbf {X} \to \mathbf {Y} } given a training set S = { ( x 1 , y 1 ) , … , ( x n , y n ) } {\displaystyle S=\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}} of inputs x i {\displaystyle x_{i}} and their labels y i {\displaystyle y_{i}} (the labels are usually ± 1 {\displaystyle \pm 1} ). Regularization strategies avoid overfitting by choosing a function that fits the data, but is not too complex. Specifically: f = argmin f ∈ H { 1 n ∑ i = 1 n V ( y i , f ( x i ) ) + λ ‖ f ‖ H 2 } , {\displaystyle f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{{\frac {1}{n}}\sum _{i=1}^{n}V(y_{i},f(x_{i}))+\lambda \|f\|_{\mathcal {H}}^{2}\right\},} where H {\displaystyle {\mathcal {H}}} is a hypothesis space of functions, V : Y × Y → R {\displaystyle V\colon \mathbf {Y} \times \mathbf {Y} \to \mathbb {R} } is the loss function, ‖ ⋅ ‖ H {\displaystyle \|\cdot \|_{\mathcal {H}}} is a norm on the hypothesis space of functions, and λ ∈ R {\displaystyle \lambda \in \mathbb {R} } is the regularization parameter. When H {\displaystyle {\mathcal {H}}} is a reproducing kernel Hilbert space, there exists a kernel function K : X × X → R {\displaystyle K\colon \mathbf {X} \times \mathbf {X} \to \mathbb {R} } that can be written as an n × n {\displaystyle n\times n} symmetric positive-definite matrix K {\displaystyle \mathbf {K} } . 
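The regularized objective above can be made concrete with a small numerical sketch. Here the hypothesis space is taken to be linear functions f(x) = wᵀx, the loss V is chosen to be the hinge loss max(0, 1 − y f(x)) (anticipating the SVM case), and the synthetic data, λ, step sizes, and iteration count are all illustrative assumptions rather than anything prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable binary data with labels in {-1, +1}.
n = 200
X = rng.standard_normal((n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

lam = 0.01  # regularization parameter lambda (illustrative)

def objective(w):
    # (1/n) sum_i max(0, 1 - y_i w.x_i) + lam * ||w||^2
    margins = 1.0 - y * (X @ w)
    return np.mean(np.maximum(margins, 0.0)) + lam * (w @ w)

def subgradient(w):
    # Subgradient of the hinge term: points inside the margin contribute -y_i x_i / n.
    active = (1.0 - y * (X @ w)) > 0.0
    g_hinge = -(y[active, None] * X[active]).sum(axis=0) / n
    return g_hinge + 2.0 * lam * w

w = np.zeros(2)
for t in range(1, 501):
    w -= (1.0 / t) * subgradient(w)  # diminishing step size

print(objective(w))                   # below objective(0) = 1.0
print(np.mean(np.sign(X @ w) == y))   # training accuracy
```

Swapping the hinge loss for the square loss in `objective` and `subgradient` recovers (linear) regularized least squares within the same framework.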
By the representer theorem, f ( x i ) = ∑ j = 1 n c j K i j , and ‖ f ‖ H 2 = ⟨ f , f ⟩ H = ∑ i = 1 n ∑ j = 1 n c i c j K ( x i , x j ) = c T K c . {\displaystyle f(x_{i})=\sum _{j=1}^{n}c_{j}\mathbf {K} _{ij},{\text{ and }}\|f\|_{\mathcal {H}}^{2}=\langle f,f\rangle _{\mathcal {H}}=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})=c^{T}\mathbf {K} c.} == Special properties of the hinge loss == The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if f ( x i ) = y i {\displaystyle f(x_{i})=y_{i}} and 1 if f ( x i ) ≠ y i {\displaystyle f(x_{i})\neq y_{i}} , i.e. the Heaviside step function on − y i f ( x i ) {\displaystyle -y_{i}f(x_{i})} . However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss, V ( y i , f ( x i ) ) = ( 1 − y f ( x ) ) + {\displaystyle V{\big (}y_{i},f(x_{i}){\big )}={\big (}1-yf(x){\big )}_{+}} , where ( s ) + = max ( s , 0 ) {\displaystyle (s)_{+}=\max(s,0)} , provides such a convex relaxation. In fact, the hinge loss is the tightest convex upper bound to the 0–1 misclassification loss function, and with infinite data returns the Bayes-optimal solution: f b ( x ) = { 1 , p ( 1 ∣ x ) > p ( − 1 ∣ x ) , − 1 , p ( 1 ∣ x ) < p ( − 1 ∣ x ) . {\displaystyle f_{b}(x)={\begin{cases}1,&p(1\mid x)>p(-1\mid x),\\-1,&p(1\mid x)<p(-1\mid x).\end{cases}}} == Derivation == The Tikhonov regularization problem can be shown to be equivalent to traditional formulations of SVM by expressing it in terms of the hinge loss. With the hinge loss V ( y i , f ( x i ) ) = ( 1 − y f ( x ) ) + , {\displaystyle V{\big (}y_{i},f(x_{i}){\big )}={\big (}1-yf(x){\big )}_{+},} where ( s ) + = max ( s , 0 ) {\displaystyle (s)_{+}=\max(s,0)} , the regularization problem becomes f = argmin f ∈ H { 1 n ∑ i = 1 n ( 1 − y f ( x ) ) + + λ ‖ f ‖ H 2 } . 
{\displaystyle f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{{\frac {1}{n}}\sum _{i=1}^{n}{\big (}1-yf(x){\big )}_{+}+\lambda \|f\|_{\mathcal {H}}^{2}\right\}.} Multiplying by 1 / ( 2 λ ) {\displaystyle 1/(2\lambda )} yields f = argmin f ∈ H { C ∑ i = 1 n ( 1 − y f ( x ) ) + + 1 2 ‖ f ‖ H 2 } {\displaystyle f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{C\sum _{i=1}^{n}{\big (}1-yf(x){\big )}_{+}+{\frac {1}{2}}\|f\|_{\mathcal {H}}^{2}\right\}} with C = 1 / ( 2 λ n ) {\displaystyle C=1/(2\lambda n)} , which is equivalent to the standard SVM minimization problem. == Notes and references == Evgeniou, Theodoros; Massimiliano Pontil; Tomaso Poggio (2000). "Regularization Networks and Support Vector Machines" (PDF). Advances in Computational Mathematics. 13 (1): 1–50. doi:10.1023/A:1018946025316. S2CID 70866. Joachims, Thorsten. "SVMlight". Archived from the original on 2015-04-19. Retrieved 2012-05-18. Vapnik, Vladimir (1999). The Nature of Statistical Learning Theory. New York: Springer-Verlag. ISBN 978-0-387-98780-4.
Wikipedia:Regularized canonical correlation analysis#0
In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors X = (X1, ..., Xn) and Y = (Y1, ..., Ym) of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of X and Y that have a maximum correlation with each other. T. R. Knapp notes that "virtually all of the commonly encountered parametric tests of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables." The method was first introduced by Harold Hotelling in 1936, although in the context of angles between flats the mathematical concept was published by Camille Jordan in 1875. CCA is now a cornerstone of multivariate statistics and multi-view learning, and a great number of interpretations and extensions have been proposed, such as probabilistic CCA, sparse CCA, multi-view CCA, deep CCA, and DeepGeoCCA. Unfortunately, perhaps because of its popularity, the literature can be inconsistent in its notation; we attempt to highlight such inconsistencies in this article to help the reader make best use of the existing literature and techniques available. Like its sister method PCA, CCA can be viewed in population form (corresponding to random vectors and their covariance matrices) or in sample form (corresponding to datasets and their sample covariance matrices). These two forms are almost exact analogues of each other, which is why their distinction is often overlooked, but they can behave very differently in high-dimensional settings. We next give explicit mathematical definitions for the population problem and highlight the different objects in the so-called canonical decomposition; understanding the differences between these objects is crucial for interpretation of the technique. 
== Population CCA definition via correlations == Given two column vectors X = ( x 1 , … , x n ) T {\displaystyle X=(x_{1},\dots ,x_{n})^{T}} and Y = ( y 1 , … , y m ) T {\displaystyle Y=(y_{1},\dots ,y_{m})^{T}} of random variables with finite second moments, one may define the cross-covariance Σ X Y = cov ⁡ ( X , Y ) {\displaystyle \Sigma _{XY}=\operatorname {cov} (X,Y)} to be the n × m {\displaystyle n\times m} matrix whose ( i , j ) {\displaystyle (i,j)} entry is the covariance cov ⁡ ( x i , y j ) {\displaystyle \operatorname {cov} (x_{i},y_{j})} . In practice, we would estimate the covariance matrix based on sampled data from X {\displaystyle X} and Y {\displaystyle Y} (i.e. from a pair of data matrices). Canonical-correlation analysis seeks a sequence of vectors a k {\displaystyle a_{k}} ( a k ∈ R n {\displaystyle a_{k}\in \mathbb {R} ^{n}} ) and b k {\displaystyle b_{k}} ( b k ∈ R m {\displaystyle b_{k}\in \mathbb {R} ^{m}} ) such that the random variables a k T X {\displaystyle a_{k}^{T}X} and b k T Y {\displaystyle b_{k}^{T}Y} maximize the correlation ρ = corr ⁡ ( a k T X , b k T Y ) {\displaystyle \rho =\operatorname {corr} (a_{k}^{T}X,b_{k}^{T}Y)} . The (scalar) random variables U = a 1 T X {\displaystyle U=a_{1}^{T}X} and V = b 1 T Y {\displaystyle V=b_{1}^{T}Y} are the first pair of canonical variables. Then one seeks vectors maximizing the same correlation subject to the constraint that they are to be uncorrelated with the first pair of canonical variables; this gives the second pair of canonical variables. This procedure may be continued up to min { m , n } {\displaystyle \min\{m,n\}} times. 
( a k , b k ) = argmax a , b corr ⁡ ( a T X , b T Y ) subject to cov ⁡ ( a T X , a j T X ) = cov ⁡ ( b T Y , b j T Y ) = 0 for j = 1 , … , k − 1 {\displaystyle (a_{k},b_{k})={\underset {a,b}{\operatorname {argmax} }}\operatorname {corr} (a^{T}X,b^{T}Y)\quad {\text{ subject to }}\operatorname {cov} (a^{T}X,a_{j}^{T}X)=\operatorname {cov} (b^{T}Y,b_{j}^{T}Y)=0{\text{ for }}j=1,\dots ,k-1} The sets of vectors a k , b k {\displaystyle a_{k},b_{k}} are called canonical directions or weight vectors or simply weights. The 'dual' sets of vectors Σ X X a k , Σ Y Y b k {\displaystyle \Sigma _{XX}a_{k},\Sigma _{YY}b_{k}} are called canonical loading vectors or simply loadings; these are often more straightforward to interpret than the weights. == Computation == === Derivation === Let Σ X Y {\displaystyle \Sigma _{XY}} be the cross-covariance matrix for any pair of (vector-shaped) random variables X {\displaystyle X} and Y {\displaystyle Y} . The target function to maximize is ρ = a T Σ X Y b a T Σ X X a b T Σ Y Y b . {\displaystyle \rho ={\frac {a^{T}\Sigma _{XY}b}{{\sqrt {a^{T}\Sigma _{XX}a}}{\sqrt {b^{T}\Sigma _{YY}b}}}}.} The first step is to define a change of basis and define c = Σ X X 1 / 2 a , {\displaystyle c=\Sigma _{XX}^{1/2}a,} d = Σ Y Y 1 / 2 b , {\displaystyle d=\Sigma _{YY}^{1/2}b,} where Σ X X 1 / 2 {\displaystyle \Sigma _{XX}^{1/2}} and Σ Y Y 1 / 2 {\displaystyle \Sigma _{YY}^{1/2}} can be obtained from the eigen-decomposition (or by diagonalization): Σ X X 1 / 2 = V X D X 1 / 2 V X ⊤ , V X D X V X ⊤ = Σ X X , {\displaystyle \Sigma _{XX}^{1/2}=V_{X}D_{X}^{1/2}V_{X}^{\top },\qquad V_{X}D_{X}V_{X}^{\top }=\Sigma _{XX},} and Σ Y Y 1 / 2 = V Y D Y 1 / 2 V Y ⊤ , V Y D Y V Y ⊤ = Σ Y Y . {\displaystyle \Sigma _{YY}^{1/2}=V_{Y}D_{Y}^{1/2}V_{Y}^{\top },\qquad V_{Y}D_{Y}V_{Y}^{\top }=\Sigma _{YY}.} Thus ρ = c T Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 / 2 d c T c d T d . 
{\displaystyle \rho ={\frac {c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}d}{{\sqrt {c^{T}c}}{\sqrt {d^{T}d}}}}.} By the Cauchy–Schwarz inequality, ( c T Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 / 2 ) ( d ) ≤ ( c T Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 / 2 Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 / 2 c ) 1 / 2 ( d T d ) 1 / 2 , {\displaystyle \left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}\right)(d)\leq \left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}\Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c\right)^{1/2}\left(d^{T}d\right)^{1/2},} ρ ≤ ( c T Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 Σ Y X Σ X X − 1 / 2 c ) 1 / 2 ( c T c ) 1 / 2 . {\displaystyle \rho \leq {\frac {\left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}c\right)^{1/2}}{\left(c^{T}c\right)^{1/2}}}.} There is equality if the vectors d {\displaystyle d} and Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 / 2 c {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c} are collinear. In addition, the maximum of correlation is attained if c {\displaystyle c} is the eigenvector with the maximum eigenvalue for the matrix Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 Σ Y X Σ X X − 1 / 2 {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}} (see Rayleigh quotient). The subsequent pairs are found by using eigenvalues of decreasing magnitudes. Orthogonality is guaranteed by the symmetry of the correlation matrices. Another way of viewing this computation is that c {\displaystyle c} and d {\displaystyle d} are the left and right singular vectors of the correlation matrix of X and Y corresponding to the highest singular value. 
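The derivation above suggests a direct computation: form M = Σ_XX^(−1/2) Σ_XY Σ_YY^(−1/2), take its SVD, and map the singular vectors back to the weights a and b. A minimal NumPy sketch on synthetic data (the data-generating model and dimensions are illustrative assumptions) checks that the top singular value matches the correlation of the resulting canonical variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample two correlated random vectors: X has 3 components, Y has 2.
N = 5000
Z = rng.standard_normal((N, 2))  # shared latent signal
X = Z @ rng.standard_normal((2, 3)) + 0.5 * rng.standard_normal((N, 3))
Y = Z @ rng.standard_normal((2, 2)) + 0.5 * rng.standard_normal((N, 2))

def inv_sqrt(M):
    # Inverse square root of a symmetric positive-definite matrix via eigh.
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx = Xc.T @ Xc / N
Syy = Yc.T @ Yc / N
Sxy = Xc.T @ Yc / N

# Singular values of Sxx^{-1/2} Sxy Syy^{-1/2} are the canonical correlations;
# the singular vectors c, d map back to weights a = Sxx^{-1/2} c, b = Syy^{-1/2} d.
M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
U, rho, Vt = np.linalg.svd(M)
a = inv_sqrt(Sxx) @ U[:, 0]
b = inv_sqrt(Syy) @ Vt[0]

# The first canonical correlation equals corr(a^T X, b^T Y).
print(rho[0], np.corrcoef(Xc @ a, Yc @ b)[0, 1])
```

Note that the two printed numbers agree because a and b are normalized so that the canonical variables have unit (sample) variance, making the correlation exactly the top singular value.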
=== Solution === The solution is therefore: c {\displaystyle c} is an eigenvector of Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 Σ Y X Σ X X − 1 / 2 {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}} d {\displaystyle d} is proportional to Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 / 2 c {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c} Reciprocally, there is also: d {\displaystyle d} is an eigenvector of Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 Σ X Y Σ Y Y − 1 / 2 {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1}\Sigma _{XY}\Sigma _{YY}^{-1/2}} c {\displaystyle c} is proportional to Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 / 2 d {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}d} Reversing the change of coordinates, we have that a {\displaystyle a} is an eigenvector of Σ X X − 1 Σ X Y Σ Y Y − 1 Σ Y X {\displaystyle \Sigma _{XX}^{-1}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}} , b {\displaystyle b} is proportional to Σ Y Y − 1 Σ Y X a ; {\displaystyle \Sigma _{YY}^{-1}\Sigma _{YX}a;} b {\displaystyle b} is an eigenvector of Σ Y Y − 1 Σ Y X Σ X X − 1 Σ X Y , {\displaystyle \Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1}\Sigma _{XY},} a {\displaystyle a} is proportional to Σ X X − 1 Σ X Y b {\displaystyle \Sigma _{XX}^{-1}\Sigma _{XY}b} . The canonical variables are defined by: U = c T Σ X X − 1 / 2 X = a T X {\displaystyle U=c^{T}\Sigma _{XX}^{-1/2}X=a^{T}X} V = d T Σ Y Y − 1 / 2 Y = b T Y {\displaystyle V=d^{T}\Sigma _{YY}^{-1/2}Y=b^{T}Y} === Implementation === CCA can be computed using singular value decomposition on a correlation matrix. It is available as a function in MATLAB as canoncorr (also in Octave) R as the standard function cancor and several other packages, including candisc, CCA and vegan. CCP for statistical hypothesis testing in canonical correlation analysis. SAS as proc cancorr Python in the library scikit-learn, as cross decomposition and in statsmodels, as CanCorr. 
The CCA-Zoo library implements CCA extensions, such as probabilistic CCA, sparse CCA, multi-view CCA, and deep CCA. SPSS as macro CanCorr shipped with the main software Julia (programming language) in the MultivariateStats.jl package. CCA computation using singular value decomposition on a correlation matrix is related to the cosine of the angles between flats. The cosine function is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite precision computer arithmetic. To avoid this problem, alternative algorithms are available in SciPy as linear-algebra function subspace_angles MATLAB as FileExchange function subspacea == Hypothesis testing == Each row can be tested for significance with the following method. Since the correlations are sorted, saying that row i {\displaystyle i} is zero implies all further correlations are also zero. Suppose we have p {\displaystyle p} independent observations in a sample, and let ρ ^ i {\displaystyle {\widehat {\rho }}_{i}} be the estimated correlation for i = 1 , … , min { m , n } {\displaystyle i=1,\dots ,\min\{m,n\}} . For the i {\displaystyle i} th row, the test statistic is: χ 2 = − ( p − 1 − 1 2 ( m + n + 1 ) ) ln ⁡ ∏ j = i min { m , n } ( 1 − ρ ^ j 2 ) , {\displaystyle \chi ^{2}=-\left(p-1-{\frac {1}{2}}(m+n+1)\right)\ln \prod _{j=i}^{\min\{m,n\}}(1-{\widehat {\rho }}_{j}^{2}),} which is asymptotically distributed as a chi-squared with ( m − i + 1 ) ( n − i + 1 ) {\displaystyle (m-i+1)(n-i+1)} degrees of freedom for large p {\displaystyle p} . Since all the correlations from min { m , n } {\displaystyle \min\{m,n\}} to p {\displaystyle p} are logically zero (and estimated that way also) the product for the terms after this point is irrelevant. Note that in the small sample size limit with p < n + m {\displaystyle p<n+m} , the top m + n − p {\displaystyle m+n-p} correlations are guaranteed to be identically 1, and hence the test is meaningless. 
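The test statistic above is straightforward to compute. In the following sketch, the estimated correlations, dimensions, and sample size are made-up illustrative numbers, and `bartlett_test` is a hypothetical helper written for this example, not a standard library function:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical estimated canonical correlations for n = 3, m = 2 variables
# and p = 100 observations (illustrative values, not from a real dataset).
rho_hat = np.array([0.7, 0.2])
n_dim, m_dim, p = 3, 2, 100

def bartlett_test(rho_hat, n_dim, m_dim, p):
    """Chi-squared statistic and p-value for H0: correlations i, ..., min(m,n) are zero."""
    results = []
    for i in range(len(rho_hat)):  # 0-based; row i+1 in the article's 1-based notation
        stat = -(p - 1 - 0.5 * (n_dim + m_dim + 1)) * np.sum(np.log(1.0 - rho_hat[i:] ** 2))
        df = (n_dim - i) * (m_dim - i)  # equals (n - i + 1)(m - i + 1) with 1-based i
        results.append((stat, chi2.sf(stat, df)))
    return results

for i, (stat, pval) in enumerate(bartlett_test(rho_hat, n_dim, m_dim, p), start=1):
    print(f"row {i}: chi2 = {stat:.2f}, p = {pval:.4f}")
```

With these illustrative numbers the first row (testing whether any correlation is nonzero) is highly significant, while the second (testing only the trailing correlation 0.2) is not.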
== Practical uses == A typical use for canonical correlation in the experimental context is to take two sets of variables and see what is common among the two sets. For example, in psychological testing, one could take two well established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests. One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and set of inputs. Constraint restrictions can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model. Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray like bars, with each half representing the two sets of variables. == Examples == Let X = x 1 {\displaystyle X=x_{1}} with zero expected value, i.e., E ⁡ ( X ) = 0 {\displaystyle \operatorname {E} (X)=0} . If Y = X {\displaystyle Y=X} , i.e., X {\displaystyle X} and Y {\displaystyle Y} are perfectly correlated, then, e.g., a = 1 {\displaystyle a=1} and b = 1 {\displaystyle b=1} , so that the first (and only in this example) pair of canonical variables is U = X {\displaystyle U=X} and V = Y = X {\displaystyle V=Y=X} . 
If Y = − X {\displaystyle Y=-X} , i.e., X {\displaystyle X} and Y {\displaystyle Y} are perfectly anticorrelated, then, e.g., a = 1 {\displaystyle a=1} and b = − 1 {\displaystyle b=-1} , so that the first (and only in this example) pair of canonical variables is U = X {\displaystyle U=X} and V = − Y = X {\displaystyle V=-Y=X} . We notice that in both cases U = V {\displaystyle U=V} , which illustrates that the canonical-correlation analysis treats correlated and anticorrelated variables similarly. == Connection to principal angles == Assuming that X = ( x 1 , … , x n ) T {\displaystyle X=(x_{1},\dots ,x_{n})^{T}} and Y = ( y 1 , … , y m ) T {\displaystyle Y=(y_{1},\dots ,y_{m})^{T}} have zero expected values, i.e., E ⁡ ( X ) = E ⁡ ( Y ) = 0 {\displaystyle \operatorname {E} (X)=\operatorname {E} (Y)=0} , their covariance matrices Σ X X = Cov ⁡ ( X , X ) = E ⁡ [ X X T ] {\displaystyle \Sigma _{XX}=\operatorname {Cov} (X,X)=\operatorname {E} [XX^{T}]} and Σ Y Y = Cov ⁡ ( Y , Y ) = E ⁡ [ Y Y T ] {\displaystyle \Sigma _{YY}=\operatorname {Cov} (Y,Y)=\operatorname {E} [YY^{T}]} can be viewed as Gram matrices in an inner product for the entries of X {\displaystyle X} and Y {\displaystyle Y} , correspondingly. In this interpretation, the random variables, entries x i {\displaystyle x_{i}} of X {\displaystyle X} and y j {\displaystyle y_{j}} of Y {\displaystyle Y} are treated as elements of a vector space with an inner product given by the covariance cov ⁡ ( x i , y j ) {\displaystyle \operatorname {cov} (x_{i},y_{j})} ; see Covariance#Relationship to inner products. The definition of the canonical variables U {\displaystyle U} and V {\displaystyle V} is then equivalent to the definition of principal vectors for the pair of subspaces spanned by the entries of X {\displaystyle X} and Y {\displaystyle Y} with respect to this inner product. The canonical correlations corr ⁡ ( U , V ) {\displaystyle \operatorname {corr} (U,V)} is equal to the cosine of principal angles. 
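This equivalence can be checked numerically: the sample canonical correlations of two centered data matrices equal the cosines of the principal angles between their column spaces. A sketch using SciPy's `subspace_angles` (the synthetic data-generating model is an illustrative assumption):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)

# Centered data matrices whose columns span two subspaces of R^N.
N = 500
Z = rng.standard_normal((N, 2))
X = Z @ rng.standard_normal((2, 3)) + rng.standard_normal((N, 3))
Y = Z @ rng.standard_normal((2, 2)) + rng.standard_normal((N, 2))
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

def inv_sqrt(M):
    # Inverse square root of a symmetric positive-definite matrix.
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# Sample canonical correlations via the SVD of Sxx^{-1/2} Sxy Syy^{-1/2}
# (the 1/N normalization of the Gram matrices cancels in this product).
Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
rho = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)

# Cosines of the principal angles between span(Xc) and span(Yc) agree with rho.
cosines = np.cos(subspace_angles(Xc, Yc))
print(rho)                      # canonical correlations, descending
print(np.sort(cosines)[::-1])   # same values
```

`subspace_angles` orthonormalizes its inputs internally, which is also why it is better conditioned for nearly coincident subspaces, as noted above.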
== Whitening and probabilistic canonical correlation analysis == CCA can also be viewed as a special whitening transformation where the random vectors X {\displaystyle X} and Y {\displaystyle Y} are simultaneously transformed in such a way that the cross-correlation between the whitened vectors X C C A {\displaystyle X^{CCA}} and Y C C A {\displaystyle Y^{CCA}} is diagonal. The canonical correlations are then interpreted as regression coefficients linking X C C A {\displaystyle X^{CCA}} and Y C C A {\displaystyle Y^{CCA}} and may also be negative. The regression view of CCA also provides a way to construct a latent variable probabilistic generative model for CCA, with uncorrelated hidden variables representing shared and non-shared variability. == See also == Generalized canonical correlation RV coefficient Angles between flats Principal component analysis Linear discriminant analysis Regularized canonical correlation analysis Singular value decomposition Partial least squares regression == References == == External links == Discriminant Correlation Analysis (DCA) (MATLAB) Hardoon, D. R.; Szedmak, S.; Shawe-Taylor, J. (2004). "Canonical Correlation Analysis: An Overview with Application to Learning Methods". Neural Computation. 16 (12): 2639–2664. CiteSeerX 10.1.1.14.6452. doi:10.1162/0899766042321814. PMID 15516276. S2CID 202473. A note on the ordinal canonical-correlation analysis of two sets of ranking scores (Also provides a FORTRAN program)- in Journal of Quantitative Economics 7(2), 2009, pp. 173–199 Representation-Constrained Canonical Correlation Analysis: A Hybridization of Canonical Correlation and Principal Component Analyses (Also provides a FORTRAN program)- in Journal of Applied Economic Sciences 4(1), 2009, pp. 115–124
Wikipedia:Regularized least squares#0
Regularized least squares (RLS) is a family of methods for solving the least-squares problem while using regularization to further constrain the resulting solution. RLS is used for two main reasons. The first comes up when the number of variables in the linear system exceeds the number of observations. In such settings, the ordinary least-squares problem is ill-posed: the associated optimization problem has infinitely many solutions, so no unique estimate can be obtained. RLS allows the introduction of further constraints that uniquely determine the solution. The second reason for using RLS arises when the learned model suffers from poor generalization. RLS can be used in such cases to improve the generalizability of the model by constraining it at training time. This constraint can either force the solution to be "sparse" in some way or to reflect other prior knowledge about the problem such as information about correlations between features. A Bayesian understanding of this can be reached by showing that RLS methods are often equivalent to priors on the solution to the least-squares problem. == General formulation == Consider a learning setting given by a probabilistic space ( X × Y , ρ ( X , Y ) ) {\displaystyle (X\times Y,\rho (X,Y))} , Y ∈ R {\displaystyle Y\in R} . Let S = { x i , y i } i = 1 n {\displaystyle S=\{x_{i},y_{i}\}_{i=1}^{n}} denote a training set of n {\displaystyle n} pairs i.i.d. with respect to the joint distribution ρ {\displaystyle \rho } . Let V : Y × R → [ 0 ; ∞ ) {\displaystyle V:Y\times R\to [0;\infty )} be a loss function. Define F {\displaystyle F} as the space of functions such that the expected risk ε ( f ) = ∫ V ( y , f ( x ) ) d ρ ( x , y ) {\displaystyle \varepsilon (f)=\int V(y,f(x))\,d\rho (x,y)} is well defined. 
The main goal is to minimize the expected risk: inf f ∈ F ε ( f ) {\displaystyle \inf _{f\in F}\varepsilon (f)} Since the problem cannot be solved exactly there is a need to specify how to measure the quality of a solution. A good learning algorithm should provide an estimator with a small risk. As the joint distribution ρ {\displaystyle \rho } is typically unknown, the empirical risk is taken. For regularized least squares the square loss function is introduced: ε ( f ) = 1 n ∑ i = 1 n V ( y i , f ( x i ) ) = 1 n ∑ i = 1 n ( y i − f ( x i ) ) 2 {\displaystyle \varepsilon (f)={\frac {1}{n}}\sum _{i=1}^{n}V(y_{i},f(x_{i}))={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} However, if the functions are from a relatively unconstrained space, such as the set of square-integrable functions on X {\displaystyle X} , this approach may overfit the training data, and lead to poor generalization. Thus, it should somehow constrain or penalize the complexity of the function f {\displaystyle f} . In RLS, this is accomplished by choosing functions from a reproducing kernel Hilbert space (RKHS) H {\displaystyle {\mathcal {H}}} , and adding a regularization term to the objective function, proportional to the norm of the function in H {\displaystyle {\mathcal {H}}} : inf f ∈ F ε ( f ) + λ R ( f ) , λ > 0 {\displaystyle \inf _{f\in F}\varepsilon (f)+\lambda R(f),\lambda >0} == Kernel formulation == === Definition of RKHS === A RKHS can be defined by a symmetric positive-definite kernel function K ( x , z ) {\displaystyle K(x,z)} with the reproducing property: ⟨ K x , f ⟩ H = f ( x ) , {\displaystyle \langle K_{x},f\rangle _{\mathcal {H}}=f(x),} where K x ( z ) = K ( x , z ) {\displaystyle K_{x}(z)=K(x,z)} . 
The RKHS for a kernel K {\displaystyle K} consists of the completion of the space of functions spanned by { K x ∣ x ∈ X } {\displaystyle \left\{K_{x}\mid x\in X\right\}} : f ( x ) = ∑ i = 1 n α i K x i ( x ) , f ∈ H {\textstyle f(x)=\sum _{i=1}^{n}\alpha _{i}K_{x_{i}}(x),\,f\in {\mathcal {H}}} , where all α i {\displaystyle \alpha _{i}} are real numbers. Some commonly used kernels include the linear kernel, inducing the space of linear functions: K ( x , z ) = x T z , {\displaystyle K(x,z)=x^{\mathsf {T}}z,} the polynomial kernel, inducing the space of polynomial functions of order d {\displaystyle d} : K ( x , z ) = ( x T z + 1 ) d , {\displaystyle K(x,z)=\left(x^{\mathsf {T}}z+1\right)^{d},} and the Gaussian kernel: K ( x , z ) = e − ‖ x − z ‖ 2 / σ 2 . {\displaystyle K(x,z)=e^{-{\left\|x-z\right\|^{2}}/{\sigma ^{2}}}.} Note that for an arbitrary loss function V {\displaystyle V} , this approach defines a general class of algorithms named Tikhonov regularization. For instance, using the hinge loss leads to the support vector machine algorithm, and using the epsilon-insensitive loss leads to support vector regression. === Arbitrary kernel === The representer theorem guarantees that the solution can be written as: f ( x ) = ∑ i = 1 n c i K ( x i , x ) {\displaystyle f(x)=\sum _{i=1}^{n}c_{i}K(x_{i},x)} for some c ∈ R n {\displaystyle c\in \mathbb {R} ^{n}} . The minimization problem can be expressed as: min c ∈ R n 1 n ‖ Y − K c ‖ R n 2 + λ ‖ f ‖ H 2 , {\displaystyle \min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda \left\|f\right\|_{H}^{2},} where, with some abuse of notation, the i , j {\displaystyle i,j} entry of kernel matrix K {\displaystyle K} (as opposed to kernel function K ( ⋅ , ⋅ ) {\displaystyle K(\cdot ,\cdot )} ) is K ( x i , x j ) {\displaystyle K(x_{i},x_{j})} . 
For such a function, ‖ f ‖ H 2 = ⟨ f , f ⟩ H = ⟨ ∑ i = 1 n c i K ( x i , ⋅ ) , ∑ j = 1 n c j K ( x j , ⋅ ) ⟩ H = ∑ i = 1 n ∑ j = 1 n c i c j ⟨ K ( x i , ⋅ ) , K ( x j , ⋅ ) ⟩ H = ∑ i = 1 n ∑ j = 1 n c i c j K ( x i , x j ) = c T K c , {\displaystyle {\begin{aligned}\left\|f\right\|_{H}^{2}&=\langle f,f\rangle _{H}\\[1ex]&=\left\langle \sum _{i=1}^{n}c_{i}K(x_{i},\cdot ),\sum _{j=1}^{n}c_{j}K(x_{j},\cdot )\right\rangle _{H}\\[1ex]&=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}\left\langle K(x_{i},\cdot ),K(x_{j},\cdot )\right\rangle _{H}\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})\\&=c^{\mathsf {T}}Kc,\end{aligned}}} The following minimization problem can be obtained: min c ∈ R n 1 n ‖ Y − K c ‖ R n 2 + λ c T K c . {\displaystyle \min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}Kc.} As the sum of convex functions is convex, the solution is unique and its minimum can be found by setting the gradient with respect to c {\displaystyle c} to 0 {\displaystyle 0} : − 1 n K ( Y − K c ) + λ K c = 0 ⇒ K ( K + λ n I ) c = K Y ⇒ c = ( K + λ n I ) − 1 Y , {\displaystyle -{\frac {1}{n}}K\left(Y-Kc\right)+\lambda Kc=0\Rightarrow K\left(K+\lambda nI\right)c=KY\Rightarrow c=\left(K+\lambda nI\right)^{-1}Y,} where c ∈ R n . {\displaystyle c\in \mathbb {R} ^{n}.} ==== Complexity ==== The complexity of training is basically the cost of computing the kernel matrix plus the cost of solving the linear system which is roughly O ( n 3 ) {\displaystyle O(n^{3})} . The computation of the kernel matrix for the linear or Gaussian kernel is O ( n 2 D ) {\displaystyle O(n^{2}D)} . The complexity of testing is O ( n ) {\displaystyle O(n)} . 
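The closed-form coefficients c = (K + λnI)⁻¹Y translate directly into code. The following NumPy sketch fits and evaluates a kernel RLS model with a Gaussian kernel; the kernel choice, bandwidth σ, and function names are illustrative assumptions, not taken from the text:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """K[i, j] = exp(-||a_i - b_j||^2 / sigma^2), the Gaussian kernel from the text."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma**2)

def fit_kernel_rls(X, Y, lam, sigma=1.0):
    """Solve (K + lambda*n*I) c = Y for the coefficient vector c."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), Y)

def predict_kernel_rls(X_train, c, X_new, sigma=1.0):
    """f(x_*) = sum_i c_i K(x_i, x_*)."""
    return gaussian_kernel(X_new, X_train, sigma) @ c
```

Training costs O(n³) for the solve plus the kernel-matrix construction, matching the complexity analysis above; each prediction costs O(n) kernel evaluations.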
=== Prediction === The prediction at a new test point x ∗ {\displaystyle x_{*}} is: f ( x ∗ ) = ∑ i = 1 n c i K ( x i , x ∗ ) = K ( X , X ∗ ) T c {\displaystyle f(x_{*})=\sum _{i=1}^{n}c_{i}K(x_{i},x_{*})=K(X,X_{*})^{\mathsf {T}}c} === Linear kernel === For convenience a vector notation is introduced. Let X {\displaystyle X} be an n × d {\displaystyle n\times d} matrix, where the rows are input vectors, and Y {\displaystyle Y} an n × 1 {\displaystyle n\times 1} vector whose entries are the corresponding outputs. In terms of vectors, the kernel matrix can be written as K = X X T {\displaystyle K=XX^{\mathsf {T}}} . The learning function can be written as: f ( x ∗ ) = K x ∗ c = x ∗ T X T c = x ∗ T w {\displaystyle f(x_{*})=K_{x_{*}}c=x_{*}^{\mathsf {T}}X^{\mathsf {T}}c=x_{*}^{\mathsf {T}}w} Here we define w = X T c , w ∈ R d {\displaystyle w=X^{\mathsf {T}}c,w\in \mathbb {R} ^{d}} . The objective function can be rewritten as: 1 n ‖ Y − K c ‖ R n 2 + λ c T K c = 1 n ‖ y − X X T c ‖ R n 2 + λ c T X X T c = 1 n ‖ y − X w ‖ R n 2 + λ ‖ w ‖ R d 2 {\displaystyle {\begin{aligned}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}Kc&={\frac {1}{n}}\left\|y-XX^{\mathsf {T}}c\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}XX^{\mathsf {T}}c\\[1ex]&={\frac {1}{n}}\left\|y-Xw\right\|_{\mathbb {R} ^{n}}^{2}+\lambda \left\|w\right\|_{\mathbb {R} ^{d}}^{2}\end{aligned}}} The first term is the objective function from ordinary least squares (OLS) regression, corresponding to the residual sum of squares. The second term is a regularization term, not present in OLS, which penalizes large w {\displaystyle w} values. As a smooth, finite-dimensional problem is considered, it is possible to apply standard calculus tools.
In order to minimize the objective function, the gradient is calculated with respect to w {\displaystyle w} and set to zero: X T X w − X T y + λ n w = 0 {\displaystyle X^{\mathsf {T}}Xw-X^{\mathsf {T}}y+\lambda nw=0} w = ( X T X + λ n I ) − 1 X T y {\displaystyle w=\left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}X^{\mathsf {T}}y} This solution closely resembles that of standard linear regression, with an extra term λ n I {\displaystyle \lambda nI} . If the assumptions of OLS regression hold, the solution w = ( X T X ) − 1 X T y {\displaystyle w=\left(X^{\mathsf {T}}X\right)^{-1}X^{\mathsf {T}}y} , with λ = 0 {\displaystyle \lambda =0} , is an unbiased estimator, and is the minimum-variance linear unbiased estimator, according to the Gauss–Markov theorem. The term λ n I {\displaystyle \lambda nI} therefore leads to a biased solution; however, it also tends to reduce variance. This is easy to see, as the covariance matrix of the w {\displaystyle w} -values is proportional to ( X T X + λ n I ) − 1 {\displaystyle \left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}} , and therefore large values of λ {\displaystyle \lambda } will lead to lower variance. Therefore, manipulating λ {\displaystyle \lambda } corresponds to trading-off bias and variance. For problems with high-variance w {\displaystyle w} estimates, such as cases with relatively small n {\displaystyle n} or with correlated regressors, the optimal prediction accuracy may be obtained by using a nonzero λ {\displaystyle \lambda } , and thus introducing some bias to reduce variance. Furthermore, it is not uncommon in machine learning to have cases where n < d {\displaystyle n<d} , in which case X T X {\displaystyle X^{\mathsf {T}}X} is rank-deficient, and a nonzero λ {\displaystyle \lambda } is necessary to compute ( X T X + λ n I ) − 1 {\displaystyle \left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}} .
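The closed-form solution and the rank-deficient case can be checked numerically. This is an illustrative sketch (the function name and data are assumptions, not from the text):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """w = (X^T X + lambda*n*I)^{-1} X^T y, the regularized solution above."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)

# n > d with lam = 0: the formula reduces to the ordinary least-squares estimator.
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# n < d: X^T X is rank-deficient, so OLS is not unique,
# but any lam > 0 makes X^T X + lam*n*I invertible.
X_wide = rng.standard_normal((3, 5))
y_wide = rng.standard_normal(3)
w_reg = ridge_fit(X_wide, y_wide, lam=0.1)
```

Increasing `lam` shrinks the estimate toward zero, trading bias for variance as described above.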
==== Complexity ==== The parameter λ {\displaystyle \lambda } controls the invertibility of the matrix X T X + λ n I {\displaystyle X^{\mathsf {T}}X+\lambda nI} . Several methods can be used to solve the above linear system, Cholesky decomposition being probably the method of choice, since the matrix X T X + λ n I {\displaystyle X^{\mathsf {T}}X+\lambda nI} is symmetric and positive definite. The complexity of this method is O ( n D 2 ) {\displaystyle O(nD^{2})} for training and O ( D ) {\displaystyle O(D)} for testing. The cost O ( n D 2 ) {\displaystyle O(nD^{2})} is essentially that of computing X T X {\displaystyle X^{\mathsf {T}}X} , whereas the inverse computation (or rather the solution of the linear system) is roughly O ( D 3 ) {\displaystyle O(D^{3})} . == Feature maps and Mercer's theorem == In this section it will be shown how to extend RLS to any kind of reproducing kernel K. Instead of a linear kernel, a feature map Φ : X → F {\displaystyle \Phi :X\to F} is considered for some Hilbert space F {\displaystyle F} , called the feature space. In this case the kernel is defined as: K ( x , x ′ ) = ⟨ Φ ( x ) , Φ ( x ′ ) ⟩ F . {\displaystyle K(x,x')=\langle \Phi (x),\Phi (x')\rangle _{F}.} The matrix X {\displaystyle X} is now replaced by the new data matrix Φ {\displaystyle \Phi } , where Φ i j = φ j ( x i ) {\displaystyle \Phi _{ij}=\varphi _{j}(x_{i})} , the j {\displaystyle j} -th component of φ ( x i ) {\displaystyle \varphi (x_{i})} . It means that for a given training set K = Φ Φ T {\displaystyle K=\Phi \Phi ^{\mathsf {T}}} . Thus, the objective function can be written as min c ∈ R n ‖ Y − Φ Φ T c ‖ R n 2 + λ c T Φ Φ T c . {\displaystyle \min _{c\in \mathbb {R} ^{n}}\left\|Y-\Phi \Phi ^{\mathsf {T}}c\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}\Phi \Phi ^{\mathsf {T}}c.} This approach is known as the kernel trick. This technique can significantly simplify the computational operations.
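The correspondence K(x, x′) = ⟨Φ(x), Φ(x′)⟩ can be checked on a concrete case. For the degree-2 polynomial kernel on R², one standard explicit feature map (used here purely for illustration) is φ(x) = (x₁², x₂², √2·x₁x₂, √2·x₁, √2·x₂, 1):

```python
import numpy as np

def poly_kernel(x, z):
    """Degree-2 polynomial kernel K(x, z) = (x^T z + 1)^2."""
    return (x @ z + 1.0) ** 2

def phi(x):
    """An explicit feature map for the degree-2 polynomial kernel on R^2,
    so that phi(x) @ phi(z) == poly_kernel(x, z)."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])
```

Expanding (x₁z₁ + x₂z₂ + 1)² term by term recovers exactly the inner product of the two feature vectors, which is the point of the kernel trick: the 6-dimensional map never needs to be formed explicitly.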
If F {\displaystyle F} is high dimensional, computing φ ( x i ) {\displaystyle \varphi (x_{i})} may be rather intensive. If the explicit form of the kernel function is known, we just need to compute and store the n × n {\displaystyle n\times n} kernel matrix K {\displaystyle K} . In fact, the Hilbert space F {\displaystyle F} need not be isomorphic to R m {\displaystyle \mathbb {R} ^{m}} , and can be infinite dimensional. This follows from Mercer's theorem, which states that a continuous, symmetric, positive definite kernel function can be expressed as K ( x , z ) = ∑ i = 1 ∞ σ i e i ( x ) e i ( z ) {\displaystyle K(x,z)=\sum _{i=1}^{\infty }\sigma _{i}e_{i}(x)e_{i}(z)} where e i ( x ) {\displaystyle e_{i}(x)} form an orthonormal basis for ℓ 2 ( X ) {\displaystyle \ell ^{2}(X)} , and σ i ∈ R {\displaystyle \sigma _{i}\in \mathbb {R} } . If a feature map φ ( x ) {\displaystyle \varphi (x)} is defined with components φ i ( x ) = σ i e i ( x ) {\displaystyle \varphi _{i}(x)={\sqrt {\sigma _{i}}}e_{i}(x)} , it follows that K ( x , z ) = ⟨ φ ( x ) , φ ( z ) ⟩ {\displaystyle K(x,z)=\langle \varphi (x),\varphi (z)\rangle } . This demonstrates that any kernel can be associated with a feature map, and that RLS generally consists of linear RLS performed in some possibly higher-dimensional feature space. While Mercer's theorem shows one feature map that can be associated with a kernel, in fact multiple feature maps can be associated with a given reproducing kernel. For instance, the map φ ( x ) = K x {\displaystyle \varphi (x)=K_{x}} satisfies the property K ( x , z ) = ⟨ φ ( x ) , φ ( z ) ⟩ {\displaystyle K(x,z)=\langle \varphi (x),\varphi (z)\rangle } for an arbitrary reproducing kernel. == Bayesian interpretation == Least squares can be viewed as a likelihood maximization under an assumption of normally distributed residuals. This is because the exponent of the Gaussian distribution is quadratic in the data, and so is the least-squares objective function.
In this framework, the regularization terms of RLS can be understood to be encoding priors on w {\displaystyle w} . For instance, Tikhonov regularization corresponds to a normally distributed prior on w {\displaystyle w} that is centered at 0. To see this, first note that the OLS objective is proportional to the negative log-likelihood function when each sampled y i {\displaystyle y^{i}} is normally distributed around w T ⋅ x i {\displaystyle w^{\mathsf {T}}\cdot x^{i}} . Then observe that a normal prior on w {\displaystyle w} centered at 0 has a log-probability of the form log ⁡ P ( w ) = q − α ∑ j = 1 d w j 2 {\displaystyle \log P(w)=q-\alpha \sum _{j=1}^{d}w_{j}^{2}} where q {\displaystyle q} and α {\displaystyle \alpha } are constants that depend on the variance of the prior and are independent of w {\displaystyle w} . Thus, minimizing the negative logarithm of the likelihood times the prior is equivalent to minimizing the sum of the OLS loss function and the ridge regression regularization term. This gives a more intuitive interpretation for why Tikhonov regularization leads to a unique solution to the least-squares problem: there are infinitely many vectors w {\displaystyle w} satisfying the constraints obtained from the data, but since we come to the problem with a prior belief that w {\displaystyle w} is normally distributed around the origin, we will end up choosing a solution with this constraint in mind. Other regularization methods correspond to different priors. See the list below for more details.
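The Gaussian-prior case can be written out as a short maximum a posteriori (MAP) derivation (a sketch; here σ² denotes the noise variance and α the prior scale, neither of which is fixed by the text):

```latex
\begin{aligned}
\hat{w}_{\mathrm{MAP}}
&= \operatorname*{arg\,max}_{w}\; \log P(Y \mid X, w) + \log P(w) \\
&= \operatorname*{arg\,max}_{w}\; -\frac{1}{2\sigma^{2}} \sum_{i=1}^{n} \bigl(y_{i} - w^{\mathsf{T}} x_{i}\bigr)^{2}
   \;-\; \alpha \sum_{j=1}^{d} w_{j}^{2} \;+\; \text{const} \\
&= \operatorname*{arg\,min}_{w}\; \frac{1}{n} \sum_{i=1}^{n} \bigl(y_{i} - w^{\mathsf{T}} x_{i}\bigr)^{2}
   \;+\; \lambda \lVert w \rVert_{2}^{2},
\qquad \lambda = \frac{2\sigma^{2}\alpha}{n},
\end{aligned}
```

so the MAP estimate under a Gaussian likelihood and a zero-centered Gaussian prior coincides with the Tikhonov-regularized solution, with λ determined by the ratio of noise variance to prior variance.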
== Specific examples == === Ridge regression (or Tikhonov regularization) === One particularly common choice for the penalty function R {\displaystyle R} is the squared ℓ 2 {\displaystyle \ell _{2}} norm, i.e., R ( w ) = ∑ j = 1 d w j 2 {\displaystyle R(w)=\sum _{j=1}^{d}w_{j}^{2}} and the solution is found as w ^ = argmin w ∈ R d 1 n ‖ Y − X w ‖ 2 2 + λ ∑ j = 1 d | w j | 2 {\displaystyle {\hat {w}}={\text{argmin}}_{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \sum _{j=1}^{d}\left|w_{j}\right|^{2}} The most common names for this are Tikhonov regularization and ridge regression. It admits a closed-form solution for w {\displaystyle w} : w ^ = ( 1 n X T X + λ I ) − 1 1 n X T Y = ( X T X + n λ I ) − 1 X T Y {\displaystyle {\hat {w}}=\left({\frac {1}{n}}X^{\mathsf {T}}X+\lambda I\right)^{-1}{\frac {1}{n}}X^{\mathsf {T}}Y=\left(X^{\mathsf {T}}X+n\lambda I\right)^{-1}X^{\mathsf {T}}Y} The name ridge regression alludes to the fact that the λ I {\displaystyle \lambda I} term adds positive entries along the diagonal "ridge" of the sample covariance matrix 1 n X T X {\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X} . When λ = 0 {\displaystyle \lambda =0} , i.e., in the case of ordinary least squares, the condition that d > n {\displaystyle d>n} causes the sample covariance matrix 1 n X T X {\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X} to not have full rank and so it cannot be inverted to yield a unique solution. This is why there can be an infinitude of solutions to the ordinary least squares problem when d > n {\displaystyle d>n} . However, when λ > 0 {\displaystyle \lambda >0} , i.e., when ridge regression is used, the addition of λ I {\displaystyle \lambda I} to the sample covariance matrix ensures that all of its eigenvalues will be strictly greater than 0. In other words, it becomes invertible, and the solution is then unique. Compared to ordinary least squares, ridge regression is not unbiased.
It accepts bias to reduce variance and the mean squared error. ==== Simplifications and automatic regularization ==== If we want to find w ^ {\displaystyle {\hat {w}}} for different values of the regularization coefficient λ {\displaystyle \lambda } (which we denote w ^ ( λ ) {\displaystyle {\hat {w}}(\lambda )} ) we may use the eigenvalue decomposition of the covariance matrix 1 n X T X = Q diag ( α 1 , … , α d ) Q T {\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X=Q{\text{diag}}(\alpha _{1},\ldots ,\alpha _{d})Q^{\mathsf {T}}} where the columns of Q ∈ R d × d {\displaystyle Q\in \mathbb {R} ^{d\times d}} are the eigenvectors of 1 n X T X {\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X} and α 1 , … , α d {\displaystyle \alpha _{1},\ldots ,\alpha _{d}} are its d {\displaystyle d} eigenvalues. The solution is then given by w ^ ( λ ) = Q diag − 1 ( α 1 + λ , … , α d + λ ) Z {\displaystyle {\hat {w}}(\lambda )=Q{\text{diag}}^{-1}(\alpha _{1}+\lambda ,\ldots ,\alpha _{d}+\lambda )Z} where Z = 1 n Q T X T Y = [ Z 1 , … , Z d ] T . {\displaystyle Z={\frac {1}{n}}Q^{\mathsf {T}}X^{\mathsf {T}}Y=[Z_{1},\ldots ,Z_{d}]^{\mathsf {T}}.} Using the above results, the algorithm for finding a maximum likelihood estimate of λ {\displaystyle \lambda } may be defined as follows: λ ← 1 n ∑ i = 1 d α i α i + λ [ 1 n ‖ Y − X w ^ ( λ ) ‖ 2 ‖ w ^ ( λ ) ‖ 2 + λ ] . {\displaystyle \lambda \leftarrow {\frac {1}{n}}\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}\left[{\frac {{\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}}{\|{\hat {w}}(\lambda )\|^{2}}}+\lambda \right].} This algorithm, for automatic (as opposed to heuristic) regularization, is obtained as a fixed-point solution in the maximum likelihood estimation of the parameters. Although guarantees of convergence are not provided, the examples indicate that a satisfactory solution may be obtained after a couple of iterations.
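The eigendecomposition-based solution and the fixed-point update above can be sketched in NumPy. This is an illustration only: the starting value and iteration count are arbitrary assumptions, and, as noted in the text, convergence of the fixed-point iteration is not guaranteed.

```python
import numpy as np

def ridge_eig(X, Y, lam):
    """w_hat(lambda) computed via the eigendecomposition of (1/n) X^T X."""
    n = X.shape[0]
    alphas, Q = np.linalg.eigh(X.T @ X / n)   # eigenvalues alpha_i, eigenvectors Q
    Z = Q.T @ (X.T @ Y) / n                   # Z = (1/n) Q^T X^T Y
    return Q @ (Z / (alphas + lam))           # Q diag^{-1}(alpha_i + lambda) Z

def auto_lambda(X, Y, lam=1.0, iters=30):
    """Fixed-point iteration for a maximum-likelihood lambda (a sketch)."""
    n, d = X.shape
    alphas, _ = np.linalg.eigh(X.T @ X / n)
    for _ in range(iters):
        w = ridge_eig(X, Y, lam)
        resid = np.sum((Y - X @ w) ** 2) / n  # (1/n) ||Y - X w_hat(lambda)||^2
        lam = np.sum(alphas / (alphas + lam)) / n * (resid / np.sum(w**2) + lam)
    return lam
```

Once the eigendecomposition is computed, each new value of λ costs only O(d) to evaluate, which is the point of this reformulation.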
The eigenvalue decomposition simplifies derivation of the algorithm and also simplifies the calculations: ‖ w ^ ( λ ) ‖ 2 = ∑ i = 1 d | Z i | 2 ( α i + λ ) 2 , {\displaystyle \|{\hat {w}}(\lambda )\|^{2}=\sum _{i=1}^{d}{\frac {|Z_{i}|^{2}}{(\alpha _{i}+\lambda )^{2}}},} 1 n ‖ Y − X w ^ ( λ ) ‖ 2 = ∑ i = 1 d | Z i | 2 α i + λ . {\displaystyle {\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}=\sum _{i=1}^{d}{\frac {|Z_{i}|^{2}}{\alpha _{i}+\lambda }}.} An alternative fixed-point algorithm, known as the Gull–McKay algorithm, λ ← 1 n ‖ Y − X w ^ ( λ ) ‖ 2 [ n ∑ i = 1 d α i α i + λ − 1 ] ‖ w ^ ( λ ) ‖ 2 {\displaystyle \lambda \leftarrow {\frac {{\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}}{\left[{\frac {n}{\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}}}-1\right]\|{\hat {w}}(\lambda )\|^{2}}}} usually has faster convergence, but may be used only if n > ∑ i = 1 d α i α i + λ {\displaystyle n>\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}} . Thus, while it can be used without problems for n > d {\displaystyle n>d} , caution is recommended for n < d {\displaystyle n<d} . === Lasso regression === The least absolute shrinkage and selection operator (LASSO) method is another popular choice. In lasso regression, the lasso penalty function R {\displaystyle R} is the ℓ 1 {\displaystyle \ell _{1}} norm, i.e. R ( w ) = ∑ j = 1 d | w j | {\displaystyle R(w)=\sum _{j=1}^{d}\left|w_{j}\right|} 1 n ‖ Y − X w ‖ 2 2 + λ ∑ j = 1 d | w j | → min w ∈ R d {\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \sum _{j=1}^{d}|w_{j}|\rightarrow \min _{w\in \mathbb {R} ^{d}}} Note that the lasso penalty function is convex but not strictly convex. Unlike Tikhonov regularization, this scheme does not have a convenient closed-form solution: instead, the solution is typically found using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least-angle regression algorithm.
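Among the convex-optimization approaches just mentioned, one of the simplest to sketch is proximal gradient descent (iterative soft-thresholding, ISTA). The implementation below is an illustration under assumed step-size and iteration choices, not an algorithm given in the text:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=5000):
    """Minimize (1/n)||y - Xw||_2^2 + lam*||w||_1 by iterative soft-thresholding."""
    n, d = X.shape
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ w - y) / n    # gradient of the squared-loss term
        w = soft_threshold(w - grad / L, lam / L)
    return w
```

The soft-thresholding step is what produces exact zeros in the solution, giving the sparsity that distinguishes the ℓ1 penalty from the ℓ2 penalty of ridge regression.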
An important difference between lasso regression and Tikhonov regularization is that lasso regression forces more entries of w {\displaystyle w} to actually equal 0 than they otherwise would. In contrast, while Tikhonov regularization forces entries of w {\displaystyle w} to be small, it does not force more of them to be 0 than they would be otherwise. Thus, LASSO regularization is more appropriate than Tikhonov regularization in cases in which we expect the number of non-zero entries of w {\displaystyle w} to be small, and Tikhonov regularization is more appropriate when we expect that entries of w {\displaystyle w} will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand. Besides feature selection described above, LASSO has some limitations. Ridge regression provides better accuracy in the case n > d {\displaystyle n>d} for highly correlated variables. In the other case, n < d {\displaystyle n<d} , LASSO selects at most n {\displaystyle n} variables. Moreover, LASSO tends to select some arbitrary variables from a group of highly correlated variables, so there is no grouping effect. === ℓ0 Penalization === 1 n ‖ Y − X w ‖ 2 2 + λ ‖ w ‖ 0 → min w ∈ R d {\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \left\|w\right\|_{0}\rightarrow \min _{w\in \mathbb {R} ^{d}}} The most extreme way to enforce sparsity is to say that the actual magnitude of the coefficients of w {\displaystyle w} does not matter; rather, the only thing that determines the complexity of w {\displaystyle w} is the number of non-zero entries. This corresponds to setting R ( w ) {\displaystyle R(w)} to be the ℓ 0 {\displaystyle \ell _{0}} norm of w {\displaystyle w} . This regularization function, while attractive for the sparsity that it guarantees, is very difficult to solve because doing so requires optimization of a function that is not even weakly convex.
Lasso regression is the minimal possible relaxation of ℓ 0 {\displaystyle \ell _{0}} penalization that yields a weakly convex optimization problem. === Elastic net === For any non-negative λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} the objective has the following form: 1 n ‖ Y − X w ‖ 2 2 + λ 1 ∑ j = 1 d | w j | + λ 2 ∑ j = 1 d | w j | 2 → min w ∈ R d {\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda _{1}\sum _{j=1}^{d}\left|w_{j}\right|+\lambda _{2}\sum _{j=1}^{d}\left|w_{j}\right|^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}} Let α = λ 1 λ 1 + λ 2 {\displaystyle \alpha ={\frac {\lambda _{1}}{\lambda _{1}+\lambda _{2}}}} , then the solution of the minimization problem is described as: 1 n ‖ Y − X w ‖ 2 2 → min w ∈ R d s.t. ( 1 − α ) ‖ w ‖ 1 + α ‖ w ‖ 2 ≤ t {\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}{\text{ s.t. }}(1-\alpha )\left\|w\right\|_{1}+\alpha \left\|w\right\|_{2}\leq t} for some t {\displaystyle t} . Consider ( 1 − α ) ‖ w ‖ 1 + α ‖ w ‖ 2 ≤ t {\displaystyle (1-\alpha )\left\|w\right\|_{1}+\alpha \left\|w\right\|_{2}\leq t} as an Elastic Net penalty function. When α = 1 {\displaystyle \alpha =1} , elastic net becomes ridge regression, whereas when α = 0 {\displaystyle \alpha =0} it becomes Lasso. For all α ∈ ( 0 , 1 ] {\displaystyle \forall \alpha \in (0,1]} the Elastic Net penalty function does not have a first derivative at 0, and it is strictly convex for all α > 0 {\displaystyle \forall \alpha >0} , combining the properties of both lasso regression and ridge regression. One of the main properties of the Elastic Net is that it can select groups of correlated variables.
The difference between weight vectors of samples x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} is given by: | w i ∗ ( λ 1 , λ 2 ) − w j ∗ ( λ 1 , λ 2 ) | ≤ ∑ i = 1 n | y i | λ 2 2 ( 1 − ρ i j ) , {\displaystyle \left|w_{i}^{*}(\lambda _{1},\lambda _{2})-w_{j}^{*}(\lambda _{1},\lambda _{2})\right|\leq {\frac {\sum _{i=1}^{n}|y_{i}|}{\lambda _{2}}}{\sqrt {2(1-\rho _{ij})}},} where ρ i j = x i T x j {\displaystyle \rho _{ij}=x_{i}^{\mathsf {T}}x_{j}} . If x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} are highly correlated ( ρ i j → 1 {\displaystyle \rho _{ij}\to 1} ), the weight vectors are very close. In the case of negatively correlated samples ( ρ i j → − 1 {\displaystyle \rho _{ij}\to -1} ) the samples − x j {\displaystyle -x_{j}} can be taken. To summarize, for highly correlated variables the weight vectors tend to be equal, up to a sign in the case of negatively correlated variables. == Partial list of RLS methods == The following is a list of possible choices of the regularization function R ( ⋅ ) {\displaystyle R(\cdot )} , along with the name for each one, the corresponding prior if there is a simple one, and ways for computing the solution to the resulting optimization problem. == See also == Least squares Regularization (mathematics) Generalization error, one of the reasons regularization is used. Tikhonov regularization Lasso regression Elastic net regularization Least-angle regression == References == == External links == http://www.stanford.edu/~hastie/TALKS/enet_talk.pdf Regularization and Variable Selection via the Elastic Net (presentation) Regularized Least Squares and Support Vector Machines (presentation) Regularized Least Squares (presentation)
Wikipedia:Reinhard Diestel#0
Reinhard Diestel (born 1959) is a German mathematician specializing in graph theory, including the interplay among graph minors, matroid theory, tree decomposition, and infinite graphs. He holds the chair of discrete mathematics at the University of Hamburg. == Education and career == Diestel has a Ph.D. from the University of Cambridge in England, completed in 1986. His dissertation, Simplicial Decompositions and Universal Graphs, was supervised by Béla Bollobás. He continued at Cambridge as a fellow of St. John's College, Cambridge until 1990. In 1994, he took a professorship at the Chemnitz University of Technology, and in 1999 he was given his current chair at the University of Hamburg. At Hamburg, his doctoral students have included Daniela Kühn and Maya Stein. == Books == Diestel's books include: Graph Decompositions: A Study in Infinite Graph Theory (Oxford University Press, 1990) Graph Theory (Graduate Texts in Mathematics 173, Springer, 1997; 6th ed., 2024). Originally published in German as Graphentheorie (1996), and translated into Chinese, Japanese, and Russian. Tangles: A Structural Approach to Artificial Intelligence in the Empirical Sciences (Cambridge University Press, 2024; arXiv:2006.01830) == References == == External links == Home page Reinhard Diestel publications indexed by Google Scholar Graph Theory home page including free online preview version
Wikipedia:Reinhold Baer#0
Reinhold Baer (22 July 1902 – 22 October 1979) was a German mathematician, known for his work in algebra. He introduced injective modules in 1940. He is the eponym of Baer rings, Baer groups, and Baer subplanes. == Biography == Baer studied mechanical engineering for a year at Leibniz University Hannover. He then went to study philosophy at Freiburg in 1921. While he was at Göttingen in 1922 he was influenced by Emmy Noether and Hellmuth Kneser. In 1924 he won a scholarship for specially gifted students. Baer wrote up his doctoral dissertation and it was published in Crelle's Journal in 1927. Baer accepted a post at Halle in 1928. There, he published Ernst Steinitz's "Algebraische Theorie der Körper" with Helmut Hasse, first published in Crelle's Journal in 1910. While Baer was with his wife in Austria, Adolf Hitler and the Nazis came into power. Both of Baer's parents were Jewish, and he was for this reason informed that his services at Halle were no longer required. Louis Mordell invited him to go to Manchester and Baer accepted. Baer stayed at Princeton University and was a visiting scholar at the nearby Institute for Advanced Study from 1935 to 1937. For a short while he lived in North Carolina. From 1938 to 1956 he worked at the University of Illinois at Urbana-Champaign. He returned to Germany in 1956. According to biographer K. W. Gruenberg, The rapid development of lattice theory in the mid-thirties suggested that projective geometry should be viewed as a special kind of lattice, the lattice of all subspaces of a vector space... [Linear Algebra and Projective Geometry (1952)] is an account of the representation of vector spaces over division rings, of projectivities by semi-linear transformations and of dualities by semi-bilinear forms. He died of heart failure on 22 October in 1979. In 2016 the Reinhold Baer Prize for the best Ph.D. thesis in group theory was set up in his honour. 
== Bibliography == 1934: "Erweiterung von Gruppen und ihren Isomorphismen", Mathematische Zeitschrift 38(1): 375–416 (German) doi:10.1007/BF01170643 MR1545456 1940: "Nilpotent groups and their generalizations", Transactions of the American Mathematical Society 47: 393–434 MR0002121 1944: "The higher commutator subgroups of a group", Bulletin of the American Mathematical Society 50: 143–160 MR0009954 1945: "Representations of groups as quotient groups. II. Minimal central chains of a group", Transactions of the American Mathematical Society 58: 348–389 MR0015107 1945: "Representations of groups as quotient groups. III. Invariants of classes of related representations", Transactions of the American Mathematical Society 58: 390–419 MR0015108 == See also == Capable group Dedekind group Retract (group theory) Radical of a ring Semiprime ring Nielsen-Schreier theorem == References == O. H. Kegel (1979) "Reinhold Baer (1902 — 1979)", Mathematical Intelligencer 2:181,2. == External links == Reinhold Baer at the Mathematics Genealogy Project K.W. Gruenberg & Derek Robinson (2003) The Mathematical Legacy of Reinhold Baer, Illinois Journal of Mathematics 47(1-2) from Project Euclid. Author profile in the database zbMATH Baer Family's Schedule of 1940 US Census. Reproduction of a talk given by Baer on his last lecture in 1967, before his retirement from the University of Frankfurt - here is a translation.
Wikipedia:Reisner Papyrus#0
The Reisner Papyri date to the reign of Senusret I, who was king of ancient Egypt in the 19th century BCE. The documents were discovered by G.A. Reisner during excavations in 1901–04 in Naga ed-Deir in southern Egypt. A total of four papyrus rolls were found in a wooden coffin in a tomb. The Reisner I Papyrus is about 3.5 meters long and 31.6 cm wide in total. It consists of nine separate sheets and includes records of building construction with numbers of workers needed, carpentry workshops, and dockyard workshops with lists of tools. Some segments contain calculations used in construction. The sections of the document were given letter designations by W.K. Simpson. Sections G, H, I, J and K contain records of the construction of a building, usually thought to be a temple. Section O is a record of workers' compensation. The records span 72 days of work. The Reisner II Papyrus: the Accounts of the Dockyard Workshop at This in the Reign of Sesostris I was published by W.K. Simpson in 1965. This papyrus contains accounts dating to years 15–18 of Senusret I. There are three administrative orders from a vizier. The Reisner III Papyrus: the Records of a Building Project in the Early Twelfth Dynasty was published by W. K. Simpson in 1969 for the Boston Museum of Fine Arts. Further research at this point indicated that the papyri may have come from a slightly earlier period. The Reisner IV Papyrus: the Personnel Accounts of the Early Twelfth Dynasty was published by W.K. Simpson in 1986. == Mathematical texts == Several sections contain tables with mathematical content. === Papyrus Reisner I, Section G === Section G consists of 19 lines of text. In the first line the column headings are given: length (3w), width (wsx), thickness or depth (mDwt), units, product/volume (sty), and in the last column the calculations of the number of workers needed for the work of that day. === Papyrus Reisner I, Section H === The format of the table in section H is similar to that of section G.
In this document, however, only the column heading product/volume is used, and there is no column recording the number of workers required. === Papyrus Reisner I, Section I === Section I closely resembles section H. Columns recording the length, width, height and product/volume are presented. In this case there are no column headings written down by the scribe. The text is damaged in places but can be reconstructed. The units are cubits except where the scribe mentions palms. The square brackets indicate added or reconstructed text. == Difficulties with interpretation == Gillings and other scholars accepted 100-year-old views of this document, with several of the views being incomplete and misleading. Two of the documents, reported in Tables 22.2 and 22.3B, detail a division by 10 method, a method that also appears in the Rhind Mathematical Papyrus. Labor efficiencies were monitored by applying this method. For example, how deep did 10 workmen dig in one day, as calculated in the Reisner Papyrus, and by Ahmes 150 years later? In addition, the methods used in the Reisner and RMP to convert vulgar fractions to unit fraction series look similar to the conversion methods used in the Egyptian Mathematical Leather Roll. Gillings repeated a common and incomplete view of the Reisner Papyrus. He analyzed lines G10, from table 22.3B, and line 17 from Table 22.2 on page 221, in the "Mathematics in the Time of the Pharaohs", citing these Reisner Papyrus facts: divide 39 by 10 = 4, a poor approximation to the correct value, reported Gillings. Gillings fairly reported that the scribe should have stated the problem and data as: 39/10 = (30 + 9)/10 = 3 + 1/2 + 1/3 + 1/15 Yet all the other division by 10 problems and answers were correctly stated, a point that Gillings did not stress. Table 22.2 data described the work done in the Eastern Chapel.
Additional raw data was listed on lines G5, G6/H32, G14, G15, G16, G17/H33 and G18/H34, as follows:

12/10 = 1 + 1/5 (G5)
10/10 = 1 (G6 & H32)
8/10 = 1/2 + 1/4 + 1/20 (G14)
48/10 = 4 + 1/2 + 1/4 + 1/20 (G15)
16/10 = 1 + 1/2 + 1/10 (G16)
64/10 = 6 + 1/4 + 1/10 + 1/20 (G17 & H33)
36/10 = 3 + 1/2 + 1/10 (G18 & H34)

Chace and Shute had noted the Reisner Papyrus division by 10 method, also applied in the RMP. Neither Chace nor Shute, however, clearly cited the quotients and remainders that were used by Ahmes. Other additive scholars have also muddled the reading of the first six problems of the Rhind Mathematical Papyrus, missing its use of quotients and remainders. Gillings, Chace and Shute apparently had not analyzed the RMP data in a broader context, and reported its older structure, thereby missing a major fragment of Akhmim Wooden Tablet and Reisner Papyrus remainder arithmetic. That is, Gillings' citations of the Reisner and RMP data in "Mathematics in the Time of the Pharaohs" only scratched the surface of scribal arithmetic. Had scholars dug a little deeper, they might have found, 80 years ago, other reasons for the Reisner Papyrus 39/10 error. The Reisner Papyrus error may have been noted by Gillings as using quotients (Q) and remainders (R). Ahmes used quotients and remainders in the RMP's first six problems. Gillings may have forgotten to summarize his findings in a rigorous manner, showing that several Middle Kingdom texts had used quotients and remainders. Seen in a broader sense the Reisner Papyrus data should be noted as: 39/10 = (Q' + R)/10, with Q' = Q*10, Q = 3 and R = 9, such that: 39/10 = 3 + 9/10 = 3 + 1/2 + 1/3 + 1/15 with 9/10 being converted to a unit fraction series following rules set down in the AWT, and followed in the RMP and other texts. Confirmation of the scribal remainder arithmetic is found in other hieratic texts. The most important text is the Akhmim Wooden Tablet.
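The quotient-and-remainder reading described above is easy to check by machine. The following Python sketch uses modern notation and a greedy (Fibonacci–Sylvester) unit-fraction expansion purely as an illustrative stand-in, not as a claim about the scribe's actual procedure; it happens to reproduce the series recorded for 39/10 and for line G14:

```python
from fractions import Fraction

def quotient_remainder(n, d=10):
    """Split n/d as Q + R/d, the form n/10 = Q + R/10 discussed above."""
    q, r = divmod(n, d)
    return q, Fraction(r, d)

def unit_fractions(x):
    """Greedy (Fibonacci-Sylvester) expansion of a proper fraction
    into a sum of distinct unit fractions."""
    parts = []
    while x > 0:
        u = -(-x.denominator // x.numerator)  # ceil(1/x)
        parts.append(Fraction(1, u))
        x -= Fraction(1, u)
    return parts

# 39/10 = 3 + 9/10, and 9/10 expands as 1/2 + 1/3 + 1/15.
q, r = quotient_remainder(39)
assert (q, r) == (3, Fraction(9, 10))
assert unit_fractions(r) == [Fraction(1, 2), Fraction(1, 3), Fraction(1, 15)]

# 8/10 (line G14) expands as 1/2 + 1/4 + 1/20.
assert unit_fractions(Fraction(8, 10)) == [Fraction(1, 2), Fraction(1, 4), Fraction(1, 20)]
```

Not every recorded entry matches the greedy expansion — 64/10, for instance, does not — so the code only illustrates the quotient/remainder split, not the scribe's choice of series.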
The AWT defines scribal remainder arithmetic in terms of another context, a hekat (volume unit). Oddly, Gillings did not cite AWT data in "Mathematics in the Time of the Pharaohs". Gillings and the earlier 1920s scholars had missed a major opportunity to point out a multiple use of scribal remainder arithmetic built upon quotients and remainders. The modern-looking remainder arithmetic was later found by others who took a broader view of the 39/10 error, as corrected by the actual Eastern Chapel data. Gillings and the academic community therefore had inadvertently omitted a critically important discussion of fragments of remainder arithmetic. Remainder arithmetic, as used in many ancient cultures to solve astronomy and time problems, is one of several plausible historical division methods that may have allowed a full restoration of scribal division around 1906. In summary, the Reisner Papyri were built upon a method described in the Akhmim Wooden Tablet, and later followed by Ahmes writing the RMP. The Reisner calculations apparently follow our modern Occam's Razor rule, that the simplest method was the historical method; in this case remainder arithmetic, such that: n/10 = Q + R/10, where Q was a quotient and R was a remainder. The Reisner, following this Occam's Razor rule, says that 10 workmen units were used to divide raw data using a method that was defined in the text, a method that also begins the Rhind Mathematical Papyrus, as noted in its first six problems. == See also == List of ancient Egyptian papyri == References == Chace, Arnold Buffum. 1927–1929. The Rhind Mathematical Papyrus: Free Translation and Commentary with Selected Photographs, Translations, Transliterations and Literal Translations. Classics in Mathematics Education 8. 2 vols. Oberlin: Mathematical Association of America. (Reprinted Reston: National Council of Teachers of Mathematics, 1979).
ISBN 0-87353-133-7 Gillings, Richard J., "Mathematics in the Time of the Pharaohs", Dover, New York, 1971, ISBN 0-486-24315-X Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4 == External links == https://web.archive.org/web/20100117021426/http://ahmespapyrus.blogspot.com/2009/01/ahmes-papyrus-new-and-old.html http://mathworld.wolfram.com/AkhmimWoodenTablet.html "Rhind Papyrus". MathWorld–A Wolfram Web Resource. http://egyptianmath.blogspot.com
Wikipedia:Relation (mathematics)#0
In mathematics, a relation denotes some kind of relationship between two objects in a set, which may or may not hold. As an example, "is less than" is a relation on the set of natural numbers; it holds, for instance, between the values 1 and 3 (denoted as 1 < 3), and likewise between 3 and 4 (denoted as 3 < 4), but not between the values 3 and 1 nor between 4 and 4, that is, 3 < 1 and 4 < 4 both evaluate to false. As another example, "is sister of" is a relation on the set of all people; it holds, e.g., between Marie Curie and Bronisława Dłuska, and likewise vice versa. Set members may not be in relation "to a certain degree" – either they are in relation or they are not. Formally, a relation R over a set X can be seen as a set of ordered pairs (x,y) of members of X. The relation R holds between x and y if (x,y) is a member of R. For example, the relation "is less than" on the natural numbers is an infinite set Rless of pairs of natural numbers that contains both (1,3) and (3,4), but neither (3,1) nor (4,4). The relation "is a nontrivial divisor of" on the set of one-digit natural numbers is sufficiently small to be shown here: Rdv = { (2,4), (2,6), (2,8), (3,6), (3,9), (4,8) }; for example 2 is a nontrivial divisor of 8, but not vice versa, hence (2,8) ∈ Rdv, but (8,2) ∉ Rdv. If R is a relation that holds for x and y, one often writes xRy. For most common relations in mathematics, special symbols are introduced, like "<" for "is less than", and "|" for "is a nontrivial divisor of", and, most popular of all, "=" for "is equal to". For example, "1 < 3", "1 is less than 3", and "(1,3) ∈ Rless" all mean the same; some authors also write "(1,3) ∈ (<)". Various properties of relations are investigated. A relation R is reflexive if xRx holds for all x, and irreflexive if xRx holds for no x. It is symmetric if xRy always implies yRx, and asymmetric if xRy implies that yRx is impossible. It is transitive if xRy and yRz always imply xRz.
For example, "is less than" is irreflexive, asymmetric, and transitive, but neither reflexive nor symmetric. "is sister of" is transitive, but neither reflexive (e.g. Pierre Curie is not a sister of himself), nor symmetric, nor asymmetric; whether it is irreflexive may be a matter of definition (is every woman a sister of herself?). "is ancestor of" is transitive, while "is parent of" is not. Mathematical theorems are known about combinations of relation properties, such as "a transitive relation is irreflexive if, and only if, it is asymmetric". Of particular importance are relations that satisfy certain combinations of properties. A partial order is a relation that is reflexive, antisymmetric, and transitive; an equivalence relation is a relation that is reflexive, symmetric, and transitive; a function is a relation that is right-unique and left-total (see below). Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, leading to the algebra of sets. Furthermore, the calculus of relations includes the operations of taking the converse and composing relations. The above concept of relation has been generalized to admit relations between members of two different sets (heterogeneous relation, like "lies on" between the set of all points and that of all lines in geometry), relations between three or more sets (finitary relation, like "person x lives in town y at time z"), and relations between classes (like "is an element of" on the class of all sets, see Binary relation § Sets versus classes). == Definition == Given a set X, a relation R over X is a set of ordered pairs of elements from X, formally: R ⊆ { (x,y) | x, y ∈ X }. The statement (x,y) ∈ R reads "x is R-related to y" and is written in infix notation as xRy. The order of the elements is important; if x ≠ y then yRx can be true or false independently of xRy. For example, 3 divides 9, but 9 does not divide 3.
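The definition translates directly into code: a finite relation is literally a set of ordered pairs, and xRy is a membership test. A minimal Python sketch, using the nontrivial-divisor relation Rdv from the introduction:

```python
# "is a nontrivial divisor of" on the one-digit natural numbers,
# written out as an explicit set of ordered pairs.
Rdv = {(2, 4), (2, 6), (2, 8), (3, 6), (3, 9), (4, 8)}

def holds(R, x, y):
    """xRy holds exactly when the pair (x, y) is a member of R."""
    return (x, y) in R

assert holds(Rdv, 2, 8)      # 2 is a nontrivial divisor of 8
assert not holds(Rdv, 8, 2)  # order matters: (8,2) is not in Rdv
assert not holds(Rdv, 3, 3)  # 3 is not a nontrivial divisor of itself
```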
== Representation of relations == A relation R on a finite set X may be represented as: Directed graph: Each member of X corresponds to a vertex; a directed edge from x to y exists if and only if (x,y) ∈ R. Boolean matrix: The members of X are arranged in some fixed sequence x1, ..., xn; the matrix has dimensions n × n, with the element in line i, column j, being 1 if (xi,xj) ∈ R, and 0 otherwise. 2D-plot: As a generalization of a Boolean matrix, a relation on the (infinite) set R of real numbers can be represented as a two-dimensional geometric figure: using Cartesian coordinates, draw a point at (x,y) whenever (x,y) ∈ R. A transitive relation R on a finite set X may also be represented as a Hasse diagram: Each member of X corresponds to a vertex; directed edges are drawn such that a directed path from x to y exists if and only if (x,y) ∈ R. Compared to a directed-graph representation, a Hasse diagram needs fewer edges, leading to a less tangled image. Since the relation "a directed path exists from x to y" is transitive, only transitive relations can be represented in Hasse diagrams. Usually the diagram is laid out such that all edges point in an upward direction, and the arrows are omitted. For example, on the set of all divisors of 12, define the relation Rdiv by x Rdiv y if x is a divisor of y and x ≠ y. Formally, X = { 1, 2, 3, 4, 6, 12 } and Rdiv = { (1,2), (1,3), (1,4), (1,6), (1,12), (2,4), (2,6), (2,12), (3,6), (3,12), (4,12), (6,12) }. The representation of Rdiv as a Boolean matrix is shown in the middle table; the representation both as a Hasse diagram and as a directed graph is shown in the left picture. The following are equivalent: x Rdiv y is true. (x,y) ∈ Rdiv. A path from x to y exists in the Hasse diagram representing Rdiv. An edge from x to y exists in the directed graph representing Rdiv. In the Boolean matrix representing Rdiv, the element in line x, column y is 1. As another example, define the relation Rel on R by x Rel y if x2 + xy + y2 = 1.
The representation of Rel as a 2D-plot yields an ellipse; see the right picture. Since R is not finite, neither a directed graph, nor a finite Boolean matrix, nor a Hasse diagram can be used to depict Rel. == Properties of relations == Some important properties that a relation R over a set X may have are: Reflexive for all x ∈ X, xRx. For example, ≥ is a reflexive relation but > is not. Irreflexive (or strict) for all x ∈ X, not xRx. For example, > is an irreflexive relation, but ≥ is not. The previous two alternatives are not exhaustive; e.g., the red relation y = x2 given in the diagram below is neither irreflexive nor reflexive, since it contains the pair (0,0), but not (2,2), respectively. Symmetric for all x, y ∈ X, if xRy then yRx. For example, "is a blood relative of" is a symmetric relation, because x is a blood relative of y if and only if y is a blood relative of x. Antisymmetric for all x, y ∈ X, if xRy and yRx then x = y. For example, ≥ is an antisymmetric relation; so is >, but vacuously (the condition in the definition is always false). Asymmetric for all x, y ∈ X, if xRy then not yRx. A relation is asymmetric if and only if it is both antisymmetric and irreflexive. For example, > is an asymmetric relation, but ≥ is not. Again, the previous three alternatives are far from being exhaustive; as an example over the natural numbers, the relation xRy defined by x > 2 is neither symmetric (e.g. 5R1, but not 1R5) nor antisymmetric (e.g. 6R4, but also 4R6), let alone asymmetric. Transitive for all x, y, z ∈ X, if xRy and yRz then xRz. A transitive relation is irreflexive if and only if it is asymmetric. For example, "is ancestor of" is a transitive relation, while "is parent of" is not. Connected for all x, y ∈ X, if x ≠ y then xRy or yRx. For example, on the natural numbers, < is connected, while "is a divisor of" is not (e.g. neither 5R7 nor 7R5). Strongly connected for all x, y ∈ X, xRy or yRx.
For example, on the natural numbers, ≤ is strongly connected, but < is not. A relation is strongly connected if, and only if, it is connected and reflexive. Uniqueness properties: Injective (also called left-unique) For all x, y, z ∈ X, if xRy and zRy then x = z. For example, the green and blue relations in the diagram are injective, but the red one is not (as it relates both −1 and 1 to 1), nor is the black one (as it relates both −1 and 1 to 0). Functional (also called right-unique, right-definite or univalent) For all x, y, z ∈ X, if xRy and xRz then y = z. Such a relation is called a partial function. For example, the red and green relations in the diagram are functional, but the blue one is not (as it relates 1 to both −1 and 1), nor is the black one (as it relates 0 to both −1 and 1). Totality properties: Serial (also called total or left-total) For all x ∈ X, there exists some y ∈ X such that xRy. Such a relation is called a multivalued function. For example, the red and green relations in the diagram are total, but the blue one is not (as it does not relate −1 to any real number), nor is the black one (as it does not relate 2 to any real number). As another example, > is a serial relation over the integers. But it is not a serial relation over the positive integers, because there is no y in the positive integers such that 1 > y. However, < is a serial relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is serial: for a given x, choose y = x. Surjective (also called right-total or onto) For all y ∈ Y, there exists an x ∈ X such that xRy. For example, the green and blue relations in the diagram are surjective, but the red one is not (as it does not relate any real number to −1), nor is the black one (as it does not relate any real number to 2). 
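On a finite set, each of the properties above can be verified by brute-force quantification over the pairs. A Python sketch, tested on the divisibility relation:

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R, X):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R, X):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R, X):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

X = range(1, 13)
divides = {(x, y) for x in X for y in X if y % x == 0}

assert is_reflexive(divides, X)      # every number divides itself
assert is_antisymmetric(divides, X)  # x | y and y | x imply x = y
assert is_transitive(divides, X)     # x | y and y | z imply x | z
assert not is_symmetric(divides, X)  # 2 | 4 but 4 does not divide 2
```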
=== Combinations of properties === Relations that satisfy certain combinations of the above properties are particularly useful, and thus have received names by their own. Equivalence relation A relation that is reflexive, symmetric, and transitive. It is also a relation that is symmetric, transitive, and serial, since these properties imply reflexivity. Orderings: Partial order A relation that is reflexive, antisymmetric, and transitive. Strict partial order A relation that is irreflexive, asymmetric, and transitive. Total order A relation that is reflexive, antisymmetric, transitive and connected. Strict total order A relation that is irreflexive, asymmetric, transitive and connected. Uniqueness properties: One-to-one Injective and functional. For example, the green relation in the diagram is one-to-one, but the red, blue and black ones are not. One-to-many Injective and not functional. For example, the blue relation in the diagram is one-to-many, but the red, green and black ones are not. Many-to-one Functional and not injective. For example, the red relation in the diagram is many-to-one, but the green, blue and black ones are not. Many-to-many Not injective nor functional. For example, the black relation in the diagram is many-to-many, but the red, green and blue ones are not. Uniqueness and totality properties: A function A relation that is functional and total. For example, the red and green relations in the diagram are functions, but the blue and black ones are not. An injection A function that is injective. For example, the green relation in the diagram is an injection, but the red, blue and black ones are not. A surjection A function that is surjective. For example, the green relation in the diagram is a surjection, but the red, blue and black ones are not. A bijection A function that is injective and surjective. For example, the green relation in the diagram is a bijection, but the red, blue and black ones are not. 
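The defining combination for an equivalence relation — reflexive, symmetric, and transitive — can be confirmed the same brute-force way; here congruence modulo 3 on a small set serves as the standard example:

```python
X = range(9)
# Congruence modulo 3, written as an explicit set of pairs.
mod3 = {(x, y) for x in X for y in X if x % 3 == y % 3}

reflexive = all((x, x) in mod3 for x in X)
symmetric = all((y, x) in mod3 for (x, y) in mod3)
transitive = all((x, z) in mod3 for (x, y) in mod3 for (w, z) in mod3 if y == w)

# All three hold, so congruence modulo 3 is an equivalence relation.
assert reflexive and symmetric and transitive

# It is not antisymmetric (0 ~ 3 and 3 ~ 0 although 0 != 3),
# so it is not a partial order.
assert (0, 3) in mod3 and (3, 0) in mod3
```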
== Operations on relations == Union If R and S are relations over X then R ∪ S = { (x, y) | xRy or xSy } is the union relation of R and S. The identity element of this operation is the empty relation. For example, ≤ is the union of < and =, and ≥ is the union of > and =. Intersection If R and S are relations over X then R ∩ S = { (x, y) | xRy and xSy } is the intersection relation of R and S. The identity element of this operation is the universal relation. For example, "is a lower card of the same suit as" is the intersection of "is a lower card than" and "belongs to the same suit as". Composition If R and S are relations over X then S ∘ R = { (x, z) | there exists y ∈ X such that xRy and ySz } (also denoted by R; S) is the relative product of R and S. The identity element is the identity relation. The order of R and S in the notation S ∘ R, used here agrees with the standard notational order for composition of functions. For example, the composition "is mother of" ∘ "is parent of" yields "is maternal grandparent of", while the composition "is parent of" ∘ "is mother of" yields "is grandmother of". For the former case, if x is the parent of y and y is the mother of z, then x is the maternal grandparent of z. Converse If R is a relation over sets X and Y then RT = { (y, x) | xRy } is the converse relation of R over Y and X. For example, = is the converse of itself, as is ≠, and < and > are each other's converse, as are ≤ and ≥. Complement If R is a relation over X then R = { (x, y) | x, y ∈ X and not xRy } (also denoted by R or ¬R) is the complementary relation of R. For example, = and ≠ are each other's complement, as are ⊆ and ⊈, ⊇ and ⊉, and ∈ and ∉, and, for total orders, also < and ≥, and > and ≤. The complement of the converse relation RT is the converse of the complement: R T ¯ = R ¯ T . 
{\displaystyle {\overline {R^{\mathsf {T}}}}={\bar {R}}^{\mathsf {T}}.} Restriction If R is a relation over X and S is a subset of X then R|S = { (x, y) | xRy and x, y ∈ S } is the restriction relation of R to S. The expression R|S = { (x, y) | xRy and x ∈ S } is the left-restriction relation of R to S; the expression R|S = { (x, y) | xRy and y ∈ S } is called the right-restriction relation of R to S. If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "x is parent of y" to women yields the relation "x is mother of the woman y"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to women does relate a woman with her paternal grandmother. A relation R over sets X and Y is said to be contained in a relation S over X and Y, written R ⊆ S, if R is a subset of S, that is, for all x ∈ X and y ∈ Y, if xRy, then xSy. If R is contained in S and S is contained in R, then R and S are called equal written R = S. If R is contained in S but S is not contained in R, then R is said to be smaller than S, written R ⊊ S. For example, on the rational numbers, the relation > is smaller than ≥, and equal to the composition > ∘ >. == Theorems about relations == A relation is asymmetric if, and only if, it is antisymmetric and irreflexive. A transitive relation is irreflexive if, and only if, it is asymmetric. A relation is reflexive if, and only if, its complement is irreflexive. A relation is strongly connected if, and only if, it is connected and reflexive. 
A relation is equal to its converse if, and only if, it is symmetric. A relation is connected if, and only if, its complement is anti-symmetric. A relation is strongly connected if, and only if, its complement is asymmetric. If R and S are relations over a set X, and R is contained in S, then: If R is reflexive, connected, strongly connected, left-total, or right-total, then so is S. If S is irreflexive, asymmetric, anti-symmetric, left-unique, or right-unique, then so is R. A relation is reflexive, irreflexive, symmetric, asymmetric, anti-symmetric, connected, strongly connected, or transitive if its converse is, respectively. == Examples == Order relations, including strict orders: Greater than Greater than or equal to Less than Less than or equal to Divides (evenly) Subset of Equivalence relations: Equality Parallel with (for affine spaces) Is in bijection with Isomorphic Tolerance relation, a reflexive and symmetric relation: Dependency relation, a finite tolerance relation Independency relation, the complement of some dependency relation Kinship relations == Generalizations == The above concept of relation has been generalized to admit relations between members of two different sets. Given sets X and Y, a heterogeneous relation R over X and Y is a subset of { (x,y) | x∈X, y∈Y }. When X = Y, the relation concept described above is obtained; it is often called a homogeneous relation (or endorelation) to distinguish it from its generalization. Many of the above properties and operations generalize to heterogeneous relations. An example of a heterogeneous relation is "ocean x borders continent y". The best-known examples are functions with distinct domains and ranges, such as sqrt : N → R+. == See also == Incidence structure, a heterogeneous relation between sets of points and lines Order theory, investigates properties of order relations Relation algebra == Notes == == References == == Bibliography ==
Wikipedia:Relative dimension#0
In mathematics, specifically linear algebra and geometry, relative dimension is the dual notion to codimension. In linear algebra, given a quotient map V → Q {\displaystyle V\to Q} , the difference dim V − dim Q is the relative dimension; this equals the dimension of the kernel. In fiber bundles, the relative dimension of the map is the dimension of the fiber. More abstractly, the codimension of a map is the dimension of the cokernel, while the relative dimension of a map is the dimension of the kernel. These are dual in that the inclusion of a subspace V → W {\displaystyle V\to W} of codimension k dualizes to yield a quotient map W ∗ → V ∗ {\displaystyle W^{*}\to V^{*}} of relative dimension k, and conversely. The additivity of codimension under intersection corresponds to the additivity of relative dimension in a fiber product. Just as codimension is mostly used for injective maps, relative dimension is mostly used for surjective maps. == References ==
Wikipedia:Rellich–Kondrachov theorem#0
In mathematics, the Rellich–Kondrachov theorem is a compact embedding theorem concerning Sobolev spaces. It is named after the Austrian-German mathematician Franz Rellich and the Russian mathematician Vladimir Iosifovich Kondrashov. Rellich proved the L2 theorem and Kondrashov the Lp theorem. == Statement of the theorem == Let Ω ⊆ Rn be an open, bounded Lipschitz domain, and let 1 ≤ p < n. Set p ∗ := n p n − p . {\displaystyle p^{*}:={\frac {np}{n-p}}.} Then the Sobolev space W1,p(Ω; R) is continuously embedded in the Lp space Lp∗(Ω; R) and is compactly embedded in Lq(Ω; R) for every 1 ≤ q < p∗. In symbols, W 1 , p ( Ω ) ↪ L p ∗ ( Ω ) {\displaystyle W^{1,p}(\Omega )\hookrightarrow L^{p^{*}}(\Omega )} and W 1 , p ( Ω ) ⊂⊂ L q ( Ω ) for 1 ≤ q < p ∗ . {\displaystyle W^{1,p}(\Omega )\subset \subset L^{q}(\Omega ){\text{ for }}1\leq q<p^{*}.} === Kondrachov embedding theorem === On a compact manifold with C1 boundary, the Kondrachov embedding theorem states that if k > ℓ and k − n/p > ℓ − n/q then the Sobolev embedding W k , p ( M ) ⊂ W ℓ , q ( M ) {\displaystyle W^{k,p}(M)\subset W^{\ell ,q}(M)} is completely continuous (compact). == Consequences == Since an embedding is compact if and only if the inclusion (identity) operator is a compact operator, the Rellich–Kondrachov theorem implies that any uniformly bounded sequence in W1,p(Ω; R) has a subsequence that converges in Lq(Ω; R). Stated in this form, in the past the result was sometimes referred to as the Rellich–Kondrachov selection theorem, since one "selects" a convergent subsequence. (However, today the customary name is "compactness theorem", whereas "selection theorem" has a precise and quite different meaning, referring to set-valued functions.) 
The Rellich–Kondrachov theorem may be used to prove the Poincaré inequality, which states that for u ∈ W1,p(Ω; R) (where Ω satisfies the same hypotheses as above), ‖ u − u Ω ‖ L p ( Ω ) ≤ C ‖ ∇ u ‖ L p ( Ω ) {\displaystyle \|u-u_{\Omega }\|_{L^{p}(\Omega )}\leq C\|\nabla u\|_{L^{p}(\Omega )}} for some constant C depending only on p and the geometry of the domain Ω, where u Ω := 1 meas ⁡ ( Ω ) ∫ Ω u ( x ) d x {\displaystyle u_{\Omega }:={\frac {1}{\operatorname {meas} (\Omega )}}\int _{\Omega }u(x)\,\mathrm {d} x} denotes the mean value of u over Ω. == References == == Literature == Evans, Lawrence C. (2010). Partial Differential Equations (2nd ed.). American Mathematical Society. ISBN 978-0-8218-4974-3. Kondrachov, V. I., On certain properties of functions in the space L p .Dokl. Akad. Nauk SSSR 48, 563–566 (1945). Leoni, Giovanni (2009). A First Course in Sobolev Spaces. Graduate Studies in Mathematics. 105. American Mathematical Society. pp. xvi+607. ISBN 978-0-8218-4768-8. MR 2527916. Zbl 1180.46001 Rellich, Franz (24 January 1930). "Ein Satz über mittlere Konvergenz". Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (in German). 1930: 30–35. JFM 56.0224.02.
Wikipedia:Remez inequality#0
In mathematics, the Remez inequality, discovered by the Soviet mathematician Evgeny Yakovlevich Remez (Remez 1936), gives a bound on the sup norms of certain polynomials, the bound being attained by the Chebyshev polynomials. == The inequality == Let σ be an arbitrary fixed positive number. Define the class of polynomials πn(σ) to be those polynomials p of degree n for which | p ( x ) | ≤ 1 {\displaystyle |p(x)|\leq 1} on some set of measure ≥ 2 contained in the closed interval [−1, 1+σ]. Then the Remez inequality states that sup p ∈ π n ( σ ) ‖ p ‖ ∞ = ‖ T n ‖ ∞ {\displaystyle \sup _{p\in \pi _{n}(\sigma )}\left\|p\right\|_{\infty }=\left\|T_{n}\right\|_{\infty }} where Tn(x) is the Chebyshev polynomial of degree n, and the supremum norm is taken over the interval [−1, 1+σ]. Observe that Tn is increasing on [ 1 , + ∞ ] {\displaystyle [1,+\infty ]} , hence ‖ T n ‖ ∞ = T n ( 1 + σ ) . {\displaystyle \|T_{n}\|_{\infty }=T_{n}(1+\sigma ).} The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary: If J ⊂ R is a finite interval, and E ⊂ J is an arbitrary measurable set, then max x ∈ J | p ( x ) | ≤ ( 4 mes ⁡ J mes ⁡ E ) n sup x ∈ E | p ( x ) | ( ⁎ ) {\displaystyle \max _{x\in J}|p(x)|\leq \left({\frac {4\,\operatorname {mes} J}{\operatorname {mes} E}}\right)^{n}\sup _{x\in E}|p(x)|\qquad (\ast )} for any polynomial p of degree n. == Extensions: Nazarov–Turán lemma == Inequalities similar to (⁎) have been proved for different classes of functions, and are known as Remez-type inequalities. One important example is Nazarov's inequality for exponential sums (Nazarov 1993): Nazarov's inequality. Let p ( x ) = ∑ k = 1 n a k e λ k x {\displaystyle p(x)=\sum _{k=1}^{n}a_{k}e^{\lambda _{k}x}} be an exponential sum (with arbitrary λk ∈C), and let J ⊂ R be a finite interval, E ⊂ J—an arbitrary measurable set. Then max x ∈ J | p ( x ) | ≤ e max k | ℜ λ k | mes ⁡ J ( C mes ⁡ J mes ⁡ E ) n − 1 sup x ∈ E | p ( x ) | , {\displaystyle \max _{x\in J}|p(x)|\leq e^{\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\left({\frac {C\,\,\operatorname {mes} J}{\operatorname {mes} E}}\right)^{n-1}\sup _{x\in E}|p(x)|~,} where C > 0 is a numerical constant.
In the special case when λk are pure imaginary and integer, and the subset E is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma. This inequality also extends to L p ( T ) , 0 ≤ p ≤ 2 {\displaystyle L^{p}(\mathbb {T} ),\ 0\leq p\leq 2} in the following way ‖ p ‖ L p ( T ) ≤ e A ( n − 1 ) mes ⁡ ( T ∖ E ) ‖ p ‖ L p ( E ) {\displaystyle \|p\|_{L^{p}(\mathbb {T} )}\leq e^{A(n-1)\operatorname {mes} (\mathbb {T} \setminus E)}\|p\|_{L^{p}(E)}} for some A > 0 independent of p, E, and n. When mes ⁡ E < 1 − log ⁡ n n {\displaystyle \operatorname {mes} E<1-{\frac {\log n}{n}}} a similar inequality holds for p > 2. For p = ∞ there is an extension to multidimensional polynomials. Proof: Applying Nazarov's lemma to E = E λ = { x : | p ( x ) | ≤ λ } , λ > 0 {\displaystyle E=E_{\lambda }=\{x:|p(x)|\leq \lambda \},\ \lambda >0} leads to max x ∈ J | p ( x ) | ≤ e max k | ℜ λ k | mes ⁡ J ( C mes ⁡ J mes ⁡ E λ ) n − 1 sup x ∈ E λ | p ( x ) | ≤ e max k | ℜ λ k | mes ⁡ J ( C mes ⁡ J mes ⁡ E λ ) n − 1 λ {\displaystyle \max _{x\in J}|p(x)|\leq e^{\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\left({\frac {C\,\,\operatorname {mes} J}{\operatorname {mes} E_{\lambda }}}\right)^{n-1}\sup _{x\in E_{\lambda }}|p(x)|\leq e^{\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\left({\frac {C\,\,\operatorname {mes} J}{\operatorname {mes} E_{\lambda }}}\right)^{n-1}\lambda } thus mes ⁡ E λ ≤ C mes ⁡ J ( λ e max k | ℜ λ k | mes ⁡ J max x ∈ J | p ( x ) | ) 1 n − 1 {\displaystyle \operatorname {mes} E_{\lambda }\leq C\,\,\operatorname {mes} J\left({\frac {\lambda e^{\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}}{\max _{x\in J}|p(x)|}}\right)^{\frac {1}{n-1}}} Now fix a set E {\displaystyle E} and choose λ {\displaystyle \lambda } such that mes ⁡ E λ ≤ 1 2 mes ⁡ E {\displaystyle \operatorname {mes} E_{\lambda }\leq {\tfrac {1}{2}}\operatorname {mes} E} , that is λ = ( mes ⁡ E 2 C mes ⁡ J ) n − 1 e − max k | ℜ λ k | mes ⁡ J max x ∈ J | p ( x ) | 
{\displaystyle \lambda =\left({\frac {\operatorname {mes} E}{2C\operatorname {mes} J}}\right)^{n-1}e^{-\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\max _{x\in J}|p(x)|} Note that this implies: mes ⁡ E ∖ E λ ≥ 1 2 mes ⁡ E . {\displaystyle \operatorname {mes} E\setminus E_{\lambda }\geq {\tfrac {1}{2}}\operatorname {mes} E.} ∀ x ∈ E ∖ E λ : | p ( x ) | > λ . {\displaystyle \forall x\in E\setminus E_{\lambda }:|p(x)|>\lambda .} Now ∫ x ∈ E | p ( x ) | p d x ≥ ∫ x ∈ E ∖ E λ | p ( x ) | p d x ≥ λ p 1 2 mes ⁡ E = 1 2 mes ⁡ E ( mes ⁡ E 2 C mes ⁡ J ) p ( n − 1 ) e − p max k | ℜ λ k | mes ⁡ J max x ∈ J | p ( x ) | p ≥ 1 2 mes ⁡ E mes ⁡ J ( mes ⁡ E 2 C mes ⁡ J ) p ( n − 1 ) e − p max k | ℜ λ k | mes ⁡ J ∫ x ∈ J | p ( x ) | p d x , {\displaystyle {\begin{aligned}\int _{x\in E}|p(x)|^{p}\,{\mbox{d}}x&\geq \int _{x\in E\setminus E_{\lambda }}|p(x)|^{p}\,{\mbox{d}}x\\[6pt]&\geq \lambda ^{p}{\frac {1}{2}}\operatorname {mes} E\\[6pt]&={\frac {1}{2}}\operatorname {mes} E\left({\frac {\operatorname {mes} E}{2C\operatorname {mes} J}}\right)^{p(n-1)}e^{-p\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\max _{x\in J}|p(x)|^{p}\\[6pt]&\geq {\frac {1}{2}}{\frac {\operatorname {mes} E}{\operatorname {mes} J}}\left({\frac {\operatorname {mes} E}{2C\operatorname {mes} J}}\right)^{p(n-1)}e^{-p\max _{k}|\Re \lambda _{k}|\,\operatorname {mes} J}\int _{x\in J}|p(x)|^{p}\,{\mbox{d}}x,\end{aligned}}} which completes the proof. == Pólya inequality == One of the corollaries of the Remez inequality is the Pólya inequality, which was proved by George Pólya (Pólya 1928), and states that the Lebesgue measure of a sub-level set of a polynomial p of degree n is bounded in terms of the leading coefficient LC(p) as follows: mes ⁡ { x ∈ R : | P ( x ) | ≤ a } ≤ 4 ( a 2 L C ( p ) ) 1 / n , a > 0 . {\displaystyle \operatorname {mes} \left\{x\in \mathbb {R} :\left|P(x)\right|\leq a\right\}\leq 4\left({\frac {a}{2\mathrm {LC} (p)}}\right)^{1/n},\quad a>0~.} == References == Remez, E. J. (1936). 
"Sur une propriété des polynômes de Tchebyscheff". Comm. Inst. Sci. Kharkow. 13: 93–95. Bojanov, B. (May 1993). "Elementary Proof of the Remez Inequality". The American Mathematical Monthly. 100 (5). Mathematical Association of America: 483–485. doi:10.2307/2324304. JSTOR 2324304. Fontes-Merz, N. (2006). "A multidimensional version of Turan's lemma". Journal of Approximation Theory. 140 (1): 27–30. doi:10.1016/j.jat.2005.11.012. Nazarov, F. (1993). "Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type". Algebra i Analiz. 5 (4): 3–66. Nazarov, F. (2000). "Complete Version of Turan's Lemma for Trigonometric Polynomials on the Unit Circumference". Complex Analysis, Operators, and Related Topics. 113: 239–246. doi:10.1007/978-3-0348-8378-8_20. ISBN 978-3-0348-9541-5. Pólya, G. (1928). "Beitrag zur Verallgemeinerung des Verzerrungssatzes auf mehrfach zusammenhängende Gebiete". Sitzungsberichte Akad. Berlin: 280–282.
Wikipedia:Rena Bakhshi#0
Rena Bakhshi (born 1981) is a Dutch computer scientist and mathematician and programme manager for the Netherlands eScience Center's natural sciences and engineering domain. == Life and work == Bakhshi holds two master's degrees: the first in applied mathematics from Baku State University in Azerbaijan, and the second in computer science from the KTH Royal Institute of Technology in Stockholm. In 2011, she received a PhD in theoretical computer science from Vrije Universiteit Amsterdam (VU), where she studied formal modelling and analysis of large-scale stochastic systems under the supervision of Willem Jan (Wan) Fokkink and Maarten R. van Steen. She advised at least one student, Suhail Yousaf. She held positions as a postdoctoral fellow and assistant professor at VU, and was a research visitor at NICTA in Sydney, Australia, and at the University of Melbourne, working on large-scale complex systems in several interdisciplinary projects. In 2016, Bakhshi joined the Netherlands eScience Center in Amsterdam to coordinate its climate science and physics projects. Since June 2021, she has served as the organization's programme manager for the natural sciences and engineering domain. She has published academic papers in computer science and in mathematical logic and foundations. == References ==
Wikipedia:Renate Scheidler#0
Renate Scheidler (born 1960) is a German and Canadian mathematician and computer scientist specializing in computational number theory and its applications in cryptography. She is a professor at the University of Calgary, in the Department of Mathematics & Statistics and the Department of Computer Science. She is the co-editor-in-chief of Contributions to Discrete Mathematics and one of the founders of the Women in Number Theory research community and the Women in Numbers conference series. == Education and career == Scheidler was born in 1960. She earned a diplom (the German equivalent of a combined bachelor's and master's degree) in mathematics at the University of Cologne in 1987. She first came to Canada for doctoral study in mathematics at the University of Manitoba, where she completed her Ph.D. in 1993. Her dissertation, Applications of Algebraic Number Theory to Cryptography, was supervised by Hugh C. Williams. She joined the mathematics faculty at the University of Delaware in 1993, adding a courtesy appointment in the university's Department of Computer & Information Sciences in 1995. She was promoted to associate professor there in 1999. In 2001 she moved to her present position, jointly appointed to the Department of Mathematics & Statistics and the Department of Computer Science at the University of Calgary; she has been a full professor since 2008. In 2022 she was Helene Lange Visiting Professor at the University of Oldenburg in Germany. In 2008, with Kristin Lauter and Rachel Justine Pries, she became one of the founders of the Women in Numbers conference, which became the first of a series of conferences for the Women in Number Theory research community. She has been co-editor-in-chief of Contributions to Discrete Mathematics since 2023. 
== Recognition == Scheidler was named as a Fellow of the Association for Women in Mathematics, recognizing her "for her vision and role in founding the Women in Numbers Research Network; for her continuing leadership in that research community; and for impactful work mentoring women at all career stages". She is the 2024 recipient of the Krieger–Nelson Prize, given "in recognition of her important and significant contributions to research, particularly in the fields of computational number theory and algebraic number theory". == References == == External links == Home page Renate Scheidler publications indexed by Google Scholar
Wikipedia:Rencontres numbers#0
In combinatorics, the rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements. (Rencontre is French for encounter. By some accounts, the problem is named after a solitaire game.) For n ≥ 0 and 0 ≤ k ≤ n, the rencontres number Dn, k is the number of permutations of { 1, ..., n } that have exactly k fixed points. For example, if seven presents are given to seven different people, but only two are destined to get the right present, there are D7, 2 = 924 ways this could happen. Another often cited example is that of a dance school with 7 opposite-sex couples, where, after the tea break, the participants are told to randomly find an opposite-sex partner to continue; then once more there are D7, 2 = 924 possibilities that exactly 2 previous couples meet again by chance. == Numerical values == Here is the beginning of this array (sequence A008290 in the OEIS): == Formulas == The numbers in the k = 0 column enumerate derangements. Thus {\displaystyle D_{0,0}=1,\!} {\displaystyle D_{1,0}=0,\!} {\displaystyle D_{n+2,0}=(n+1)(D_{n+1,0}+D_{n,0})\!} for non-negative n. It turns out that {\displaystyle D_{n,0}=\left\lceil {\frac {n!}{e}}\right\rfloor ,} where the ratio is rounded up for even n and rounded down for odd n. For n ≥ 1, this gives the nearest integer. More generally, for any {\displaystyle k\geq 0}, we have {\displaystyle D_{n,k}={n \choose k}\cdot D_{n-k,0}.} The proof is easy after one knows how to enumerate derangements: choose the k fixed points out of n; then choose the derangement of the other n − k points. The numbers Dn,0/(n!) are generated by the power series e−z/(1 − z); accordingly, an explicit formula for Dn, m can be derived as follows: {\displaystyle D_{n,m}={\frac {n!}{m!}}[z^{n-m}]{\frac {e^{-z}}{1-z}}={\frac {n!}{m!}}\sum _{k=0}^{n-m}{\frac {(-1)^{k}}{k!}}.} This immediately implies that {\displaystyle D_{n,m}={n \choose m}D_{n-m,0}\;\;{\mbox{ and }}\;\;{\frac {D_{n,m}}{n!}}\approx {\frac {e^{-1}}{m!}}} for n large, m fixed. == Probability distribution == The sum of the entries in each row for the table in "Numerical values" is the total number of permutations of { 1, ..., n }, and is therefore n!. If one divides all the entries in the nth row by n!, one gets the probability distribution of the number of fixed points of a uniformly distributed random permutation of { 1, ..., n }. The probability that the number of fixed points is k is {\displaystyle {D_{n,k} \over n!}.} For n ≥ 1, the expected number of fixed points is 1 (a fact that follows from linearity of expectation). More generally, for i ≤ n, the ith moment of this probability distribution is the ith moment of the Poisson distribution with expected value 1. For i > n, the ith moment is smaller than that of that Poisson distribution. Specifically, for i ≤ n, the ith moment is the ith Bell number, i.e. the number of partitions of a set of size i. === Limiting probability distribution === As the size of the permuted set grows, we get {\displaystyle \lim _{n\to \infty }{D_{n,k} \over n!}={e^{-1} \over k!}.} This is just the probability that a Poisson-distributed random variable with expected value 1 is equal to k. In other words, as n grows, the probability distribution of the number of fixed points of a random permutation of a set of size n approaches the Poisson distribution with expected value 1.
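The derangement recurrence and the binomial identity above give a direct way to compute the table. The short sketch below (illustrative; the function names are my own) reproduces D7, 2 = 924 from the presents example and shows the approach to the Poisson(1) limit.

```python
from math import comb, exp, factorial

def derangements(n):
    # D_{n,0} via the recurrence D_{n+2,0} = (n+1)(D_{n+1,0} + D_{n,0}),
    # starting from D_{0,0} = 1 and D_{1,0} = 0.
    d = [1, 0]
    for m in range(n - 1):
        d.append((m + 1) * (d[-1] + d[-2]))
    return d[n]

def rencontres(n, k):
    # D_{n,k} = C(n, k) * D_{n-k,0}: choose the k fixed points,
    # then derange the remaining n - k elements.
    return comb(n, k) * derangements(n - k)

print(rencontres(7, 2))                                         # 924
assert sum(rencontres(7, k) for k in range(8)) == factorial(7)  # rows sum to n!

# Limiting Poisson(1) behaviour: D_{n,k} / n!  ->  e^{-1} / k!
print(rencontres(20, 2) / factorial(20), exp(-1) / 2)
```

The row-sum assertion checks that the D_{n,k} partition all n! permutations by their number of fixed points.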
== See also == Oberwolfach problem, a different mathematical problem involving the arrangement of diners at tables Problème des ménages, a similar problem involving partial derangements == References == Riordan, John, An Introduction to Combinatorial Analysis, New York, Wiley, 1958, pages 57, 58, and 65. Weisstein, Eric W. "Partial Derangements". MathWorld.
Wikipedia:Renfrey Potts#0
Renfrey Burnard (Ren) Potts AO (1925–2005) was an Australian mathematician, notable for the Potts model and for his achievements in operations research, especially networks; transportation science, car-following and road traffic; Ising-type models in mathematical physics; difference equations; and robotics. He was interested in computing from the early days of the computing revolution and oversaw the first computer purchases at the University of Adelaide. == Personal == The fourth child of Gilbert MacDonald Potts and Lorna Potts (née West), and named after family friend and medical doctor Renfrey Gershom Burnard, Potts was educated at Rose Park Primary School and Prince Alfred College, where his father was Second Master. Potts was an outstanding lecturer who drew large audiences to his talks. In addition to mathematics, he was interested in sports and music. His sporting activities included long-distance and marathon running, hockey, tennis, squash, badminton, bushwalking, and swimming. He played both the piano and the clarinet and was a volunteer disc jockey at a local radio station. He married Barbara Kidman in Oxford on 1 July 1950. They had two daughters, Linda and Rebecca. They also had four grandchildren, Frank, Zoe, Jack and Georgia.
== Summary == 1925 Born: 4 October 1925, Adelaide, South Australia, Australia 1930–1936 Rose Park Primary School 1937–1942 Prince Alfred College 1943–1947 University of Adelaide, Bachelor of Science (first class honours in mathematics) 1948 Rhodes Scholar, Queen's College, Oxford 1949 Barbara Kidman graduated with first class honours in physics 1949–1951 DPhil (Oxford); dissertation: The Mathematical Investigation of Some Cooperative Phenomena; advisor: Cyril Domb 1950 Married Barbara Kidman in Oxford on 1 July 1950 1951–1957 Lecturer in mathematics at the University of Adelaide 1955–1956 Postdoctoral scientist at the University of Maryland, USA 1956 Barbara Kidman awarded a PhD 1957–1959 Associate professor at the University of Toronto in Canada 1958–1959 Consultant to General Motors in Detroit 1959 Awarded the Lanchester Prize for research in operations research 1959 Appointed to a newly created chair in applied mathematics at the University of Adelaide 1959–1990 Professor, chair and popular lecturer of applied mathematics at the University of Adelaide 1966 Dr Kidman returns to the workforce as lecturer in the (then) new area of computer science 1968 Doctor of Science (DSc) received from the University of Oxford 1975 Fellow of the Australian Academy of Science (FAA) 1976 Appointed (Sir Thomas) Elder Professor of Mathematics; foundation president of the South Australian Computer Society (the forerunner of the Australian Computer Society). He is recognised as the founder of the Australian Computer Society, and was elected a Fellow of that society (FACS).
1978–1979 Chairman, Division of Applied Mathematics of the Australian Mathematical Society (the progenitor of ANZIAM) 1983 Fellow of the Australian Academy of Technological Sciences and Engineering (FTSE) 1987 Dr Kidman retires 1990 Prof Potts retires 1991 Emeritus Professor of Applied Mathematics at the University of Adelaide 1991 Officer of the Order of Australia (AO) 1991–1993 After his retirement from Adelaide, he taught as a visiting professor at the National University of Singapore for three semesters. 1994 Fellow of the Australian Mathematical Society (FAustMS) 1995 Inaugural recipient of the ANZAAS (Australian and New Zealand Association for the Advancement of Science) Medal 1995 Awarded the first ANZIAM Medal 2001 Centenary Medal received from the Australian Government 2004 Inducted into the Pearcey Hall of Fame (The Pearcey Foundation) 2005 Died: 9 August 2005, Adelaide, South Australia, Australia. == Publications == Most-cited publication: Potts, R. B. (1952). "Some Generalized Order-Disorder Transformations". Mathematical Proceedings of the Cambridge Philosophical Society. 48 (1): 106–109. doi:10.1017/S0305004100027419. Some others (Ren published about 90 research papers): === Books === With Robert Oliver, "Flows in Transportation Networks", 1978 Potts, R. B., "Transport in Australia: Some Key Issues", Australian Academy of Science, Canberra, 1978, 159 pp. ISBN 0-85847-048-9 === Book chapters === 1990 Potts, R. B., "Wilton, John Raymond (1884–1944), Mathematician", in John Ritchie (ed.), Australian Dictionary of Biography, vol. 12, Melbourne University Press, Melbourne, 1990, pp. 533–534 === Journal articles === 1959 Harold Willis Milnes, Renfrey B. Potts: "Boundary Contraction Solution of Laplace's Differential Equation", J. ACM 6(2): 226–235 (1959) Article 1959 Robert Herman, Elliott W. Montroll, Renfrey B. Potts, and Richard W. Rothery, "Traffic Dynamics: Analysis of Stability in Car Following", Operations Research 7, pp.
86–106 (January–February 1959) 1959 Denos C. Gazis, Robert Herman, and Renfrey B. Potts, "Car-Following Theory of Steady-State Traffic Flow", Operations Research 7, pp. 499–505 (July–August 1959). 1962 "The Measurement of Acceleration Noise—A Traffic Parameter", Trevor R. Jones, Renfrey B. Potts, Operations Research, Vol. 10, No. 6 (Nov. – Dec. 1962), pp. 745–763 Abstract 1982 "Differential and Difference Equations", Renfrey B. Potts, The American Mathematical Monthly, Vol. 89, No. 6 (Jun. – Jul. 1982), pp. 402–407 doi:10.2307/2321656 Abstract 1985 Potts, R. B., "Mathematics at the University of Adelaide, Part 3: 1944–1958", Australian Mathematical Society Gazette, vol. 12, no. 2, 1985, pp. 25–30. 1988 Tamar Flash, Renfrey B. Potts: "Discrete trajectory planning". October 1988, International Journal of Robotics Research, Volume 7 Issue 5 ACM Portal Abstract 2004 Wall, G. E., Pitman, Jane and Potts, R. B., "Eric Stephen Barnes, 1924–2000", Historical Records of Australian Science, vol. 15, no. 1, 2004, pp. 21–45. (Also available at http://www.publish.csiro.au/paper/HR03013.htm and https://web.archive.org/web/20090929205257/http://www.science.org.au/academy/memoirs/barnes.htm) == Affiliations == 1959 General Motors Corporation, Detroit, Michigan 1960s–1980s P.G. Pak-Poy & Associates, Adelaide 1988 Weizmann Institute of Science, Rehovot, Israel == Awards == Officer of the Order of Australia (AO) 1991 Centenary Medal 2001 == See also == Mathematics Genealogy Project Bright Sparcs biography entry Portrait — date unknown, St Andrews Biography, St Andrews Obituary, Australian Mathematics Trust Obituary, Australian Academy of Technological Sciences and Engineering "Obituaries", Australian Academy of Science Newsletter, vol. 63, August–November 2005, pp. 10–11. E O Tuck, Obituary — Professor Renfrey Burnard Potts, The Adelaidean (October 2005). E O Tuck, Obituary: Renfrey B. Potts, 4/10/1925–9/8/2005, Austral. Math. Soc. Gaz.
32 (4) (2005), 291–292. E O Tuck, Retirement of Professor R. B. Potts, AO, Austral. Math. Soc. Gaz. 18 (4) (1991), 111–112. 2003 photo of Dr Barbara Kidman and Emeritus Professor Ren Potts, Adelaidean, Volume 12 Number 7 August 2003, pg14 Biography of Renfrey Potts from the Institute for Operations Research and the Management Sciences == References ==
Wikipedia:Rentsen Enkhbat#0
Rentsen Enkhbat is the current director of the Institute of Mathematics at the National University of Mongolia in Ulaanbaatar, Mongolia. He is a professor of mathematics at the Business School of the National University of Mongolia. == Education == He received his bachelor's, master's and Ph.D. degrees in applied mathematics from Irkutsk State University (Russia), completing them in 1980 and 1990. He also received a master's degree in economics from the National University of Mongolia in 1998 and an Sc.D. degree from the Mongolian Academy of Sciences in 2003. == Awards and recognition == His awards include a prize of the Third World Academy of Sciences and the Mongolian Academy of Sciences, an award of the Consortium of Mongolian Higher Education Universities, and a Government Medal. He is the author or co-author of 16 books and the editor of 17 books. He has written more than 100 scientific papers. His recent research interests lie in global optimization, optimal control and game theory. He has held visiting appointments at the University of Massachusetts, Kyoto University, Ibaraki University, Humboldt University of Berlin, University of Florida, Korea Institute for Advanced Study, Inje University (Korea), Curtin University (Australia), and Littoral University (France). He is a member of the American and Mongolian Mathematical Societies, and of the USA and Mongolian Chess Federations. He is a Mongolian National Chess Master. == References == Taylor & Francis journal: Optimization – A Journal of Mathematical Programming and Operations Research Lambert Publisher 2009 Journal: International Journal of Pure and Applied Mathematics Book: Quasiconvex Programming and its Applications, ISBN 9783838308074 World Scientific: Optimization and Optimal Control == External links == Homepage
Wikipedia:René Gosse#0
René Gosse (16 August 1883 – 22 December 1943) was a French mathematician and a member of the French Resistance during the Second World War.
Wikipedia:René de Saussure#0
René de Saussure (17 March 1868 – 2 December 1943) was a Swiss Esperantist and professional mathematician who composed important works about the linguistics of Esperanto and interlinguistics. == Biography == He was born in Geneva, Switzerland. René's father was the scientist Henri Louis Frédéric de Saussure. His brothers were the linguist Ferdinand de Saussure and the Sinologist Léopold de Saussure. He defended a doctoral thesis on a subject in geometry at Johns Hopkins University in 1895, and until 1899 he was a professor at the Catholic University of America in Washington, D.C., and later in Geneva and Bern. His main work is an analysis of the logic of word construction in Esperanto, Fundamentaj reguloj de la vortteorio en Esperanto ("Fundamental rules of word theory in Esperanto"), defending the language against several Idist critiques. He developed the concept of neceso kaj sufiĉo ("necessity and sufficiency"), by which he opposed Louis Couturat's criticism that Esperanto lacks recursion. In 1907, de Saussure proposed the international currency spesmilo (₷). It was used by the Ĉekbanko esperantista and other British and Swiss banks until the First World War. Beginning in 1919, de Saussure proposed a series of Esperanto reforms, and in 1925, he renounced Esperanto in favor of his artificial language Esperanto II. He later became a consultant for the International Auxiliary Language Association, the linguistic research body that standardized and presented Interlingua. He died on 2 December 1943 in Bern, Switzerland. == Legacy == A new silver Esperanto coin for 100 Steloj was struck in 2018 to mark the 150th anniversary of de Saussure's birth. == References ==
Wikipedia:Rep-tile#0
A necktie, long tie, or simply a tie, is a cloth article of formal neckwear or office attire worn for decorative or symbolic purposes, resting under a folded shirt collar or knotted at the throat, and usually draped down the chest. On rare occasions neckties are worn above a winged shirt collar. However, in occupations where manual labor is involved, the end of the necktie is often tucked into the button-line front placket of a dress shirt, as in the dress uniform of the United States Marine Corps. Neckties are usually paired with suit jackets or sport coats, but have often been seen with other articles, such as v-neck sweaters. Neckties are reported by fashion historians to be descended from the Regency-era cravat. Adult neckties are generally unsized in length but may be available in longer sizes for taller persons. Widths are matched to the width of a suit jacket lapel. Neckties were originally considered "menswear," but are now considered unisex items in most Western cultures. Neckties can also be part of a uniform. Neckties are traditionally worn with the top shirt button fastened, and the tie knot resting between the collar points. == History == === Origins === The necktie that spread from Europe traces back to Croatian mercenaries serving in France during the Thirty Years' War (1618–1648). These mercenaries from the Military Frontier, wearing their traditional small, knotted neckerchiefs, aroused the interest of the Parisians. Because of the difference between the Croatian word for Croats, Hrvati, and the French word, Croates, the garment gained the name cravat (cravate in French). Louis XIV began wearing a lace cravat around 1646 when he was seven and set the fashion for French nobility. This new article of clothing started a fashion craze in Europe; both men and women wore pieces of fabric around their necks. From its introduction by the French king, men wore lace cravats, or jabots, which took a large amount of time and effort to arrange.
These cravats were often tied in place by cravat strings, arranged neatly and tied in a bow. International Necktie Day is celebrated on October 18 in Croatia and in various cities around the world, including in Dublin, Tübingen, Como, Tokyo, Sydney and other towns. === 1710–1800: stocks, solitaires, neckcloths, cravats === In 1715, another kind of neckwear, called "stocks" made its appearance. The term originally referred to a leather collar, laced at the back, worn by soldiers to promote holding the head high in a military bearing. The leather stock also afforded some protection to the major blood vessels of the neck from saber or bayonet attacks. General Sherman is seen wearing a leather stock in several American Civil War-era photographs. Stock ties were initially just a small piece of muslin folded into a narrow band wound a few times around the shirt collar and secured from behind with a pin. It was fashionable for men to wear their hair long, past shoulder length. The ends were tucked into a black silk bag worn at the nape of the neck. This was known as the bag-wig hairstyle, and the neckwear worn with it was the stock. The solitaire was a variation of the bag wig. This form had matching ribbons stitched around the bag. After the stock was in place, the ribbons would be brought forward and tied in a large bow in front of the wearer. Sometime in the late 18th century, cravats began to make an appearance again. This can be attributed to a group of young men called the macaronis (as mentioned in the song "Yankee Doodle"). These were young Englishmen who returned from Europe and brought with them new ideas about fashion from Italy. The French contemporaries of the macaronis were the 'petits-maîtres' and incroyables. === 1800–1850: cravat, stocks, scarves, bandanas === At this time, there was also much interest in the way to tie a proper cravat and this led to a series of publications. 
This began in 1818 with the publication of Neckclothitania, a style manual that contained illustrated instructions on how to tie 14 different cravats. Soon after, the immense skill required to tie the cravat in certain styles quickly became a mark of a man's elegance and wealth. It was also the first book to use the word tie in association with neckwear. It was about this time that black stocks made their appearance. Their popularity eclipsed the white cravat, except for formal and evening wear. These remained popular through the 1850s. At this time, another form of neckwear worn was the scarf. This was where a neckerchief or bandana was held in place by slipping the ends through a finger or scarf ring at the neck instead of using a knot. This is the classic sailor neckwear and may have been adopted from them. === 1860s–1945: bow ties, scarf/neckerchief, the ascot, the long tie === With the Industrial Revolution, more people wanted neckwear that was easy to put on, was comfortable and would last an entire workday. Long ties were designed to be long, thin, and easy to knot, without accidentally coming undone. This is the necktie design still worn by millions. Academic tailors Castell & Son (Oxford) Limited, which opened in 1846 in Oxford, takes credit for creating the first modern-style necktie in 1870. In 1903, Theodore Roosevelt became the first US president to wear the modern long tie in a presidential portrait. By this time, the sometimes complicated array of knots and styles of neckwear gave way to neckties and bow ties, the latter a much smaller, more convenient version of the cravat. Another type of neckwear, the ascot tie, was considered de rigueur for male guests at formal dinners and male spectators at races. These ascots had wide flaps that were crossed and pinned together on the chest. In 1922, a New York tie maker, Jesse Langsdorf, came up with a method of cutting the fabric on the bias and sewing it in three segments.
This technique improved elasticity and facilitated the fabric's return to its original shape. Since that time, most men have worn the "Langsdorf" tie. Yet another development during that time was the method used to secure the lining and interlining once the tie had been folded into shape. === 1945–1995 === After the First World War, hand-painted ties became an accepted form of decoration in the U.S. The widths of some of these ties went up to 4.5 inches (11 cm). These loud, flamboyant ties sold very well through the 1950s. Before the Second World War ties were typically worn shorter than they are today. This was due, in part, to men at that time more commonly wearing trousers with a higher rise (at the natural waist, just above the belly button) and waistcoats; i.e., ties could be shorter because trousers sat higher up and, at any rate, the tip of the tie was almost always concealed. Around 1944, ties started to become not only wider but even wilder. This was the beginning of what was later labeled the Bold Look: ties that reflected the returning GIs' desire to break with wartime uniformity. Widths reached 5 inches (13 cm), and designs included Art Deco, hunting scenes, scenic "photographs", tropical themes, and even girlie prints, though more traditional designs were also available. The typical length was 48 inches (120 cm). The Bold Look lasted until about 1951 when the "Mister T" look (so termed by Esquire magazine) was introduced. The new style, characterized by tapered suits, slimmer lapels, and smaller hat brims, included thinner and not so wild ties. Tie widths slimmed to 3 inches (7.6 cm) by 1953 and continued getting thinner up until the mid-1960s; length increased to about 52 inches (130 cm) as men started wearing their trousers lower, closer to the hips. Through the 1950s, neckties remained somewhat colorful, yet more restrained than in the previous decade. 
Small geometric shapes were often employed against a solid background (i.e., foulards); diagonal stripes were also popular. By the early 1960s, dark, solid ties became very common, with widths slimming down to as little as 1 inch (2.5 cm). The 1960s brought about an influx of pop art influenced designs. The first was designed by Michael Fish when he worked at Turnbull & Asser, and was introduced in Britain in 1965; the term Kipper tie was a pun on his name, as well as a reference to the triangular shape of the front of the tie. The exuberance of the styles of the late 1960s and early 1970s gradually gave way to more restrained designs. Ties became wider, returning to their 4+1⁄2-inch (11 cm) width, sometimes with garish colors and designs. The traditional designs of the 1930s and 1950s, such as those produced by Tootal, reappeared, particularly Paisley patterns. Ties began to be sold along with shirts, and designers slowly began to experiment with bolder colors. In the 1980s, narrower ties, some as narrow as 1+1⁄2 inches (3.8 cm) but more typically 3 to 3+1⁄4 inches (7.6 to 8.3 cm) wide, became popular again. Into the 1990s, as ties got wider again, increasingly unusual designs became common. Novelty (or joke) ties or deliberately kitschy ties designed to make a statement gained a certain popularity in the 1980s and 1990s. These included ties featuring cartoon characters, commercial products, or pop culture icons, and those made of unusual materials, such as plastic or wood. During this period, with men wearing their trousers at their hips, ties lengthened to 57 inches (140 cm). The number of ties sold in the United States reached a peak of 110 million in the early 1990s. === 1995–present === During this period, the use of neckties in the workplace underwent a gradual decline. By 2001, the number of ties sold per year in the US had declined to 60 million. 
At the start of the 21st century, ties widened to 3+1⁄2 to 3+3⁄4 inches (8.9 to 9.5 cm) wide, with a broad range of patterns available, from traditional stripes, foulards, and club ties (ties with a crest or design signifying a club, organization, or order) to abstract, themed, and humorous ones. The standard length remains 57 inches (140 cm), though other available lengths range from 117 cm (46 in) to 152 cm (60 in). While ties as wide as 3+3⁄4 inches (9.5 cm) are still available, ties under 3 inches (7.6 cm) wide also became popular, particularly with younger men and the fashion-conscious. == Types == === Modern cravat and ascot tie === The modern cravat is slightly different from the popular cravats during the Regency era. === Four-in-hand === The four-in-hand necktie (as distinct from the four-in-hand knot) was fashionable in Great Britain in the 1850s. Early neckties were simple, rectangular cloth strips cut on the square, with square ends. The term four-in-hand originally described a carriage with four horses and a driver; later, it also was the name of a London gentlemen's club, The Four-in-Hand Driving Company founded in 1856. Some etymologic reports are that carriage drivers knotted their reins with a four-in-hand knot (see below), whilst others claim the carriage drivers wore their scarves knotted 'four-in-hand', but, most likely, members of the club began wearing their neckties so knotted, thus making it fashionable. In the latter half of the 19th century, the four-in-hand knot and the four-in-hand necktie were synonymous. As fashion changed from stiff shirt collars to soft, turned-down collars, the four-in-hand necktie knot gained popularity; its sartorial dominance rendered the term four-in-hand redundant, and usage shortened to long tie and tie.
In 1926, Jesse Langsdorf from New York City introduced ties cut on the bias (US) or cross-grain (UK), allowing the tie to evenly fall from the knot without twisting; this also caused any woven pattern such as stripes to appear diagonally across the tie. Today, four-in-hand ties are part of men's dress clothing in both Western and non-Western societies, particularly for business. Four-in-hand ties are generally made from silk or polyester and occasionally with cotton. Another material used is wool, usually knitted, common before World War II but not as popular nowadays. More recently, microfiber ties have also appeared; in the 1950s and 1960s, other manmade fabrics, such as Dacron and rayon, were also used, but have fallen into disfavor. Modern ties appear in a wide variety of colors and patterns, notably striped (usually diagonally); club ties (with a small motif repeated regularly all over the tie); foulards (with small geometric shapes on a solid background); paisleys; and solids. Novelty ties featuring icons from popular culture (such as cartoons, actors, or holiday images), sometimes with flashing lights, have enjoyed some popularity since the 1980s. === Six- and seven-fold ties === A seven-fold tie is an unlined construction variant of the four-in-hand necktie which pre-existed the use of interlining. Its creation at the end of the 19th century is attributed to the Parisian shirtmaker Washington Tremlett for an American customer. A seven-fold tie is constructed completely out of silk. A six-fold tie is a modern alteration of the seven-fold tie. This construction method is more symmetrical than the true seven-fold. It has an interlining which gives it a little more weight and is self-tipped. === Skinny tie === A skinny tie is a necktie that is narrower than the standard tie and often all-black. Skinny ties have widths of around 2+1⁄2 inches (6.4 cm) at their widest, compared to usually 3–4 inches (7.6–10.2 cm) for regular ties. 
Skinny ties were first popularized in the late 1950s and early 1960s by British bands such as the Beatles and the Kinks, alongside the subculture that embraced such bands, the mods. This is because clothes of the time evolved to become more form-fitting and tailored. They were later repopularized in the late 1970s and early 1980s by new wave and power pop bands such as the Knack, Blondie and Duran Duran. === "Pre-tied" ties and development of clip-ons === The "pre-tied" necktie, or more commonly, the clip-on necktie, is a permanently knotted four-in-hand or bow tie affixed by a clip or hook. The clip-on tie sees use with children, and in occupations where a traditional necktie might pose a safety hazard to mechanical equipment operators, etc. (see § Health and safety hazards below). The perceived utility of this development in the history of the style is evidenced by the series of patents issued for various forms of these ties, beginning in the late 19th century, and by the businesses filing these applications and fulfilling a market need for them. For instance, a patent filed by Joseph W. Less of the One-In-Hand Tie Company of Clinton, Iowa for "Pre-tied neckties and methods for making the same" noted that: [M]any efforts [...] in the past to provide a satisfactory four-in-hand tie so [...] that the wearer [...] need not tie the knot [...] had numerous disadvantages and [...] limited commercial success. Usually, such ties have not accurately simulated the Windsor knot, and have often had a[n] [...] unconventional made-up appearance. Frequently, [...] [they were] difficult to attach and uncomfortable when worn [...] [and] unduly expensive [...] [offering] little advantage over the conventional. The inventor proceeded to claim for the invention—the latest version of the 1930s–1950s product line from former concert violinist Joseph Less, Iowan brothers Walter and Louis, and son-in-law W. 
Emmett Thiessen evolved to be identifiable as the modern clip-on—"a novel method for making up the tie [...] [eliminating] the neckband of the tie, which is useless and uncomfortable in warm weather [...] [and providing] means of attachment which is effective and provides no discomfort to the wearer", and in doing so achieves "accurate simulation of the Windsor knot, and extremely low material and labor costs". Notably, the company made use of ordinary ties purchased from the New York garment industry and was a significant employer of women in the pre-war and World War II years. == Knots == There are four main knots used to knot neckties. In rising order of difficulty, they are: the four-in-hand knot (perhaps the most common); the Pratt knot (also known as the Shelby knot); the half-Windsor knot; and the Windsor knot (also redundantly called the "full Windsor" or "double Windsor"). Although he did not invent it, the Windsor knot is named after the Duke of Windsor. The Duke did favor a voluminous knot; however, he achieved this by having neckties specially made of thicker cloths. In the late 1990s, two researchers, Thomas Fink and Yong Mao of Cambridge's Cavendish Laboratory, used mathematical modeling to discover that 85 knots are possible with a conventional tie (limiting the number of "moves" used to tie the knot to nine; longer sequences of moves result in too large a knot or leave the hanging ends of the tie too short). The models were published in academic journals, while the results and the 85 knots were published in layman's terms in a book entitled The 85 Ways to Tie a Tie. Of the 85 knots, Fink and Mao selected 13 as "aesthetic" knots, using the qualities of symmetry and balance. Based on these mathematical principles, the researchers came up with not only the four necktie knots in common use, but nine more, some of which had seen limited use, and some that are believed to have been codified for the first time.
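Fink and Mao's count of 85 can be reproduced with a short brute-force search. In their notation, a knot is a sequence of moves into the left (L), right (R), or centre (C) region, never repeating a region twice in a row, conventionally starting at L (so mirror-image knots are counted once), and ending with the three moves R-L-C or L-R-C so the wide blade can be tucked through the front loop (the T move). The sketch below is an illustrative reconstruction of that model, not Fink and Mao's own code:

```python
from itertools import product

def tie_knots(max_moves=9):
    """Enumerate tie knots in the Fink-Mao model: region sequences over
    {L, R, C} that start at L, never repeat a region twice in a row,
    and end in R-L-C or L-R-C (so the final through move T is possible)."""
    knots = []
    for h in range(3, max_moves + 1):
        for seq in product("LRC", repeat=h):
            if seq[0] != "L":
                continue  # fix the first move; mirror knots are equivalent
            if any(a == b for a, b in zip(seq, seq[1:])):
                continue  # the blade cannot visit the same region twice in a row
            if seq[-3:] in (("R", "L", "C"), ("L", "R", "C")):
                knots.append("".join(seq))
    return knots

knots = tie_knots()
print(len(knots))          # 85 knots with up to nine moves
print(knots[0], knots[1])  # LRC (oriental knot), LRLC (four-in-hand)
```

Counting by length reproduces Fink and Mao's breakdown 1, 1, 3, 5, 11, 21, 43 for three through nine moves, summing to 85.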
Other types of knots include: Small knot (also "oriental knot", "Kent knot"): the smallest possible necktie knot. It forms an equilateral triangle, like the half-Windsor, but much more compact (Fink–Mao notation: Lo Ri Co T, Knot 1). It is also the smallest knot to begin inside-out. Nicky knot: an alternative version of the Pratt knot, but better-balanced and self-releasing (Lo Ci Ro Li Co T, Knot 4). Supposedly named for Nikita Khrushchev, it tends to be equally referred to as the Pratt knot in men's style literature. This is the version of the Pratt knot favored by Fink and Mao. Atlantic knot: a reversed Pratt knot, highlighting the structure of the knot normally hidden on the back. For the wide blade to remain in front and right-side-out, the knot must begin right-side-out, and the thin end must be wrapped around the wide end. (Ri Co Ri Lo Ci T; not cataloged by Fink and Mao, but would be numbered 5r according to their classification.) Prince Albert knot (also "double knot", "cross Victoria knot"): A variant of the four-in-hand with an extra pass of the wide blade around the front, before passing the wide blade through both of the resultant loops (Li Ro Li Ro Li Co T T, Knot 62). A version knotted through only the outermost loop is known as the Victoria knot (Li Ro Li Ro Li Co T, Knot 6). Christensen knot (also "cross knot"): An elongated, symmetrical knot, whose main feature is the cruciform structure made by knotting the necktie through the double loop made in the front (Li Ro Ci Lo Ri Lo Ri Co T T, Knot 252). While it can be made with modern neckties, it is most effective with thinner ties of consistent width, which fell out of common use after the 19th century. Ediety knot (also "Merovingian knot"): a doubled Atlantic knot, best known as the tie knot worn by the character "the Merovingian" in the 2003 film The Matrix Reloaded. 
This tie can be knotted with the thin end over the wide end, as with the Atlantic knot, or with the wide end over the thin end to mimic the look seen in the film, with the narrow blade in front. (Ri Co Ri Lo Ci Ri Co Ri Lo Ci T – not cataloged by Fink and Mao, as its 10 moves exceed their parameters.) Trinity knot: This knot was first created by Christopher Johnson in Watertown, WI in 2004. He was inspired by the 2003 film The Matrix Reloaded. It is relatively easy to tie in spite of its complex look. It works best with a tie that has no taper or flare on the narrow blade. (Tying the thin end over the larger end, it can be described as Li Co Li Ro Ci Lo Ri Co T Li Ro T, with the final through move being like a Ci move. Due to having 11 moves and two through moves, it was not listed by Fink and Mao.) Herringbone knot (also "Eldredge knot"): This knot is tied in almost the same process as the Trinity knot, but tends to create more volume to the sides, and is thus most suited to spread or cutaway collars. Grantchester knot: A self-releasing, asymmetric knot. == Ties as a sign of membership and other patterns == === Club ties === Club ties are patterned ties, often featuring heraldic patterns, representing institutions that are most often academic, such as universities and colleges. Club ties rarely feature striped patterns, and always feature a repeating shield, logo, or pattern of some kind. === Regimental ties === In Britain and other Commonwealth countries, regimental ties have been used to denote association with a particular military regiment, corps, or service. It is considered inappropriate for persons who are unaffiliated with a regiment, university, school, or other organization to wear a necktie affiliated with that organization. In Commonwealth countries, necktie stripes commonly run from the left shoulder down to the right side, following the expression "from heart to sword."
In the James Bond franchise, the titular character wears the regimental tie of the Royal Navy, and other characters are seen wearing ties from other regiments and military organizations. Members of the British Royal Family are frequently seen wearing regimental striped ties corresponding to the military unit in which they have served or been appointed to an honorary position such as colonel-in-chief. The traditional method of styling regimental ties remains; however, not all British regiments use the regimental pattern in the modern era. Some regiments use the club tie pattern, and some use the repp tie pattern. === Repp ties === Prince Albert Edward was the first sitting member of the British Royal Family to visit the Americas, including trips to Canada and the United States. His visit to the United States began a phenomenon of replication in the Western Hemisphere, but either out of deference to the British regiments, or because the method of replication meant that the ties had to be produced in a mirror image, the American stripe tie was produced in the reverse of the regimental tie: from the right shoulder to the left hip. When Brooks Brothers introduced similar striped ties in the United States, around the beginning of the 20th century, they had their stripes run from the right shoulder to the left side, in part to distinguish them from British regimental striped neckties. In the United States, diagonally striped ties are commonly worn with no connotation of group membership. Typically, American striped ties have the stripes running downward from the wearer's right (the opposite of the European style). (However, when Americans wear striped ties as a sign of membership, the European stripe style may be used.) In some cases, American "repp stripe" ties may simply be reverse images of British regimental ties. Striped ties are strongly associated with the Ivy League and preppy style of dress.
=== School ties === School ties are most often found in Club pattern or Regimental patterns. The academic variant of a striped necktie is known as the Collegiate stripe. The use of coloured and patterned neckties indicating the wearer's membership in a club, military regiment, school, professional association (Royal Colleges, Inns of Courts) or other institution, dates only from late-19th century England. The immediate forerunners of today's college neckties were in 1880 the oarsmen of Exeter College, Oxford, who tied the bands of their straw hats around their necks. In the United Kingdom and many Commonwealth countries, neckties are commonly an essential component of a school uniform and are either worn daily, seasonally or on special occasions with the school blazer. In Hong Kong, Australia and New Zealand, neckties are worn as the everyday uniform, usually as part of the winter uniform. In countries with no winter such as Sri Lanka, Singapore, Malaysia, and many African countries, the necktie is usually worn as part of the formal uniform on special occasions or functions. Neckties may also denote membership in a house or a leadership role (i.e. school prefect, house captain, etc.). The most common pattern for such ties in the UK and most of Europe consists of diagonal stripes of alternating colors running down the tie from the wearer's left. Since neckties are cut on the bias (diagonally), the stripes on the cloth are parallel or perpendicular to the selvage, not diagonal. The colors themselves may be particularly significant. The dark blue and red regimental tie of the Household Division is said to represent the blue blood (i.e. nobility) of the Royal Family, and the red blood of the Guards.[citation needed] An alternative membership tie pattern to diagonal stripes is either a single emblem or a crest centered and placed where a tie pin normally would be, or a repeated pattern of such motifs. 
Sometimes, both types are used by an organization, either simply to offer a choice or to indicate a distinction among levels of membership. Occasionally, a hybrid design is used, in which alternating stripes of color are overlaid with repeated motif patterns. === Tartan === Tartan neckties are often found as variations on the theme of clan tartans in the Scottish Register of Tartans. Tartan (Scottish Gaelic: breacan [ˈpɾʲɛxkən]) is a patterned cloth consisting of crossing horizontal and vertical bands in multiple colours, forming repeating symmetrical patterns known as setts. Originating in woven wool, tartan is most strongly associated with Scotland, where it has been used for centuries in traditional clothing such as the kilt. Historically, specific tartans were linked to Scottish clans, families, or regions, with patterns and colours derived from local dyes. Tartan became a symbol of Scottish identity, especially from the 16th century onward, despite bans following the Jacobite rising of 1745 under the Dress Act 1746. The 19th-century Highland Revival popularized tartan globally, associating it with Highland dress and the Scottish diaspora. Today, tartan is used worldwide in clothing, accessories, and design, transcending its traditional roots. Modern tartans are registered for organisations, individuals, and commemorative purposes, with thousands of designs in the Scottish Register of Tartans. While often linked to Scottish heritage, tartans exist in other cultures, such as Africa, East and South Asia, and Eastern Europe. They also serve institutional roles, like military uniforms and corporate branding. Tartan patterns vary in complexity, from simple two-colour designs to intricate motifs with over twenty hues. Colours historically derived from natural dyes, such as lichens and alder bark, are now produced synthetically. 
=== Paisley === Paisley is an ornamental textile design using the boteh (Persian: بته) or buta, a teardrop-shaped motif with a curved upper end. Of Persian origin, paisley designs became popular in the West in the 18th and 19th centuries, following imports of post-Mughal Empire versions of the design from India, especially in the form of Kashmir shawls, and were then replicated locally. The English language name for the patterns comes from the town of Paisley, in the west of Scotland, a centre for textiles where paisley designs were reproduced using jacquard looms. The pattern is still commonly seen in Britain and other English-speaking countries on neckties, waistcoats, and scarves, and remains popular in other items of clothing and textiles in Iran and South and Central Asian countries. Some design scholars believe the buta is the convergence of a stylized floral spray and a cypress tree: a Zoroastrian symbol of life and eternity. The "bent" cedar is a sign of strength and resistance, but also of modesty. The floral motif originated in the Sassanid dynasty, was used later in the Safavid dynasty of Persia (1501–1736), and was a major textile pattern in Iran during the Qajar and Pahlavi dynasties. In these periods, the pattern was used to decorate royal regalia, crowns, and court garments, as well as textiles used by the general population. Persian and Central Asian designs usually range the motifs in orderly rows, with a plain background. == Use by women and girls == During the women's suffrage movement of the late 1800s and the later women's liberation movement, neckties were widely adopted into women's fashion. Coco Chanel is often credited with advancing the acceptable wear of neckties by women in the 1930s, but a large movement occurred during World War II, when women started working in factories and offices in large numbers.
Neckties are sometimes part of uniforms worn by women, which nowadays might be required in professions such as in the restaurant industry or in police forces. In many countries, girls are nowadays required to wear ties as part of primary and secondary school uniforms. Ties may also be used by women as a fashion statement. During the late 1970s and 1980s, it was not uncommon for young women in the United States to wear ties as part of a casual outfit. This trend was popularized by Diane Keaton, who wore a tie as the titular character in the 1977 film Annie Hall. In 1993, neckties reappeared as prominent fashion accessories for women in both Europe and the U.S. Canadian recording artist Avril Lavigne wore neckties with tank tops early in her career. == Occasions for neckties == Traditionally, ties are a staple of office attire, especially for professionals. Proponents of the tie's place in the office assert that ties neatly demarcate work and leisure time. The theory is that the physical presence of something around your neck serves as a reminder to knuckle down and focus on the job at hand. Conversely, loosening the tie after work signals that one can relax. Outside of these environments, ties are usually worn especially when attending traditionally formal or professional events, including weddings, important religious ceremonies, funerals, job interviews, court appearances, and fine dining. == Opposition to neckties == === Christian denominations teaching plain dress === Among many Christian denominations teaching the doctrine of plain dress, long neckties are not worn by men; this includes many Anabaptist communities (such as the Conservative Mennonite churches), traditional Quakers (who view neckties as contravening their testimony of simplicity), and some holiness denominations. While Reformed Mennonites, among some other Anabaptist communities, reject the long necktie, the wearing of the bow tie is customary. 
=== Anti-necktie sentiment === In the early 20th century, the number of office workers began increasing. Many such men and women were required to wear neckties, because doing so was perceived as improving work attitudes, morale, and sales. Removing the necktie as a social and sartorial business requirement (and sometimes forbidding it) is a modern trend often attributed to the rise of popular culture. Although it was common as everyday wear as late as 1966, over the years 1967–69 the necktie fell out of fashion almost everywhere, except where required. There was a resurgence in the 1980s, but in the 1990s ties again fell out of favor, with many technology-based companies having casual dress requirements, including Apple, Amazon, eBay, Genentech, Microsoft, Monsanto, and Google. In western business culture, a phenomenon known as Casual Friday has arisen, in which employees are not required to wear ties on Fridays, and then—increasingly—on other, announced, special days. Some businesses have extended casual dress days to Thursday, and even Wednesday; others require neckties only on Monday (to start the workweek). At the furniture company IKEA, neckties are not allowed. An example of anti-necktie sentiment is found in Iran, where the government of the Islamic Republic considers neckties decadent and un-Islamic, viewing them as "symbols of the Cross" and of the oppressive West. Most Iranian men in Iran have retained the Western-style long-sleeved collared shirt and three-piece suit, while excluding the necktie. While ties are viewed as "highly politicised clothing" in Iran, some Iranian men continue to wear them, as do many Westerners who visit the country. Neckties are viewed by various sub- and counter-culture movements as being a symbol of submission and slavery (i.e., having a symbolic chain around one's neck) to the corrupt elite of society, as a "wage slave".
For 60 years, designers and manufacturers of neckties in the United States were members of the Men's Dress Furnishings Association but the trade group shut down in 2008 as a result of declining membership due to the declining numbers of men wearing neckties. In 1998 Dutch royal consort Prince Claus removed his tie at a public event, calling on the "tie-wearers of all countries" to unite and cast off the oppression of the tie. The incident gained a lot of media attention. In 2019, US presidential candidate Andrew Yang drew attention when he appeared on televised presidential debates without a tie. Yang dismissed media questions about it, saying that voters should be focused on more important issues. New Zealand Member of Parliament Rawiri Waititi has been vocal in his opposition to neckties, calling them a "colonial noose". In February 2021, he was ejected from Parliament for refusing to wear a tie, drawing attention and parliamentary debate, which ultimately resulted in the requirement being dropped from NZ parliament's appropriate business attire requirements for males. Richard Branson, founder of Virgin Group, believes ties are a symbol of oppression and slavery. == Health and safety hazards == Necktie wearing presents some risks for entanglement, infection, and vasoconstriction. A 2018 study published in the medical journal Neuroradiology found that a Windsor knot tightened to the point of "slight discomfort" could interrupt as much as 7.5% of cerebral blood flow. A 2013 study published in the British Journal of Ophthalmology found increased intraocular pressure in such cases, which can aggravate the condition of people with weakened retinas. There may be additional risks for people with glaucoma. Entanglement is a risk when working with machinery or in dangerous, possibly violent, jobs such as police officers and prison guards, and certain medical fields. 
Paramedics performing life support remove an injured man's necktie as a first step to ensure it does not block his airway. Neckties might also be a health risk for persons other than the wearer. They are believed to be vectors of disease transmission in hospitals. Notwithstanding such fears, many doctors and dentists wear neckties for a professional image. Hospitals take seriously the cross-infection of patients by doctors wearing infected neckties, because neckties are less frequently cleaned than most other clothes. On September 17, 2007, British hospitals published rules banning neckties. In such a context, some instead prefer to use bow ties due to their short length and relative lack of hindrance. Police officers, traffic wardens, and security guards in the UK wear clip-on ties which instantly unclip when pulled to prevent any risk of strangulation during a confrontation. They are part of the National Framework Contract for the police uniform. == See also == History of Western fashion Tie chain Tie clip Tie press Scarf Neckerchief Bolo tie == References == == Further reading == Chaille, François (1994). La grande histoire de la cravate. Paris: Flammarion. ISBN 2-08-201851-2. Dyer, Rod; Spark, Ron (1987). Fit to be Tied: Vintage ties of the Forties and Early Fifties. photography by Steve Sakai (1st ed.). New York: Abbeville Press. ISBN 0-89659-756-3. Keers, Paul (1987). A Gentleman's Wardrobe: Classic Clothes and the Modern Man. London: Weidenfeld & Nicolson. ISBN 978-0-297-79191-1. == External links ==
Wikipedia:Representation of a Lie superalgebra#0
In the mathematical field of representation theory, a representation of a Lie superalgebra is an action of a Lie superalgebra L on a Z2-graded vector space V, such that if A and B are any two pure elements of L and X and Y are any two pure elements of V, then

{\displaystyle (c_{1}A+c_{2}B)\cdot X=c_{1}A\cdot X+c_{2}B\cdot X}
{\displaystyle A\cdot (c_{1}X+c_{2}Y)=c_{1}A\cdot X+c_{2}A\cdot Y}
{\displaystyle (-1)^{A\cdot X}=(-1)^{A}(-1)^{X}}
{\displaystyle [A,B]\cdot X=A\cdot (B\cdot X)-(-1)^{AB}B\cdot (A\cdot X).}

Equivalently, a representation of L is a Z2-graded representation of the universal enveloping algebra of L which respects the third equation above.

== Unitary representation of a star Lie superalgebra ==

A * Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map * such that * respects the grading and [a,b]* = [b*,a*]. A unitary representation of such a Lie superalgebra is a Z2-graded Hilbert space which is a representation of the Lie superalgebra as above, together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations. This is a major concept in the study of supersymmetry, together with the representation of a Lie superalgebra on an algebra. Say A is a *-algebra representation of the Lie superalgebra (together with the additional requirement that * respects the grading and L[a]* = −(−1)^{La} L*[a*]), H is the unitary representation, and H is also a unitary representation of A. These three representations are all compatible if, for pure elements a in A, |ψ> in H and L in the Lie superalgebra, L[a|ψ>] = (L[a])|ψ> + (−1)^{La} a(L[|ψ>]). Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A.
In that case, the equation above reduces to L[a] = La − (−1)^{La} aL. This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers. == See also == Graded vector space Lie algebra representation Representation theory of Hopf algebras
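As an illustrative sketch (not taken from the article), the defining bracket axiom can be checked numerically for the fundamental representation of gl(1|1), where pure even elements are block-diagonal 2×2 matrices, pure odd elements are block-off-diagonal, the bracket is the supercommutator, and the action on the graded space C^{1|1} is ordinary matrix multiplication:

```python
import numpy as np

def parity(M):
    """Parity of a pure 2x2 matrix in gl(1|1):
    0 if block-diagonal (even), 1 if block-off-diagonal (odd)."""
    if M[0, 1] == 0 and M[1, 0] == 0:
        return 0
    if M[0, 0] == 0 and M[1, 1] == 0:
        return 1
    raise ValueError("not a pure element")

def superbracket(A, B):
    """Supercommutator [A,B] = AB - (-1)^{|A||B|} BA for pure elements."""
    sign = (-1) ** (parity(A) * parity(B))
    return A @ B - sign * B @ A

# Pure elements of gl(1|1): one even, two odd.
E = np.array([[1.0, 0.0], [0.0, -1.0]])  # even
F = np.array([[0.0, 1.0], [0.0, 0.0]])   # odd
G = np.array([[0.0, 0.0], [1.0, 0.0]])   # odd

X = np.array([2.0, 3.0])  # an arbitrary vector in C^{1|1}

# Defining axiom: [A,B].X = A.(B.X) - (-1)^{AB} B.(A.X)
for A, B in [(E, F), (F, G), (G, E), (F, F)]:
    sign = (-1) ** (parity(A) * parity(B))
    lhs = superbracket(A, B) @ X
    rhs = A @ (B @ X) - sign * B @ (A @ X)
    assert np.allclose(lhs, rhs)
print("bracket axiom holds for the fundamental representation")
```

For the fundamental representation the axiom holds by construction, since the bracket is defined as the supercommutator of the acting matrices; the same check applies to any matrix realization of a Lie superalgebra.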
Wikipedia:Resolvent cubic#0
In algebra, a resolvent cubic is one of several distinct, although related, cubic polynomials defined from a monic polynomial of degree four:

{\displaystyle P(x)=x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}.}

In each case: The coefficients of the resolvent cubic can be obtained from the coefficients of P(x) using only sums, subtractions and multiplications. Knowing the roots of the resolvent cubic of P(x) is useful for finding the roots of P(x) itself. Hence the name "resolvent cubic". The polynomial P(x) has a multiple root if and only if its resolvent cubic has a multiple root.

== Definitions ==

Suppose that the coefficients of P(x) belong to a field k whose characteristic is different from 2. In other words, we are working in a field in which 1 + 1 ≠ 0. Whenever roots of P(x) are mentioned, they belong to some extension K of k such that P(x) factors into linear factors in K[x]. If k is the field Q of rational numbers, then K can be the field C of complex numbers or the field Q̄ of algebraic numbers. In some cases, the concept of resolvent cubic is defined only when P(x) is a quartic in depressed form—that is, when a3 = 0. Note that the fourth and fifth definitions below also make sense and that the relationship between these resolvent cubics and P(x) is still valid if the characteristic of k is equal to 2.

=== First definition ===

Suppose that P(x) is a depressed quartic—that is, that a3 = 0. A possible definition of the resolvent cubic of P(x) is:

{\displaystyle R_{1}(y)=8y^{3}+8a_{2}y^{2}+(2{a_{2}}^{2}-8a_{0})y-{a_{1}}^{2}.}

The origin of this definition lies in applying Ferrari's method to find the roots of P(x). To be more precise:

{\displaystyle {\begin{aligned}P(x)=0&\Longleftrightarrow x^{4}+a_{2}x^{2}=-a_{1}x-a_{0}\\&\Longleftrightarrow \left(x^{2}+{\frac {a_{2}}{2}}\right)^{2}=-a_{1}x-a_{0}+{\frac {{a_{2}}^{2}}{4}}.\end{aligned}}}

Add a new unknown, y, to x^2 + a2/2. Now you have:

{\displaystyle {\begin{aligned}\left(x^{2}+{\frac {a_{2}}{2}}+y\right)^{2}&=-a_{1}x-a_{0}+{\frac {{a_{2}}^{2}}{4}}+2x^{2}y+a_{2}y+y^{2}\\&=2yx^{2}-a_{1}x-a_{0}+{\frac {{a_{2}}^{2}}{4}}+a_{2}y+y^{2}.\end{aligned}}}

If this expression is a square, it can only be the square of

{\displaystyle {\sqrt {2y}}\,x-{\frac {a_{1}}{2{\sqrt {2y}}}}.}

But the equality

{\displaystyle \left({\sqrt {2y}}\,x-{\frac {a_{1}}{2{\sqrt {2y}}}}\right)^{2}=2yx^{2}-a_{1}x-a_{0}+{\frac {{a_{2}}^{2}}{4}}+a_{2}y+y^{2}}

is equivalent to

{\displaystyle {\frac {{a_{1}}^{2}}{8y}}=-a_{0}+{\frac {{a_{2}}^{2}}{4}}+a_{2}y+y^{2}{\text{,}}}

and this is the same thing as the assertion that R1(y) = 0. If y0 is a root of R1(y), then it is a consequence of the computations made above that the roots of P(x) are the roots of the polynomial

{\displaystyle x^{2}-{\sqrt {2y_{0}}}\,x+{\frac {a_{2}}{2}}+y_{0}+{\frac {a_{1}}{2{\sqrt {2y_{0}}}}}}

together with the roots of the polynomial

{\displaystyle x^{2}+{\sqrt {2y_{0}}}\,x+{\frac {a_{2}}{2}}+y_{0}-{\frac {a_{1}}{2{\sqrt {2y_{0}}}}}.}

Of course, this makes no sense if y0 = 0, but since the constant term of R1(y) is −a1^2, 0 is a root of R1(y) if and only if a1 = 0, and in this case the roots of P(x) can be found using the quadratic formula.
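As a numerical illustration of the first definition (the example coefficients below are chosen arbitrarily), one can pick a depressed quartic, take any nonzero real root y0 of R1, and confirm that the two quadratics above together reproduce the roots of P(x):

```python
import numpy as np

# Example depressed quartic P(x) = x^4 + a2 x^2 + a1 x + a0
a2, a1, a0 = -5.0, 2.0, 3.0
quartic = [1.0, 0.0, a2, a1, a0]

# Resolvent cubic R1(y) = 8 y^3 + 8 a2 y^2 + (2 a2^2 - 8 a0) y - a1^2
r1 = [8.0, 8.0 * a2, 2.0 * a2**2 - 8.0 * a0, -(a1**2)]
y0 = next(y.real for y in np.roots(r1)
          if abs(y.imag) < 1e-9 and abs(y) > 1e-9)

s = np.sqrt(2.0 * y0 + 0j)  # sqrt(2 y0), complex-safe if y0 < 0
q1 = [1.0, -s, a2 / 2.0 + y0 + a1 / (2.0 * s)]  # x^2 - sqrt(2 y0) x + ...
q2 = [1.0, +s, a2 / 2.0 + y0 - a1 / (2.0 * s)]  # x^2 + sqrt(2 y0) x + ...

# The roots of the two quadratics are exactly the roots of P(x).
roots = np.concatenate([np.roots(q1), np.roots(q2)])
assert max(abs(np.polyval(quartic, r)) for r in roots) < 1e-8
print("roots of P recovered via R1:", np.sort(roots.real))
```

Here `np.roots` is used as a stand-in root finder; any method of solving the cubic and the two quadratics would do.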
=== Second definition ===

Another possible definition (still supposing that P(x) is a depressed quartic) is

{\displaystyle R_{2}(y)=8y^{3}-4a_{2}y^{2}-8a_{0}y+4a_{2}a_{0}-{a_{1}}^{2}.}

The origin of this definition is similar to the previous one. This time, we start by doing:

{\displaystyle {\begin{aligned}P(x)=0&\Longleftrightarrow x^{4}=-a_{2}x^{2}-a_{1}x-a_{0}\\&\Longleftrightarrow (x^{2}+y)^{2}=-a_{2}x^{2}-a_{1}x-a_{0}+2yx^{2}+y^{2}\end{aligned}}}

and a computation similar to the previous one shows that this last expression is a square if and only if

{\displaystyle 8y^{3}-4a_{2}y^{2}-8a_{0}y+4a_{2}a_{0}-{a_{1}}^{2}=0{\text{.}}}

A simple computation shows that

{\displaystyle R_{2}\left(y+{\frac {a_{2}}{2}}\right)=R_{1}(y).}

=== Third definition ===

Another possible definition (again, supposing that P(x) is a depressed quartic) is

{\displaystyle R_{3}(y)=y^{3}+2a_{2}y^{2}+({a_{2}}^{2}-4a_{0})y-{a_{1}}^{2}{\text{.}}}

The origin of this definition lies in another method of solving quartic equations, namely Descartes' method. If you try to find the roots of P(x) by expressing it as a product of two monic quadratic polynomials x^2 + αx + β and x^2 − αx + γ, then

{\displaystyle P(x)=(x^{2}+\alpha x+\beta )(x^{2}-\alpha x+\gamma )\Longleftrightarrow \left\{{\begin{array}{l}\beta +\gamma -\alpha ^{2}=a_{2}\\\alpha (-\beta +\gamma )=a_{1}\\\beta \gamma =a_{0}.\end{array}}\right.}

If there is a solution of this system with α ≠ 0 (note that if a1 ≠ 0, then this is automatically true for any solution), the previous system is equivalent to

{\displaystyle \left\{{\begin{array}{l}\beta +\gamma =a_{2}+\alpha ^{2}\\-\beta +\gamma ={\frac {a_{1}}{\alpha }}\\\beta \gamma =a_{0}.\end{array}}\right.}

It is a consequence of the first two equations that then

{\displaystyle \beta ={\frac {1}{2}}\left(a_{2}+\alpha ^{2}-{\frac {a_{1}}{\alpha }}\right)}

and

{\displaystyle \gamma ={\frac {1}{2}}\left(a_{2}+\alpha ^{2}+{\frac {a_{1}}{\alpha }}\right).}

After replacing, in the third equation, β and γ by these values, one gets that

{\displaystyle \left(a_{2}+\alpha ^{2}\right)^{2}-{\frac {{a_{1}}^{2}}{\alpha ^{2}}}=4a_{0}{\text{,}}}

and this is equivalent to the assertion that α^2 is a root of R3(y). So, again, knowing the roots of R3(y) helps to determine the roots of P(x). Note that

{\displaystyle R_{3}(y)=R_{1}\left({\frac {y}{2}}\right){\text{.}}}

=== Fourth definition ===

Still another possible definition is
{\displaystyle R_{4}(y)=y^{3}-a_{2}y^{2}+(a_{1}a_{3}-4a_{0})y+4a_{0}a_{2}-{a_{1}}^{2}-a_{0}{a_{3}}^{2}.}

In fact, if the roots of P(x) are α1, α2, α3, and α4, then

{\displaystyle R_{4}(y)={\bigl (}y-(\alpha _{1}\alpha _{2}+\alpha _{3}\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}\alpha _{3}+\alpha _{2}\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}\alpha _{4}+\alpha _{2}\alpha _{3}){\bigr )}{\text{,}}}

a fact that follows from Vieta's formulas. In other words, R4(y) is the monic polynomial whose roots are α1α2 + α3α4, α1α3 + α2α4, and α1α4 + α2α3. It is easy to see that

{\displaystyle \alpha _{1}\alpha _{2}+\alpha _{3}\alpha _{4}-(\alpha _{1}\alpha _{3}+\alpha _{2}\alpha _{4})=(\alpha _{1}-\alpha _{4})(\alpha _{2}-\alpha _{3}){\text{,}}}
{\displaystyle \alpha _{1}\alpha _{3}+\alpha _{2}\alpha _{4}-(\alpha _{1}\alpha _{4}+\alpha _{2}\alpha _{3})=(\alpha _{1}-\alpha _{2})(\alpha _{3}-\alpha _{4}){\text{,}}}
{\displaystyle \alpha _{1}\alpha _{2}+\alpha _{3}\alpha _{4}-(\alpha _{1}\alpha _{4}+\alpha _{2}\alpha _{3})=(\alpha _{1}-\alpha _{3})(\alpha _{2}-\alpha _{4}){\text{.}}}

Therefore, P(x) has a multiple root if and only if R4(y) has a multiple root. More precisely, P(x) and R4(y) have the same discriminant. One should note that if P(x) is a depressed polynomial, then

{\displaystyle {\begin{aligned}R_{4}(y)&=y^{3}-a_{2}y^{2}-4a_{0}y+4a_{0}a_{2}-{a_{1}}^{2}\\&=R_{2}\left({\frac {y}{2}}\right){\text{.}}\end{aligned}}}

=== Fifth definition ===

Yet another definition is

{\displaystyle R_{5}(y)=y^{3}-2a_{2}y^{2}+({a_{2}}^{2}+a_{3}a_{1}-4a_{0})y+{a_{1}}^{2}-a_{3}a_{2}a_{1}+{a_{3}}^{2}a_{0}{\text{.}}}

If, as above, the roots of P(x) are α1, α2, α3, and α4, then

{\displaystyle R_{5}(y)={\bigl (}y-(\alpha _{1}+\alpha _{2})(\alpha _{3}+\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}+\alpha _{3})(\alpha _{2}+\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}+\alpha _{4})(\alpha _{2}+\alpha _{3}){\bigr )}{\text{,}}}

again as a consequence of Vieta's formulas. In other words, R5(y) is the monic polynomial whose roots are (α1 + α2)(α3 + α4), (α1 + α3)(α2 + α4), and (α1 + α4)(α2 + α3). It is easy to see that

{\displaystyle (\alpha _{1}+\alpha _{2})(\alpha _{3}+\alpha _{4})-(\alpha _{1}+\alpha _{3})(\alpha _{2}+\alpha _{4})=-(\alpha _{1}-\alpha _{4})(\alpha _{2}-\alpha _{3}){\text{,}}}
{\displaystyle (\alpha _{1}+\alpha _{2})(\alpha _{3}+\alpha _{4})-(\alpha _{1}+\alpha _{4})(\alpha _{2}+\alpha _{3})=-(\alpha _{1}-\alpha _{3})(\alpha _{2}-\alpha _{4}){\text{,}}}
{\displaystyle (\alpha _{1}+\alpha _{3})(\alpha _{2}+\alpha _{4})-(\alpha _{1}+\alpha _{4})(\alpha _{2}+\alpha _{3})=-(\alpha _{1}-\alpha _{2})(\alpha _{3}-\alpha _{4}){\text{.}}}

Therefore, as happens with R4(y), P(x) has a multiple root if and only if R5(y) has a multiple root. More precisely, P(x) and R5(y) have the same discriminant. This is also a consequence of the fact that R5(y + a2) = −R4(−y). Note that if P(x) is a depressed polynomial, then

{\displaystyle {\begin{aligned}R_{5}(y)&=y^{3}-2a_{2}y^{2}+({a_{2}}^{2}-4a_{0})y+{a_{1}}^{2}\\&=-R_{3}(-y)\\&=-R_{1}\left(-{\frac {y}{2}}\right){\text{.}}\end{aligned}}}

== Applications ==

=== Solving quartic equations ===

It was explained above how R1(y), R2(y), and R3(y) can be used to find the roots of P(x) if this polynomial is depressed. In the general case, one simply has to find the roots of the depressed polynomial P(x − a3/4). For each root x0 of this polynomial, x0 − a3/4 is a root of P(x).

=== Factoring quartic polynomials ===

If a quartic polynomial P(x) is reducible in k[x], then it is the product of two quadratic polynomials or the product of a linear polynomial and a cubic polynomial. This second possibility occurs if and only if P(x) has a root in k. In order to determine whether or not P(x) can be expressed as the product of two quadratic polynomials, let us assume, for simplicity, that P(x) is a depressed polynomial. Then it was seen above that if the resolvent cubic R3(y) has a non-null root of the form α^2, for some α ∈ k, then such a decomposition exists. This can be used to prove that, in R[x], every quartic polynomial without real roots can be expressed as the product of two quadratic polynomials. Let P(x) be such a polynomial. We can assume without loss of generality that P(x) is monic. We can also assume without loss of generality that it is a reduced polynomial, because P(x) can be expressed as the product of two quadratic polynomials if and only if P(x − a3/4) can, and this polynomial is a reduced one. Then R3(y) = y^3 + 2a2y^2 + (a2^2 − 4a0)y − a1^2. There are two cases: If a1 ≠ 0, then R3(0) = −a1^2 < 0. Since R3(y) > 0 if y is large enough, then, by the intermediate value theorem, R3(y) has a root y0 with y0 > 0. So, we can take α = √y0. If a1 = 0, then R3(y) = y^3 + 2a2y^2 + (a2^2 − 4a0)y. The roots of this polynomial are 0 and the roots of the quadratic polynomial y^2 + 2a2y + a2^2 − 4a0.
If a22 − 4a0 < 0, then the product of the two roots of this polynomial is smaller than 0 and therefore it has a root greater than 0 (which happens to be −a2 + 2√a0) and we can take α as the square root of that root. Otherwise, a22 − 4a0 ≥ 0 and then, P ( x ) = ( x 2 + a 2 + a 2 2 − 4 a 0 2 ) ( x 2 + a 2 − a 2 2 − 4 a 0 2 ) . {\displaystyle P(x)=\left(x^{2}+{\frac {a_{2}+{\sqrt {{a_{2}}^{2}-4a_{0}}}}{2}}\right)\left(x^{2}+{\frac {a_{2}-{\sqrt {{a_{2}}^{2}-4a_{0}}}}{2}}\right){\text{.}}} More generally, if k is a real closed field, then every quartic polynomial without roots in k can be expressed as the product of two quadratic polynomials in k[x]. Indeed, this statement can be expressed in first-order logic and any such statement that holds for R also holds for any real closed field. A similar approach can be used to get an algorithm to determine whether or not a quartic polynomial P(x) ∈ Q[x] is reducible and, if it is, how to express it as a product of polynomials of smaller degree. Again, we will suppose that P(x) is monic and depressed. Then P(x) is reducible if and only if at least one of the following conditions holds: The polynomial P(x) has a rational root (this can be determined using the rational root theorem). The resolvent cubic R3(y) has a root of the form α2, for some non-null rational number α (again, this can be determined using the rational root theorem). The number a22 − 4a0 is the square of a rational number and a1 = 0. Indeed: If P(x) has a rational root r, then P(x) is the product of x − r by a cubic polynomial in Q[x], which can be determined by polynomial long division or by Ruffini's rule. If there is a rational number α ≠ 0 such that α2 is a root of R3(y), it was shown above how to express P(x) as the product of two quadratic polynomials in Q[x]. Finally, if the third condition holds and if δ ∈ Q is such that δ2 = a22 − 4a0, then P(x) = (x2 + (a2 + δ)/2)(x2 + (a2 − δ)/2). 
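The reducibility test just described can be carried out concretely. The following Python sketch (a minimal check with exact rational arithmetic, not a general algorithm) factors the depressed quartic P(x) = x⁴ + 4 over Q by exhibiting a non-null root of the form α² of its resolvent cubic R3(y) and then recovering β and γ from the linear system of the third definition:

```python
from fractions import Fraction as F

# Depressed quartic P(x) = x^4 + a2*x^2 + a1*x + a0; here P(x) = x^4 + 4.
a2, a1, a0 = F(0), F(0), F(4)

# Resolvent cubic for a depressed quartic:
# R3(y) = y^3 + 2*a2*y^2 + (a2^2 - 4*a0)*y - a1^2.
def R3(y):
    return y**3 + 2*a2*y**2 + (a2**2 - 4*a0)*y - a1**2

# alpha = 2 gives the non-null root alpha^2 = 4 of R3.
alpha = F(2)
assert R3(alpha**2) == 0

# beta and gamma from the system in the third definition.
beta = (a2 + alpha**2 - a1 / alpha) / 2
gamma = (a2 + alpha**2 + a1 / alpha) / 2

# Expand (x^2 + alpha*x + beta)(x^2 - alpha*x + gamma) and compare with P:
# x^4 + (beta + gamma - alpha^2) x^2 + alpha*(gamma - beta) x + beta*gamma.
expanded = [F(1),                       # x^4
            F(0),                       # x^3 (the alpha*x^3 terms cancel)
            beta + gamma - alpha**2,    # x^2
            alpha * (gamma - beta),     # x^1
            beta * gamma]               # x^0
assert expanded == [1, 0, a2, a1, a0]
print(alpha, beta, gamma)  # factorization (x^2 + 2x + 2)(x^2 - 2x + 2)
```

This reproduces the classical factorization x⁴ + 4 = (x² + 2x + 2)(x² − 2x + 2).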
=== Galois groups of irreducible quartic polynomials === The resolvent cubic of an irreducible quartic polynomial P(x) can be used to determine its Galois group G; that is, the Galois group of the splitting field of P(x). Let m be the degree over k of the splitting field of the resolvent cubic (it can be either R4(y) or R5(y); they have the same splitting field). Then the group G is a subgroup of the symmetric group S4. More precisely: If m = 1 (that is, if the resolvent cubic factors into linear factors in k), then G is the group {e, (12)(34), (13)(24), (14)(23)}. If m = 2 (that is, if the resolvent cubic has one and, up to multiplicity, only one root in k), then, in order to determine G, one can determine whether or not P(x) is still irreducible after adjoining to the field k the roots of the resolvent cubic. If not, then G is a cyclic group of order 4; more precisely, it is one of the three cyclic subgroups of S4 generated by any of its six 4-cycles. If it is still irreducible, then G is one of the three subgroups of S4 of order 8, each of which is isomorphic to the dihedral group of order 8. If m = 3, then G is the alternating group A4. If m = 6, then G is the whole group S4. == See also == Resolvent (Galois theory) == References ==
Wikipedia:Resolvent set#0
In linear algebra and operator theory, the resolvent set of a linear operator is a set of complex numbers for which the operator is in some sense "well-behaved". The resolvent set plays an important role in the resolvent formalism. == Definitions == Let X be a Banach space and let L : D ( L ) → X {\displaystyle L\colon D(L)\rightarrow X} be a linear operator with domain D ( L ) ⊆ X {\displaystyle D(L)\subseteq X} . Let id denote the identity operator on X. For any λ ∈ C {\displaystyle \lambda \in \mathbb {C} } , let L λ = L − λ i d . {\displaystyle L_{\lambda }=L-\lambda \,\mathrm {id} .} A complex number λ {\displaystyle \lambda } is said to be a regular value if the following three statements are true: L λ {\displaystyle L_{\lambda }} is injective, that is, the corestriction of L λ {\displaystyle L_{\lambda }} to its image has an inverse R ( λ , L ) = ( L − λ i d ) − 1 {\displaystyle R(\lambda ,L)=(L-\lambda \,\mathrm {id} )^{-1}} called the resolvent; R ( λ , L ) {\displaystyle R(\lambda ,L)} is a bounded linear operator; R ( λ , L ) {\displaystyle R(\lambda ,L)} is defined on a dense subspace of X, that is, L λ {\displaystyle L_{\lambda }} has dense range. The resolvent set of L is the set of all regular values of L: ρ ( L ) = { λ ∈ C ∣ λ is a regular value of L } . {\displaystyle \rho (L)=\{\lambda \in \mathbb {C} \mid \lambda {\mbox{ is a regular value of }}L\}.} The spectrum is the complement of the resolvent set σ ( L ) = C ∖ ρ ( L ) , {\displaystyle \sigma (L)=\mathbb {C} \setminus \rho (L),} and subject to a mutually singular spectral decomposition into the point spectrum (when condition 1 fails), the continuous spectrum (when condition 2 fails) and the residual spectrum (when condition 3 fails). If L {\displaystyle L} is a closed operator, then so is each L λ {\displaystyle L_{\lambda }} , and condition 3 may be replaced by requiring that L λ {\displaystyle L_{\lambda }} be surjective. 
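In finite dimension every linear operator is bounded and everywhere defined, so the resolvent set of a matrix is simply the complement of its set of eigenvalues. A small Python sketch (the 2×2 matrix and the sample values of λ are arbitrary illustrative choices):

```python
from fractions import Fraction as F

# L is upper triangular with eigenvalues 2 and 3, so its spectrum is {2, 3}
# and its resolvent set is C \ {2, 3}.
L = [[F(2), F(1)],
     [F(0), F(3)]]

def resolvent(L, lam):
    """Return R(lam, L) = (L - lam*id)^(-1), or None if lam is not a regular value."""
    a, b = L[0][0] - lam, L[0][1]
    c, d = L[1][0], L[1][1] - lam
    det = a * d - b * c
    if det == 0:                       # L - lam*id is not injective
        return None
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

assert resolvent(L, F(2)) is None      # an eigenvalue is not a regular value
assert resolvent(L, F(3)) is None
R = resolvent(L, F(0))                 # 0 lies in the resolvent set
# Verify (L - 0*id) R = id.
prod = [[sum(L[i][k] * R[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```

For unbounded operators on an infinite-dimensional Banach space the three conditions of the definition are genuinely independent, which is what the decomposition of the spectrum records.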
== Properties == The resolvent set ρ ( L ) ⊆ C {\displaystyle \rho (L)\subseteq \mathbb {C} } of a bounded linear operator L is an open set. More generally, the resolvent set of a densely defined closed unbounded operator is an open set. == Notes == == References == Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Vol 1: Functional analysis. Academic Press. ISBN 978-0-12-585050-6. Renardy, Michael; Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. xiv+434. ISBN 0-387-00444-0. MR2028503 (See section 8.3) == External links == Voitsekhovskii, M.I. (2001) [1994], "Resolvent set", Encyclopedia of Mathematics, EMS Press == See also == Resolvent formalism Spectrum (functional analysis) Decomposition of spectrum (functional analysis)
Wikipedia:Restricted isometry property#0
In linear algebra, the restricted isometry property (RIP) characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. The concept was introduced by Emmanuel Candès and Terence Tao and is used to prove many theorems in the field of compressed sensing. There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, and is hard to approximate as well), but many random matrices have been shown to remain bounded. In particular, it has been shown that with exponentially high probability, random Gaussian, Bernoulli, and partial Fourier matrices satisfy the RIP with number of measurements nearly linear in the sparsity level. The current smallest upper bounds for any large rectangular matrices are for those of Gaussian matrices. Web forms to evaluate bounds for the Gaussian ensemble are available at the Edinburgh Compressed Sensing RIC page. == Definition == Let A be an m × p matrix and let 1 ≤ s ≤ p be an integer. Suppose that there exists a constant δ s ∈ ( 0 , 1 ) {\displaystyle \delta _{s}\in (0,1)} such that, for every m × s submatrix As of A and for every s-dimensional vector y, ( 1 − δ s ) ‖ y ‖ 2 2 ≤ ‖ A s y ‖ 2 2 ≤ ( 1 + δ s ) ‖ y ‖ 2 2 . {\displaystyle (1-\delta _{s})\|y\|_{2}^{2}\leq \|A_{s}y\|_{2}^{2}\leq (1+\delta _{s})\|y\|_{2}^{2}.\,} Then, the matrix A is said to satisfy the s-restricted isometry property with restricted isometry constant δ s {\displaystyle \delta _{s}} . This condition is equivalent to the statement that for every m × s submatrix As of A we have ‖ A s ∗ A s − I s × s ‖ 2 → 2 ≤ δ s , {\displaystyle \|A_{s}^{*}A_{s}-I_{s\times s}\|_{2\to 2}\leq \delta _{s},} where I s × s {\displaystyle I_{s\times s}} is the s × s {\displaystyle s\times s} identity matrix and ‖ X ‖ 2 → 2 {\displaystyle \|X\|_{2\to 2}} is the operator norm. See for example for a proof. 
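For small matrices the constant δs can be evaluated by brute force directly from the submatrix characterization: it is the largest deviation from 1 among the eigenvalues of the Gram matrices As*As over all choices of s columns. A toy Python sketch for s = 2 (the 2×3 matrix below, with unit-norm columns, is an arbitrary example, not a certified RIP matrix):

```python
import math
from itertools import combinations

# Columns of a 2x3 matrix A, each normalized to unit Euclidean norm.
cols = [(1.0, 0.0),
        (0.0, 1.0),
        (1 / math.sqrt(2), 1 / math.sqrt(2))]

def delta_2(cols):
    """Brute-force RIC for s = 2: largest deviation from 1 among the
    eigenvalues of the 2x2 Gram matrices of all column pairs."""
    worst = 0.0
    for u, v in combinations(cols, 2):
        g11 = sum(x * x for x in u)
        g22 = sum(x * x for x in v)
        g12 = sum(x * y for x, y in zip(u, v))
        # Closed-form eigenvalues of the symmetric 2x2 Gram matrix.
        mean = (g11 + g22) / 2
        rad = math.sqrt(((g11 - g22) / 2) ** 2 + g12 ** 2)
        worst = max(worst, abs(mean + rad - 1), abs(mean - rad - 1))
    return worst

print(delta_2(cols))  # ~ 0.7071 = 1/sqrt(2), forced by the two correlated columns
```

The exponential number of supports in this brute-force search is exactly why certifying the RIP for large matrices is computationally hard.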
Finally this is equivalent to stating that all eigenvalues of A s ∗ A s {\displaystyle A_{s}^{*}A_{s}} are in the interval [ 1 − δ s , 1 + δ s ] {\displaystyle [1-\delta _{s},1+\delta _{s}]} . == Restricted Isometric Constant (RIC) == The RIC Constant is defined as the infimum of all possible δ {\displaystyle \delta } for a given A ∈ R n × m {\displaystyle A\in \mathbb {R} ^{n\times m}} . δ K = inf [ δ : ( 1 − δ ) ‖ y ‖ 2 2 ≤ ‖ A s y ‖ 2 2 ≤ ( 1 + δ ) ‖ y ‖ 2 2 ] , ∀ | s | ≤ K , ∀ y ∈ R | s | {\displaystyle \delta _{K}=\inf \left[\delta :(1-\delta )\|y\|_{2}^{2}\leq \|A_{s}y\|_{2}^{2}\leq (1+\delta )\|y\|_{2}^{2}\right],\ \forall |s|\leq K,\forall y\in R^{|s|}} It is denoted as δ K {\displaystyle \delta _{K}} . == Eigenvalues == For any matrix that satisfies the RIP property with a RIC of δ K {\displaystyle \delta _{K}} , the following condition holds: 1 − δ K ≤ λ m i n ( A τ ∗ A τ ) ≤ λ m a x ( A τ ∗ A τ ) ≤ 1 + δ K {\displaystyle 1-\delta _{K}\leq \lambda _{min}(A_{\tau }^{*}A_{\tau })\leq \lambda _{max}(A_{\tau }^{*}A_{\tau })\leq 1+\delta _{K}} . The tightest upper bound on the RIC can be computed for Gaussian matrices. This can be achieved by computing the exact probability that all the eigenvalues of Wishart matrices lie within an interval. == See also == Compressed sensing Mutual coherence (linear algebra) Terence Tao's website on compressed sensing lists several related conditions, such as the 'Exact reconstruction principle' (ERP) and 'Uniform uncertainty principle' (UUP) Nullspace property, another sufficient condition for sparse recovery Generalized restricted isometry property, a generalized sufficient condition for sparse recovery, where mutual coherence and restricted isometry property are both its special forms. Johnson-Lindenstrauss lemma == References ==
Wikipedia:Restricted power series#0
In algebra, the ring of restricted power series is the subring of a formal power series ring that consists of power series whose coefficients approach zero as degree goes to infinity. Over a non-archimedean complete field, the ring is also called a Tate algebra. Quotient rings of the ring are used in the study of a formal algebraic space as well as rigid analysis, the latter over non-archimedean complete fields. Over a discrete topological ring, the ring of restricted power series coincides with a polynomial ring; thus, in this sense, the notion of "restricted power series" is a generalization of a polynomial. == Definition == Let A be a linearly topologized ring, separated and complete and { I λ } {\displaystyle \{I_{\lambda }\}} the fundamental system of open ideals. Then the ring of restricted power series is defined as the projective limit of the polynomial rings over A / I λ {\displaystyle A/I_{\lambda }} : A ⟨ x 1 , … , x n ⟩ = lim ← λ ⁡ A / I λ [ x 1 , … , x n ] {\displaystyle A\langle x_{1},\dots ,x_{n}\rangle =\varprojlim _{\lambda }A/I_{\lambda }[x_{1},\dots ,x_{n}]} . In other words, it is the completion of the polynomial ring A [ x 1 , … , x n ] {\displaystyle A[x_{1},\dots ,x_{n}]} with respect to the filtration { I λ [ x 1 , … , x n ] } {\displaystyle \{I_{\lambda }[x_{1},\dots ,x_{n}]\}} . Sometimes this ring of restricted power series is also denoted by A { x 1 , … , x n } {\displaystyle A\{x_{1},\dots ,x_{n}\}} . Clearly, the ring A ⟨ x 1 , … , x n ⟩ {\displaystyle A\langle x_{1},\dots ,x_{n}\rangle } can be identified with the subring of the formal power series ring A [ [ x 1 , … , x n ] ] {\displaystyle A[[x_{1},\dots ,x_{n}]]} that consists of series ∑ c α x α {\displaystyle \sum c_{\alpha }x^{\alpha }} with coefficients c α → 0 {\displaystyle c_{\alpha }\to 0} ; i.e., each I λ {\displaystyle I_{\lambda }} contains all but finitely many coefficients c α {\displaystyle c_{\alpha }} . 
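The coefficient condition can be made concrete over A = Zp with its p-adic topology, where the fundamental open ideals are the pᵏZp: a series lies in A⟨x⟩ exactly when the p-adic valuations of its coefficients tend to infinity. A short Python sketch (the series Σ pᵅ xᵅ is an arbitrary illustrative choice):

```python
def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 5
# Coefficients c_alpha = p^alpha of the series sum_alpha p^alpha * x^alpha.
coeffs = [p**alpha for alpha in range(12)]

# "c_alpha -> 0" means every open ideal p^k Z_p contains all but finitely
# many coefficients, i.e. vp(c_alpha) >= k for all sufficiently large alpha.
for k in range(1, 6):
    outside = [alpha for alpha, c in enumerate(coeffs) if vp(c, p) < k]
    assert outside == list(range(k))   # only the first k coefficients escape p^k Z_p
```

By contrast, the geometric series Σ xᵅ has all valuations equal to 0 and so is a formal power series that is not restricted.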
Also, the ring satisfies (and in fact is characterized by) the universal property: for (1) each continuous ring homomorphism A → B {\displaystyle A\to B} to a linearly topologized ring B {\displaystyle B} , separated and complete and (2) each elements b 1 , … , b n {\displaystyle b_{1},\dots ,b_{n}} in B {\displaystyle B} , there exists a unique continuous ring homomorphism A ⟨ x 1 , … , x n ⟩ → B , x i ↦ b i {\displaystyle A\langle x_{1},\dots ,x_{n}\rangle \to B,\,x_{i}\mapsto b_{i}} extending A → B {\displaystyle A\to B} . == Tate algebra == In rigid analysis, when the base ring A is the valuation ring of a complete non-archimedean field ( K , | ⋅ | ) {\displaystyle (K,|\cdot |)} , the ring of restricted power series tensored with K {\displaystyle K} , T n = K ⟨ ξ 1 , … ξ n ⟩ = A ⟨ ξ 1 , … , ξ n ⟩ ⊗ A K {\displaystyle T_{n}=K\langle \xi _{1},\dots \xi _{n}\rangle =A\langle \xi _{1},\dots ,\xi _{n}\rangle \otimes _{A}K} is called a Tate algebra, named for John Tate. It is equivalently the subring of formal power series k [ [ ξ 1 , … , ξ n ] ] {\displaystyle k[[\xi _{1},\dots ,\xi _{n}]]} which consists of series convergent on o k ¯ n {\displaystyle {\mathfrak {o}}_{\overline {k}}^{n}} , where o k ¯ := { x ∈ k ¯ : | x | ≤ 1 } {\displaystyle {\mathfrak {o}}_{\overline {k}}:=\{x\in {\overline {k}}:|x|\leq 1\}} is the valuation ring in the algebraic closure k ¯ {\displaystyle {\overline {k}}} . The maximal spectrum of T n {\displaystyle T_{n}} is then a rigid-analytic space that models an affine space in rigid geometry. Define the Gauss norm of f = ∑ a α ξ α {\displaystyle f=\sum a_{\alpha }\xi ^{\alpha }} in T n {\displaystyle T_{n}} by ‖ f ‖ = max α | a α | . {\displaystyle \|f\|=\max _{\alpha }|a_{\alpha }|.} This makes T n {\displaystyle T_{n}} a Banach algebra over k; i.e., a normed algebra that is complete as a metric space. 
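The Gauss norm can be computed directly from the p-adic absolute values of the coefficients, and it is multiplicative (a form of Gauss's lemma), which is what makes Tn a Banach algebra rather than merely a normed space. A Python sketch over Qp in one variable, with the prime and the two polynomials chosen arbitrarily for illustration:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def gauss_norm(coeffs, p):
    """||f|| = max_alpha |a_alpha|_p, with |a|_p = p^(-vp(a))."""
    return max(Fraction(p) ** (-vp(a, p)) for a in coeffs if a != 0)

def mul(f, g):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

p = 3
f = [Fraction(1, 3), Fraction(2), Fraction(9)]   # ||f|| = 3  (from the 1/3 term)
g = [Fraction(6), Fraction(1, 9)]                # ||g|| = 9  (from the 1/9 term)
assert gauss_norm(f, p) == 3 and gauss_norm(g, p) == 9
assert gauss_norm(mul(f, g), p) == gauss_norm(f, p) * gauss_norm(g, p)
```

Multiplicativity of the norm also shows directly that Tn has no zero divisors of norm zero, consistent with it being an integral domain.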
With this norm, any ideal I {\displaystyle I} of T n {\displaystyle T_{n}} is closed and thus, if I is radical, the quotient T n / I {\displaystyle T_{n}/I} is also a (reduced) Banach algebra called an affinoid algebra. Some key results are: (Weierstrass division) Let g ∈ T n {\displaystyle g\in T_{n}} be a ξ n {\displaystyle \xi _{n}} -distinguished series of order s; i.e., g = ∑ ν = 0 ∞ g ν ξ n ν {\displaystyle g=\sum _{\nu =0}^{\infty }g_{\nu }\xi _{n}^{\nu }} where g ν ∈ T n − 1 {\displaystyle g_{\nu }\in T_{n-1}} , g s {\displaystyle g_{s}} is a unit element and | g s | = ‖ g ‖ > | g v | {\displaystyle |g_{s}|=\|g\|>|g_{v}|} for ν > s {\displaystyle \nu >s} . Then for each f ∈ T n {\displaystyle f\in T_{n}} , there exist a unique q ∈ T n {\displaystyle q\in T_{n}} and a unique polynomial r ∈ T n − 1 [ ξ n ] {\displaystyle r\in T_{n-1}[\xi _{n}]} of degree < s {\displaystyle <s} such that f = q g + r . {\displaystyle f=qg+r.} (Weierstrass preparation) As above, let g {\displaystyle g} be a ξ n {\displaystyle \xi _{n}} -distinguished series of order s. Then there exist a unique monic polynomial f ∈ T n − 1 [ ξ n ] {\displaystyle f\in T_{n-1}[\xi _{n}]} of degree s {\displaystyle s} and a unit element u ∈ T n {\displaystyle u\in T_{n}} such that g = f u {\displaystyle g=fu} . (Noether normalization) If a ⊂ T n {\displaystyle {\mathfrak {a}}\subset T_{n}} is an ideal, then there is a finite homomorphism T d ↪ T n / a {\displaystyle T_{d}\hookrightarrow T_{n}/{\mathfrak {a}}} . As consequence of the division, preparation theorems and Noether normalization, T n {\displaystyle T_{n}} is a Noetherian unique factorization domain of Krull dimension n. An analog of Hilbert's Nullstellensatz is valid: the radical of an ideal is the intersection of all maximal ideals containing the ideal (we say the ring is Jacobson). 
== Results == Results for polynomial rings such as Hensel's lemma, division algorithms (or the theory of Gröbner bases) are also true for the ring of restricted power series. Throughout the section, let A denote a linearly topologized ring, separated and complete. (Hensel) Let m ⊂ A {\displaystyle {\mathfrak {m}}\subset A} be a maximal ideal and φ : A → k := A / m {\displaystyle \varphi :A\to k:=A/{\mathfrak {m}}} the quotient map. Given an F {\displaystyle F} in A ⟨ ξ ⟩ {\displaystyle A\langle \xi \rangle } , if φ ( F ) = g h {\displaystyle \varphi (F)=gh} for some monic polynomial g ∈ k [ ξ ] {\displaystyle g\in k[\xi ]} and a restricted power series h ∈ k ⟨ ξ ⟩ {\displaystyle h\in k\langle \xi \rangle } such that g , h {\displaystyle g,h} generate the unit ideal of k ⟨ ξ ⟩ {\displaystyle k\langle \xi \rangle } , then there exist G {\displaystyle G} in A [ ξ ] {\displaystyle A[\xi ]} and H {\displaystyle H} in A ⟨ ξ ⟩ {\displaystyle A\langle \xi \rangle } such that F = G H , φ ( G ) = g , φ ( H ) = h {\displaystyle F=GH,\,\varphi (G)=g,\varphi (H)=h} . == Notes == == References == Bourbaki, N. (2006). Algèbre commutative: Chapitres 1 à 4. Springer Berlin Heidelberg. ISBN 9783540339373. Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4. doi:10.1007/bf02684778. MR 0217083. Bosch, Siegfried; Güntzer, Ulrich; Remmert, Reinhold (1984), "Chapter 5", Non-archimedean analysis, Springer Bosch, Siegfried (2014), Lectures on Formal and Rigid Geometry, ISBN 9783319044170 Fujiwara, Kazuhiro; Kato, Fumiharu (2018), Foundations of Rigid Geometry I == See also == Weierstrass preparation theorem == External links == https://ncatlab.org/nlab/show/restricted+formal+power+series http://math.stanford.edu/~conrad/papers/aws.pdf https://web.archive.org/web/20060916051553/http://www-math.mit.edu/~kedlaya//18.727/tate-algebras.pdf
Wikipedia:Resultant#0
In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant. The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among others, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation. The resultant of n homogeneous polynomials in n variables (also called multivariate resultant, or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of elimination theory. == Notation == The resultant of two univariate polynomials A and B is commonly denoted res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} or Res ⁡ ( A , B ) . {\displaystyle \operatorname {Res} (A,B).} In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} or Res x ⁡ ( A , B ) . {\displaystyle \operatorname {Res} _{x}(A,B).} The degrees of the polynomials are used in the definition of the resultant. 
However, a polynomial of degree d may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as res d , e ⁡ ( A , B ) {\displaystyle \operatorname {res} _{d,e}(A,B)} or res x d , e ⁡ ( A , B ) . {\displaystyle \operatorname {res} _{x}^{d,e}(A,B).} == Definition == The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let A = a 0 x d + a 1 x d − 1 + ⋯ + a d {\displaystyle A=a_{0}x^{d}+a_{1}x^{d-1}+\cdots +a_{d}} and B = b 0 x e + b 1 x e − 1 + ⋯ + b e {\displaystyle B=b_{0}x^{e}+b_{1}x^{e-1}+\cdots +b_{e}} be nonzero polynomials of degrees d and e respectively. Let us denote by P i {\displaystyle {\mathcal {P}}_{i}} the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree strictly less than i. The map φ : P e × P d → P d + e {\displaystyle \varphi :{\mathcal {P}}_{e}\times {\mathcal {P}}_{d}\rightarrow {\mathcal {P}}_{d+e}} such that φ ( P , Q ) = A P + B Q {\displaystyle \varphi (P,Q)=AP+BQ} is a linear map between two spaces of the same dimension. Over the basis of the powers of x (listed in descending order), this map is represented by a square matrix of dimension d + e, which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map). 
The resultant of A and B is thus the determinant | a 0 0 ⋯ 0 b 0 0 ⋯ 0 a 1 a 0 ⋯ 0 b 1 b 0 ⋯ 0 a 2 a 1 ⋱ 0 b 2 b 1 ⋱ 0 ⋮ ⋮ ⋱ a 0 ⋮ ⋮ ⋱ b 0 a d a d − 1 ⋯ ⋮ b e b e − 1 ⋯ ⋮ 0 a d ⋱ ⋮ 0 b e ⋱ ⋮ ⋮ ⋮ ⋱ a d − 1 ⋮ ⋮ ⋱ b e − 1 0 0 ⋯ a d 0 0 ⋯ b e | , {\displaystyle {\begin{vmatrix}a_{0}&0&\cdots &0&b_{0}&0&\cdots &0\\a_{1}&a_{0}&\cdots &0&b_{1}&b_{0}&\cdots &0\\a_{2}&a_{1}&\ddots &0&b_{2}&b_{1}&\ddots &0\\\vdots &\vdots &\ddots &a_{0}&\vdots &\vdots &\ddots &b_{0}\\a_{d}&a_{d-1}&\cdots &\vdots &b_{e}&b_{e-1}&\cdots &\vdots \\0&a_{d}&\ddots &\vdots &0&b_{e}&\ddots &\vdots \\\vdots &\vdots &\ddots &a_{d-1}&\vdots &\vdots &\ddots &b_{e-1}\\0&0&\cdots &a_{d}&0&0&\cdots &b_{e}\end{vmatrix}},} which has e columns of ai and d columns of bj (the fact that the first column of a's and the first column of b's have the same length, that is d = e, is here only for simplifying the display of the determinant). For instance, taking d = 3 and e = 2 we get | a 0 0 b 0 0 0 a 1 a 0 b 1 b 0 0 a 2 a 1 b 2 b 1 b 0 a 3 a 2 0 b 2 b 1 0 a 3 0 0 b 2 | . {\displaystyle {\begin{vmatrix}a_{0}&0&b_{0}&0&0\\a_{1}&a_{0}&b_{1}&b_{0}&0\\a_{2}&a_{1}&b_{2}&b_{1}&b_{0}\\a_{3}&a_{2}&0&b_{2}&b_{1}\\0&a_{3}&0&0&b_{2}\end{vmatrix}}.} If the coefficients of the polynomials belong to an integral domain, then res ⁡ ( A , B ) = a 0 e b 0 d ∏ 1 ≤ i ≤ d 1 ≤ j ≤ e ( λ i − μ j ) = a 0 e ∏ i = 1 d B ( λ i ) = ( − 1 ) d e b 0 d ∏ j = 1 e A ( μ j ) , {\displaystyle \operatorname {res} (A,B)=a_{0}^{e}b_{0}^{d}\prod _{\begin{array}{c}1\leq i\leq d\\1\leq j\leq e\end{array}}(\lambda _{i}-\mu _{j})=a_{0}^{e}\prod _{i=1}^{d}B(\lambda _{i})=(-1)^{de}b_{0}^{d}\prod _{j=1}^{e}A(\mu _{j}),} where λ 1 , … , λ d {\displaystyle \lambda _{1},\dots ,\lambda _{d}} and μ 1 , … , μ e {\displaystyle \mu _{1},\dots ,\mu _{e}} are respectively the roots, counted with their multiplicities, of A and B in any algebraically closed field containing the integral domain. 
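The determinant definition and the root-product formula can be checked on small examples. The following Python sketch (exact rational arithmetic; the example polynomials are arbitrary) builds the Sylvester matrix in the column layout shown above and evaluates its determinant:

```python
from fractions import Fraction as F

def sylvester(a, b):
    """Sylvester matrix of A (coefficients a, leading term first, degree d)
    and B (coefficients b, degree e): e shifted columns of a, then d of b."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    M = [[F(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            M[i + j][j] = F(c)
    for j in range(d):
        for i, c in enumerate(b):
            M[i + j][e + j] = F(c)
    return M

def det(M):
    """Determinant by Gaussian elimination with partial pivoting over Q."""
    M = [row[:] for row in M]
    n, sign, result = len(M), 1, F(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return F(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            sign = -sign
        result *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
    return sign * result

# res(x^2 + 1, x^2 - 1): roots ±i and ±1; the product of differences gives 4.
assert det(sylvester([1, 0, 1], [1, 0, -1])) == 4
# A common root (x = 1) forces the resultant to vanish.
assert det(sylvester([1, -3, 2], [1, 0, -1])) == 0
```

In both cases the determinant agrees with the product over pairs of roots in the displayed formula.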
This is a straightforward consequence of the characterizing properties of the resultant that appear below. In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers. == Properties == In this section and its subsections, A and B are two polynomials in x of respective degrees d and e, and their resultant is denoted res ⁡ ( A , B ) . {\displaystyle \operatorname {res} (A,B).} === Characterizing properties === The following properties hold for the resultant of two polynomials with coefficients in a commutative ring R. If R is a field or more generally an integral domain, the resultant is the unique function of the coefficients of two polynomials that satisfies these properties. If R is a subring of another ring S, then res R ⁡ ( A , B ) = res S ⁡ ( A , B ) . {\displaystyle \operatorname {res} _{R}(A,B)=\operatorname {res} _{S}(A,B).} That is A and B have the same resultant when considered as polynomials over R or S. If d = 0 (that is if A = a 0 {\displaystyle A=a_{0}} is a nonzero constant) then res ⁡ ( A , B ) = a 0 e . {\displaystyle \operatorname {res} (A,B)=a_{0}^{e}.} Similarly, if e = 0, then res ⁡ ( A , B ) = b 0 d . {\displaystyle \operatorname {res} (A,B)=b_{0}^{d}.} res ⁡ ( x + a 1 , x + b 1 ) = b 1 − a 1 {\displaystyle \operatorname {res} (x+a_{1},x+b_{1})=b_{1}-a_{1}} res ⁡ ( B , A ) = ( − 1 ) d e res ⁡ ( A , B ) {\displaystyle \operatorname {res} (B,A)=(-1)^{de}\operatorname {res} (A,B)} res ⁡ ( A B , C ) = res ⁡ ( A , C ) res ⁡ ( B , C ) {\displaystyle \operatorname {res} (AB,C)=\operatorname {res} (A,C)\operatorname {res} (B,C)} === Zeros === The resultant of two polynomials with coefficients in an integral domain D is zero if and only if they have a common divisor of positive degree over the field of fractions of D. 
The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients. There exists a polynomial P of degree less than e and a polynomial Q of degree less than d such that res ⁡ ( A , B ) = A P + B Q . {\displaystyle \operatorname {res} (A,B)=AP+BQ.} This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials. === Invariance by ring homomorphisms === Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R, and φ : R → S {\displaystyle \varphi \colon R\to S} a ring homomorphism of R into another commutative ring S. Applying φ {\displaystyle \varphi } to the coefficients of a polynomial extends φ {\displaystyle \varphi } to a homomorphism of polynomial rings R [ x ] → S [ x ] {\displaystyle R[x]\to S[x]} , which is also denoted φ . {\displaystyle \varphi .} With this notation, we have: If φ {\displaystyle \varphi } preserves the degrees of A and B (that is if deg ⁡ ( φ ( A ) ) = d {\displaystyle \deg(\varphi (A))=d} and deg ⁡ ( φ ( B ) ) = e {\displaystyle \deg(\varphi (B))=e} ), then φ ( res ⁡ ( A , B ) ) = res ⁡ ( φ ( A ) , φ ( B ) ) . {\displaystyle \varphi (\operatorname {res} (A,B))=\operatorname {res} (\varphi (A),\varphi (B)).} If deg ⁡ ( φ ( A ) ) < d {\displaystyle \deg(\varphi (A))<d} and deg ⁡ ( φ ( B ) ) < e , {\displaystyle \deg(\varphi (B))<e,} then φ ( res ⁡ ( A , B ) ) = 0. {\displaystyle \varphi (\operatorname {res} (A,B))=0.} If deg ⁡ ( φ ( A ) ) = d {\displaystyle \deg(\varphi (A))=d} and deg ⁡ ( φ ( B ) ) = f < e , {\displaystyle \deg(\varphi (B))=f<e,} and the leading coefficient of A is a 0 {\displaystyle a_{0}} then φ ( res ⁡ ( A , B ) ) = φ ( a 0 ) e − f res ⁡ ( φ ( A ) , φ ( B ) ) . 
{\displaystyle \varphi (\operatorname {res} (A,B))=\varphi (a_{0})^{e-f}\operatorname {res} (\varphi (A),\varphi (B)).} If deg ⁡ ( φ ( A ) ) = f < d {\displaystyle \deg(\varphi (A))=f<d} and deg ⁡ ( φ ( B ) ) = e , {\displaystyle \deg(\varphi (B))=e,} and the leading coefficient of B is b 0 {\displaystyle b_{0}} then φ ( res ⁡ ( A , B ) ) = ( − 1 ) e ( d − f ) φ ( b 0 ) d − f res ⁡ ( φ ( A ) , φ ( B ) ) . {\displaystyle \varphi (\operatorname {res} (A,B))=(-1)^{e(d-f)}\varphi (b_{0})^{d-f}\operatorname {res} (\varphi (A),\varphi (B)).} These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with Chinese remainder theorem. When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing to numerical values some or all indeterminates of R, these properties may be restated as if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition. 
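The first of these situations can be illustrated concretely: when reduction modulo a prime preserves both degrees, it commutes with taking resultants. A Python sketch (the prime and the polynomials are arbitrary choices) that computes a Sylvester determinant over Z and over Z/11Z:

```python
from fractions import Fraction as F

def sylvester(a, b):
    """Sylvester matrix of A (coeffs a, degree d) and B (coeffs b, degree e)."""
    d, e = len(a) - 1, len(b) - 1
    M = [[F(0)] * (d + e) for _ in range(d + e)]
    for j in range(e):
        for i, c in enumerate(a):
            M[i + j][j] = F(c)
    for j in range(d):
        for i, c in enumerate(b):
            M[i + j][e + j] = F(c)
    return M

def det(M):
    """Determinant by Gaussian elimination over Q."""
    M = [row[:] for row in M]
    n, sign, result = len(M), 1, F(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return F(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            sign = -sign
        result *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
    return sign * result

A = [1, 2, -3, 5]    # A = x^3 + 2x^2 - 3x + 5
B = [2, 0, 7]        # B = 2x^2 + 7
p = 11               # leading coefficients 1 and 2 are nonzero mod 11,
                     # so both degrees are preserved by reduction

r = int(det(sylvester(A, B)))                      # resultant over Z
A_p = [c % p for c in A]
B_p = [c % p for c in B]
r_p = int(det(sylvester(A_p, B_p))) % p            # resultant over Z/11Z
assert r % p == r_p                                # phi(res(A,B)) = res(phi(A), phi(B))
```

Computing the resultant modulo several such primes and reassembling by the Chinese remainder theorem is the modular strategy mentioned above.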
=== Invariance under change of variable === res ⁡ ( A ( x + a ) , B ( x + a ) ) = res ⁡ ( A ( x ) , B ( x ) ) {\displaystyle \operatorname {res} (A(x+a),B(x+a))=\operatorname {res} (A(x),B(x))} res ⁡ ( A ( a x ) , B ( a x ) ) = a d e res ⁡ ( A ( x ) , B ( x ) ) {\displaystyle \operatorname {res} (A(ax),B(ax))=a^{de}\operatorname {res} (A(x),B(x))} If A r ( x ) = x d A ( 1 / x ) {\displaystyle A_{r}(x)=x^{d}A(1/x)} and B r ( x ) = x e B ( 1 / x ) {\displaystyle B_{r}(x)=x^{e}B(1/x)} are the reciprocal polynomials of A and B, respectively, then res ⁡ ( A r , B r ) = ( − 1 ) d e res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A_{r},B_{r})=(-1)^{de}\operatorname {res} (A,B)} This means that the property of the resultant being zero is invariant under linear and projective changes of the variable. === Invariance under change of polynomials === If a and b are nonzero constants (that is they are independent of the indeterminate x), and A and B are as above, then res ⁡ ( a A , b B ) = a e b d res ⁡ ( A , B ) . {\displaystyle \operatorname {res} (aA,bB)=a^{e}b^{d}\operatorname {res} (A,B).} If A and B are as above, and C is another polynomial such that the degree of A – CB is δ, then res ⁡ ( B , A − C B ) = b 0 δ − d res ⁡ ( B , A ) . {\displaystyle \operatorname {res} (B,A-CB)=b_{0}^{\delta -d}\operatorname {res} (B,A).} It is only when ⁠ B C {\displaystyle BC} ⁠ and ⁠ A {\displaystyle A} ⁠ have the same degree that ⁠ δ {\displaystyle \delta } ⁠ cannot be deduced from the degrees of the given polynomials. If either B is monic, or deg C < deg A – deg B, then res ⁡ ( B , A − C B ) = res ⁡ ( B , A ) , {\displaystyle \operatorname {res} (B,A-CB)=\operatorname {res} (B,A),} If f = deg C > deg A – deg B = d – e, then res ⁡ ( B , A − C B ) = b 0 e + f − d res ⁡ ( B , A ) . 
{\displaystyle \operatorname {res} (B,A-CB)=b_{0}^{e+f-d}\operatorname {res} (B,A).} These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm, which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves O ( d e ) {\displaystyle O(de)} arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires O ( ( d + e ) 3 ) {\displaystyle O((d+e)^{3})} arithmetic operations. === Generic properties === In this section, we consider two polynomials A = a 0 x d + a 1 x d − 1 + ⋯ + a d {\displaystyle A=a_{0}x^{d}+a_{1}x^{d-1}+\cdots +a_{d}} and B = b 0 x e + b 1 x e − 1 + ⋯ + b e {\displaystyle B=b_{0}x^{e}+b_{1}x^{e-1}+\cdots +b_{e}} whose d + e + 2 coefficients are distinct indeterminates. Let R = Z [ a 0 , … , a d , b 0 , … , b e ] {\displaystyle R=\mathbb {Z} [a_{0},\ldots ,a_{d},b_{0},\ldots ,b_{e}]} be the polynomial ring over the integers defined by these indeterminates. The resultant res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} is often called the generic resultant for the degrees d and e. It has the following properties. res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} is an absolutely irreducible polynomial. 
If I {\displaystyle I} is the ideal of R [ x ] {\displaystyle R[x]} generated by A and B, then I ∩ R {\displaystyle I\cap R} is the principal ideal generated by res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} . === Homogeneity === The generic resultant for the degrees d and e is homogeneous in various ways. More precisely: It is homogeneous of degree e in a 0 , … , a d . {\displaystyle a_{0},\ldots ,a_{d}.} It is homogeneous of degree d in b 0 , … , b e . {\displaystyle b_{0},\ldots ,b_{e}.} It is homogeneous of degree d + e in all the variables a i {\displaystyle a_{i}} and b j . {\displaystyle b_{j}.} If a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} are given the weight i (that is, the weight of each coefficient is its degree as elementary symmetric polynomial), then it is quasi-homogeneous of total weight de. If P and Q are homogeneous multivariate polynomials of respective degrees d and e, then their resultant in degrees d and e with respect to an indeterminate x, denoted res x d , e ⁡ ( P , Q ) {\displaystyle \operatorname {res} _{x}^{d,e}(P,Q)} in § Notation, is homogeneous of degree de in the other indeterminates. === Elimination property === Let I = ⟨ A , B ⟩ {\displaystyle I=\langle A,B\rangle } be the ideal generated by two polynomials A and B in a polynomial ring R [ x ] , {\displaystyle R[x],} where R = k [ y 1 , … , y n ] {\displaystyle R=k[y_{1},\ldots ,y_{n}]} is itself a polynomial ring over a field. If at least one of A and B is monic in x, then: res x ⁡ ( A , B ) ∈ I ∩ R {\displaystyle \operatorname {res} _{x}(A,B)\in I\cap R} The ideals I ∩ R {\displaystyle I\cap R} and R res x ⁡ ( A , B ) {\displaystyle R\operatorname {res} _{x}(A,B)} define the same algebraic set. That is, a tuple of n elements of an algebraically closed field is a common zero of the elements of I ∩ R {\displaystyle I\cap R} if and only if it is a zero of res x ⁡ ( A , B ) . 
{\displaystyle \operatorname {res} _{x}(A,B).} The ideal I ∩ R {\displaystyle I\cap R} has the same radical as the principal ideal R res x ⁡ ( A , B ) . {\displaystyle R\operatorname {res} _{x}(A,B).} That is, each element of I ∩ R {\displaystyle I\cap R} has a power that is a multiple of res x ⁡ ( A , B ) . {\displaystyle \operatorname {res} _{x}(A,B).} All irreducible factors of res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} divide every element of I ∩ R . {\displaystyle I\cap R.} The first assertion is a basic property of the resultant. The other assertions are immediate corollaries of the second one, which can be proved as follows. As at least one of A and B is monic, a tuple ( β 1 , … , β n ) {\displaystyle (\beta _{1},\ldots ,\beta _{n})} is a zero of res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} if and only if there exists α {\displaystyle \alpha } such that ( β 1 , … , β n , α ) {\displaystyle (\beta _{1},\ldots ,\beta _{n},\alpha )} is a common zero of A and B. Such a common zero is also a zero of all elements of I ∩ R . {\displaystyle I\cap R.} Conversely, if ( β 1 , … , β n ) {\displaystyle (\beta _{1},\ldots ,\beta _{n})} is a common zero of the elements of I ∩ R , {\displaystyle I\cap R,} it is a zero of the resultant, and there exists α {\displaystyle \alpha } such that ( β 1 , … , β n , α ) {\displaystyle (\beta _{1},\ldots ,\beta _{n},\alpha )} is a common zero of A and B. So I ∩ R {\displaystyle I\cap R} and R res x ⁡ ( A , B ) {\displaystyle R\operatorname {res} _{x}(A,B)} have exactly the same zeros. == Computation == Theoretically, the resultant could be computed by using the formula expressing it as a product of differences of roots. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. 
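For concreteness, the root-product form of the resultant can still be checked numerically on a small example (a toy check only, not a practical algorithm, for the reasons just stated):

```python
import math

# A = x^2 - 1 (roots +1, -1) and B = x^2 - 4 (roots +2, -2), both monic,
# so res(A, B) is the product of (lambda - mu) over roots lambda of A, mu of B.
roots_a = [1.0, -1.0]
roots_b = [2.0, -2.0]
res_from_roots = math.prod(l - m for l in roots_a for m in roots_b)

# Equivalently, res(A, B) = B(1) * B(-1) = (-3) * (-3) = 9.
assert abs(res_from_roots - 9.0) < 1e-12
```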
As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient. As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs O ( n 3 ) {\displaystyle O(n^{3})} arithmetic operations. As algorithms are known with a better complexity (see below), this method is not used in practice. It follows from § Invariance under change of polynomials that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees d and e may be done in O ( d e ) {\displaystyle O(de)} arithmetic operations in the field of coefficients. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order and make the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem. The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input ( log ⁡ ( s ( d + e ) ) , {\displaystyle \log(s(d+e)),} where s is an upper bound of the number of digits of the input polynomials). 
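The Euclidean connection can be sketched over the rationals with exact `Fraction` arithmetic (a toy fraction-based variant, not the fraction-free subresultant scheme described above; the identity used is res(A,B) = (−1)^{de} b₀^{d−δ} res(B, A mod B), from § Invariance under change of polynomials):

```python
from fractions import Fraction

def polymod(a, b):
    """Remainder of A by B; coefficient lists of Fractions, highest degree first."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        q = a[0] / b[0]
        for i, c in enumerate(b):
            a[i] -= q * c
        a.pop(0)                        # leading term is now zero
    while a and a[0] == 0:
        a.pop(0)
    return a

def resultant(a, b):
    """res(A, B) by repeated Euclidean division."""
    d, e = len(a) - 1, len(b) - 1
    if e == 0:                          # B is a nonzero constant: res = b0^d
        return b[0] ** d
    r = polymod(a, b)
    if not r:                           # B divides A: common root, resultant 0
        return Fraction(0)
    delta = len(r) - 1
    return (-1) ** (d * e) * b[0] ** (d - delta) * resultant(b, r)

A = [Fraction(1), Fraction(0), Fraction(1)]    # x^2 + 1
B = [Fraction(1), Fraction(0), Fraction(-2)]   # x^2 - 2
assert resultant(A, B) == 9
assert resultant([Fraction(1), Fraction(-3), Fraction(2)],   # (x-1)(x-2)
                 [Fraction(1), Fraction(-5), Fraction(6)]) == 0  # shared root 2
```

Each division step costs O(de) coefficient operations in total over the whole recursion, matching the complexity quoted above; over the integers one would use pseudo-divisions instead of fractions.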
== Application to polynomial systems == Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems. === Case of two equations in two unknowns === Consider the system of two polynomial equations P ( x , y ) = 0 Q ( x , y ) = 0 , {\displaystyle {\begin{aligned}P(x,y)&=0\\Q(x,y)&=0,\end{aligned}}} where P and Q are polynomials of respective total degrees d and e. Then R = res y d , e ⁡ ( P , Q ) {\displaystyle R=\operatorname {res} _{y}^{d,e}(P,Q)} is a polynomial in x, which is generically of degree de (by properties of § Homogeneity). A value α {\displaystyle \alpha } of x is a root of R if and only if either there exist β {\displaystyle \beta } in an algebraically closed field containing the coefficients, such that P ( α , β ) = Q ( α , β ) = 0 {\displaystyle P(\alpha ,\beta )=Q(\alpha ,\beta )=0} , or deg ⁡ ( P ( α , y ) ) < d {\displaystyle \deg(P(\alpha ,y))<d} and deg ⁡ ( Q ( α , y ) ) < e {\displaystyle \deg(Q(\alpha ,y))<e} (in this case, one says that P and Q have a common root at infinity for x = α {\displaystyle x=\alpha } ). Therefore, solutions to the system are obtained by computing the roots of R, and for each root α , {\displaystyle \alpha ,} computing the common root(s) of P ( α , y ) , {\displaystyle P(\alpha ,y),} Q ( α , y ) , {\displaystyle Q(\alpha ,y),} and res x ⁡ ( P , Q ) . {\displaystyle \operatorname {res} _{x}(P,Q).} Bézout's theorem results from the value of deg ⁡ ( res y ⁡ ( P , Q ) ) ≤ d e {\displaystyle \deg \left(\operatorname {res} _{y}(P,Q)\right)\leq de} , the product of the degrees of P and Q. In fact, after a linear change of variables, one may suppose that, for each root x of the resultant, there is exactly one value of y such that (x, y) is a common zero of P and Q. 
This shows that the number of common zeros is at most the degree of the resultant, that is at most the product of the degrees of P and Q. With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees. === General case === At first glance, it seems that resultants may be applied to a general polynomial system of equations P 1 ( x 1 , … , x n ) = 0 ⋮ P k ( x 1 , … , x n ) = 0 {\displaystyle {\begin{aligned}P_{1}(x_{1},\ldots ,x_{n})&=0\\&\;\;\vdots \\P_{k}(x_{1},\ldots ,x_{n})&=0\end{aligned}}} by computing the resultants of every pair ( P i , P j ) {\displaystyle (P_{i},P_{j})} with respect to x n {\displaystyle x_{n}} for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove. A method, introduced at the end of the 19th century, works as follows: introduce k − 1 new indeterminates U 2 , … , U k {\displaystyle U_{2},\ldots ,U_{k}} and compute res x n ⁡ ( P 1 , U 2 P 2 + ⋯ + U k P k ) . {\displaystyle \operatorname {res} _{x_{n}}(P_{1},U_{2}P_{2}+\cdots +U_{k}P_{k}).} This is a polynomial in U 2 , … , U k {\displaystyle U_{2},\ldots ,U_{k}} whose coefficients are polynomials in x 1 , … , x n − 1 , {\displaystyle x_{1},\ldots ,x_{n-1},} which have the property that α 1 , … , α n − 1 {\displaystyle \alpha _{1},\ldots ,\alpha _{n-1}} is a common zero of these polynomial coefficients, if and only if the univariate polynomials P i ( α 1 , … , α n − 1 , x n ) {\displaystyle P_{i}(\alpha _{1},\ldots ,\alpha _{n-1},x_{n})} have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials. To get a correct algorithm two complements have to be added to the method. 
Firstly, at each step, a linear change of variable may be needed in order that the degrees of the polynomials in the last variable are the same as their total degree. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split in two components: one where the common factor is zero, and the other which is obtained by factoring out this common factor before continuing. This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical. == Other applications == === Number theory === The discriminant of a polynomial, which is a fundamental tool in number theory, is a 0 − 1 ( − 1 ) n ( n − 1 ) / 2 res x ⁡ ( f ( x ) , f ′ ( x ) ) {\displaystyle a_{0}^{-1}(-1)^{n(n-1)/2}\operatorname {res} _{x}(f(x),f'(x))} , where a 0 {\displaystyle a_{0}} is the leading coefficient of f ( x ) {\displaystyle f(x)} and n {\displaystyle n} its degree. If α {\displaystyle \alpha } and β {\displaystyle \beta } are algebraic numbers such that P ( α ) = Q ( β ) = 0 {\displaystyle P(\alpha )=Q(\beta )=0} , then γ = α + β {\displaystyle \gamma =\alpha +\beta } is a root of the resultant res x ⁡ ( P ( x ) , Q ( z − x ) ) , {\displaystyle \operatorname {res} _{x}(P(x),Q(z-x)),} and τ = α β {\displaystyle \tau =\alpha \beta } is a root of res x ⁡ ( P ( x ) , x n Q ( z / x ) ) {\displaystyle \operatorname {res} _{x}(P(x),x^{n}Q(z/x))} , where n {\displaystyle n} is the degree of Q ( y ) {\displaystyle Q(y)} . Combined with the fact that 1 / β {\displaystyle 1/\beta } is a root of y n Q ( 1 / y ) = 0 {\displaystyle y^{n}Q(1/y)=0} , this shows that the set of algebraic numbers is a field. Let K ( α ) {\displaystyle K(\alpha )} be an algebraic field extension generated by an element α , {\displaystyle \alpha ,} which has P ( x ) {\displaystyle P(x)} as minimal polynomial. 
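The closure construction above for α + β can be checked numerically: with P = x² − 2 and Q = x² − 3, the resultant res_x(P(x), Q(z − x)) evaluates, by the root-product formula, to Q(z − √2)·Q(z + √2) = z⁴ − 10z² + 1 (a short hand computation), and √2 + √3 is indeed one of its roots:

```python
from math import sqrt, isclose

# res_x(x^2 - 2, (z - x)^2 - 3) = Q(z - sqrt(2)) * Q(z + sqrt(2)),
# which expands to z^4 - 10*z^2 + 1 (hand-checked).
def res_z(z):
    return ((z - sqrt(2)) ** 2 - 3) * ((z + sqrt(2)) ** 2 - 3)

for z in (0.0, 1.0, 2.5, -3.0):
    assert isclose(res_z(z), z**4 - 10*z**2 + 1, abs_tol=1e-9)

gamma = sqrt(2) + sqrt(3)            # the candidate alpha + beta
assert abs(res_z(gamma)) < 1e-9      # gamma is a root of the resultant
```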
Every element β ∈ K ( α ) {\displaystyle \beta \in K(\alpha )} may be written as β = Q ( α ) , {\displaystyle \beta =Q(\alpha ),} where Q {\displaystyle Q} is a polynomial. Then β {\displaystyle \beta } is a root of res x ⁡ ( P ( x ) , z − Q ( x ) ) , {\displaystyle \operatorname {res} _{x}(P(x),z-Q(x)),} and this resultant is a power of the minimal polynomial of β . {\displaystyle \beta .} === Algebraic geometry === Given two plane algebraic curves defined as the zeros of the polynomials P(x, y) and Q(x, y), the resultant allows the computation of their intersection. More precisely, the roots of res y ⁡ ( P , Q ) {\displaystyle \operatorname {res} _{y}(P,Q)} are the x-coordinates of the intersection points and of the common vertical asymptotes, and the roots of res x ⁡ ( P , Q ) {\displaystyle \operatorname {res} _{x}(P,Q)} are the y-coordinates of the intersection points and of the common horizontal asymptotes. A rational plane curve may be defined by a parametric equation x = P ( t ) R ( t ) , y = Q ( t ) R ( t ) , {\displaystyle x={\frac {P(t)}{R(t)}},\qquad y={\frac {Q(t)}{R(t)}},} where P, Q and R are polynomials. An implicit equation of the curve is given by res t ⁡ ( x R − P , y R − Q ) . {\displaystyle \operatorname {res} _{t}(xR-P,yR-Q).} The degree of this curve is the highest degree of P, Q and R, which is equal to the total degree of the resultant. === Symbolic integration === In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiderivatives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form P ( x ) Q ( x ) , {\displaystyle {\frac {P(x)}{Q(x)}},} where Q is a square-free polynomial and P is a polynomial of lower degree than Q. 
The antiderivative of such a function necessarily involves logarithms, and generally algebraic numbers (the roots of Q). In fact, the antiderivative is ∫ P ( x ) Q ( x ) d x = ∑ Q ( α ) = 0 P ( α ) Q ′ ( α ) log ⁡ ( x − α ) , {\displaystyle \int {\frac {P(x)}{Q(x)}}dx=\sum _{Q(\alpha )=0}{\frac {P(\alpha )}{Q'(\alpha )}}\log(x-\alpha ),} where the sum runs over all complex roots of Q. The number of algebraic numbers involved in this expression is generally equal to the degree of Q, but it occurs frequently that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression where the number of algebraic numbers is minimal, without any computation with algebraic numbers. Let S 1 ( r ) S 2 ( r ) 2 ⋯ S k ( r ) k = res x ⁡ ( r Q ′ ( x ) − P ( x ) , Q ( x ) ) {\displaystyle S_{1}(r)S_{2}(r)^{2}\cdots S_{k}(r)^{k}=\operatorname {res} _{x}(rQ'(x)-P(x),Q(x))} be the square-free factorization of the resultant which appears on the right. Trager proved that the antiderivative is ∫ P ( x ) Q ( x ) d x = ∑ i = 1 k ∑ S i ( α ) = 0 α log ⁡ ( T i ( α , x ) ) , {\displaystyle \int {\frac {P(x)}{Q(x)}}dx=\sum _{i=1}^{k}\sum _{S_{i}(\alpha )=0}\alpha \log(T_{i}(\alpha ,x)),} where the internal sums run over the roots of the S i {\displaystyle S_{i}} (if S i = 1 {\displaystyle S_{i}=1} the sum is zero, being an empty sum), and T i ( r , x ) {\displaystyle T_{i}(r,x)} is a polynomial of degree i in x. The Lazard–Rioboo contribution is the proof that T i ( r , x ) {\displaystyle T_{i}(r,x)} is the subresultant of degree i of r Q ′ ( x ) − P ( x ) {\displaystyle rQ'(x)-P(x)} and Q ( x ) . {\displaystyle Q(x).} It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence. === Computer algebra === All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. 
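A small worked instance of the symbolic-integration construction above, for ∫ dx/(x² − 2): with P = 1 and Q = x² − 2, the resultant res_x(rQ′(x) − P(x), Q(x)) = res_x(2rx − 1, x² − 2) works out, by the root-product formula, to 1 − 8r², and its roots ±1/(2√2) are exactly the logarithm coefficients of the antiderivative (a hand-checked toy example):

```python
from math import sqrt, isclose, log

# res_x(2*r*x - 1, x^2 - 2) by the root-product formula:
# lc(A)^deg(B) * B(root of A) = (2r)^2 * ((1/(2r))^2 - 2) = 1 - 8*r^2.
def trager_res(r):
    return (2 * r) ** 2 * ((1 / (2 * r)) ** 2 - 2)

for r in (0.3, -1.2, 2.0):
    assert isclose(trager_res(r), 1 - 8 * r ** 2)

# Its roots +-1/(2*sqrt(2)) give the antiderivative
# F(x) = (1/(2*sqrt(2))) * (log(x - sqrt(2)) - log(x + sqrt(2)));
# check F'(x) = 1/(x^2 - 2) numerically at x = 3.
c = 1 / (2 * sqrt(2))
F = lambda x: c * (log(x - sqrt(2)) - log(x + sqrt(2)))
h = 1e-6
deriv = (F(3 + h) - F(3 - h)) / (2 * h)
assert isclose(deriv, 1 / (3 ** 2 - 2), rel_tol=1e-6)
```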
In fact most computer algebra systems include an efficient implementation of the computation of resultants. == Homogeneous resultant == The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials P(x, y) and Q(x, y) of respective total degrees p and q, their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map ( A , B ) ↦ A P + B Q , {\displaystyle (A,B)\mapsto AP+BQ,} where A runs over the bivariate homogeneous polynomials of degree q − 1, and B runs over the homogeneous polynomials of degree p − 1. In other words, the homogeneous resultant of P and Q is the resultant of P(x, 1) and Q(x, 1) when they are considered as polynomials of degree p and q (their degree in x may be lower than their total degree): Res ⁡ ( P ( x , y ) , Q ( x , y ) ) = res p , q ⁡ ( P ( x , 1 ) , Q ( x , 1 ) ) . {\displaystyle \operatorname {Res} (P(x,y),Q(x,y))=\operatorname {res} _{p,q}(P(x,1),Q(x,1)).} (The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation). The homogeneous resultant has essentially the same properties as the usual resultant, with two differences: instead of polynomial roots, one considers zeros in the projective line, and the degree of a polynomial may not change under a ring homomorphism. That is: The resultant of two homogeneous polynomials over an integral domain is zero if and only if they have a non-zero common zero over an algebraically closed field containing the coefficients. If P and Q are two bivariate homogeneous polynomials with coefficients in a commutative ring R, and φ : R → S {\displaystyle \varphi \colon R\to S} a ring homomorphism of R into another commutative ring S, then extending φ {\displaystyle \varphi } to polynomials over R, one has Res ⁡ ( φ ( P ) , φ ( Q ) ) = φ ( Res ⁡ ( P , Q ) ) . 
{\displaystyle \operatorname {Res} (\varphi (P),\varphi (Q))=\varphi (\operatorname {Res} (P,Q)).} The property of a homogeneous resultant being zero is invariant under any projective change of variables. Any property of the usual resultant may be similarly extended to the homogeneous resultant, and the resulting property is either very similar to or simpler than the corresponding property of the usual resultant. == Macaulay's resultant == Macaulay's resultant, named after Francis Sowerby Macaulay, also called the multivariate resultant, or the multipolynomial resultant, is a generalization of the homogeneous resultant to n homogeneous polynomials in n indeterminates. Macaulay's resultant is a polynomial in the coefficients of these n homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the n hypersurfaces defined by the polynomials have a common zero in the (n − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers). Like the homogeneous resultant, Macaulay's resultant may be defined with determinants, and thus behaves well under ring homomorphisms. However, it cannot be defined by a single determinant. It follows that it is easier to define it first on generic polynomials. === Resultant of generic homogeneous polynomials === A homogeneous polynomial of degree d in n variables may have up to ( n + d − 1 n − 1 ) = ( n + d − 1 ) ! ( n − 1 ) ! d ! {\displaystyle {\binom {n+d-1}{n-1}}={\frac {(n+d-1)!}{(n-1)!\,d!}}} coefficients; it is said to be generic if these coefficients are distinct indeterminates. Let P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} be n generic homogeneous polynomials in n indeterminates, of respective degrees d 1 , … , d n . 
{\displaystyle d_{1},\dots ,d_{n}.} Together, they involve ∑ i = 1 n ( n + d i − 1 n − 1 ) {\displaystyle \sum _{i=1}^{n}{\binom {n+d_{i}-1}{n-1}}} indeterminate coefficients. Let C be the polynomial ring over the integers, in all these indeterminate coefficients. The polynomials P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} thus belong to C [ x 1 , … , x n ] , {\displaystyle C[x_{1},\ldots ,x_{n}],} and their resultant (still to be defined) belongs to C. The Macaulay degree is the integer D = d 1 + ⋯ + d n − n + 1 , {\displaystyle D=d_{1}+\cdots +d_{n}-n+1,} which is fundamental in Macaulay's theory. For defining the resultant, one considers the Macaulay matrix, which is the matrix over the monomial basis of the C-linear map ( Q 1 , … , Q n ) ↦ Q 1 P 1 + ⋯ + Q n P n , {\displaystyle (Q_{1},\ldots ,Q_{n})\mapsto Q_{1}P_{1}+\cdots +Q_{n}P_{n},} in which each Q i {\displaystyle Q_{i}} runs over the homogeneous polynomials of degree D − d i , {\displaystyle D-d_{i},} and the codomain is the C-module of the homogeneous polynomials of degree D. If n = 2, the Macaulay matrix is the Sylvester matrix, and is a square matrix, but this is no longer true for n > 2. Thus, instead of considering the determinant, one considers all the maximal minors, that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the C-ideal generated by these maximal minors is a principal ideal, which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign. The generic Macaulay resultant is the greatest common divisor that becomes 1 when, for each i, zero is substituted for all coefficients of P i , {\displaystyle P_{i},} except the coefficient of x i d i , {\displaystyle x_{i}^{d_{i}},} for which one is substituted. 
==== Properties of the generic Macaulay resultant ==== The generic Macaulay resultant is an irreducible polynomial. It is homogeneous of degree B / d i {\displaystyle B/d_{i}} in the coefficients of P i , {\displaystyle P_{i},} where B = d 1 ⋯ d n {\displaystyle B=d_{1}\cdots d_{n}} is the Bézout bound. The product with the resultant of every monomial of degree D in x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} belongs to the ideal of C [ x 1 , … , x n ] {\displaystyle C[x_{1},\dots ,x_{n}]} generated by P 1 , … , P n . {\displaystyle P_{1},\dots ,P_{n}.} === Resultant of polynomials over a field === From now on, we consider that the homogeneous polynomials P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} of degrees d 1 , … , d n {\displaystyle d_{1},\ldots ,d_{n}} have their coefficients in a field k, that is that they belong to k [ x 1 , … , x n ] . {\displaystyle k[x_{1},\dots ,x_{n}].} Their resultant is defined as the element of k obtained by replacing in the generic resultant the indeterminate coefficients by the actual coefficients of the P i . {\displaystyle P_{i}.} The main property of the resultant is that it is zero if and only if P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} have a nonzero common zero in an algebraically closed extension of k. The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of projective Nullstellensatz: If the resultant is nonzero, then ⟨ x 1 , … , x n ⟩ D ⊆ ⟨ P 1 , … , P n ⟩ , {\displaystyle \langle x_{1},\ldots ,x_{n}\rangle ^{D}\subseteq \langle P_{1},\ldots ,P_{n}\rangle ,} where D = d 1 + ⋯ + d n − n + 1 {\displaystyle D=d_{1}+\cdots +d_{n}-n+1} is the Macaulay degree, and ⟨ x 1 , … , x n ⟩ {\displaystyle \langle x_{1},\ldots ,x_{n}\rangle } is the maximal homogeneous ideal. This implies that P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} have no other common zero than the unique common zero, (0, ..., 0), of x 1 , … , x n . 
{\displaystyle x_{1},\ldots ,x_{n}.} === Computability === As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors, there are algorithms for computing resultants in a finite number of steps. However, the generic resultant is a polynomial of very high degree (exponential in n) depending on a huge number of indeterminates. It follows that, except for very small n and very small degrees of input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, even if it could be computed, the result could not be stored on available memory devices, even for rather small values of n and of the degrees of the input polynomials. Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field. In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is lower than the number of its rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity d O ( n ) , {\displaystyle d^{O(n)},} where d is the maximum degree of input polynomials. Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} which, together with the coordinates of the point, are a zero of the input polynomials. 
In other words, the resultant is the result of the "elimination" of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} from the input polynomials. === U-resultant === Macaulay's resultant provides a method, called "U-resultant" by Macaulay, for solving systems of polynomial equations. Given n − 1 homogeneous polynomials P 1 , … , P n − 1 , {\displaystyle P_{1},\ldots ,P_{n-1},} of degrees d 1 , … , d n − 1 , {\displaystyle d_{1},\ldots ,d_{n-1},} in n indeterminates x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} over a field k, their U-resultant is the resultant of the n polynomials P 1 , … , P n − 1 , P n , {\displaystyle P_{1},\ldots ,P_{n-1},P_{n},} where P n = u 1 x 1 + ⋯ + u n x n {\displaystyle P_{n}=u_{1}x_{1}+\cdots +u_{n}x_{n}} is the generic linear form whose coefficients are new indeterminates u 1 , … , u n . {\displaystyle u_{1},\ldots ,u_{n}.} Notation u i {\displaystyle u_{i}} or U i {\displaystyle U_{i}} for these generic coefficients is traditional, and is the origin of the term U-resultant. The U-resultant is a homogeneous polynomial in k [ u 1 , … , u n ] . {\displaystyle k[u_{1},\ldots ,u_{n}].} It is zero if and only if the common zeros of P 1 , … , P n − 1 {\displaystyle P_{1},\ldots ,P_{n-1}} form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of k). If the U-resultant is not zero, its degree is the Bézout bound d 1 ⋯ d n − 1 . {\displaystyle d_{1}\cdots d_{n-1}.} The U-resultant factorizes over an algebraically closed extension of k into a product of linear forms. If α 1 u 1 + … + α n u n {\displaystyle \alpha _{1}u_{1}+\ldots +\alpha _{n}u_{n}} is such a linear factor, then α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} are the homogeneous coordinates of a common zero of P 1 , … , P n − 1 . 
{\displaystyle P_{1},\ldots ,P_{n-1}.} Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the P i {\displaystyle P_{i}} at this zero. In other words, the U-resultant provides a completely explicit version of Bézout's theorem. ==== Extension to more polynomials and computation ==== The U-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be n − 1 {\displaystyle n-1} , where n {\displaystyle n} is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from n − 1 {\displaystyle n-1} , and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation. Let P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} be homogeneous polynomials in x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} of degrees d 1 , … , d k , {\displaystyle d_{1},\ldots ,d_{k},} over a field k. Without loss of generality, one may suppose that d 1 ≥ d 2 ≥ ⋯ ≥ d k . {\displaystyle d_{1}\geq d_{2}\geq \cdots \geq d_{k}.} Setting d i = 1 {\displaystyle d_{i}=1} for i > k, the Macaulay bound is D = d 1 + ⋯ + d n − n + 1. {\displaystyle D=d_{1}+\cdots +d_{n}-n+1.} Let u 1 , … , u n {\displaystyle u_{1},\ldots ,u_{n}} be new indeterminates and define P k + 1 = u 1 x 1 + ⋯ + u n x n . 
{\displaystyle P_{k+1}=u_{1}x_{1}+\cdots +u_{n}x_{n}.} In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} of the linear map ( Q 1 , … , Q k + 1 ) ↦ P 1 Q 1 + ⋯ + P k + 1 Q k + 1 , {\displaystyle (Q_{1},\ldots ,Q_{k+1})\mapsto P_{1}Q_{1}+\cdots +P_{k+1}Q_{k+1},} where, for each i, Q i {\displaystyle Q_{i}} runs over the linear space consisting of zero and the homogeneous polynomials of degree D − d i {\displaystyle D-d_{i}} . Reducing the Macaulay matrix by a variant of Gaussian elimination, one obtains a square matrix of linear forms in u 1 , … , u n . {\displaystyle u_{1},\ldots ,u_{n}.} The determinant of this matrix is the U-resultant. As with the original U-resultant, it is zero if and only if P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} have infinitely many common projective zeros (that is if the projective algebraic set defined by P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} has infinitely many points over an algebraic closure of k). Again as with the original U-resultant, when this U-resultant is not zero, it factorizes into linear factors over any algebraically closed extension of k. The coefficients of these linear factors are the homogeneous coordinates of the common zeros of P 1 , … , P k , {\displaystyle P_{1},\ldots ,P_{k},} and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor. The number of rows of the Macaulay matrix is less than ( e d ) n , {\displaystyle (ed)^{n},} where e ~ 2.7182 is the usual mathematical constant, and d is the arithmetic mean of the degrees of the P i . {\displaystyle P_{i}.} It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time d O ( n ) . 
{\displaystyle d^{O(n)}.} Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem). This computation may be practically viable when n, k and d are not large. == See also == Elimination theory Subresultant Nonlinear algebra == Notes == == References == == External links == Weisstein, Eric W. "Resultant". MathWorld.
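The Macaulay degree bound and the size of the Macaulay matrix described above can be made concrete with a short sketch (the helper names `macaulay_bound` and `macaulay_rows` are illustrative, not from any standard library):

```python
from math import comb

def macaulay_bound(degrees, n):
    """D = d_1 + ... + d_n - n + 1, padding with degree-1 forms
    (such as the linear U-polynomial) when fewer than n are given."""
    ds = sorted(degrees, reverse=True)
    ds += [1] * max(0, n - len(ds))
    return sum(ds[:n]) - n + 1

def macaulay_rows(degrees, n):
    """Rows of the Macaulay matrix: monomials of degree D in n variables."""
    D = macaulay_bound(degrees, n)
    return comb(D + n - 1, n - 1)

# Three quadrics in three homogeneous variables: D = 2 + 2 + 2 - 3 + 1 = 4,
# and there are C(6, 2) = 15 monomials of degree 4 in x1, x2, x3.
```

For this example the stated bound (e·d)^n ≈ (2e)^3 ≈ 160 comfortably exceeds the 15 actual rows.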
Wikipedia:Reuben Burrow#0
Reuben Burrow (30 December 1747 – 7 June 1792) was an English mathematician, surveyor and orientalist. Initially a teacher, he was appointed assistant to Sir Nevil Maskelyne, the fifth Astronomer Royal, at the Royal Greenwich Observatory, and was involved in the Schiehallion experiment. He later conducted research in India, teaching himself Sanskrit and becoming one of the first members of the Asiatic Society. He was the first to measure the length of a degree of an arc of longitude along the Tropic of Cancer. His other major achievements included a study of Indian mathematics, although he earned a reputation for being rude and unpolished amid the leading figures in science, who came mostly from the upper class. One commentator called him "an able mathematician but a most vulgar and scurrilous dog." == Biography == Burrow was born at Hoberley, near Shadwell, Leeds. His father, a small tenant farmer, gave him some schooling, occasionally interrupted by labour on the farm. He showed an ability and keenness for mathematics early on, and received some instruction from a schoolmaster named Crooks at Leeds. At the age of 18 he walked for four days, all the way from Leeds to London, to seek a job, and obtained a clerkship in the office of a London merchant. A year later he became an usher in the school of Benjamin Webb, the ‘celebrated writing-master.’ He next set up as a schoolmaster on his own account at Portsmouth. Here he taught mathematics for navigation to aspiring midshipmen. His reputation in mathematics reached Nevil Maskelyne, and he received the offer of a position as "labourer" at the Greenwich Observatory. Burrow, an argumentative man, was however unable to work with the genteel and refined Astronomer Royal, and he soon left. In 1772 he married Anne Purvis, daughter of a poulterer in Leadenhall Street, and started a school at Greenwich.
In 1774, Burrow and William Menzies aided Maskelyne in his observations in the Schiehallion experiment, to examine the deflection by gravity of the plumbline towards the mountain. He complained later that his services were insufficiently recognised. Burrow was liable to use words of abuse at anyone and he did not think Maskelyne deserved his position. Soon afterwards, however, he was appointed ‘mathematical teacher in the drawing-room at the Tower,’ where there was then a training school for artillery officers, afterwards merged into the Royal Military Academy, Woolwich. His salary was £100 a year. Here he became editor of the Ladies and Gentlemen's Diary, or Royal Almanack. It was started by Thomas Carnan, in opposition to the Ladies' Diary, published by the Stationers' Company and edited by Charles Hutton and like it included mathematical puzzles. The company claimed a monopoly of almanacs, but their claim was disallowed by the court of common pleas, on their bringing an action against Carnan, who published the first number of his diary in December 1775. It continued till 1786, the word 'Gentlemen' being dropped after 1780. Part of it was devoted to mathematical problems by Burrow and various contributors. Burrow quarrelled with his rival, Charles Hutton. Falling out with Maskelyne and others, he eked out his living by taking private pupils, and did a little work for publishers; a bit of work at London brought him in contact with Colonel Henry Watson, for many years chief engineer in Bengal under Lord Clive. Watson recommended him to Lord Townsend as a good candidate "to teach mathematics to the Cadets of the drawing room" of the tower. In 1777 he worked on a survey of the coast "from Naze in Essex to Hollseby bay in Sussex" (actually Suffolk) to assess vulnerabilities to attack by the French. He was joined by a party of pupils and the fleet was commanded by Admiral Howe. 
He later complained to the master-general of the ordnance, the Duke of Richmond, that he was not paid "a farthing". Henry Watson in the meantime was recalled to Bengal and offered to take Burrow along. Burrow resigned on 30 April 1782 "in order to go to the East Indies." === India === Burrow left his wife and growing family behind and boarded the East Indiaman General Coote. The journey included fights with others, including the first mate, whom Burrow suspected of plotting a mutiny. Soon after reaching India he wrote to the Governor-General Warren Hastings, a school friend of Maskelyne, stating his desire to generate more money in order to conduct further research. For a while he earned a living by teaching in Calcutta, and it was reported that one of his Kashmiri students, Tafazzul Husain Khan Kashmiri (who studied under James Dinwiddie and died in 1800), was translating Newton's Principia into Persian. He was also interested in ancient geometry, as shown by his book on Apollonius, A Restitution of the Geometrical Treatise of Apollonius Pergæus on Inclinations (1779), and was curious to investigate the mathematical treatises in ancient Hindu and other Oriental literature. He later published Hindoo Knowledge of the Binomial Theorem. He asked for Hastings's encouragement; and other letters and papers show that he pursued these inquiries, having learnt Sanskrit on his own for the purpose (he already knew Latin, French and Italian and had picked up some Arabic and Persian), and collected many Sanskrit and Persian manuscripts. He was appointed mathematical teacher of the engineers' corps, and afterwards had some employment in connection with a proposed trigonometrical survey of Bengal. Burrow was one of the first members of The Asiatic Society, founded by fellow orientalist William Jones, and contributed to its research.
In 1788, William Roy suggested that Burrow would be ideally qualified to conduct trigonometrical experiments in Bengal to measure a meridian arc. In 1790 Burrow wrote to the directors of the East India Company proposing to set up an "Indian Greenwich". The proposal was turned down, as the Madras Observatory was being considered around the same time. Burrow had obtained instruments used by the late Colonel Thomas Dean Pearse (1741–1789) and began his measurement of a baseline near Calcutta in 1791. Burrow measured along both the latitude and the longitude near Cawksally, close to Krishnagar in Nadia District. His equipment was not standard, and his methodology introduced many errors. From his observations the length of a degree of the meridian arc near the Tropic of Cancer was later determined by Isaac Dalby as 362,742 feet, or 68.70 miles (110.56 km). Weakened by malaria, Burrow died at Buxar in India on 7 June 1792. His wife, son and three daughters, who had joined him in India in 1790, returned after his death. His son died as an officer in the service of the East India Company. A Short Account of the late Mr. Burrow's Measurement of a Degree of Longitude and another of Latitude near the tropic in Bengal was published posthumously by his friend Isaac Dalby in 1796. Burrow wrote crude poems; a few, written pseudonymously, lampooned Maskelyne and nearly all his mathematical peers other than William Emerson, who had taught him briefly. He earned a reputation for drinking and pugilism, and came to be characterized by a contemporary as "an able mathematician but a most vulgar and scurrilous dog." == Writings == Most of Burrow's writings appeared in the Asiatick Researches. A few of his findings were communicated posthumously by his friend Isaac Dalby. A method of calculating the moon's parallaxes in latitude and longitude. Asiatic Researches 1:320. Hints relative to friction in Mechanics. Asiatic Researches 1:171. Remarks on the Artificial Horizon.
Asiatic Researches 1:327. Demonstration of a theorem concerning the intersection of curves. Asiatic Researches 1:330. Corrections of the lunar method of finding the longitude. Asiatic Researches 1:433. A synopsis of the different cases that may happen in deducing the longitude of one place from another by means of Arnold's chronometers. Asiatic Researches 2:473. A proof that the Hindoos had the binomial theorem. Asiatic Researches 2:487 Memorandums concerning an old building. Asiatic Researches 2:477 Observations on some of the eclipses of Jupiter's satellites. Asiatic Researches 2:483 A demonstration of one of the Hindoo rules of arithmetic. Asiatic Researches 3:145 == See also == Tafazzul Husain Kashmiri James Dinwiddie == References == == External links == "Newton on the Ganges: Asiatic Enlightenment of British Astronomy," 2008 lecture by Simon Schaffer that discusses Burrow's time in India and his mathematical work in assisting the translation of Newton's Principia into Arabic. This article incorporates text from a publication now in the public domain: Stephen, Leslie (1886). "Burrow, Reuben". In Stephen, Leslie (ed.). Dictionary of National Biography. Vol. 07. London: Smith, Elder & Co.
Wikipedia:Reuven Rubinstein#0
Reuven Rubinstein (Hebrew: ראובן רובינשטיין; 1938–2012) was an Israeli scientist known for his contributions to Monte Carlo simulation, applied probability, stochastic modeling, and stochastic optimization, having authored more than one hundred papers and six books. During his career, Rubinstein made fundamental and important contributions in these fields and advanced the theory and application of adaptive importance sampling, rare-event simulation, stochastic optimization, sensitivity analysis of simulation-based models, the splitting method, and counting problems concerning NP-complete problems. He is well known as the founder of several breakthrough methods, such as the score-function method, the stochastic counterpart method, and the cross-entropy method, which have numerous applications in combinatorial optimization and simulation. == Early life and education == Rubinstein was born on August 25, 1938, in Kaunas, Lithuania. In 1960, he received a BSc (unknown subject) and MSc (Electrical Engineering) from the Kaunas Polytechnical Institute. In 1969, he received a DSc in Operations Research from the Riga Polytechnical Institute. == Career and awards == His citation index is in the top 5% among his colleagues in operations research and management sciences. His 1981 book "Simulation and the Monte Carlo Method", Wiley (second edition 2008 and third edition 2017, with Dirk Kroese) alone has over 10,000 citations. He has held visiting positions in various research institutes, including the University of Illinois Urbana-Champaign, Columbia University, Harvard University, Stanford University, IBM, and Bell Laboratories. He has given invited and plenary lectures in many international conferences around the globe. In 2010 Prof. 
Rubinstein won the INFORMS Simulation Society's highest prize—the Lifetime Professional Achievement Award (LPAA), which recognizes scholars who have made fundamental contributions to the field of simulation that persist over most of a professional career. In 2011 Rubinstein received the Lifetime Professional Award (LPA) from the Operations Research Society of Israel (ORSIS); it is ORSIS's highest award and recognizes scholars who have made fundamental contributions to the field of operations research over most of a professional career. == Publications == === Books === Rubinstein, R.Y., "Simulation and the Monte Carlo Method", Wiley, 1981. Rubinstein, R.Y., "Monte Carlo Optimization, Simulation and Sensitivity of Queueing Networks", Wiley, 1986. Rubinstein, R.Y., and A. Shapiro, "Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization", Wiley, 1993. Melamed, B., and R.Y. Rubinstein, "Modern Simulation and Modeling", Wiley, 1998. Rubinstein, R.Y., and D.P. Kroese, "The Cross-Entropy Method: a Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning", Springer, 2004. Rubinstein, R.Y., and D.P. Kroese, "Simulation and the Monte Carlo Method", Second Edition, Wiley, 2008. Rubinstein, R.Y., Ridder, A., and R. Vaisman, "Fast Sequential Monte Carlo Methods for Counting and Optimization", Wiley, 2014. Rubinstein, R.Y., and D.P. Kroese, "Simulation and the Monte Carlo Method", Third Edition, Wiley, 2017. === Journal articles === Rubinstein, R.Y., "The cross-entropy method for combinatorial and continuous optimization", Methodology and Computing in Applied Probability, 2, 127–190, 1999. Rubinstein R.Y., "Randomized algorithms with splitting: Why the classic randomized algorithms do not work and how to make them work", Methodology and Computing in Applied Probability, 2009. doi:10.1007/s11009-009-9126-6 == References ==
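As a flavour of the cross-entropy method mentioned above, the following is a minimal, one-dimensional sketch under simplifying assumptions (Gaussian sampling, a fixed elite size; the function and parameter names are ours, and the method as developed by Rubinstein is far more general): candidates are sampled from a parametrized distribution, an elite fraction with the best objective values is kept, and the distribution is refitted to the elite.

```python
import random
import statistics

def cross_entropy_minimize(f, mu=0.0, sigma=5.0,
                           n_samples=100, n_elite=10, n_iters=50, seed=0):
    """Iteratively sample from N(mu, sigma), keep the n_elite best
    candidates, and refit mu and sigma to that elite set."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        xs.sort(key=f)                              # best (lowest f) first
        elite = xs[:n_elite]
        mu = statistics.mean(elite)
        sigma = statistics.stdev(elite) + 1e-12     # guard against collapse
    return mu

# Minimize (x - 2)^2: the sampling mean concentrates near the minimizer x = 2.
best = cross_entropy_minimize(lambda x: (x - 2.0) ** 2)
```

The same sample-sort-refit loop carries over to combinatorial problems by replacing the Gaussian with a distribution over discrete structures.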
Wikipedia:Rhind Mathematical Papyrus#0
The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057, pBM 10058, and Brooklyn Museum 37.1784Ea-b) is one of the best known examples of ancient Egyptian mathematics. It is one of two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is the larger, but younger, of the two. In the papyrus' opening paragraphs Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues: This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made in the time of the King of Upper and Lower Egypt Nimaatre. The scribe Ahmose writes this copy. Several books and articles about the Rhind Mathematical Papyrus have been published, and a handful of these stand out. An edition of the Rhind Papyrus was published in 1923 by the English Egyptologist T. Eric Peet; it contains a discussion of the text that follows Francis Llewellyn Griffith's Book I, II and III outline. Chace published a compendium in 1927–29 which included photographs of the text. A more recent overview of the Rhind Papyrus was published in 1987 by Robins and Shute. == History == The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of the 12th dynasty king Amenemhat III. It dates to around 1550 BC. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso likely dating from "Year 11" of his successor, Khamudi. Alexander Henry Rhind, a Scottish antiquarian, purchased two parts of the papyrus in 1858 in Luxor, Egypt; it was stated to have been found in "one of the small buildings near the Ramesseum", near Luxor.
The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind. Fragments of the text were independently purchased in Luxor by American Egyptologist Edwin Smith in the mid 1860s, were donated by his daughter in 1906 to the New York Historical Society, and are now held by the Brooklyn Museum. An 18 cm (7.1 in) central section is missing. The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical-translation aspect remains incomplete in several respects. == Books == === Book I – Arithmetic and Algebra === The first part of the Rhind papyrus consists of reference tables and a collection of 21 arithmetic and 20 algebraic problems. The problems start out with simple fractional expressions, followed by completion (sekem) problems and more involved linear equations (aha problems). The first part of the papyrus is taken up by the 2/n table. The fractions 2/n for odd n ranging from 3 to 101 are expressed as sums of unit fractions. For example, 2 15 = 1 10 + 1 30 {\displaystyle {\frac {2}{15}}={\frac {1}{10}}+{\frac {1}{30}}} . The decomposition of 2/n into unit fractions is never more than 4 terms long, as for example: 2 101 = 1 101 + 1 202 + 1 303 + 1 606 {\displaystyle {\frac {2}{101}}={\frac {1}{101}}+{\frac {1}{202}}+{\frac {1}{303}}+{\frac {1}{606}}} This table is followed by a much smaller table of fractional expressions for the numbers 1 through 9 divided by 10. For instance the division of 7 by 10 is recorded as: 7 divided by 10 yields 2/3 + 1/30 After these two tables, the papyrus records 91 problems altogether, which have been designated by moderns as problems (or numbers) 1–87, including four other items which have been designated as problems 7B, 59B, 61B and 82B. Problems 1–7, 7B and 8–40 are concerned with arithmetic and elementary algebra.
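The decompositions quoted above are easy to verify with exact rational arithmetic, and a short sketch also shows that other unit-fraction expansions exist (the greedy Fibonacci–Sylvester method below is a modern algorithm, not the scribe's, and typically produces different terms from those in the papyrus):

```python
from fractions import Fraction

# Verify two entries quoted from the 2/n table.
assert Fraction(2, 15) == Fraction(1, 10) + Fraction(1, 30)
assert Fraction(2, 101) == sum(Fraction(1, k) for k in (101, 202, 303, 606))

def greedy_unit_fractions(q):
    """Greedy (Fibonacci-Sylvester) expansion: repeatedly remove the
    largest unit fraction not exceeding the remainder."""
    terms = []
    while q > 0:
        n = (q.denominator + q.numerator - 1) // q.numerator  # ceil(1/q)
        terms.append(Fraction(1, n))
        q -= Fraction(1, n)
    return terms

# Greedy gives 2/15 = 1/8 + 1/120, whereas the papyrus has 1/10 + 1/30.
```

That the table's choices differ from the greedy ones is one reason the scribe's selection criteria remain a subject of study.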
Problems 1–6 compute divisions of a certain number of loaves of bread by 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4, and 1 + 2/3 + 1/3 = 2 by different fractions. Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are ‘aha’ problems; these are linear equations. Problem 32 for instance corresponds (in modern notation) to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, which is an ancient Egyptian unit of volume. Beginning at this point, assorted units of measurement become much more important throughout the remainder of the papyrus, and indeed a major consideration throughout the rest of the papyrus is dimensional analysis. Problems 39 and 40 compute the division of loaves and use arithmetic progressions. === Book II – Geometry === The second part of the Rhind papyrus, being problems 41–59, 59B and 60, consists of geometry problems. Peet referred to these problems as "mensuration problems". ==== Volumes ==== Problems 41–46 show how to find the volume of both cylindrical and rectangular granaries. In problem 41 Ahmes computes the volume of a cylindrical granary. Given the diameter d and the height h, the volume V is given by: V = [ ( 1 − 1 / 9 ) d ] 2 h {\displaystyle V=\left[\left(1-1/9\right)d\right]^{2}h} In modern mathematical notation (and using d = 2r) this gives V = ( 8 / 9 ) 2 d 2 h = ( 256 / 81 ) r 2 h {\displaystyle V=(8/9)^{2}d^{2}h=(256/81)r^{2}h} . The fractional term 256/81 approximates the value of π as being 3.1605..., an error of less than one percent. Problem 47 is a table with fractional equalities which represent the ten situations where the physical volume quantity of "100 quadruple heqats" is divided by each of the multiples of ten, from ten through one hundred.
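The granary rule and its implied value of π can be checked exactly with rational arithmetic (the sample dimensions d = 9, h = 10 match the figures commonly cited for problem 41):

```python
from fractions import Fraction
import math

# ((8/9) d)^2 h compared with (pi/4) d^2 h gives an implied pi of 256/81.
pi_rmp = 4 * Fraction(8, 9) ** 2
assert pi_rmp == Fraction(256, 81)
rel_err = abs(float(pi_rmp) - math.pi) / math.pi   # about 0.6%, under 1%

d, h = 9, 10                                  # diameter and height in cubits
v_rule = (Fraction(8, 9) * d) ** 2 * h        # ((1 - 1/9) d)^2 h
v_modern = pi_rmp * Fraction(d, 2) ** 2 * h   # (256/81) r^2 h with r = d/2
assert v_rule == v_modern == 640              # cubic cubits
```

The exact agreement of the two volume expressions confirms that the "subtract one ninth of the diameter, then square" rule is algebraically identical to using π = 256/81.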
The quotients are expressed in terms of Horus eye fractions, sometimes also using a much smaller unit of volume known as a "quadruple ro". The quadruple heqat and the quadruple ro are units of volume derived from the simpler heqat and ro, such that these four units of volume satisfy the following relationships: 1 quadruple heqat = 4 heqat = 1280 ro = 320 quadruple ro. Thus:
100/10 quadruple heqat = 10 quadruple heqat
100/20 quadruple heqat = 5 quadruple heqat
100/30 quadruple heqat = (3 + 1/4 + 1/16 + 1/64) quadruple heqat + (1 + 2/3) quadruple ro
100/40 quadruple heqat = (2 + 1/2) quadruple heqat
100/50 quadruple heqat = 2 quadruple heqat
100/60 quadruple heqat = (1 + 1/2 + 1/8 + 1/32) quadruple heqat + (3 + 1/3) quadruple ro
100/70 quadruple heqat = (1 + 1/4 + 1/8 + 1/32 + 1/64) quadruple heqat + (2 + 1/14 + 1/21 + 1/42) quadruple ro
100/80 quadruple heqat = (1 + 1/4) quadruple heqat
100/90 quadruple heqat = (1 + 1/16 + 1/32 + 1/64) quadruple heqat + (1/2 + 1/18) quadruple ro
100/100 quadruple heqat = 1 quadruple heqat
==== Areas ==== Problems 48–55 show how to compute an assortment of areas. Problem 48 is notable in that it succinctly computes the area of a circle by approximating π. Specifically, problem 48 explicitly reinforces the convention (used throughout the geometry section) that "a circle's area stands to that of its circumscribing square in the ratio 64/81." Equivalently, the papyrus approximates π as 256/81, as was already noted above in the explanation of problem 41. Other problems show how to find the area of rectangles, triangles and trapezoids. ==== Pyramids ==== The final six problems are related to the slopes of pyramids. A seked problem is reported as follows: "If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?" The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face.
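The seked computation just described can be checked with a short sketch (assuming the standard equivalence of 1 cubit = 7 palms; the resulting 5 + 1/25 palms matches the answer usually quoted for this problem):

```python
from fractions import Fraction

height = 250   # cubits
base = 360     # cubits

# Seked: horizontal run of the face per one cubit of rise.
seked_cubits = Fraction(base, 2) / height
seked_palms = 7 * seked_cubits          # 1 cubit = 7 palms
assert seked_palms == 5 + Fraction(1, 25)
```
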
In other words, the quantity found for the seked is the cotangent of the angle to the base of the pyramid and its face. === Book III – Miscellany === The third part of the Rhind papyrus consists of the remainder of the 91 problems, being 61, 61B, 62–82, 82B, 83–84, and "numbers" 85–87, which are items that are not mathematical in nature. This final section contains more complicated tables of data (which frequently involve Horus eye fractions), several pefsu problems which are elementary algebraic problems concerning food preparation, and even an amusing problem (79) which is suggestive of geometric progressions, geometric series, and certain later problems and riddles in history. Problem 79 explicitly cites, "seven houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats." In particular problem 79 concerns a situation in which 7 houses each contain seven cats, which all eat seven mice, each of which would have eaten seven ears of grain, each of which would have produced seven measures of grain. The third part of the Rhind papyrus is therefore a kind of miscellany, building on what has already been presented. Problem 61 is concerned with multiplications of fractions. Problem 61B, meanwhile, gives a general expression for computing 2/3 of 1/n, where n is odd. In modern notation the formula given is 2 3 n = 1 2 n + 1 6 n {\displaystyle {\frac {2}{3n}}={\frac {1}{2n}}+{\frac {1}{6n}}} The technique given in 61B is closely related to the derivation of the 2/n table. Problems 62–68 are general problems of an algebraic nature. Problems 69–78 are all pefsu problems in some form or another. They involve computations regarding the strength of bread and beer, with respect to certain raw materials used in their production. Problem 79 sums five terms in a geometric progression. Its language is strongly suggestive of the more modern riddle and nursery rhyme "As I was going to St Ives". Problems 80 and 81 compute Horus eye fractions of hinu (or heqats). 
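Both the rule of problem 61B and the geometric series of problem 79 are easy to verify in a short sketch; the series total of 19,607 agrees with the sum recorded in the papyrus:

```python
from fractions import Fraction

# Problem 61B: two-thirds of 1/n equals 1/(2n) + 1/(6n).
for n in range(1, 102, 2):
    assert Fraction(2, 3 * n) == Fraction(1, 2 * n) + Fraction(1, 6 * n)

# Problem 79: 7 houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats.
total = sum(7 ** k for k in range(1, 6))
assert total == 7 * (7 ** 5 - 1) // 6   # closed form of the geometric sum
```

Note that the 61B identity holds for every nonzero n; the papyrus applies it to odd n, where 1/(2n) and 1/(6n) are the unit fractions needed for the 2/n table.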
The last four mathematical items, problems 82, 82B and 83–84, compute the amount of feed necessary for various animals, such as fowl and oxen. However, these problems, especially 84, are plagued by pervasive ambiguity, confusion, and simple inaccuracy. The final three items on the Rhind papyrus are designated as "numbers" 85–87, as opposed to "problems", and they are scattered widely across the papyrus's back side, or verso. They are, respectively, a small phrase which ends the document (and has a few possibilities for translation, given below), a piece of scrap paper unrelated to the body of the document, used to hold it together (yet containing words and Egyptian fractions which are by now familiar to a reader of the document), and a small historical note which is thought to have been written some time after the completion of the body of the papyrus's writing. This note is thought to describe events during the "Hyksos domination", a period of external interruption in ancient Egyptian society which is closely associated with its Second Intermediate Period. With these non-mathematical yet historically and philologically intriguing errata, the papyrus's writing comes to an end. == Unit concordance == Much of the Rhind Papyrus's material is concerned with Ancient Egyptian units of measurement and especially the dimensional analysis used to convert between them. A concordance of units of measurement used in the papyrus is given in the image. == Content == This table summarizes the content of the Rhind Papyrus by means of a concise modern paraphrase. It is based upon the two-volume exposition of the papyrus which was published by Arnold Buffum Chace in 1927, and in 1929. In general, the papyrus consists of four sections: a title page, the 2/n table, a tiny "1–9/10 table", and 91 problems, or "numbers". The latter are numbered from 1 through 87 and include four mathematical items which have been designated by moderns as problems 7B, 59B, 61B, and 82B.
Numbers 85–87, meanwhile, are not mathematical items forming part of the body of the document, but instead are respectively: a small phrase ending the document, a piece of "scrap-paper" used to hold the document together (having already contained unrelated writing), and a historical note which is thought to describe a time period shortly after the completion of the body of the papyrus. These three latter items are written on disparate areas of the papyrus's verso (back side), far away from the mathematical content. Chace therefore differentiates them by styling them as numbers as opposed to problems, like the other 88 numbered items. == See also == List of ancient Egyptian papyri Akhmim wooden tablet Ancient Egyptian units of measurement As I was going to St. Ives Berlin Papyrus 6619 History of mathematics Lahun Mathematical Papyri == Bibliography == Chace, Arnold Buffum; et al. (1927). The Rhind Mathematical Papyrus. Vol. 1. Oberlin, Ohio: Mathematical Association of America – via Internet Archive. Chace, Arnold Buffum; et al. (1929). The Rhind Mathematical Papyrus. Vol. 2. Oberlin, Ohio: Mathematical Association of America – via Internet Archive. Gillings, Richard J. (1972). Mathematics in the Time of the Pharaohs (Dover reprint ed.). MIT Press. ISBN 0-486-24315-X. Robins, Gay; Shute, Charles (1987). The Rhind Mathematical Papyrus: an Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4. == References == == External links == British Museum webpage on the first section of the Papyrus British Museum webpage on the second section of the Papyrus British Museum webpage on the Papyrus at the Wayback Machine (archived June 29, 2015). "Mathematics in Egyptian Papyri". MacTutor History of Mathematics archive. Weisstein, Eric W. "Rhind Papyrus". MathWorld. Williams, Scott W. Mathematicians of the African Diaspora, containing a page on Egyptian Mathematics Papyri.
Wikipedia:Rhind Mathematical Papyrus 2/n table#0
The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057, pBM 10058, and Brooklyn Museum 37.1784Ea-b) is one of the best known examples of ancient Egyptian mathematics. It is one of two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is the larger, but younger, of the two. In the papyrus' opening paragraphs Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues: This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made in the time of the King of Upper and Lower Egypt Nimaatre. The scribe Ahmose writes this copy. Several books and articles about the Rhind Mathematical Papyrus have been published, and a handful of these stand out. The Rhind Papyrus was published in 1923 by the English Egyptologist T. Eric Peet and contains a discussion of the text that followed Francis Llewellyn Griffith's Book I, II and III outline. Chace published a compendium in 1927–29 which included photographs of the text. A more recent overview of the Rhind Papyrus was published in 1987 by Robins and Shute. == History == The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of the 12th dynasty king Amenemhat III. It dates to around 1550 BC. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso likely dating from "Year 11" of his successor, Khamudi. Alexander Henry Rhind, a Scottish antiquarian, purchased two parts of the papyrus in 1858 in Luxor, Egypt; it was stated to have been found in "one of the small buildings near the Ramesseum", near Luxor. 
The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind. Fragments of the text were independently purchased in Luxor by American Egyptologist Edwin Smith in the mid 1860s, were donated by his daughter in 1906 to the New York Historical Society, and are now held by the Brooklyn Museum. An 18 cm (7.1 in) central section is missing. The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical-translation aspect remains incomplete in several respects. == Books == === Book I – Arithmetic and Algebra === The first part of the Rhind papyrus consists of reference tables and a collection of 21 arithmetic and 20 algebraic problems. The problems start out with simple fractional expressions, followed by completion (sekem) problems and more involved linear equations (aha problems). The first part of the papyrus is taken up by the 2/n table. The fractions 2/n for odd n ranging from 3 to 101 are expressed as sums of unit fractions. For example, 2 15 = 1 10 + 1 30 {\displaystyle {\frac {2}{15}}={\frac {1}{10}}+{\frac {1}{30}}} . The decomposition of 2/n into unit fractions is never more than 4 terms long as in for example: 2 101 = 1 101 + 1 202 + 1 303 + 1 606 {\displaystyle {\frac {2}{101}}={\frac {1}{101}}+{\frac {1}{202}}+{\frac {1}{303}}+{\frac {1}{606}}} This table is followed by a much smaller, tiny table of fractional expressions for the numbers 1 through 9 divided by 10. For instance the division of 7 by 10 is recorded as: 7 divided by 10 yields 2/3 + 1/30 After these two tables, the papyrus records 91 problems altogether, which have been designated by moderns as problems (or numbers) 1–87, including four other items which have been designated as problems 7B, 59B, 61B and 82B. Problems 1–7, 7B and 8–40 are concerned with arithmetic and elementary algebra. 
Problems 1–6 compute divisions of a certain number of loaves of bread by 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4, and 1 + 2/3 + 1/3 = 2 by different fractions. Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are ‘‘aha’’ problems; these are linear equations. Problem 32 for instance corresponds (in modern notation) to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, which is an ancient Egyptian unit of volume. Beginning at this point, assorted units of measurement become much more important throughout the remainder of the papyrus, and indeed a major consideration throughout the rest of the papyrus is dimensional analysis. Problems 39 and 40 compute the division of loaves and use arithmetic progressions. === Book II – Geometry === The second part of the Rhind papyrus, being problems 41–59, 59B and 60, consists of geometry problems. Peet referred to these problems as "mensuration problems". ==== Volumes ==== Problems 41–46 show how to find the volume of both cylindrical and rectangular granaries. In problem 41 Ahmes computes the volume of a cylindrical granary. Given the diameter d and the height h, the volume V is given by: V = [ ( 1 − 1 / 9 ) d ] 2 h {\displaystyle V=\left[\right(1-1/9\left)d\right]^{2}h} In modern mathematical notation (and using d = 2r) this gives V = ( 8 / 9 ) 2 d 2 h = ( 256 / 81 ) r 2 h {\displaystyle V=(8/9)^{2}d^{2}h=(256/81)r^{2}h} . The fractional term 256/81 approximates the value of π as being 3.1605..., an error of less than one percent. Problem 47 is a table with fractional equalities which represent the ten situations where the physical volume quantity of "100 quadruple heqats" is divided by each of the multiples of ten, from ten through one hundred. 
The quotients are expressed in terms of Horus eye fractions, sometimes also using a much smaller unit of volume known as a "quadruple ro". The quadruple heqat and the quadruple ro are units of volume derived from the simpler heqat and ro, such that these four units of volume satisfy the following relationships: 1 quadruple heqat = 4 heqat = 1280 ro = 320 quadruple ro. Thus,
100/10 quadruple heqat = 10 quadruple heqat
100/20 quadruple heqat = 5 quadruple heqat
100/30 quadruple heqat = (3 + 1/4 + 1/16 + 1/64) quadruple heqat + (1 + 2/3) quadruple ro
100/40 quadruple heqat = (2 + 1/2) quadruple heqat
100/50 quadruple heqat = 2 quadruple heqat
100/60 quadruple heqat = (1 + 1/2 + 1/8 + 1/32) quadruple heqat + (3 + 1/3) quadruple ro
100/70 quadruple heqat = (1 + 1/4 + 1/8 + 1/32 + 1/64) quadruple heqat + (2 + 1/14 + 1/21 + 1/42) quadruple ro
100/80 quadruple heqat = (1 + 1/4) quadruple heqat
100/90 quadruple heqat = (1 + 1/16 + 1/32 + 1/64) quadruple heqat + (1/2 + 1/18) quadruple ro
100/100 quadruple heqat = 1 quadruple heqat
==== Areas ==== Problems 48–55 show how to compute an assortment of areas. Problem 48 is notable in that it succinctly computes the area of a circle by approximating π. Specifically, problem 48 explicitly reinforces the convention (used throughout the geometry section) that "a circle's area stands to that of its circumscribing square in the ratio 64/81." Equivalently, the papyrus approximates π as 256/81, as was already noted above in the explanation of problem 41. Other problems show how to find the area of rectangles, triangles and trapezoids. ==== Pyramids ==== The final six problems are related to the slopes of pyramids. A seked problem is reported as follows: "If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?" The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face.
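The problem 47 table and the seked rule can both be verified with exact fractions. The sketch below encodes a quadruple heqat as 320 quadruple ro, per the relationship stated above; converting the seked to palms assumes the conventional 7 palms per cubit, which the passage above does not itself state:

```python
from fractions import Fraction as F

QUAD_RO_PER_QUAD_HEQAT = 320

def as_quad_ro(heqat_part, ro_part=F(0)):
    """Convert 'a quadruple heqat + b quadruple ro' to quadruple ro."""
    return heqat_part * QUAD_RO_PER_QUAD_HEQAT + ro_part

# Problem 47, line for 100/30:
assert as_quad_ro(F(100, 30)) == as_quad_ro(
    3 + F(1, 4) + F(1, 16) + F(1, 64), 1 + F(2, 3))

# And the line for 100/70:
assert as_quad_ro(F(100, 70)) == as_quad_ro(
    1 + F(1, 4) + F(1, 8) + F(1, 32) + F(1, 64),
    2 + F(1, 14) + F(1, 21) + F(1, 42))

# Seked problem: height 250 cubits, base side 360 cubits.
run_to_rise = F(360, 2) / 250          # half the base over the height
assert run_to_rise == F(18, 25)
seked_palms = 7 * run_to_rise          # assuming 7 palms per cubit
assert seked_palms == 5 + F(1, 25)     # i.e. 5 1/25 palms
```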
In other words, the quantity found for the seked is the cotangent of the angle between the base of the pyramid and its face. === Book III – Miscellany === The third part of the Rhind papyrus consists of the remainder of the 91 problems, being 61, 61B, 62–82, 82B, 83–84, and "numbers" 85–87, which are items that are not mathematical in nature. This final section contains more complicated tables of data (which frequently involve Horus eye fractions), several pefsu problems which are elementary algebraic problems concerning food preparation, and even an amusing problem (79) which is suggestive of geometric progressions, geometric series, and certain later problems and riddles in history. Problem 79 explicitly cites "seven houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats." In particular, problem 79 concerns a situation in which 7 houses each contain seven cats, which all eat seven mice, each of which would have eaten seven ears of grain, each of which would have produced seven measures of grain. The third part of the Rhind papyrus is therefore a kind of miscellany, building on what has already been presented. Problem 61 is concerned with multiplications of fractions. Problem 61B, meanwhile, gives a general expression for computing 2/3 of 1/n, where n is odd. In modern notation the formula given is 2 3 n = 1 2 n + 1 6 n {\displaystyle {\frac {2}{3n}}={\frac {1}{2n}}+{\frac {1}{6n}}} The technique given in 61B is closely related to the derivation of the 2/n table. Problems 62–68 are general problems of an algebraic nature. Problems 69–78 are all pefsu problems in some form or another. They involve computations regarding the strength of bread and beer, with respect to certain raw materials used in their production. Problem 79 sums five terms in a geometric progression. Its language is strongly suggestive of the more modern riddle and nursery rhyme "As I was going to St Ives". Problems 80 and 81 compute Horus eye fractions of hinu (or heqats).
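Both results are easy to check directly. In the sketch below, the geometric-series closed form is a modern addition rather than the scribe's method:

```python
from fractions import Fraction as F

# Problem 79: houses, cats, mice, spelt, hekats are successive powers of 7.
terms = [7 ** k for k in range(1, 6)]      # [7, 49, 343, 2401, 16807]
total = sum(terms)
assert total == 19607
# Modern closed form for a geometric series with first term 7 and ratio 7:
assert total == 7 * (7 ** 5 - 1) // (7 - 1)

# Problem 61B: 2/(3n) = 1/(2n) + 1/(6n), verified here for odd n up to 99.
for n in range(3, 101, 2):
    assert F(2, 3 * n) == F(1, 2 * n) + F(1, 6 * n)
```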
The last four mathematical items, problems 82, 82B and 83–84, compute the amount of feed necessary for various animals, such as fowl and oxen. However, these problems, especially 84, are plagued by pervasive ambiguity, confusion, and simple inaccuracy. The final three items on the Rhind papyrus are designated as "numbers" 85–87, as opposed to "problems", and they are scattered widely across the papyrus's back side, or verso. They are, respectively, a small phrase which ends the document (and has a few possibilities for translation, given below), a piece of scrap paper unrelated to the body of the document, used to hold it together (yet containing words and Egyptian fractions which are by now familiar to a reader of the document), and a small historical note which is thought to have been written some time after the completion of the body of the papyrus's writing. This note is thought to describe events during the "Hyksos domination", a period of external interruption in ancient Egyptian society which is closely associated with the Second Intermediate Period. With these non-mathematical yet historically and philologically intriguing errata, the papyrus's writing comes to an end. == Unit concordance == Much of the Rhind Papyrus's material is concerned with Ancient Egyptian units of measurement and especially the dimensional analysis used to convert between them. A concordance of units of measurement used in the papyrus is given in the image. == Content == This table summarizes the content of the Rhind Papyrus by means of a concise modern paraphrase. It is based upon the two-volume exposition of the papyrus which was published by Arnold Buffum Chace in 1927 and 1929. In general, the papyrus consists of four sections: a title page, the 2/n table, a tiny "1–9/10 table", and 91 problems, or "numbers". The latter are numbered from 1 through 87 and include four mathematical items which have been designated by moderns as problems 7B, 59B, 61B, and 82B.
Numbers 85–87, meanwhile, are not mathematical items forming part of the body of the document, but instead are respectively: a small phrase ending the document, a piece of "scrap-paper" used to hold the document together (having already contained unrelated writing), and a historical note which is thought to describe a time period shortly after the completion of the body of the papyrus. These three latter items are written on disparate areas of the papyrus's verso (back side), far away from the mathematical content. Chace therefore differentiates them by styling them as numbers as opposed to problems, like the other 88 numbered items. == See also == List of ancient Egyptian papyri Akhmim wooden tablet Ancient Egyptian units of measurement As I was going to St. Ives Berlin Papyrus 6619 History of mathematics Lahun Mathematical Papyri == Bibliography == Chace, Arnold Buffum; et al. (1927). The Rhind Mathematical Papyrus. Vol. 1. Oberlin, Ohio: Mathematical Association of America – via Internet Archive. Chace, Arnold Buffum; et al. (1929). The Rhind Mathematical Papyrus. Vol. 2. Oberlin, Ohio: Mathematical Association of America – via Internet Archive. Gillings, Richard J. (1972). Mathematics in the Time of the Pharaohs (Dover reprint ed.). MIT Press. ISBN 0-486-24315-X. Robins, Gay; Shute, Charles (1987). The Rhind Mathematical Papyrus: an Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4. == References == == External links == British Museum webpage on the first section of the Papyrus British Museum webpage on the second section of the Papyrus British Museum webpage on the Papyrus at the Wayback Machine (archived June 29, 2015). "Mathematics in Egyptian Papyri". MacTutor History of Mathematics archive. Weisstein, Eric W. "Rhind Papyrus". MathWorld. Williams, Scott W. Mathematicians of the African Diaspora, containing a page on Egyptian Mathematics Papyri.
Wikipedia:Riaz Ahsan#0
Syed Riaz Ahsan (24 December 1951 – 8 September 2008) was a Pakistani statistician and mathematician who worked in applied statistics, applied analysis, and applications of special functions. He was a noted professor of applied statistics at the University of Karachi. He also served as president of the Sindh Professors and Lecturers' Association (SPLA). == Education and career == Born in Karachi, Ahsan had his initial education at the BYJ School, Karachi, and completed his intermediate studies at Adamjee Government Science College, Karachi. He received his B.Sc. (Hons) in Statistics and Mathematics. He earned his M.Sc. with distinction in statistics and an M.A. in mathematics with honors from Karachi University. Later, he received a double Ph.D. in applied statistics and applied mathematics from the same alma mater in 1974. == Academic career == In 1975, he started his career at DJ Science College, Karachi. He also served as president of the Sindh Professors and Lecturers Association (SPLA), which works for the betterment of professors and lecturers in the province of Sindh. In 1982, he went to Nigeria on deputation and taught statistics and mathematics at the University of Nigeria, Nsukka. Upon his return, he continued as a teacher at the Government Degree Science College, Karachi, from where he was later transferred to Saint Patrick's College, Karachi. In the early 1990s, Ahsan joined Karachi University as a full professor of statistics and mathematics and continued his career until his death. == Death == Dr. Ahsan was diagnosed with cancer in one of his legs in 2004 and had been on medication since then. He recovered for a while and participated in many campaigns launched by the SPLA, but fell ill again in 2008, never to recover. Registrar Prof. Raees Alvi paid him the following tribute: "He was a wise man who always guided teachers in the right direction.
He always searched for development and never had any prejudice. He always worked for the promotion of merit. This great man always fought for the rights of teachers and I would call it an irreparable loss for all of us." "The void he left behind can never be filled. It is a huge loss for SPLA and we would never get a personality like him in future", SPLA Senior Vice-president Manzoor Hussain Chishti said. == Family == His father, Professor Syed Zaheer Ahsan, was a professor of Geography at Karachi University. == References == == External links == Prof. Riaz Ahsan
Wikipedia:Richard Birkeland#0
Richard Birkeland (6 June 1879 – 10 April 1928) was a Norwegian mathematician, author and professor. He is known for his contributions to the theory of algebraic equations. == Biography == He was born at Farsund in Vest-Agder, Norway. He was the son of Theodor Birkeland (1834–1913) and Therese Karoline Overwien (1846–1883). He graduated from the Christiania Technical School in 1899. In 1906, he received a scholarship to study mathematics in Paris and Göttingen. He was a professor at the Norwegian Institute of Technology from 1910. He served as rector of the Norwegian Institute of Technology and, from 1923, was a professor at the University of Oslo. He was a co-founder of the Norwegian Mathematical Society in 1918 and was its vice chairman in the early years. He was for a time chairman of the Trondheim Polytechnic Association. He was decorated Knight of the Order of St. Olav. == Selected works ==
Sur certaines singularités des équations différentielles (1909)
Lærebok i matematisk analyse : differential- og integralregning, differentialligninger tillæg (1917)
== Personal life == He was a cousin of physics professor Kristian Birkeland (1867–1917). In 1909, he married Agnes Hoff (1883–1980). Their son Øivind (1910–2004) was a civil engineer. == References ==
Wikipedia:Richard Brauer#0
Richard Dagobert Brauer (February 10, 1901 – April 17, 1977) was a German and American mathematician. He worked mainly in abstract algebra, but made important contributions to number theory. He was the founder of modular representation theory. == Education and career == Alfred Brauer was Richard's brother and seven years older. They were born to a Jewish family. Both were interested in science and mathematics, but Alfred was injured in combat in World War I. As a boy, Richard dreamt of becoming an inventor, and in February 1919 enrolled in the Technische Hochschule Berlin-Charlottenburg. He soon transferred to the University of Berlin. Except for the summer of 1920, when he studied at the University of Freiburg, he studied in Berlin, being awarded his PhD on 16 March 1926. Issai Schur conducted a seminar and posed a problem in 1921 that Alfred and Richard worked on together and published a result on. The problem was also solved by Heinz Hopf at the same time. Richard wrote his thesis under Schur, providing an algebraic approach to irreducible, continuous, finite-dimensional representations of real orthogonal (rotation) groups. Ilse Karger also studied mathematics at the University of Berlin; she and Brauer were married on 17 September 1925. Their sons George Ulrich (born 1927) and Fred Gunther (born 1932) also became mathematicians. Brauer began his teaching career in Königsberg (now Kaliningrad) working as Konrad Knopp’s assistant. Brauer expounded central division algebras over a perfect field while in Königsberg; the isomorphism classes of such algebras form the elements of the Brauer group he introduced. When the Nazi Party took over in 1933, the Emergency Committee in Aid of Displaced Foreign Scholars took action to help Brauer and other Jewish scientists. Brauer was offered an assistant professorship at the University of Kentucky. Brauer accepted the offer, and by the end of 1933 he was in Lexington, Kentucky, teaching in English.
Ilse followed the next year with George and Fred; brother Alfred made it to the United States in 1939, but their sister Alice was killed in the Holocaust. Hermann Weyl invited Brauer to assist him at Princeton's Institute for Advanced Study in 1934. Brauer and Nathan Jacobson edited Weyl's lectures Structure and Representation of Continuous Groups. Through the influence of Emmy Noether, Brauer was invited to the University of Toronto to take up a faculty position. With his graduate student Cecil J. Nesbitt he developed modular representation theory, published in 1937. Robert Steinberg, Stephen Arthur Jennings, and Ralph Stanton were also Brauer’s students in Toronto. Brauer also conducted international research with Tadasi Nakayama on representations of algebras. In 1941 the University of Wisconsin hosted Brauer as a visiting professor. The following year he visited the Institute for Advanced Study and Bloomington, Indiana, where Emil Artin was teaching. In 1948, Brauer moved to Ann Arbor, Michigan, where he and Robert M. Thrall contributed to the program in modern algebra at the University of Michigan. In 1952, Brauer joined the faculty of Harvard University and retired in 1971. His students included Donald John Lewis, Donald Passman, and I. Martin Isaacs. Brauer was elected to the American Academy of Arts and Sciences in 1954, the United States National Academy of Sciences in 1955, and the American Philosophical Society in 1974. The Brauers frequently traveled to see their friends such as Reinhold Baer, Werner Wolfgang Rogosinski, and Carl Ludwig Siegel. == Mathematical work == Several theorems bear his name, including Brauer's induction theorem, which has applications in number theory as well as finite group theory, and its corollary Brauer's characterization of characters, which is central to the theory of group characters.
The Brauer–Fowler theorem, published in 1956, later provided significant impetus towards the classification of finite simple groups, for it implied that there could only be finitely many finite simple groups for which the centralizer of an involution (element of order 2) had a specified structure. Brauer introduced the idea of "resolvent degree" in 1975. He applied modular representation theory to obtain subtle information about group characters, particularly via his three main theorems. These methods were particularly useful in the classification of finite simple groups with low rank Sylow 2-subgroups. The Brauer–Suzuki theorem showed that no finite simple group could have a generalized quaternion Sylow 2-subgroup, and the Alperin–Brauer–Gorenstein theorem classified finite groups with wreathed or quasidihedral Sylow 2-subgroups. The methods developed by Brauer were also instrumental in contributions by others to the classification program: for example, the Gorenstein–Walter theorem, classifying finite groups with a dihedral Sylow 2-subgroup, and Glauberman's Z* theorem. The theory of a block with a cyclic defect group, first worked out by Brauer in the case when the principal block has defect group of order p, and later worked out in full generality by E. C. Dade, also had several applications to group theory, for example to finite groups of matrices over the complex numbers in small dimension. The Brauer tree is a combinatorial object associated to a block with cyclic defect group which encodes much information about the structure of the block. Brauer formulated numerous influential problems on modular representation theory, among which the Brauer height zero conjecture and the k(B) conjecture. In 1970, he was awarded the National Medal of Science. == Hypercomplex numbers == Eduard Study had written an article on hypercomplex numbers for Klein's encyclopedia in 1898. This article was expanded for the French language edition by Henri Cartan in 1908. 
By the 1930s there was evident need to update Study’s article, and Brauer was commissioned to write on the topic for the project. As it turned out, when Brauer had his manuscript prepared in Toronto in 1936, though it was accepted for publication, politics and war intervened. Nevertheless, Brauer kept his manuscript through the 1940s, 1950s, and 1960s, and in 1979 it was published by Okayama University in Japan. It also appeared posthumously as paper #22 in the first volume of his Collected Papers. His title was "Algebra der hyperkomplexen Zahlensysteme (Algebren)". Unlike the articles by Study and Cartan, which were exploratory, Brauer’s article reads as a modern abstract algebra text with its universal coverage. Consider his introduction: In the beginning of the 19th century, the usual complex numbers and their introduction through computations with number-pairs or points in the plane, became a general tool of mathematicians. Naturally the question arose whether or not a similar "hypercomplex" number can be defined using points of n-dimensional space. As it turns out, such extension of the system of real numbers requires the concession of some of the usual axioms (Weierstrass 1863). The selection of rules of computation, which cannot be avoided in hypercomplex numbers, naturally allows some choice. Yet in any cases set out, the resulting number systems allow a unique theory with regard to their structural properties and their classification. Further, one desires that these theories stand in close connection with other areas of mathematics, wherewith the possibility of their applications is given. While still in Königsberg in 1929, Brauer published an article in Mathematische Zeitschrift "Über Systeme hyperkomplexer Zahlen" which was primarily concerned with integral domains (Nullteilerfrei systeme) and the field theory which he used later in Toronto. == Publications == Brauer, R.; Sah, Chih-han, eds. (1969), Theory of finite groups: A symposium, W. A. 
Benjamin, Inc., New York-Amsterdam, MR 0240186 Brauer, R. (1980), Fong, Paul; Wong, Warren J. (eds.), Collected Papers. Vol. I, Mathematicians of Our Time, vol. 17, MIT Press, ISBN 978-0-262-02135-7, MR 0581120 Brauer, R. (1980), Fong, Paul; Wong, Warren J. (eds.), Collected Papers. Vol. II, Mathematicians of Our Time, vol. 18, MIT Press, ISBN 978-0-262-02148-7, MR 0581120 Brauer, R. (1980), Fong, Paul; Wong, Warren J. (eds.), Collected Papers. Vol. III, Mathematicians of Our Time, vol. 19, MIT Press, ISBN 978-0-262-02149-4, MR 0581120 == See also == Brauer algebra Brauer–Cartan–Hua theorem Brauer–Nesbitt theorem Brauer–Manin obstruction Brauer–Siegel theorem Brauer's theorem on forms Albert–Brauer–Hasse–Noether theorem Weyl-Brauer matrices == Notes == == References == Curtis, Charles W. (2003), Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer, History of Mathematics, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2677-5, MR 1715145 Review Charles W. Curtis (2003) "Richard Brauer: Sketches from His Life and Work", American Mathematical Monthly 110:665–77. James Alexander Green (1978) "Richard Dagobert Brauer", Bulletin of the London Mathematical Society 10:317–42. Feit, Walter (1979), "Richard D. Brauer", Bulletin of the American Mathematical Society, New Series, 1 (1): 1–20, doi:10.1090/S0273-0979-1979-14547-6, ISSN 0002-9904, MR 0513747 == External links == O'Connor, John J.; Robertson, Edmund F., "Richard Brauer", MacTutor History of Mathematics Archive, University of St Andrews Richard Brauer at the Mathematics Genealogy Project National Academy of Sciences Biographical Memoir
Wikipedia:Richard Bruce Paris#0
Richard Bruce Paris (23 January 1946 – 8 July 2022) was a British mathematician and reader at Abertay University in Dundee who specialized in calculus. He also held an honorary readership at the University of St. Andrews, Scotland. The research activity of Paris particularly concerned the asymptotics of integrals and the properties of special functions. He is the author of Hadamard Expansions and Hyperasymptotic Evaluation: An Extension of the Method of Steepest Descent as well as the co-author of Asymptotics and Mellin-Barnes Integrals and of Asymptotics of High Order Differential Equations. In addition, he contributed to the NIST Handbook of Mathematical Functions and also published numerous papers in Proceedings of the Royal Society A, Methods and Applications of Analysis and the Journal of Computational and Applied Mathematics. == Personal life == Born in 1946, Richard Bruce Paris was the son of an engineer. He spent his early childhood in the Yorkshire area until his family moved to the Wirral Peninsula, Cheshire, in the mid-1950s, due to his father's work. There, Paris attended Calday Grange Grammar School in West Kirby, where he discovered his interest in mathematics. Paris was married to Jocelyne Marie-Louise Neidinger, with whom he had a daughter, Gaëlle, and a son, Simon. == Career == In 1967, Paris obtained a first class honours degree in Mechanical Engineering from the Victoria University of Manchester. He continued his studies at the university's department of mathematics, from which he graduated as a Doctor of Philosophy in 1971. Paris was a doctoral student of the British-Australian astronomer Leon Mestel. His PhD thesis was completed under the title The Role of the Magnetic Field in Cosmogony. After Paris finished his doctoral thesis, he moved to France to work for Euratom at the Department of Plasma Physics and Controlled Fusion in Fontenay-aux-Roses.
In addition, from the late 1970s to the mid-1980s, Paris made several research visits to Los Alamos, USA. In 1984 he moved to southern France following a job transfer to Cadarache. In 1987, Paris left Euratom and moved to Scotland to work as a senior lecturer at Abertay University in Dundee. A year later, in 1988, he received an honorary readership from the University of St. Andrews, Scotland. In 1999, he also obtained the degree of Doctor of Science from the University of Manchester. Paris stayed at Abertay University, where he eventually obtained the status of reader, until his retirement in 2010. This was not the end of his mathematical work, however; he kept contributing until his unexpected death in July 2022. In fact, one month earlier he had shared his final article on ResearchGate. In 1986, Paris became an elected fellow of the British Institute of Mathematics and its Applications. == Work == The work of Paris deals with the asymptotic behaviour of a wide scope of special functions, in many cases with a connection to physical problems. In collaboration with David Kaminski, associate professor of mathematics at the University of Lethbridge, Paris published the monograph Asymptotics and Mellin-Barnes Integrals. It is one of the few textbooks that extensively treat the application of Mellin transforms to a range of asymptotic problems. Mellin-Barnes integrals constitute a special class of contour integrals that feature special functions in the integrand, most frequently products of gamma functions. Their evaluation relies on the residue theorem and requires appropriate manipulations of the integration path. The name is due to the mathematicians R. H. Mellin and E. W. Barnes. Many integrals can be transformed to a Mellin-Barnes representation by writing their integrands in terms of inverse Mellin transforms. In the context of Laplace-type integrals, this technique provides a powerful alternative to Laplace's method.
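A standard instance of such a representation is the identity exp(-x) = (1/(2 pi i)) times the integral of Gamma(s) x^(-s) ds along a vertical line Re s = c > 0, which holds because Gamma is the Mellin transform of exp(-x). The sketch below, a modern illustration not drawn from the monograph, checks this numerically using a self-contained Lanczos approximation for the complex gamma function and a trapezoidal rule on a truncated contour:

```python
import cmath, math

# Standard Lanczos coefficients (g = 7) for the gamma function.
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex arguments via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:                        # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _LANCZOS[0]
    for i in range(1, 9):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def mellin_barnes_exp(x, c=1.0, T=40.0, n=8000):
    """Trapezoidal evaluation of (1/2pi) * integral over [-T, T] of
    Gamma(c + it) x^(-(c + it)) dt; the full contour integral equals exp(-x)."""
    h = 2 * T / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        s = complex(c, -T + k * h)
        w = 0.5 if k in (0, n) else 1.0
        total += w * cgamma(s) * x ** (-s)
    return (total * h / (2 * math.pi)).real

# |Gamma(1 + it)| decays like e^(-pi|t|/2), so truncating at |t| = 40 is ample.
approx = mellin_barnes_exp(2.0)
assert abs(approx - math.exp(-2.0)) < 1e-8
```

Shifting the contour left past the poles of Gamma(s) at s = 0, -1, -2, ... and summing residues recovers the Taylor series of exp(-x), which is the residue-theorem mechanism described above.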
In general, however, it admits a broader applicability and much room for modification. This versatility is shown by means of several examples from number theory and integrals of higher dimension. In his monograph Hadamard Expansions and Hyperasymptotic Evaluation: An Extension of the Method of Steepest Descent, by means of theoretical and numerical examples, Paris illustrates the application of Laplace's method and possibilities for achieving higher accuracy. The term Hadamard expansions describes a special kind of asymptotic expansion whose coefficients are themselves series. It refers to the French mathematician Jacques Hadamard, who introduced the first series of this kind in 1908 in his paper Sur l'expression asymptotique de la fonction de Bessel. Paris also wrote chapters 8 and 11, on the incomplete gamma and related functions and on the Struve and related functions respectively, of the NIST Digital Library of Mathematical Functions and of the NIST Handbook of Mathematical Functions. He validated the original release in 2010 and was the Associate Editor for his chapters from 2015 until his death. == Publications ==
with A. D. Wood: Asymptotics of High Order Differential Equations, Longman Scientific and Technical, 1986, ISBN 0-470-20375-7
with D. Kaminski: Asymptotics and Mellin-Barnes Integrals, Cambridge University Press, 2001, ISBN 978-0-521-79001-7 (vol. 85 of the Encyclopedia of Mathematics and its Applications)
Hadamard Expansions and Hyperasymptotic Evaluation: An Extension of the Method of Steepest Descent, Cambridge University Press, 2011, ISBN 978-1-107-00258-6 (vol. 141 of the Encyclopedia of Mathematics and its Applications)
with F. W. J. Olver, R. Askey et al.: NIST Handbook of Mathematical Functions, Cambridge University Press, 2010, Hardback ISBN 978-0-521-19225-5, Paperback ISBN 978-0-521-14063-8
== References == == External links == Richard Bruce Paris at ResearchGate
Wikipedia:Richard C. Maclaurin#0
Richard Cockburn Maclaurin ( KOH-bərn; June 5, 1870 – January 15, 1920) was a Scottish-born U.S. educator and mathematical physicist. He was made president of MIT in 1909, and held the position until his death in 1920. During his tenure as president of MIT, the Institute moved across the Charles River from Boston to its present campus in Cambridge. In Maclaurin's honor, the buildings that surround Killian Court on the oldest part of the campus are sometimes called the Maclaurin Buildings. Earlier, he was a foundation professor of the then Victoria College of the University of New Zealand from 1899 to 1907. A collection of lecture theatres at the Kelburn campus of that university was named after him. He was also a professor at Columbia University from 1907 to 1908. == Personal == Maclaurin was born in Scotland, and was related to the noted Scottish mathematician Colin Maclaurin. He emigrated to New Zealand with his family at the age of four. In 1904 he married Alice Young of Auckland, and they had two sons. His brother James Scott Maclaurin (1864–1939) was a noted chemist, who invented a process for extracting gold with cyanide. == Education ==
University Entrance Scholar, 1887, Auckland Grammar School
B.Sc. (Hons), Mathematics, 1890, Auckland University College
BA, 1895 (12th wrangler); LL.D., 1904, St John's College, University of Cambridge
== Publications ==
On the Nature and Evidence of Title to Realty, 1901
Treatise on the Theory of Light, 1908
== Honors ==
Smith's Prize in Mathematics, 1896
Yorke Prize in Law, University of Cambridge, 1898
Elected member of the American Philosophical Society, 1910
Elected member of the American Academy of Arts and Sciences, 1911
== References == == External links == Works by or about Richard C. Maclaurin at the Internet Archive 'MACLAURIN, Richard Cockburn', from An Encyclopaedia of New Zealand, edited by A. H. McLintock, originally published in 1966.
'Richard Cockburn Maclaurin, 1870–1920', from History of the Office of the MIT President, Institute Archives, MIT Libraries, October 2004. Maclaurin in Mathematics at Victoria University College
Wikipedia:Richard Courant#0
Richard Courant (January 8, 1888 – January 27, 1972) was a German-American mathematician. He is best known by the general public for the book What is Mathematics?, co-written with Herbert Robbins. His research focused on the areas of real analysis, mathematical physics, the calculus of variations and partial differential equations. He wrote textbooks widely used by generations of students of physics and mathematics. He is also known for founding the institute now bearing his name. == Life and career == Courant was born in Lublinitz, in the Prussian Province of Silesia (now in Poland). His parents were Siegmund Courant and Martha Freund of Oels. Edith Stein was Richard's cousin on the maternal side. During his youth his parents moved often, including to Glatz, then to Breslau and in 1905 to Berlin. He stayed in Breslau and entered the university there, then continued his studies at the University of Zürich and the University of Göttingen. He became David Hilbert's assistant in Göttingen and obtained his doctorate there in 1910. He was obliged to serve in World War I, but was wounded shortly after enlisting and therefore dismissed from the military. Courant left the University of Münster in 1921 to take over Erich Hecke's position at the University of Göttingen. There he founded the Mathematical Institute, which he headed as director from 1928 until 1933. Courant left Germany in 1933, earlier than many Jewish escapees. He did not lose his position due to being Jewish, as his previous service as a front-line soldier exempted him; however, his public membership in the social-democratic left was reason enough (for the Nazis) for dismissal. In 1936, after one year at Cambridge, Courant accepted a professorship at New York University in New York City. There he founded an institute for graduate studies in applied mathematics. The Courant Institute of Mathematical Sciences (as it was renamed in 1964) is now one of the most respected research centers in applied mathematics. 
Courant and David Hilbert authored the influential textbook Methoden der mathematischen Physik, which, through its revised editions, has remained current and widely used since its publication in 1924. With Herbert Robbins he coauthored a popular overview of higher mathematics, intended for the general public, titled What is Mathematics?. With Fritz John he also coauthored the two-volume work Introduction to Calculus and Analysis, first published in 1965. Courant's name is also attached to the finite element method, with his numerical treatment of the plane torsion problem for multiply-connected domains, published in 1943. This method is now one of the ways to solve partial differential equations numerically. Courant is a namesake of the Courant–Friedrichs–Lewy condition and the Courant minimax principle. Courant was an elected member of both the American Philosophical Society (1953) and the United States National Academy of Sciences (1955). In 1965, the Mathematical Association of America recognized his contributions to mathematics with their Award for Distinguished Service to Mathematics. Courant died of a stroke in New Rochelle, New York on January 27, 1972, aged 84. == Perspective on mathematics == Commenting upon his analysis of experimental results from in-laboratory soap film formations, Courant explained why the existence of a physical solution does not obviate mathematical proof. Here is a quote from Courant on his mathematical perspective: Empirical evidence can never establish mathematical existence–nor can the mathematician's demand for existence be dismissed by the physicist as useless rigor. Only a mathematical existence proof can ensure that the mathematical description of a physical phenomenon is meaningful. == Personal life == In 1912, Courant married Nelly Neumann, who had earned her doctorate at Breslau in synthetic geometry in 1909. They lived together in Göttingen until they were divorced in 1916.
She was later murdered by the Nazis in 1942 for being Jewish. In 1919, Courant married Nerina (Nina) Runge (1891–1991), a daughter of the Göttingen professor for Applied Mathematics, Carl Runge (of Runge–Kutta fame). Richard and Nerina had four children: Ernest, a particle physicist and innovator in particle accelerators; Gertrude (1922–2014), a biologist and wife of the mathematician Jürgen Moser (1928–1999); Hans (1924–2019), a physicist who participated in the Manhattan Project; and Leonore (known as "Lori," 1928–2015), a professional violist and wife of the mathematician Jerome Berkowitz (1928–1998) and subsequently wife of mathematician Peter Lax until her death. == Publications == Courant, R. (1937), Differential and Integral Calculus, vol. I, translated by McShane, E. J. (2nd ed.), New York: Interscience, ISBN 978-4-87187-838-8 Courant, R. (1936), Differential and Integral Calculus, vol. II, translated by McShane, E. J., New York: Interscience, ISBN 978-4-87187-835-7 Courant, Richard; John, Fritz (1965), Introduction to Calculus and Analysis, vol. I, New York: Interscience, ISBN 978-3-540-65058-4 Courant, Richard; John, Fritz (1974), Introduction to Calculus and Analysis, vol. II/1, New York: Interscience, ISBN 978-3-540-66569-4 Courant, Richard; John, Fritz (1974), Introduction to Calculus and Analysis, vol. II/2, New York: Interscience, ISBN 978-3-540-66570-0 Courant, Richard; Hilbert, David (1953), Methods of Mathematical Physics, vol. I (2nd ed.), New York: Interscience, ISBN 978-0-471-50447-4, MR 0065391 (archive) (translated from German: Methoden der mathematischen Physik I, 2nd ed, 1931) Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, vol.
II, New York: Interscience, doi:10.1002/9783527617234, ISBN 978-0-471-50439-9, MR 0140802 (translated from German: Methoden der mathematischen Physik II, 1937) Courant, R.; Friedrichs, K. O. (1948), Supersonic Flow and Shock Waves, New York: Interscience, Bibcode:1948sfsw.book.....C Courant, Richard; Robbins, Herbert (1941), What is Mathematics?, Oxford University Press == References == == Sources == Reid, Constance (1976). Courant in Göttingen and New York. The Story of an Improbable Mathematician. New York, Heidelberg, Berlin: Springer-Verlag. ISBN 978-0-387-90194-7. Medawar, Jean; Pyke, David (2012). Hitler's Gift: The True Story of the Scientists Expelled by the Nazi Regime. New York: Arcade Publishing. ISBN 978-1-61145-709-4. == External links == Richard Courant at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Richard Courant", MacTutor History of Mathematics Archive, University of St Andrews Oral History interview transcript with Richard Courant on 9 May 1962, American Institute of Physics, Niels Bohr Library & Archives National Academy of Sciences Biographical Memoir 2015 Video Interview with Hans Courant by Atomic Heritage Foundation Voices of the Manhattan Project US News Rankings of Applied Mathematics Programs
Wikipedia:Richard Dedekind#0
Julius Wilhelm Richard Dedekind (; German: [ˈdeːdəˌkɪnt]; 6 October 1831 – 12 February 1916) was a German mathematician who made important contributions to number theory, abstract algebra (particularly ring theory), and the axiomatic foundations of arithmetic. His best known contribution is the definition of real numbers through the notion of Dedekind cut. He is also considered a pioneer in the development of modern set theory and of the philosophy of mathematics known as logicism. == Life == Dedekind's father was Julius Levin Ulrich Dedekind, an administrator of Collegium Carolinum in Braunschweig. His mother was Caroline Henriette Dedekind (née Emperius), the daughter of a professor at the Collegium. Richard Dedekind had three older siblings. As an adult, he never used the names Julius Wilhelm. He was born in Braunschweig (often called "Brunswick" in English), which is where he lived most of his life and died. His body rests at Braunschweig Main Cemetery. He first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, Dedekind was taught number theory by professor Moritz Stern. Gauss was still teaching, although mostly at an elementary level, and Dedekind became his last student. Dedekind received his doctorate in 1852, for a thesis titled Über die Theorie der Eulerschen Integrale ("On the Theory of Eulerian integrals"). This thesis did not display the talent evident in Dedekind's subsequent publications. At that time, the University of Berlin, not Göttingen, was the main facility for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were both awarded the habilitation in 1854. Dedekind returned to Göttingen to teach as a Privatdozent, giving courses on probability and geometry. He studied for a while with Peter Gustav Lejeune Dirichlet, and they became good friends. 
Because of lingering weaknesses in his mathematical knowledge, he studied elliptic and abelian functions. Yet he was also the first at Göttingen to lecture on Galois theory. About this time, he became one of the first people to understand the importance of the notion of a group for algebra and arithmetic. In 1858, he began teaching at the Polytechnic school in Zürich (now ETH Zürich). When the Collegium Carolinum was upgraded to a Technische Hochschule (Institute of Technology) in 1862, Dedekind returned to his native Braunschweig, where he spent the rest of his life, teaching at the Institute. He retired in 1894, but did occasional teaching and continued to publish. He never married, instead living with his sister Julia. Dedekind was elected to the Academies of Berlin (1880) and Rome, and to the French Academy of Sciences (1900). He received honorary doctorates from the universities of Oslo, Zurich, and Braunschweig. == Work == While teaching calculus for the first time at the Polytechnic school, Dedekind developed the notion now known as a Dedekind cut (German: Schnitt), now a standard definition of the real numbers. The idea of a cut is that an irrational number divides the rational numbers into two classes (sets), with all the numbers of one class (the greater) being strictly greater than all the numbers of the other (the lesser) class. For example, the square root of 2 puts into the lesser class all the negative rationals together with the nonnegative rationals whose squares are less than 2, and into the greater class the positive rationals whose squares are greater than 2. Every location on the number line continuum then contains either a rational or an irrational number; there are no empty locations, gaps, or discontinuities. Dedekind published his thoughts on irrational numbers and Dedekind cuts in his pamphlet "Stetigkeit und irrationale Zahlen" ("Continuity and irrational numbers"); the property he captured is, in modern terminology, completeness (Vollständigkeit).
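As an illustration (a minimal sketch, not Dedekind's own formulation), the cut for the square root of 2 can be represented by a membership test for its lesser class, L = { x in Q : x ≤ 0 or x² < 2 }; bisection over the rationals then pins the cut point down to any desired accuracy. The function and variable names here are my own.

```python
# Illustrative sketch of the Dedekind cut for sqrt(2), using Python's
# exact rational arithmetic so no floating-point error enters.
from fractions import Fraction

def in_lower_class(x: Fraction) -> bool:
    """True if the rational x lies in the lesser class of the cut for sqrt(2)."""
    return x <= 0 or x * x < 2

# Bisection over rationals narrows in on the cut point from both sides.
lo, hi = Fraction(0), Fraction(2)
for _ in range(30):
    mid = (lo + hi) / 2
    if in_lower_class(mid):
        lo = mid
    else:
        hi = mid

# lo is in the lesser class, hi in the greater, and they bracket sqrt(2)
# within 2 / 2**30 -- yet no rational ever equals the cut point itself.
assert lo * lo < 2 < hi * hi
assert hi - lo == Fraction(2, 2**30)
```

The point of the exercise is Dedekind's: the two classes squeeze together arbitrarily closely, but the "location" they determine is occupied by no rational number, which is exactly the gap the cut construction fills.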
Dedekind defined two sets to be "similar" when there exists a one-to-one correspondence between them. He invoked similarity to give the first precise definition of an infinite set: a set is infinite when it is "similar to a proper part of itself"; in modern terminology, when it is equinumerous to one of its proper subsets. Thus the set N of natural numbers can be shown to be similar to the subset of N whose members are the squares of the members of N (N → N²):

N:  1  2  3  4   5   6   7   8   9   10  ...
    ↓  ↓  ↓  ↓   ↓   ↓   ↓   ↓   ↓   ↓
N²: 1  4  9  16  25  36  49  64  81  100 ...

Dedekind's work in this area anticipated that of Georg Cantor, who is commonly considered the founder of set theory. Likewise, his contributions to the foundations of mathematics anticipated later works by major proponents of logicism, such as Gottlob Frege and Bertrand Russell. Dedekind edited the collected works of Lejeune Dirichlet, Gauss, and Riemann. Dedekind's study of Lejeune Dirichlet's work led him to his later study of algebraic number fields and ideals. In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory"), about which it has been written that: Although the book is assuredly based on Dirichlet's lectures, and although Dedekind himself referred to the book throughout his life as Dirichlet's, the book itself was entirely written by Dedekind, for the most part after Dirichlet's death. The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory. (The word "Ring", introduced later by Hilbert, does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers, composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether. Ideals generalize Ernst Eduard Kummer's ideal numbers, devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem.
(Thus Dedekind can be said to have been Kummer's most important disciple.) In an 1882 article, Dedekind and Heinrich Martin Weber applied ideals to Riemann surfaces, giving an algebraic proof of the Riemann–Roch theorem. In 1888, he published a short monograph titled Was sind und was sollen die Zahlen? ("What are numbers and what are they good for?" Ewald 1996: 790), which included his definition of an infinite set. He also proposed an axiomatic foundation for the natural numbers, whose primitive notions were the number one and the successor function. The next year, Giuseppe Peano, citing Dedekind, formulated an equivalent but simpler set of axioms, now the standard ones. Dedekind made other contributions to algebra. For instance, around 1900, he wrote the first papers on modular lattices. In 1872, while on holiday in Interlaken, Dedekind met Georg Cantor. Thus began an enduring relationship of mutual respect, and Dedekind became one of the first mathematicians to admire Cantor's work concerning infinite sets, proving a valued ally in Cantor's disputes with Leopold Kronecker, who was philosophically opposed to Cantor's transfinite numbers. == Bibliography == Primary literature in English: 1890. "Letter to Keferstein" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 98–103. 1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover. Contains English translations of Stetigkeit und irrationale Zahlen and Was sind und was sollen die Zahlen? 1996. Theory of Algebraic Integers. Stillwell, John, ed. and trans. Cambridge Uni. Press. A translation of Über die Theorie der ganzen algebraischen Zahlen. Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Uni. Press. 1854. "On the introduction of new functions in mathematics," 754–61. 1872. "Continuity and irrational numbers," 765–78. (translation of Stetigkeit...) 1888. 
What are numbers and what should they be?, 787–832. (translation of Was sind und...) 1872–82, 1899. Correspondence with Cantor, 843–77, 930–40. Primary literature in German: Gesammelte mathematische Werke (Complete mathematical works, Vol. 1–3). Retrieved 5 August 2009. == Pronunciation == In German, Dedekind is pronounced [ˈdeːdəˌkɪnt] (roughly "DAY-də-kint"). == See also == List of things named after Richard Dedekind Dedekind cut Dedekind domain Dedekind eta function Dedekind-infinite set Dedekind number Dedekind psi function Dedekind sum Dedekind zeta function Ideal (ring theory) == Notes == == References == Biermann, Kurt-R (2008). "Dedekind, (Julius Wilhelm) Richard". Complete Dictionary of Scientific Biography. Vol. 4. Detroit: Charles Scribner's Sons. pp. 1–5. ISBN 978-0-684-31559-1. == Further reading == Edwards, H. M., 1983, "Dedekind's invention of ideals," Bull. London Math. Soc. 15: 8–17. William Everdell (1998). The First Moderns. Chicago: University of Chicago Press. ISBN 0-226-22480-5. Gillies, Douglas A., 1982. Frege, Dedekind, and Peano on the foundations of arithmetic. Assen, Netherlands: Van Gorcum. Ferreirós, José, 2007. Labyrinth of Thought: A history of set theory and its role in modern mathematics. Basel: Birkhäuser, chap. 3, 4 and 7. Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots 1870–1940. Princeton Uni. Press. There is an online bibliography of the secondary literature on Dedekind. Also consult Stillwell's "Introduction" to Dedekind (1996). == External links == O'Connor, John J.; Robertson, Edmund F., "Richard Dedekind", MacTutor History of Mathematics Archive, University of St Andrews Works by Richard Dedekind at Project Gutenberg Works by or about Richard Dedekind at the Internet Archive Dedekind, Richard, Essays on the Theory of Numbers. Open Court Publishing Company, Chicago, 1901. at the Internet Archive Dedekind's Contributions to the Foundations of Mathematics http://plato.stanford.edu/entries/dedekind-foundations/.
Wikipedia:Richard F. Bass#0
Richard Franklin Bass is an American mathematician, the Board of Trustees Distinguished Professor Emeritus of Mathematics at the University of Connecticut. He is known for his work in probability theory. Bass earned his Ph.D. from the University of California, Berkeley in 1977 under the supervision of Pressley Millar. He taught at the University of Washington before moving to Connecticut. Bass is a fellow of the Institute of Mathematical Statistics. In 2012 he became a fellow of the American Mathematical Society. == Books == Bass is the author of: Probabilistic Techniques in Analysis (Springer, 1995) Diffusions and Elliptic Operators (Springer, 1997) Stochastic Processes (Cambridge University Press, 2011) Bass, Richard Franklin (2013) [2011]. Real analysis for graduate students (Second ed.). Createspace Independent Publishing. ISBN 978-1-4818-6914-0. == References == == External links == Home page
Wikipedia:Richard H. Stockbridge#0
Richard H. Stockbridge is a Distinguished Professor Emeritus of Mathematics at the University of Wisconsin-Milwaukee. His contributions to research primarily involve stochastic control theory, optimal stopping and mathematical finance. Most notably, alongside Professors Thomas G. Kurtz, Kurt Helmes, and Chao Zhu, he developed the methodology of using linear programming to solve stochastic control problems. == Education == Stockbridge obtained his Ph.D. from the University of Wisconsin-Madison under the supervision of Thomas G. Kurtz with a dissertation entitled "Time-Average Control of Martingale Problems". He also holds a master's degree in mathematics from the University of Wisconsin-Madison and attended St. Lawrence University in Canton, New York, for his baccalaureate studies. == Academic career == Following the awarding of his Ph.D., Stockbridge served as an assistant professor in the Department of Mathematics and Statistics at Case Western Reserve University from 1987 to 1988. He then took an assistant professor position at the University of Kentucky from 1988 to 1993, leading to an associate professorship which he held until 2000. Later, Stockbridge began working at the University of Wisconsin-Milwaukee and became a full professor in 2002. In 2018, he was awarded the title of "distinguished professor" by the University of Wisconsin Milwaukee Distinguished Faculty Committee. 
Stockbridge has also held various visiting positions, including: Visiting Scholar, Heriot Watt University, School of Mathematical and Computer Science, Edinburgh, Scotland, March–August 2016 Sabbaticant Professor, University of Botswana, Department of Mathematics, July 2008 – January 2009 Visiting Fellow, Bath University, Department of Mathematical Sciences, Bath, England, January–July 1997 Visiting Assistant Professor, University of Kentucky, Department of Mathematics, Lexington, Kentucky, 1988–89 == Research == Professor Stockbridge's research is focused on developing linear programming techniques in stochastic control. These techniques give an alternative formulation to the traditional dynamic programming framework used in stochastic control problems and have been demonstrated in examples including control of the running maximum of a diffusion, optimal stopping problems, and regime-switching diffusions. Through the completion of his Ph.D. dissertation, Stockbridge examined the relationship between long-term average stochastic control problems and linear programs spanning the space of stationary distributions for that controlled process, ultimately concluding their equivalence. This dissertation served as a basis for significant work in the field. Following his graduate studies, Stockbridge helped expand the applications of this equivalence between linear programming and stochastic control to include discounted, first-exit and finite horizon problems. == Publications == Notable publications by Richard Stockbridge include: == References ==
Wikipedia:Richard J. Wood#0
Richard J. Wood is a mathematics professor at Dalhousie University in Halifax, Nova Scotia, Canada. He graduated from McMaster University in 1972 with his M.Sc. and then later went on to do his Ph.D. at Dalhousie University. He is interested in category theory and lattice theory. == References == == Publications == Ernest F. Haeussler, Jr.; Richard S. Paul; Richard Wood (2005). Introductory Mathematical Analysis (for Business, Economics, and the Life and Social Sciences) (11th ed.). Pearson/Prentice Hall. ISBN 9780131139480. == External links == Richard James Wood at the Mathematics Genealogy Project
Wikipedia:Richard Jozsa#0
Richard Jozsa is an Australian mathematician who holds the Leigh Trapnell Chair in Quantum Physics at the University of Cambridge. He is a fellow of King's College, Cambridge, where his research investigates quantum information science. A pioneer of the field, he is a co-author of the Deutsch–Jozsa algorithm and one of the co-inventors of quantum teleportation. == Education == Jozsa received his Doctor of Philosophy degree for work on twistor theory at the University of Oxford, under the supervision of Roger Penrose. == Career and research == Jozsa has held previous positions at the University of Bristol, the University of Plymouth and the Université de Montréal. === Awards and honours === His work was recognised in 2004 by the London Mathematical Society with the award of the Naylor Prize for 'his fundamental contributions to the new field of quantum information science'. Jozsa has been a member of the Academia Europaea since 2016. == References ==
Wikipedia:Richard Maunder#0
Charles Richard Francis Maunder (23 November 1937 – 5 June 2018) was a British mathematician and musicologist. == Early life == Maunder was educated at the Royal Grammar School, High Wycombe, and Jesus College, Cambridge, before going on to complete a PhD at Christ’s College, Cambridge, in 1962. After teaching at Southampton University he became a fellow of Christ’s in 1964. == Mathematics == Maunder's field of work was algebraic topology. He used Postnikov systems to give an alternative construction of the Atiyah–Hirzebruch spectral sequence, one in which the differentials can be described more explicitly. The family of higher cohomology operations on mod-2 cohomology that he constructed has been discussed by several authors. In 1981 he gave a short proof of the Kan–Thurston theorem, according to which for every path-connected topological space X there is a discrete group π and a map from the Eilenberg–MacLane space K(π,1) to X inducing an isomorphism on homology. His textbook Algebraic Topology (1970) remains in print in the 1996 Dover edition. == Musicology == Maunder created a new version of Mozart's Requiem. Following on from other musicologists such as Ernst Hess, Franz Beyer and Robert D. Levin, he presented a fundamental revision of Mozart's last work, in which, like his predecessors, he sought to remove Süssmayr's additions as far as possible and replace them with Mozart's own ideas. This new version was recorded by Christopher Hogwood with the Academy of Ancient Music in 1983 and the score was published in 1988. In 1992 it was recorded by Rupert Gottfried Frieberger. In doing so, Maunder rejected Süssmayr's Sanctus and Benedictus completely and removed them from the work; he considered only the Agnus Dei to be authentic because of its comparisons with other church music works by Mozart. Maunder also composed an Amen fugue for the conclusion of the Lacrimosa, for which he took Mozart's sketch sheet and a fugue for organ roll by Mozart (K.
608) as a starting point. He also fundamentally revised Süssmayr's instrumentation throughout the Requiem. This version was performed several times in German-speaking countries, including a dance version Requiem! by Birgit Scherzer. Maunder's edition of Mozart's C minor Mass was published in 1990 and was first recorded by Hogwood in the same year. Maunder also edited pieces by Francesco Geminiani, Tomaso Albinoni, Henry Purcell, members of the Bach family, Giuseppe Sammartini and others. == Works == === Mathematics === Maunder, C. R. F. (1963). "Cohomology operations of the Nth kind". Proceedings of the London Mathematical Society (Third Series). 13 (1): 125–154. doi:10.1112/plms/s3-13.1.125. ISSN 0024-6115. Maunder, C. R. F. (1963). "The spectral sequence of an extraordinary cohomology theory". Mathematical Proceedings of the Cambridge Philosophical Society. 59 (3): 567–574. Bibcode:1963PCPS...59..567M. doi:10.1017/S0305004100037245. ISSN 0305-0041. S2CID 122794658. Maunder, C. R. F. (1970). Algebraic Topology. London: Van Nostrand Reinhold. ISBN 0-442-05168-9. Reissued in 1980 (Cambridge University Press, ISBN 0-521-29840-7) and 1996 (Dover Publications, Mineola, New York, ISBN 0-486-69131-4) Maunder, C. R. F. (1981). "A short proof of a theorem of Kan and Thurston". Bulletin of the London Mathematical Society. 13 (4): 325–327. doi:10.1112/blms/13.4.325. ISSN 0024-6093. === Musicology === (as editor) Mozart, Wolfgang Amadeus (1988). Requiem, K. 626 (Full score). Oxford University Press. ISBN 0-19-337618-0. Maunder, Richard (1988). Mozart's Requiem: On preparing a new edition. Oxford: Clarendon Press. ISBN 0-19-316413-2. (as editor) Mozart, Wolfgang Amadeus (1990). Mass in C Minor K427. Oxford University Press. ISBN 0-19-337615-6. Maunder, Richard (1998). Keyboard instruments in eighteenth-century Vienna. Clarendon Press. ISBN 0-19-816637-0. == References ==
Wikipedia:Richard Nickl#0
Richard Nickl (born 13 June 1980) is an Austrian mathematician and Professor of Mathematical Statistics at the University of Cambridge. He is a fellow of Gonville and Caius College. He grew up in Vienna, attended secondary school at the Theresianum there (graduating in 1998 with distinction) and obtained his academic degrees from the University of Vienna, including a PhD in 2005. He has made contributions to various areas of mathematical statistics, including non-parametric and high-dimensional statistics, empirical process theory, and Bayesian inference for statistical inverse problems and partial differential equations. Jointly with Evarist Giné, he is the author of the book "Mathematical foundations of infinite-dimensional statistical models", published by Cambridge University Press, which won the 2017 PROSE Award for best monograph in the mathematics category. He was an invited speaker at the 2022 International Congress of Mathematicians (ICM) and at the 8th European Congress of Mathematics (ECM). He has been awarded the 2017 Ethel Newbold Prize of the Bernoulli Society as well as a Consolidator Grant and an Advanced Grant by the European Research Council. == Selected publications == Evarist Giné & Richard Nickl, Mathematical foundations of infinite-dimensional statistical models, Cambridge University Press (2016) Richard Nickl, Bayesian non-linear statistical inverse problems, European Mathematical Society Press (2023) == References ==